id (int64, 3-41.8M) | url (string, 1-1.84k chars) | title (string, 1-9.99k chars, nullable) | author (string, 1-10k chars, nullable) | markdown (string, 1-4.36M chars, nullable) | downloaded (bool, 2 classes) | meta_extracted (bool, 2 classes) | parsed (bool, 2 classes) | description (string, 1-10k chars, nullable) | filedate (string, 2 classes) | date (string, 9-19 chars, nullable) | image (string, 1-10k chars, nullable) | pagetype (string, 365 classes) | hostname (string, 4-84 chars, nullable) | sitename (string, 1-1.6k chars, nullable) | tags (string, 0 classes) | categories (string, 0 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19,629,535 |
https://bgr.com/2019/04/08/starhopper-launch-spacex-starship-test-vehicle/
|
SpaceX's Starhopper completed its first mini-launch 'hop'
|
Mike Wehner
|
Between tests of its Crew Dragon spacecraft, regular launches for commercial clients, and the impending second launch of the massive Falcon Heavy rocket, SpaceX has a whole lot on its plate these days. Despite all that, it’s still working hard on its Starship program, which will (hopefully) one day result in a spacecraft capable of traveling throughout our solar system and maybe even to neighboring stars.
At present, the Starship itself doesn’t exist, but a small-scale version of it, called the Starhopper, does indeed exist, and it just completed a major milestone. The pint-sized spacecraft performed its first official “hop test,” firing its engines and lifting off its launch pad briefly as powerful tethers kept it from flying skyward.
A hop test isn’t like a normal rocket launch. The spacecraft isn’t ready for a trip into space just yet, but SpaceX still needs to test its engines and other vital systems to see how they respond to real-world stresses. As such, the Starhopper was tethered to its launchpad for the duration of the hop, and as Elon Musk notes in a tweet, it “hit tether limits,” which indicates that it did what it was supposed to do.
Starhopper just lifted off & hit tether limits! pic.twitter.com/eByJsq2jiw
— Elon Musk (@elonmusk) April 6, 2019
SpaceX hasn’t said much regarding how this first hop test went, but Musk noted “all systems green,” which is a fancy way of saying that nothing broke.
A pic from tonight's Raptor Static Fire test and StarHopper's tethered hop.@NASASpaceflight https://t.co/TPT6AijFqq pic.twitter.com/a2AugscAMz
— Mary (@BocaChicaGal) April 6, 2019
A full-sized version of SpaceX’s Starship won’t see action for a while yet, but this smaller test vehicle will eventually pave the way for higher test flights and eventually full-scale launches. Musk and SpaceX have bet big on Starship being the vehicle of choice for manned missions to Mars and beyond, and while there’s still a long way to go before that is a reality, progress is certainly being made.
| true | true | true |
Between tests of its Crew Dragon spacecraft, regular launches for commercial clients, and the impending second launch of the massive Falcon Heavy rocket, SpaceX has …
|
2024-10-12 00:00:00
|
2019-04-08 00:00:00
| null |
article
|
bgr.com
|
BGR
| null | null |
7,323,488 |
http://www.linkedin.com/today/post/article/20140224190309-95015-why-facebook-is-killing-silicon-valley
|
Why Facebook is Killing Silicon Valley
|
Steve Blank, Adjunct Professor, Stanford University
|
# Why Facebook is Killing Silicon Valley
We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win…
*— John F. Kennedy, September 1962*
**Innovation**
I teach entrepreneurship for ~50 student teams a year from engineering schools at Stanford, Berkeley, and Columbia. For the National Science Foundation Innovation Corps this year I’ll also teach ~150 teams led by professors who want to commercialize their inventions. Our extended teaching team includes venture capitalists with decades of experience.
The irony is that as good as some of these nascent startups are in material science, sensors, robotics, medical devices, life sciences, etc., more and more frequently VCs whose firms would have looked at these deals or invested in these sectors, are now only interested in whether it runs on a smart phone or tablet. And who can blame them.
**Facebook and Social Media**

Facebook has adroitly capitalized on market forces on a scale never seen in the history of commerce. For the first time, startups can today think about a Total Available Market in the billions of users (smart phones, tablets, PCs, etc.) and aim for hundreds of millions of customers. Second, social needs previously met face-to-face (friends, entertainment, communication, dating, gambling, etc.) are now moving to a computing device. And those customers may be using their devices/apps continuously. This intersection of a customer base of billions of people with applications that are used/needed 24/7 never existed before.
The potential revenue and profits from these users (or advertisers who want to reach them) and the speed of scale of the winning companies can be breathtaking. The Facebook IPO has reinforced the new calculus for investors. In the past, if you were a great VC, you could make $100 million on an investment in 5-7 years. Today, social media startups can return 100’s of millions or even billions in less than 3 years. Software is truly eating the world.
If investors have a choice of investing in a blockbuster cancer drug that will pay them nothing for fifteen years or a social media application that can go big in a few years, which do you think they’re going to pick? If you’re a VC firm, you’re phasing out your life science division. As investors funding clean tech watch the Chinese dump cheap solar cells in the U.S. and put U.S. startups out of business, do you think they’re going to continue to fund solar? And as Clean Tech VC’s have painfully learned, trying to scale Clean Tech past demonstration plants to industrial scale takes capital and time past the resources of venture capital. A new car company? It takes at least a decade and needs at least a billion dollars. Compared to IOS/Android apps, all that other stuff is hard and the returns take forever.
Instead, the investor money is moving to social media. Because of the size of the market and the nature of the applications, the returns are quick – and huge. New VCs, focused on both the early and late stages of social media, have transformed the VC landscape. (I’m an investor in many of these venture firms.) *But what’s great for making tons of money may not be the same as what’s great for innovation or for our country*. Entrepreneurial clusters like Silicon Valley (or NY, Boston, Austin, Beijing, etc.) are not just smart people and smart universities working on interesting things. If that were true we’d all still be in our parents’ garages or labs. Centers of innovation require *investors funding smart people working on interesting things* — and they invest in those they believe will make their funds the most money. And for Silicon Valley the investor flight to social media marks the beginning of the end of the era of venture capital-backed big ideas in science and technology.
**Don’t Worry We Always Bounce Back**

The common wisdom is that Silicon Valley has always gone through waves of innovation and each time it bounces back by reinventing itself.
[Each of these waves of having a clean beginning and end is a simplification. But it makes the point that each wave was a new investment thesis with a new class of investors as well as startups.] The reality is that it took venture capital almost a decade to recover from the dot-com bubble. And when it did Super Angels and new late stage investors whose focus was social media had remade the landscape, and the investing thesis of the winners had changed. This time the pot of gold of social media may permanently change that story.
**What Next**

It’s sobering to realize that the disruptive *startups* in the last few years not in social media - Tesla Motors, SpaceX, Google driverless cars, Google Glasses - were the efforts of two individuals, Elon Musk and Sebastian Thrun (with the backing of Google). (The smartphone and tablet computer, the other two revolutionary products, were created by one visionary in one extraordinary company.) We can hope that as the Social Media wave runs its course a new wave of innovation will follow. We can hope that some VCs remain contrarian investors and avoid the herd. And that some of the newly monied social media entrepreneurs invest in their dreams. But if not, the long-term consequences for our national interests will be less than optimum.
For decades the unwritten manifesto for Silicon Valley VC’s has been: *We choose to invest in ideas, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win*.
Here’s hoping that one day they will do it again.
*Read more Steve Blank posts at www.steveblank.com.*
| true | true | true |
We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to pos
|
2024-10-12 00:00:00
|
2014-02-24 00:00:00
|
https://static.licdn.com/aero-v1/sc/h/en3f1pk3qk4cxtj2j4fff0gtr
|
article
|
linkedin.com
|
LinkedInEditors
| null | null |
22,319,262 |
https://nodramadevops.com/2020/02/computing-a-risk-estimate-using-netflixs-riskquant/
|
Computing a Risk Estimate using Netflix's riskquant - #NoDrama DevOps
|
Stephen Kuenzli
|
RT: 5 minutes
Modeling Risk in Cloud Deployments described how to estimate and record threat impact and likelihood information in tags applied to Cloud resources such as databases and object stores. You can compute the risk of those threats by plugging that impact and likelihood into the general risk calculation:
```
risk = (likelihood_confidentiality_loss * impact_confidentiality_loss)
+ (likelihood_integrity_loss * impact_integrity_loss)
+ (likelihood_availability_loss * impact_availability_loss)
```
But that’s not something you “just do.” We know that the actual impact and likelihood are generally unknowable and so we’ll need to estimate an expected loss probabilistically.
In this post, we will compute a realistic annual loss estimate in dollars for an ecommerce application using a tool that models the distribution of possible impacts and probabilities appropriately.
The threats that were modeled for the example ecommerce application were:
Lost Availability due to ‘bad’ changes and load.
- Impact: ranging between $250 and $19,000 per incident
- Likelihood: 3 times per year
Lost Confidentiality due to an internal threat or attack:
- Impact: at least $1,000 for an internal leak and at most $100k if the data is exfiltrated by an attacker
- Likelihood: we didn’t define this previously, but let’s say the probability of an internal leak is 0.5 events per year and an external leak is 0.2 events per year (once every 5 years)
What are the estimated losses for these threats?
Netflix just released the riskquant tool to help you answer precisely these questions. From the announcement:
riskquant takes a list of loss scenarios, each with estimates of frequency, low loss magnitude, and high loss magnitude, and calculates and ranks the annualized loss for all scenarios. The annualized loss is the mean magnitude averaged over the expected interval between events, which is roughly the inverse of the frequency (e.g. a frequency of 0.1 implies an event about every 10 years).
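To make that concrete with illustrative numbers of my own (not one of the scenarios above): a loss scenario with a frequency of 0.1 per year (roughly one event per decade) and a mean loss magnitude of $50,000 per event carries an annualized loss of about 0.1 × $50,000 = $5,000 per year.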
Let’s put our threat scenarios in the table form that `riskquant` understands (csv):

Identifier | Name | Probability | Low loss ($) | High loss ($) |
---|---|---|---|---|
WebLossConfInternal | Lose Prod User DB Confidentiality Internally | 0.5 | 1,000 | 10,000 |
WebLossConfPublic | Lose Prod User DB Confidentiality to Attacker | 0.2 | 10,000 | 100,000 |
WebLossAvailAnnual | Lose Availability | 0.99 | 250 | 19,000 |
WebLossAvailDaily | Lose Availability | 0.00822 | 250 | 19,000 |
The `Identifier` and `Name` columns identify a threat to simulate. The `Low_loss` and `High_loss` columns specify the lower and upper bounds of the impact.
The `Probability` column contains plain, unit-less probabilities, and `riskquant` doesn’t technically care what time periods you simulate. This is useful when an event occurs multiple times per year, because we can’t express a probability as `300%`. So to model the availability threat, we need to make a couple of adjustments. Either model that the threat:
- occurs with 100% probability annually to get the expected impact of one event and then multiply by 3
- occurs with a daily probability of 0.00822 (3/365) and then multiply by 365
This approach to modeling event frequency makes some assumptions about independence and uniformity that I’ll skip for now. My hope is that this approach appears more accurate and definitely more precise than characterizing the event as having, e.g. a ‘Low’ frequency. *Better* information *now* is useful for managing risks we already have.
Ok, on to modeling the range of possible threat impacts.
`riskquant` models impact value with the log-normal distribution and reports the distribution mean as the expected loss for each threat.
Log-normal distributions always produce positive and sometimes extreme values, and the peak can be configured to resemble the most frequently observed values. These properties help it fit some phenomena better than other distributions such as the normal or uniform distribution. Log-normal distributions are often used to model losses caused by a cyberattack, fatigue-stress failure lifetimes, and project costs.
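As a rough sketch of that approach (my own illustration, not `riskquant`’s actual implementation; treating the low and high loss as the 5th and 95th percentiles of the log-normal is an assumption), a few lines of Python reproduce numbers in the same ballpark as the results below:
```
import math

Z_95 = 1.6448536269514722  # standard normal 95th-percentile z-score


def expected_loss_per_event(low_loss, high_loss):
    # Fit a log-normal whose 5th/95th percentiles match the low/high loss
    # estimates, then return its mean as the expected loss per event.
    mu = (math.log(low_loss) + math.log(high_loss)) / 2
    sigma = (math.log(high_loss) - math.log(low_loss)) / (2 * Z_95)
    return math.exp(mu + sigma ** 2 / 2)


# Attacker-confidentiality scenario: frequency 0.2/year, losses $10k-$100k.
print(0.2 * expected_loss_per_event(10_000, 100_000))  # roughly $8,000/year
```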
Let’s produce those loss estimates now. If you’d like to follow along, the files in this example are available on GitHub at qualimente/riskquant-example.
Riskquant requires tensorflow and other data analysis libraries that were easier for me to get working in Linux via Docker than OSX. You can check out the Dockerfile used to build the Docker image I used in this pull request. The image is available on Docker Hub at `qualimente/riskquant`.
Run `riskquant` on the threat model described in the data directory:
```
docker container run --rm -it \
-v "$(PWD)/data":/data/ \
qualimente/riskquant --file /data/webapp.threat-model.csv
```
The riskquant program runs successfully and reports the results were written to a file:
```
Writing prioritized threats to:
/data/webapp.threat-model_prioritized.csv
```
Let’s inspect the loss estimates with `cat data/webapp.threat-model_prioritized.csv`, formatted below for readability:
Identifier | Name | Expected loss ($/event) |
---|---|---|
WebLossConfPublic | Lose Prod User DB Confidentiality to Attacker | $8,080 |
WebLossAvailAnnual | Lose Availability | $5,130 |
WebLossConfInternal | Lose Prod User DB Confidentiality Internally | $2,020 |
WebLossAvailDaily | Lose Availability | $43 |
`riskquant` outputs the expected losses in order of greatest to least. The expected annual losses are:
- Lose Prod User DB Confidentiality to Attacker: $8,080 / year
- Lose Prod User DB Confidentiality Internally: $2,020 / year
- Lose Availability: $15,695 / year (365*$43) or $15,390 / year (3*$5,130)
This example was explored through `riskquant`’s command-line interface, and the results of the `SimpleLoss` model were presented here. You can perform more sophisticated analyses when using `riskquant` as a library and configuring shape distributions directly. In particular, the library offers a `pertloss` function that allows much more control over the shape of the probability distribution that produces threat events.
Let’s stop here, because we’ve improved our decision making capability significantly.
## Use the Information
These estimates are great information to have when deciding whether it makes sense to invest time and money in addressing the factors that caused the availability incidents or in protecting confidentiality of the ecommerce system’s user database.
Consider that if this team had $5,000 to invest in risk reduction, this information would suggest looking for ways to:
- significantly reduce the risk of losing confidentiality to an attacker
- reduce availability incidents from 3 to 1 or pull the repair time in significantly; both of these are key aspects of Software Delivery Performance
Improvements in these areas are likely to have positive ROI within a one year time horizon with demonstrable results to the organization’s leadership.
Also, keep in mind that the risk management solution doesn’t always need to be technical or a single investment.
For example, the team might have people available to improve availability by implementing a more robust delivery process that detects failures and helps operators roll back quickly. The team might decide to invest $4k of the risk management budget in that area. This would leave $1k to increase cybersecurity insurance coverage that might limit the organization’s public data breach loss exposure to $50k.
The effectiveness of risk management processes depends heavily on the quality of the information available to decision makers. Quantifying those risks using a robust, consistent contextual model is a way to improve the accuracy and precision of the information used within the risk management process and help you repeat that analysis in a scalable way over time.
I’m building k9 Security to help engineers using the Cloud understand and improve their risks continuously by improving the security policies that protect their data — hit reply if you’d like to learn more.
Stephen
#NoDrama
| true | true | true |
This post computes a realistic annual loss estimate in dollars for an ecommerce application using the riskquant tool that models the distribution of possible impacts and probabilities appropriately.
|
2024-10-12 00:00:00
|
2020-02-13 00:00:00
|
article
|
nodramadevops.com
|
#Nodrama Devops
| null | null |
|
14,171,286 |
https://www.youtube.com/watch?v=4OFH0uVJEss
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,416,918 |
http://tutorialzine.com/2010/06/making-first-chrome-extension/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
37,195,496 |
https://www.pravda.com.ua/eng/news/2023/08/19/7416283/
|
F-16s already touched down in Ukraine – Head of Ukraine’s Air Force
|
STANISLAV POHORILOV
|
# F-16s already touched down in Ukraine – Head of Ukraine's Air Force
Lieutenant General Mykola Oleshchuk, Head of Ukraine’s Air Force, has said that Ukrainian pilots have experience operating Western-made F-16 fighter jets, which have already touched down on Ukraine’s airfields.
**Source**: Mykola Oleshchuk on Ukraine’s national 24/7 newscast
**Quote from Oleshchuk**: "An F-16 jet has already been to Ukraine. It has touched down on our airfields, we’ve held joint training with F-16 pilots, and so we do have experience operating the F-16 jets. I think this is crucial."
**Details**: Oleshchuk also said that Ukraine is currently preparing its runways [for F-16 jets]. "We are making the necessary alterations, improving the surface, improving our airfields’ infrastructure, and building new defence facilities," he explained.
"So I think we will be able to bring these aircraft to Ukraine as soon as we acquire them," Oleshchuk concluded.
**Previously**: In April 2023, Yurii Ihnat, spokesman for Ukraine’s Air Force, said that US-made fighter jets had touched down on Ukraine’s airfields even before Russia’s full-scale invasion, in 2012 and 2018. "We have dozens of different airfields – operational and regular ones – that can be used for these aircraft," Ihnat said.
**Background**:
- Ukraine’s Defence Minister Oleksii Reznikov said earlier on 19 August that Ukrainian pilots have started training to operate Western-made F-16 fighter jets, with a minimum training period of 6 months.
**Ukrainska Pravda is the place where you will find the most up-to-date information about everything related to the war in Ukraine. Follow us on** **Twitter****,** **support** **us, or become** **our patron****!**
| true | true | true |
Lieutenant General Mykola Oleshchuk, Head of Ukraine’s Air Force, has said that Ukrainian pilots have experience operating Western-made F-16 fighter jets, which have already touched down on Ukraine’s airfields.
|
2024-10-12 00:00:00
|
2023-08-19 00:00:00
|
article
|
pravda.com.ua
|
Ukrainska Pravda
| null | null |
|
9,048,629 |
http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html?m=1
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,915,482 |
https://blog.stratumsecurity.com/2016/06/13/websockets-auth/
|
Journey Into WebSockets Security
|
Craig Arendt
|
# Journey into WebSockets Authentication/Authorization
One subject that is often mentioned in talks about WebSockets security is that WebSockets does not implement authentication/authorization in the protocol.
This might not be familiar because, when the original research was done, there were not many applications using WebSockets. I wanted to demonstrate what this pattern looks like with an application that was using WebSockets for a critical application function.
"It is a common misconception that a user who is authenticated in the hosting web application, is also authenticated in the socket stream. These are two completely different channels." - José F. Romaniello, https://auth0.com/blog/2014/01/15/auth-with-socket-io/
Without repeating all the research about WebSockets, if you are new to WebSocket hacking, this is a great Black Hat talk which will help catch you up. talk | slides
tl;dr - Many of the same vulnerability classes exist in applications that are using WebSockets that would exist in web applications using HTTP polling, except that the way that these issues are exploited are WebSockets specific. Such as:
- Transmitting sensitive data in cleartext (WS:// instead of WSS://)
- User input validation issues
- Authentication/Authorization issues
- Origin Header Verification / Cross-site Request Forgery (CSRF)
Because authentication and authorization are not inherently handled in the protocol, it is the developer’s responsibility to implement them at the application level in WebSockets.
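As a minimal sketch of what that application-level responsibility can look like during the handshake (a hypothetical helper of mine, not tied to any particular WebSocket server library; the header names and the in-memory session store are assumptions for illustration), you might verify the Origin header against an allowlist and require a valid session credential before upgrading the connection:
```
import hmac

ALLOWED_ORIGINS = {"https://app.example.com"}   # hypothetical origin allowlist
SESSION_TOKENS = {"user-42": "f3a9c0d1e5b7"}    # hypothetical session store


def authorize_handshake(headers):
    # Reject cross-origin upgrade attempts (helps against cross-site WebSocket hijacking).
    if headers.get("Origin") not in ALLOWED_ORIGINS:
        return False
    # Require a session credential the hosting web application issued earlier,
    # e.g. delivered in a cookie or a custom header during the handshake.
    user = headers.get("X-User")
    token = headers.get("X-Session-Token", "")
    expected = SESSION_TOKENS.get(user)
    # Constant-time comparison avoids leaking token bytes via timing.
    return expected is not None and hmac.compare_digest(token, expected)


print(authorize_handshake({"Origin": "https://app.example.com",
                           "X-User": "user-42",
                           "X-Session-Token": "f3a9c0d1e5b7"}))  # True
```
The same check could equally sit behind cookies or HTTP authentication, as the RFC excerpt below notes.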
This is what the WebSockets RFC has to say about WebSocket client authentication.
This protocol doesn't prescribe any particular way that servers can authenticate clients during the WebSocket handshake. The WebSocket server can use any client authentication mechanism available to a generic HTTP server, such as cookies, HTTP authentication, or TLS authentication. (RFC 6455)
###### WebSocket Opening Handshake Sec-WebSocket-Key Header
In the WebSocket opening handshake the Sec-WebSocket-Key header is used to ensure that the server does not accept connections from non-WebSocket clients. This is not used for authentication.
```
GET /socket/
Sec-WebSocket-Key: 01GkxdiA9js4QKT1PdZrQw==
Upgrade: websocket
```
```
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Sec-WebSocket-Accept: iyRn17RxQADfC/y254mArm4wRyI=
```
The Sec-WebSocket-Key header is just a base64 encoded 16-byte nonce value, and the Sec-WebSocket-Accept response is the Sec-WebSocket-Key value concatenated with the string "258EAFA5-E914-47DA-95CA-C5AB0DC85B11", SHA1 hashed, then base64 encoded.
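That derivation is easy to check yourself; here is a short Python sketch of mine (not from the original post) that computes the accept value for any client key:
```
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455


def sec_websocket_accept(sec_websocket_key):
    # Concatenate the client key with the GUID, SHA-1 hash the result,
    # then base64-encode the digest.
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")


# Should reproduce the Sec-WebSocket-Accept value from the handshake above.
print(sec_websocket_accept("01GkxdiA9js4QKT1PdZrQw=="))
```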
###### Chrome Developer Tools WebSockets Testing
When reviewing WebSocket applications for security issues, ZAP or Burp may be able to read, or even modify, some WebSockets frames. However, I have found that in some applications attempting to modify/replay messages may break the socket connection, or otherwise go wrong. You may have better results by calling the WebSockets API directly, or using the API of the implementation used by the application, e.g., Socket.io, SockJS, WS, etc.
Chrome Developer Tools provides an easy way to view WebSockets messages, correctly unmasks data frames, and will allow you to test applications that are using WebSockets. To view WebSocket frames, go to Developer Tools, Network, WS tab:
Reviewing Slack WebSocket messages in Chrome
###### WebSocket API Basic Usage
Using the WebSocket API to send and receive messages.
```
//connect to the socket interface
var socket = new WebSocket('wss://host.com');
```
```
//on open event
socket.onopen = function(event) { console.log("Connected"); };
```
```
//on message event. return messages received from the server
socket.onmessage = function(event) { console.log(event.data); }
```
```
//send a message to the server
socket.send('simple message');
//send message syntax used in socket.io.
socket.send('42/namespace,["mymessage","hi"]');
//JSON.stringify message to the server
socket.send(JSON.stringify({"vId":null,"type":"UPDATE_USER","data":{"name":"admin","pass":"mypassword","priv":true}}));
```
###### Socket.io (WebSockets Realtime Framework) Basic Usage
If the application is using Socket.io, the server will serve the path /socket.io by default. This is where engine.io and socket.io.js are served from.
```
//jQuery load socket.io.js. (if it is not already loaded)
$.getScript('http://host/socket.io/socket.io.js');
```
```
//connect to the socket interface, and the defined namespace
var socket = io.connect('http://host/console');
```
```
//returning custom 'data' socket messages from the server
socket.on('data', function (data) { console.log(data); });
```
```
//emitting a message (equivalent to socket.send)
socket.emit('Simple message');
//emitting a custom socket message
socket.emit('command', 'cat /etc/passwd');
```
###### Server Console Application
This is one example of an application which required authentication for the web application, but not for the WebSocket connection.
There was server console functionality included in the application stack, that used Socket.io to communicate system commands in realtime.
In reviewing the socket frames when authenticated to the console, it was evident that WebSocket messages containing system commands were passed without authorization tokens, or authentication required before the socket connection was established.
So from this point, it was just a matter of connecting to the WebSocket endpoint directly which did not require any authentication:
`var socket = io.connect('http://host/console');`
Returning custom 'data' socket messages from the server (so we get responses to our commands):
`socket.on('data', function (data) { console.log(data); });`
Emitting a custom socket message:
`socket.emit('command', 'cat /etc/passwd');`
Emitting a command, and receiving the socket response
###### Round.io (Demo chat application)
Someone created an interesting concept for a chat application that uses WebSockets to allow you to chat with people around the world, and displays their location on a map. The UI will use the browser geolocation to show where you are chatting from; if this is not supplied, the UI will not allow the user to chat. For **demo purposes**, and to play with the concept of application authorization, deny the browser access to your geolocation when it is requested. https://round.io/chat/
Connect using the WebSocket API:
`var socket = new WebSocket('wss://round.io/socket.io/?EIO=3&transport=websocket');`
Send a chat message with coordinates, and nickname. eg.,
`socket.send('42["outgoing message",{"msgtext":"Nobody exists on purpose Summer","lat":53.06,"lng":6.57,"nickname":"Morty"}]');`
round.io chat application
Even if an application does not provide any visible user inputs, communication sent to the WebSocket can still be manipulated, allowing attacks against users connected to the socket or against the server.
Auth0 has a nice post on how to require authentication in Socket.io with cookie-based or token-based authentication: https://auth0.com/blog/2014/01/15/auth-with-socket-io/
Authentication can also be passed in the WebSocket URI when connecting. The issue with this method is that authorization will be passed in a GET request which will remain latent in proxy logs, so that issue will need to be mitigated: http://dev.datasift.com/docs/api/streaming-api/websockets-streaming
The messaging service Slack takes this approach in authenticating to their real time messaging (RTM) API. Their API describes a single-use WebSocket URI that is only valid for 30 seconds. https://api.slack.com/rtm
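A minimal sketch of that single-use, short-lived URI idea (my own illustration, assuming an HMAC-signed expiry timestamp; it is not a description of what Slack actually does server-side):
```
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # hypothetical secret kept on the server


def make_ws_token(user_id, ttl_seconds=30):
    # Token = user id, expiry timestamp, and an HMAC over both.
    expires = str(int(time.time()) + ttl_seconds)
    sig = hmac.new(SECRET, f"{user_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"


def check_ws_token(token):
    # Validate the signature and reject tokens past their expiry.
    try:
        user_id, expires, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)


token = make_ws_token("user-42")
print(check_ws_token(token))  # True until the 30-second window passes
```
Making the token truly single-use additionally requires the server to remember which tokens have already been redeemed.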
If you have any comments about this, you can find me here.
*Craig is a security consultant at Stratum Security. Stratum is a boutique security consulting company specializing in application security, data exfiltration and network security.*
| true | true | true |
One subject that is often mentioned in talks about WebSockets security, is how WebSockets does not implement authentication/authorization in the protocol.
|
2024-10-12 00:00:00
|
2016-06-13 00:00:00
|
article
|
stratumsecurity.com
|
Stratum Security Blog
| null | null |
|
1,866,543 |
http://moncurling.posterous.com/signup-forms
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,810,550 |
https://reallifemag.com/hold-the-line/
|
Hold the Line — Real Life
|
Lauren Fadiman
|
Seven years before Nikola Tesla imagined (in a 1926 interview with *Collier’s*) a future in which wireless connection would allow us to “see and hear one another as though we were face to face” with “instruments … [that] fit in our vest pockets,” the British cartoonist W.K. Haselden published his own prophetic comic in the *Daily Mirror*. It depicts a harried gentleman disturbed at the most inopportune moments by his “pocket telephone,” which rings while he is at a concert with a date, while he is being handed a crying baby, and while he is escorting his bride down the aisle. Whereas Tesla focused on the potential for heightened — or, literally, *extended* — human connection, Haselden illustrates a world where pocket telephones, “the latest modern horror,” alienate us from our immediate embodied circumstances and disrupt precious moments of domestic intimacy.
Such concerns about the sanctity and intactness of the domestic sphere abound in early discourse about the telephone. In newspaper articles of the 1870s and ’80s, featuring such optimistic titles as “The New Terror” and “An Electrical Outrage,” commentators speculated that homes with telephones would soon become sites of “enormous danger,” their residents “bared to the least vibrations of the roaring world.” In one such article, the author imagines a mother with a baby in her arms sustaining “fatal injuries” after hearing a violent political polemic over the phone. In the March 2, 1887, edition of the *Chicago Daily Tribune*, an article describes — and, I think, parodies — a mother who is afraid to call a home where someone has scarlet fever, as she is “sure that there would be great danger of infection over the wire.”
Of course, such fears did not halt the spread of telephone wires across the country. Nor did variations on these concerns ultimately prevent the adoption of cell phones. But since their widespread adoption, cell phones have taken on a different relation to the domestic: not as an infiltrator of, but a participant *in*, that intimacy. They mediate many of our closest relationships — and are themselves our most constant companions. After all, many of us spend all our waking hours with a phone within reach, and even sleep with them beside our pillows. We continually reaffirm the mutualism between human and machine by allowing our phones to track our steps, sleep cycle, menstruation, heart rate, and more, permitting them ever more insight into our bodies.
But we presume a kind of insight into their bodies as well, wielding a whole arsenal of at-home folk remedies to “treat” phones as we might treat scraped knees or earaches of a family member. Those cures — perhaps best represented by the bowl of dry rice that is the ubiquitous prescription for a soaked phone — are ostensibly about addressing technical issues but in practice they bring phones under a kind of discursive control, helping us make sense (albeit false sense) of their largely obfuscated inner workings. Through these rituals, our phones cease to be distant, technical objects and come to feel as familiar and intuitive as our own bodies.
In a 1987 paper, Carol Cohn famously showed how nuclear engineers used domesticating language to treat nuclear warheads as infants or pets: “Pat it,” she writes, “and its lethality disappears.” The canon of folklore and folk rituals surrounding phones reflect a similar tension — that of danger encroaching on the domestic sphere — followed by a similar urge to self-soothe, which is done by taking deliberate steps to integrate the danger *into* the domestic sphere. On the one hand, there are countless examples of folklore about the existential “threats” posed by phones, from practical warnings against keeping them *too *close to your head at night to elaborate stories of haunted phones transmitting texts and phone calls from the Other Side. On the other hand, there are folk rituals that make even our most extreme concerns manageable, claiming that our phones are actually so straightforward that they can be cured with extraordinarily modest means: rice, toothpaste, baking soda. How could something so simple haunt or hurt you?
These folk rituals help us assimilate the “new” into our lives by making it compatible with the old, making it easier to classify and explain. Despite our ostensible modernity and our sense of ourselves as comfortable with technological progress, folk beliefs about technology always bubble up through the cracks, of phone screens and otherwise.
The development of communication technology is widely figured as a kind of inevitable narrative of progress, arcing ever toward sleeker, smaller, smoother, more streamlined, more user-friendly. The world heralded by the cell phone is one where information, capital, and content flow freely. The current moves 24/7, and one can join in anywhere. But this was not inevitable; rather, that vision was prioritized over other alternatives by the military and emergency services that developed radiotelephony — the midcentury predecessor to cellular technology — as well as early adopters of the “car phone.” In his autobiography, Martin Cooper, the inventor of the first mobile phone as we know it, explained that “it took a team of skilled and energetic people to build that phone and make it work. And thousands more executives, engineers, and marketers to create today’s trillion-dollar industry.”
But for most phone users, the efforts of those thousands are invisible, taken for granted. When a gadget functions as expected, one needn’t actively think about *why* it works, what it means for it to “work,” or what particular uses for the device have or haven’t become naturalized over the course of its development. Sociologist Bruno Latour has pointed out this paradox: “The more science and technology succeed, the more opaque and obscure they become.”
But when that success sputters — when devices act up or break down — folklore may take over, giving narrative and ritual form to widely shared misconceptions and concerns about technology. The lack of practical knowledge about how phones actually work opens a liminal space characterized by — to use folklorist Tok Thompson’s definition from *Posthuman Folklore *— “new categorizations” and “new ontologies,” new ways of understanding what phones are, how they work, and why. Technical ignorance forces us to fill knowledge gaps with material that already exists in our cultural inventories. These inventories may well include practical knowledge about how *other *devices work, but they are just as likely to have been shaped by pop culture as popular science.
There are many ways to define folklore, but where phones are concerned, it often takes the form of jokes, rumors, and personal experience narratives, which feature in the conspiracy theories, panics, joke cycles, and the like that periodically circulate online — some so compelling as to achieve meme status, as with the tweets from 2017 about the “FBI agent watching me through my phone,” or a more recent TikTok challenge that claims to identify unfaithful boyfriends based on whether they put down their cell phones face up or not.
Folklore serves many affective purposes: Among other things, it helps us cope with and occasionally criticize the world around us. Simon Bronner notes in *The Meaning of Folklore *that we often call upon or produce folk material to “symbolize, and thereby control, anxiety or ambiguity.” Meanwhile, another folklorist, Alan Dundes, describes folklore as a “socially sanctioned outlet” through which to express all kinds of controversial, complex, tricky, and taboo thoughts. Amid a climate of presumed surveillance, folklore allows us to communicate without necessarily saying what we mean, expressing concerns without necessarily sounding paranoid, attracting suspicion, or inviting retribution.
Hence, it should come as no surprise that in social media feeds and in places like Reddit, WikiHow, and Quora, phone users continue to question to what extent the device tucked into their pocket or bra will burn them, give them cancer, make them infertile, track their movements, hemorrhage their data, eavesdrop on their conversations, tether them to the dead, blow up the gas pump, blow up their head, cause them to spontaneously combust, trick them into incest or infidelity, and any number of other things. In some cases, these curiosities and concerns may be allegorical or metaphorical, pointing toward larger or more nebulous anxieties: Are the experiences and relationships we mediate through our phones “real” in the same way that offline experiences are real? Do we control these devices, or do they control us? What else might be lurking inside of Pandora’s Box?
Given the sheer amount of capital that has been poured into shaping and managing our relationships with communication technology, to read hauntedness, monstrousness, malfeasance, volatility, and danger into phones is a kind of inadvertent political statement — a refutation of the advertising that emphasizes ease, convenience, and a kind of moral obligation to strive after ever more advanced technologies. If, as Leah Lowthorp notes in an essay about the joke #CRISPRfacts hashtag, the folklore of science and technology offers “a glimpse into how the wider public is dealing with … complex scientific developments,” then the fears expressed about phones demonstrate lingering skepticism about cellular devices — even if most of us are nonetheless resigned to buying the next model.
Folklore isn’t just a medium for people to articulate their uncertainties or criticism about technology. It is also a mode through which we answer our own questions and produce our own meaning. Rituals can allow us to assert control and ownership over things that don’t quite feel like “ours” yet — whether that is an apartment we rent or a new phase of life we have just entered. Many vernacular and ritual treatments of phones are about control — reclaiming a gatekept technology, resisting how it has reshaped our lives, retrofitting it to better serve our particular needs. Common repair rituals allow users to reject the foreign expertise of tech companies (which may not serve our interests first) and the alienation that comes with sending intimate devices off for repairs. When we succeed in managing our technical difficulties at home, we can further distance ourselves from the unyielding fact of our growing dependence on tech companies for the devices that now make daily life seem possible.
But folklore about devices is about care as well as control. Unlike other technologies — microwaves, dishwashers, cars, even laptops, each with a relatively straightforward function — the phone’s rapid, ongoing evolution causes it to continuously evade stable categorization: It is a communications technology, a camera, a computer, a compass, a television, a radio, a watch, a thermometer, a pedometer, a weathervane, a map, a toy — and with interlocutors like Siri, something more even than that. It is certainly more than a mere tool. And that ontological nebulousness resonates with us at least as much as our lack of understanding confounds us: the multifunctionality of the phone resembles the multivalence of organic life forms.
“Our mental processes,” Thompson writes, are “increasingly enmeshed with the digital realm,” a merger that inevitably “changes our view of ourselves, our very nature.” It is easy enough to say that phones act as extensions of our body — they are, after all, designed to — but does that resonate with us in an affective, rather than merely symbolic, sense? At-home fixes for phones may be about more than asserting our autonomy in a rational way. I know I recall well the sick feeling in my stomach the last time I watched my phone go tumbling down the stairs, the queasy reluctance with which I retrieved its shattered pieces. Seeing into the guts of my phone left me in the throes of something like Kristevan abjection, reminded of the sad fact of my own materiality.
In “Thing Theory,” Bill Brown writes, “we begin to confront the thingness of objects when they stop working for us … when their flow within the circuits of production and distribution, consumption and exhibition, has been arrested, however momentarily.” Such, he contends, are moments in which objects “assert themselves as things.” Many objects, when they assert themselves this way, prompt us to make counter-assertions: for instance, that they are trash. We quickly discard broken toys. But when phones assert themselves as things, the stakes are different: We reject their assertion of thingness with a kind of personal vehemence, as though objecting to the notion that *we *are things. How could something so close to us — something we rely on so heavily in everyday life — be alive one second and trash the next? What does that say about us?
So we turn on our devices the same canon of possibly spurious home remedies we turn on ourselves — rituals of care whose actual cogency, coherence, and effectiveness may be irrelevant. The meaning is embedded in the energy expended, the concentration required to complete the ritual successfully. By rejecting the thingness of cell phones, we reject the culture of disposability encouraged by their corporate makers. And at the same time we also acknowledge our strange, chimeric entanglement with these elusive devices.
Acts of care for our cell phones certainly do not do much to disempower the tech companies that shape our world, but even such small gestures of tenderness may enable us to imagine past the fatalism that dominates so much discourse about how we relate to our devices — and how, eventually, they may relate to us. In imagining cell phones to be vulnerable in the same way human bodies are, we displace back onto them some of the vulnerabilities they create in us: exposure to surveillance, data exfiltration, misinformation, potential physical side effects (ranging from radiation exposure to Carpal tunnel), and more. And when we care for our phones, we imagine relationships with these devices that defy planned obsolescence, that reject the inevitability of disposability. Instead, we practice a kind of symbiosis in which our phones are more than mere tools, and we are more than simply “users.”
And even when our phones are finally, irrevocably “dead” — for that is how we describe it — many of us weirdly refuse to discard them, instead keeping their lifeless bodies in a spare bedroom drawer. On TikTok, there are videos set to a backing track that is just the word *Apple* repeated, as dead phones are stacked 10, 12, 14 high. My three are in my childhood bedroom right now; I can picture well their final resting place.
There is one WikiHow article I have practically memorized, just in case: It instructs readers to submerge a wet phone for 48 to 72 hours in four cups of rice — specifically instant rice, because white and brown are less absorbent — and plan to “rotate the phone to a different position every hour until you go to sleep.” If I were to drop my phone in water right now, I would follow these steps to the letter — not because I think an hourly rotation makes much of a difference, but because I would want to feel that I had done everything I could.
| true | true | true |
On the emergence of a folklore of screens
|
2024-10-12 00:00:00
|
2022-08-15 00:00:00
|
article
|
reallifemag.com
|
Real Life
| null | null |
|
588,161 |
http://mast-economy.blogspot.com/2009/04/10-days-of-good-news-round-out-april.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,144,807 |
http://blog.veitheller.de/Scheme_Macros_IV:_Deconstructing_Classes.html
|
Scheme Macros IV: Deconstructing Classes
| null |
It’s time for another installment of my series on Scheme macros. Previously, we’ve talked about defining modules, generic functions, and local variable binding using macros. This time I want to write about classes and how we can define an interface for object-oriented programming. We will be going down a similar route as we did with both modules and generic functions, and if you read those posts, the definitions we explore here might come naturally to you. As always, the code is online, this time in the form of a zepto package that I wrote a few years ago.
As usual, we will start by defining an API, then slowly walk through one possible way of implementing it before wrapping up and concluding with caveats and possible extensions. Are you excited? I’m excited!
## An API for OOP
We’re going to implement a single-inheritance system, more similar to Smalltalk than to Java. Let’s sketch out an API for defining classes. As always, we’re going to do so in my own little Lisp called zepto. This time we’re going to rely on some of the metaprogramming APIs of the language, and a concept that I call atoms. Called `keywords` in some other Lisps, they are symbols prefixed with a colon, like `:this`. They always evaluate to themselves. I will cover why they are useful in another blog post; for now, you know what they are.
```
(class MyClass
(properties
:mykey
(:myotherkey :default 0))
(functions
(->string (lambda (self)
(++ "<A Person: " (->string (Person:get-mykey)) ">)))))
```
Okay, what’s going on here? We define a class, give it the name `MyClass`, and then separate properties from functions. Properties can optionally have defaults, and functions take a reference to `self`1.
What functions get generated from this definition?
```
; we’re able to define instances
(define instance (MyClass :mykey "foo"))
; we can also get and set properties
(MyClass:get-mykey instance) ; => "foo"
(MyClass:set-mykey instance "bar") ; => MyClass
; we can call the functions we defined
(MyClass:->string instance) ; => "<A Person foo>"
; we can check whether somehing is an instance of the class
(MyClass? instance) ; => true
; we can get all the properties associated with a class
(MyClass:get-properties) ; => [:mykey, :myotherkey]
```
Wow, that’s a lot of generated code. Some of them are necessary to make classes useful, and some are just nice to have in there and interesting to implement.
So far, all we have implemented is a kind of typed hashmap with generated accessors—which is sufficient for some languages to be object-oriented. We also want to have inheritance, though, because that makes the whole implementation more fun and interesting.
```
; the first argument is the parent class
(inherits MyClass MyOtherClass)
```
We will limit ourselves to single inheritance for two reasons. Firstly, I like it better that way. More importantly, though, it avoids a discussion we otherwise would need to have about how to best resolve inheritance order. There are different ways to go up the inheritance chains, and some of them are quite interesting. It is, however, a discussion I’d like to avoid for the purposes of keeping this blog post short and crisp.
We will also only inherit functions. In the scheme we are implementing, this is a little simpler, but can also easily lead to bugs. If you want to work on this some more, I have some pointers for you at the end. The whole thing feels a bit more like prototypes than classes, really, but all of that is fixable.
Anyway, let’s try to write a little bit of code, shall we?
## Implementing classes
We’re going to do something simple but sloppy by defining all functions and classes directly in the environment instead of keeping track of our objects in another data structure. We will talk about this tradeoff a little more when wrapping up.
### Implementing inheritance
I’m saying all of this because we’re going to start by implementing the simpler part of our API: inheriting. As always, let’s start with a skeleton macro.
```
(define-syntax inherits
(syntax-rules ()
((_ parent child)
; do something
)))
```
Okay, so we are getting the parent first, then the child. At this point, both of them have already been defined. We will thus reach into the environment and pull out all of the functions associated with both classes. This is where zepto specifics come into play, because we will be using the functions `with-environment`, `env->hashmap`, and `hash:keys`. All of those are fairly straightforward, and I’ll talk about them a little bit when we discuss the implementation.
```
(define-syntax inherits
(syntax-rules ()
((_ parent child)
(with-environment env
(let* ((funs (env->hashmap env))
(names (hash:keys funs))
(filter-names
(lambda (name)
(filter ($ (string:starts-with %
(++ (->string name) ":")))
names)))
(parent-funs (filter-names 'parent))
(child-funs (filter-names 'child)))
; do something
       )))))
```
Okay, this is a little weird, but I promise it is not as scary as it seems at first. First, we use `with-environment` to bind the current interpreter environment to a name called `env`. We then transform this environment into a hashmap where the keys are the names and the values are the objects bound to those names, and give it the name `funs`. We only need the names, so we get all the hash keys using `hash:keys`. Then we define a filter function called `filter-names` that reaches into those names and filters them by prefix. I should at this point probably explain the weird `($ ... % ...)` syntax: this is just a shorthand for `(lambda (%) ...)` to save typing.
When we’re done with all that, we are ready to filter the environment for anything that starts with the name of the parent and a colon and the name of the child and a colon. We assume these to be the parent and child functions.2
Okay, so now we have the parent and child functions. What do we do with them? We call `map` on them, of course. That usually solves our problems. Let’s write a mapping skeleton and then think about what we could actually do to make these functions work.
```
(define-syntax inherits
(syntax-rules ()
((_ parent child)
(with-environment env
(let* ; our bindings
; ...
(map (lambda (parent-fun) ...) parent-funs))))))
```
Okay, this looks reasonable. We map over the parent functions, because we need to inherit those. But what do we need to do? First, we need to find out the new name the function should have. Maybe we can just use string substitution?
```
(define-syntax inherits
(syntax-rules ()
((_ parent child)
(with-environment env
(let* ; our bindings
; ...
(map (lambda (parent-fun)
(let ((nfun (string:substitute parent-fun
(->string 'parent)
(->string 'child))))
; ...
                  ))
              parent-funs))))))
```
Fig. 7: Mapping over functions II: Electric Boogaloo.
Alright, this looks about yanky enough to be correct. Now we need to check whether we already have a function of that name in the class, and define the new function otherwise.
```
(define-syntax inherits
(syntax-rules ()
((_ parent child)
(with-environment env
(let* ; our bindings
; ...
(map (lambda (parent-fun)
(let ; inner bindings...
(unless (in? child-funs nfun)
(eval `(define ,(string->symbol nfun)
,(funs parent-fun))
env))))
parent-funs))))))
```
`eval` your way to freedom.
Don’t you just love the smell of `eval` in the morning? In this case we use it to define the new function in the environment we started at (the one we obtained using `env`). If we didn’t use that environment, this `define` would be local to the lambda we execute it in, and basically useless. Important side note: remember that `funs` is the environment as a hashmap here. We can reach into that hashmap by calling it with a key, like so: `(hash key) ; => val`. We use this to get the actual function we are looking at from the name3.
Okay, so what are we doing, from start to finish? We reach into the environment and pick out all of the functions of parent and child. Then we go through the functions of the parent, rename them for the child, and if they are not defined in the child, we define them using a templated `eval`.
This approach is highly flawed, and I will talk a bit about why and how in the conclusion, but for now we can feel pretty good about ourselves: we basically implemented inheritance!
### Implementing `class`
Implementing the `class` form will be much more work, but in many ways it will be simpler, so don’t despair at the walls of code I’m about to throw at you! You might want to take a little breather before continuing, though, for I also took one before writing this part. There’s a lot of ground to cover, and you might want to stretch your legs a little first.
As before, we start with a simple skeleton to break the ice. The `class` macro takes a name and a number of forms.
```
(define-syntax class
(syntax-rules (properties functions)
((_ name (properties props ...) (functions funs ...))
; do something
)))
```
A simple skeleton for `class`.
Okay, that doesn’t look too bad. So what do we do with these values now? Basically, we “just” have to define a few templates in which to insert the names and properties and then define the functions bound to the class we are looking at. That means we have to parse the `properties` and `functions` variables a bit4.
Let’s go through those function templates one by one. All of these individual functions will be simple, I promise. All of the complexity will come from the composition of those building blocks.
#### Typechecking and getting properties
Let’s begin by defining two simple functions, the function that checks whether an object is an instance of the class we’re defining, and a function that returns the properties of the class.
```
(define-syntax class
(syntax-rules (properties functions)
((_ name (properties props ...) (functions funs ...))
(with-environment env
(begin
; the typechecking function
(eval `(define (,(->symbol name "?") obj)
(and (hash? obj)
(eq? (hash:keys obj) (quote ,'props))))
env)
; get-properties
(eval `(define (,(->symbol name ":get-properties"))
(quote ,'props)) env)
; ... to be continued
)))))
```
Fig. 10: Defining the first functions on our object.
As before, we get the environment that we start out with, so that we can extend it. Then we begin evaluating templates. The name of the typechecking function will be the name of the class plus a question mark. It takes one argument and checks whether it is a hashmap and the keys are equal to the properties we received. This is a little primitive, but very simple.
`get-properties` itself just returns the list of properties. Very simple, right?
#### Getting and setting properties
I think now we are ready to define our getters and setters.
```
(define-syntax class
(syntax-rules (properties functions)
((_ name (properties props ...) (functions funs ...))
(with-environment env
(begin
; ... type checking and get-properties
; the getters
(map ($ (let ((% (if (list? %) (car %) %)))
(eval
`(define (,(string->symbol
(++ (->string 'name) ":get-"
(->string (atom->symbol %)))) self)
(self ,%)) env)))
'props)
; the setters
(map ($ (let ((% (if (list? %) (car %) %)))
(eval
`(define (,(string->symbol
(++ (->string 'name) ":set-"
(->string (atom->symbol %)))) self val)
(hash:set self ,% val)) env)))
'props)
; to be continued
)))))
```
This is a little more involved, isn’t it? The good news is that they’re almost identical. The bad news is that even one of these forms is kind of complex. Let’s walk through the getters first.
We map over the properties that we defined, because we have to create a getter for each of them. First, we check what form we have in front of us. If it’s a list, we assume that it’s a form with a default value and take its first element, the property name. Otherwise we just take the symbol as is.
Then we enter a templated `eval` again. We stitch together a name from the type and property, and a body that will just look up the value in the hashmap.
The only thing that changes in the setter is that, in the body, we set the value in the hashmap rather than getting it.
Operationally, all of this is quite straightforward: we just wrap hashmap accessors. Of course all of it is a little complicated because we dynamically create these functions, but the fact remains that the core of our functionality is very slim.
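In use, the accessors generated for a hypothetical `Point` class with an `:x` property would behave roughly as below. Whether `hash:set` updates the hashmap in place or returns a fresh copy is a detail of zepto's hashmap API that the template simply inherits, so the comment hedges on it:

```
(Point:get-x p)     ; => whatever p stores under :x, i.e. (p :x)
(Point:set-x p 10)  ; => the result of (hash:set p :x 10); this may well be
                    ;    an updated copy rather than a mutation of p itself
```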
#### Instance functions
So, what’s missing? We have to define the initializer and the user-provided functions. Let’s start with the simpler part, the functions that the user defined.
```
(define-syntax class
(syntax-rules (properties functions)
((_ name (properties props ...) (functions funs ...))
(with-environment env
(begin
; a whole lot of functions
; defining user functions
(map ($
(eval `(define
,(string->symbol (++ (->string 'name) ":"
(->string (car %))))
,(cadr %))
env))
'funs)
; to be continued
)))))
```
This is very similar to what we did with getters and setters. We map over the functions, stitch together a name, and bind the function to it as is. And that’s all we have to do for this part of the definition.
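Sticking with the invented `Point` example, a `(sum (lambda (self) ...))` entry in the `functions` block would end up as a binding roughly equivalent to the following, called with the instance as its explicit `self` argument:

```
; what the templated eval effectively defines for (sum (lambda ...))
(define Point:sum
  (lambda (self) (+ (self :x) (self :y))))

(Point:sum p) ; => the sum of the :x and :y values of p
```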
#### The initializer
Now all that is left for us to do is create an initializer. We’re going to make this easy on ourselves and reuse another macro named `defkeywords`. I will talk about the implementation of this macro in another installment of this series; for now I’ll just give you a little tutorial on how to use it, and then we will see how we can use it to implement a simple initializer.
```
(defkeywords (mykeywordfn myregulararg) (:mykeywordarg default 0)
(+ myregulararg mykeywordarg))
```
A little tutorial on `defkeywords`.
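To illustrate, calling the function from the figure above might look like this; the exact syntax for passing a keyword argument is an assumption about `defkeywords`, not something spelled out here:

```
(mykeywordfn 10)                  ; mykeywordarg falls back to its default of 0 => 10
(mykeywordfn 10 :mykeywordarg 5)  ; keyword argument supplied explicitly => 15
```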
In a nutshell, `defkeywords` adds another form to definitions that define optional arguments and their defaults. This is a very useful form in general, but you might have realized that it also is very similar to the form we use to define properties. We can use that to make the initializer implementation extremely simple.
```
(define-syntax class
(syntax-rules (properties functions)
((_ name (properties props ...) (functions funs ...))
(with-environment env
(begin
; all of our other functions ...
; generating our initializer
(eval
(macro-expand
(list 'defkeywords (list 'name)
(list:flatten 'props)
(cons 'make-hash
(list:flatten
(map ($
(if (list? %)
(list (car %)
(atom->symbol (car %)))
(list % (atom->symbol %))))
'props)))))
env))))))
```
Fig. 14: Using `defkeywords` for our initializer.
This form, too, follows the general form of evaluating a template. But because `defkeywords` is a macro, we also manually have to call `macro-expand` in zepto. But what actually are we expanding and evaluating?
What we want to end up with is a definition using `defkeywords` named after the class, with no regular arguments, and all of the properties as keyword arguments. This is what we do in Figure 14 above. The only work that we have to do to get to this point—other than concatenating the whole shebang—is flattening the properties list.
The body of the function should just create a hashmap from the given properties. For this we use the function `make-hash`. For the arguments we map over the properties once more and make key-value pairs, from the atoms that the macro was passed to the symbols that end up being defined in the function body.
This is a little arcane, so let’s look at one example expansion:
```
(class MyClass
(properties
:mykey
(:myotherkey :default 0))
; ...
)
; the initializer expands to:
(defkeywords (MyClass) (:mykey
:myotherkey :default 0)
(make-hash (list :mykey mykey) (list :myotherkey myotherkey)))
```
This should help clear things up a little.
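Putting it all together, constructing and using an instance of `MyClass` should look roughly like this. The keyword-passing syntax is the same assumption as above, and note that the generated type check would currently misbehave for this class because `:myotherkey` carries a default value (see the caveats below):

```
; :mykey has to be supplied, :myotherkey falls back to its default of 0
(define obj (MyClass :mykey "hello"))

(MyClass:get-mykey obj)       ; => "hello"
(MyClass:get-myotherkey obj)  ; => 0
(MyClass:set-mykey obj "hi")  ; => an object with :mykey bound to "hi"
                              ;    (subject to how hash:set behaves)
```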
This concludes our implementation! Let’s think a little bit about whether it is any good and how you could improve it if you felt so inclined!
## Caveats
I alluded to multiple weaknesses in the class implementation we just built. Now it’s time to review them, and to think about how to solve them. If this post excited you, I encourage you to try and come up with possible solutions for these problems; I’m happy to help you solve them if you shoot me a message!
Here is an unabridged list fit for crushing hopes and dreams:
- We’re not inheriting properties. This is both easily solvable and very bad, because every time a superclass references one of its own properties, we will have a bad time. You could rewrite the constructor using `get-properties` of both the parent and the child when inheriting. Don’t forget to rewrite `get-properties` itself too!
- We can’t actually use any functions of the superclass that we overwrote. There is no runtime resolution order, just flat functions operating on glorified hashmaps. This could be solved using a class registry (could simply be another global hashmap).
- While we’re on the topic of a class registry, let’s think about how we looked up the functions when inheriting. We just pulled out functions that fit a naming scheme. Anyone could inject functions into our unsuspecting environment that also fit this name. A class registry could fix this too, by making sure no extraneous functions end up in our class definitions.
- The type checking primitive is both too simple—which can be solved, again, with a class registry—and buggy. It doesn’t work with default values, because we do not strip them out of the `props` value that we receive in the macro. `get-properties` suffers from the same bug.
- For the sake of brevity, we do no error checking whatsoever. What if we put in numbers instead of symbols, or variables instead of function bodies? A mature system should check for that and make sure that the user gets actionable error messages.
None of these problems is unsolvable. They might require a decent amount of work, but it’s worth reminding yourself that the system you are starting with is less than 50 lines of code and is doing a whole lot of things for us already.
## Conclusion
Two years ago, while working on zepto, I asked myself how CLOS worked. Instead of looking at the source right away, however, I tried implementing my own little class system, and then compared it to CLOS. Of course my system ended up being orders of magnitude more primitive and clunky, but it was a fun little exercise and taught me more about object-oriented programming than that dreaded third semester in college when I had to implement design patterns in Java.
It also was an excuse for me to dive deeper into how a better function templating system could work. Above we mostly just interpolated `define` forms and pushed them into `eval`. This could very simply be abstracted into a neater API that better expresses intent without having to wade through all of the boilerplate. Dynamically generating functions is fun, but maybe next time we’ll learn how we can have the cake and eat it, too.
I hope you got as much out of reading this as I got out of writing it! See you very soon!
#### Footnotes
1. `self` is alternatively called `this` in other languages.
2. This is not necessarily true. We could easily generate another function that fits this naming scheme, but doesn’t actually belong to the class. If we want to avoid this bug, we need to keep track of the classes in another data structure. See my blog post on implementing generics for one possible method using a hashmap.
3. Unquoting `parent-fun` would have a similar effect, I just want to make sure we are not using an accidentally shadowed binding. Unlikely, but possible.
4. For those of you who aren’t as familiar with reserved words in `syntax-rules`, let me give you a brief intro: the first argument to `syntax-rules` is an optional list of reserved words that you can treat as literals in the pattern matching head. This makes it easier to define more complicated control structures, and is perfect for our use case. For more information I suggest you look at subchapter 3.3 of this wiki page.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-05-23 00:00:00
| null | null | null | null | null | null |
17,345,548 |
https://www.theatlantic.com/international/archive/2018/06/zte-huawei-china-trump-trade-cyber/563033/?single_page=true
|
Beijing Wants to Rewrite the Rules of the Internet
|
Samm Sacks
|
# Beijing Wants to Rewrite the Rules of the Internet
Xi Jinping wants to wrest control of global cyber governance from the market economies of the west.
It’s never been a worse time to be a Chinese telecom company in America. This evening, the Senate is set to vote on whether to restore a ban on U.S. company sales to prominent Chinese telecom player ZTE, a penalty for its illegal shipments to Iran and North Korea. The bill also includes a measure that would ban U.S. government agencies from buying equipment and services made by ZTE and Huawei, one of its competitors, to tackle cyber threats to U.S. supply chains. Meanwhile, a revelation that Huawei was among the companies with whom Facebook had data-sharing agreements, which allowed device makers to access user data and that of their friends, sparked fears that the Chinese government now possesses a treasure trove of sensitive data on U.S. citizens.
ZTE and Huawei have become flashpoints in the Trump administration’s confrontation with Beijing over cybersecurity, investment, trade, and technological leadership. All this comes as the administration slapped tariffs on $50 billion in Chinese goods last Friday. But amid the hysteria surrounding these two companies, we may be missing a less obvious but potentially more impactful challenge: China’s ambitions to radically overhaul the internet.
In late April, just days after the Commerce Department announced the denial order against ZTE, Xi Jinping, the president of China, gave a major speech laying out his vision to turn his country into a “cyber superpower.” His speech, along with other statements and policies he has made since assuming power, outlines his government’s ambition not just for independence from foreign technology, but its mission to write the rules for global cyber governance—rules that look very different from those of market economies of the West. This alternative would include technical standards requiring foreign companies to build versions of their products compliant with Chinese standards, and pressure to comply with government surveillance policies. It would require data to be stored on servers in-country and restrict transfer of data outside China without government permission. It would also permit government agencies and critical infrastructure systems to source only from local suppliers.
China, in other words, appears to be floating the first competitive alternative to the open internet—a model that it is steadily proliferating around the world. As that model spreads, whether through Beijing’s own efforts or through the model’s inherent appeal for certain developing countries with more similarities to China than the West, we cannot take for granted that the internet will remain a place of free expression where open markets can flourish.
China has been open about its intentions to change how the world addresses development. As part of that vision, for over a decade, it has advocated for something its leaders call “cyberspace sovereignty” as a rebuke to established actors in internet governance like the United States, Europe, and Japan. To advance this model, Xi created a powerful government body to centralize cyber policy. In addition to passing a major cybersecurity law, China has pushed through dozens of regulations and technical standards that, in conjunction, bolster the government’s control of and visibility into the entire internet ecosystem, from the infrastructure that undergirds the internet, to the flow of data, to the dissemination of information online, to the make-up of the software and hardware that form the basis of everything from e-commerce to industrial control systems. In a 2016 speech, Xi called for core internet technologies deemed critical to national and economic security to be “secure and controllable”—meaning that the government would have broad discretion, even without specific written regulations, to decide how it protects information networks, devices, and data.
China’s cyber governance plan appears to have three objectives. One is a legitimate desire to address substantial cybersecurity challenges, like defending against cyber attacks and keeping stolen personal data off the black market. A second is the impulse to support domestic industry, in order to wean the government off its dependence on foreign technology components for certain IT products deemed essential to economic and national security. (In effect, these requirements exclude foreign participation, or make foreign participation only possible on Beijing’s terms.) The third goal is to expand Beijing’s power to surveil and control the dissemination of economic, social, and political information online.
To achieve these objectives, Beijing has instituted standards that force foreign companies to build China-only versions of their products, and to comply with government surveillance policies. Government security audits allow Beijing to open up these companies’ products and review their source code, putting their intellectual property at risk, which was documented comprehensively for the first time last March in a report by the Office of the United States Trade Representative. Article 37 of the cybersecurity law also increases government control over the sort of data that can be transferred out of the country, while unwritten rules reward companies that store data on local servers.
Many of these elements serve a dual purpose: supporting domestic industry while further closing off the internet. Freedom House ranks China as “the worst abuser of internet freedom,” noting that its government affiliates “employ hundreds of thousands or even millions of people to monitor, censor, and manipulate online content.” Such policies also effectively exclude foreign content, leaving Chinese providers with uncontested market openings.
But Beijing wants not only to prevent the United States from interfering with its domestic cyber policies: It also wants to set the tone for how the rest of the world governs the internet. To exert influence on its partners, it uses direct outreach to foreign governments, as well as massive investments in internet technologies through the Belt and Road Initiative, extensive military-to-military cooperation, and growing participation in international institutions.
In 2015, for instance, China selected Tanzania (China is Tanzania’s largest trade partner) as a pilot country for China–Africa capacity-building, giving Beijing substantial influence over Tanzania’s government. China used that influence to foster collaboration around cyberspace governance. Since 2015, Tanzania has passed a cyber-crime law and subsequent restrictions on internet content and blogging activity that parallel China’s content controls. Both have been informed by technical assistance from the Chinese government. At a roundtable in Dar es Salaam sponsored by Beijing, Edwin Ngonyani, Tanzania’s deputy minister for transport and communications, explained, “Our Chinese friends have managed to block such media in their country and replaced them with their homegrown sites that are safe, constructive, and popular.” Among other countries where China invests heavily, Nigeria has adopted measures requiring that consumer data be hosted in Nigeria, while Egypt has pending legislation that would mandate ride-sharing companies to store data in-country while also making it more accessible to authorities. Chinese partners like Ethiopia, Sudan, and Egypt engage in aggressive online content control.
Other countries, meanwhile, have adopted only parts of China’s law. Independent of Beijing, Russia has forged a model akin to China’s, embracing an intrusive government role in cyberspace including the most expansive data localization and surveillance regime in the world. Last week Vietnam adopted a cybersecurity law that mirrors China’s. India has imposed some indigenous technical standards, and is considering legislation to enact domestic-sourcing requirements for cybersecurity technologies.
China’s model appeals to these countries because it provides them with tools to take control of an open internet. Online platforms used for terrorism and political dissent threaten national stability. The Edward Snowden revelations and crippling cyber attacks like WannaCry and Mirai create a sense of vulnerability that China’s model promises to fix.
The most alluring feature of the China model appears to be content control, as a broad range of China’s neighbors and partners engage in blocking, filtering, and manipulating internet content. Also alluring: its rules for storing data on servers in-country, which can help law enforcement and intelligence officials get access to user information.
The problem with China’s model is that it crashes headlong into the foundational principles of the internet in market-based democracies: online freedom, privacy, free international markets, and broad international cooperation. China’s model may also not even be effective in delivering on its promises. For example, government-imposed content-control measures have proven to be poor tools in fighting online extremism. Filtering or removing online content has been compared to a game of “whack-a-mole,” making it ineffective and cost-prohibitive. Such controls also suppress countervailing discourse from key anti-extremism influencers, which have proven to be effective in offering compelling alternative narratives and discrediting extremist ideas.
The implications for the strength and resilience of the global internet ecosystem are troubling. China’s control-driven model defies international openness, interoperability, and collaboration, the foundations of global internet governance and, ultimately, of the internet itself. The 21st century will see a battle over whether the China model or the more inclusive, transparent, collaborative principles that underpinned the internet’s rise come to dominate global cybersecurity governance.
| true | true | true |
Xi Jinping wants to wrest control of global cyber governance from the market economies of the west.
|
2024-10-12 00:00:00
|
2018-06-18 00:00:00
|
article
|
theatlantic.com
|
The Atlantic
| null | null |
|
9,668,361 |
https://chrome.google.com/webstore/detail/hndn/hkfhkpdkpjnbijpgfndjdghboghcplnc
|
HNDN - Chrome Web Store
| null |
## Overview
This extension shows notifications for the latest new/top stories from Hacker News.
HNDN is a Chrome extension that brings you the latest top/new stories from Hacker News as soon as they arrive. You can choose 8 different notification sounds and set auto-clear timing. If the extension does not run in the background automatically, go to Settings -> Show advanced settings -> System -> Tick: 'Continue running background apps when Google Chrome [or chromium] is closed'.
## Details
- Version: 1.2
- Updated: June 17, 2015
- Offered by: SDSLabs
- Size: 667KiB
- Languages: English
- Non-trader: This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.
## Privacy
The developer has not provided any information about the collection or usage of your data.
| true | true | true |
This extension shows a notifications for latest new/top stories from Hackernews.
|
2024-10-12 00:00:00
|
2015-06-17 00:00:00
|
https://lh3.googleusercontent.com/IRUf5sPwbrypuAsoyMweNWmWhbO7xCEfpzA7DGFuv0nD0NYFytYAom2auwguu29IMiuOiBF9oZ_Xui9XW4J3XQJLcA=s128-rj-sc0x00ffffff
|
website
|
google.com
|
chromewebstore.google.com
| null | null |
33,217,927 |
https://www.beamhealth.ai
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,528,220 |
http://flax.ie/windows-phone-7-game-development/
|
Windows Phone 7 Game Development
|
Carl Lange
|
## Windows Phone 7 Game Development
Hello, my name is James Kelly. This is my first post for Flax. I’m a student in IT Carlow, doing the games development course with Ciarán and Carl. I’m here to talk about Windows Phone 7, and my experience developing for it.
I first started working with the Windows Phone during the Global Game Jam earlier this year. It’s a competition in which you build a game in 48 hours. I joined a team making a game called Petri Paradise (I was the main programmer). The premise of the game is to use the phone’s touchscreen to move an amoeba around the screen, collecting sugar cubes while avoiding viruses, bacteria and salt which could kill you. Once 4 pieces of salt are collected, the amoebas split, producing two amoebas. After the jam, I continued to work on the game hoping to put it up on the Windows Phone Marketplace.
I used Microsoft’s XNA in C# for this project. I found it easy to pick up, but it still took work.
Animations were quickly done, using a single image containing all the animation states and SpriteBatch. Using a simple animation class to iterate a rectangle 32 pixels across the image, the SpriteBatch only draws what’s in the rectangle you give it (similar to CSS spriting).
The touchscreen was easy to work with. The TouchPanel class provides methods for retrieving all touchscreen-related information. TouchPanel is used to get the current state of the touchscreen, which returns a TouchCollection, which, funnily enough, provides a collection of touch information like location and pressure on that part of the screen. We used a foreach loop to iterate through all the touch locations in the TouchCollection for multitouch. The TouchLocation object provides singular information on a touch location. In particular, it provides a position vector for me to make the move formula to push the amoeba.
After GGJ, I had the basic game going with sprite animations, and not much else. I first had to install the tools needed to work on the phone, available on AppHub. These tools provided me with an emulator (not being fortunate enough to own a Windows Phone myself). One of the things I figured out after I returned home is how Windows Phone 7 runs. It runs five methods: Initialize, where variables are set up, then LoadContent, where audio clips, sprites, etc., can be loaded from the content file. Then it loops through Update, where the main bulk of your code goes, and Draw, which handles drawing content to the screen.
I had problems, though; rotation only recently started working with the accelerometer. Before that, I had rotated graphics by display orientation, where I used the graphics device to check its orientation. The method SupportedOrientations sets and gets orientation if there is any change in the phone’s orientation. I then checked the orientation of the phone’s presentation parameters for a switch between horizontal and vertical, and I rotated all the entities in the game by ninety degrees, switching x and y position. A small side note: maths uses radians on a Windows Phone. I lost a lot of time before realising that.
A few problems: while trying to draw text to the screen, I didn’t realise that a font variable must be provided. A font variable must be set up by loading a spritefont from the content file. A spritefont is an XML sheet where font information can be changed, like size, font type, spacing, etc.
Overall, I’m finding that developing for WP7 is easy to pick up and work with; the emulator provides a decent substitute for the real thing. I am going to try to get a Wiimote to emulate the phone’s accelerometer. I plan to complete the game soon, and get it up on the Windows Phone 7 Marketplace. I’ll post about how that turns out. Thanks for reading.
#### About the Author
#### James Kelly
I am a student at IT Carlow studying Games development. I really enjoy programming and I love to challenge myself, learning new languages and generally putting what I have learnt to the test, but mostly I spend my time planning for the zombie apocalypse: it's real, it's coming.
## Related Posts
-
## Flax HTML5 Game Engine Development Diary Part 11
Well it’s that time again, it’s been just about three weeks since the first iteration of the Flax HTML5 Game Engine 0.1 and as promised […]
## Catch up with the Flax Project – Busy times!
Hey everyone, so it’s been nearly 5 weeks since we last posted. It’s been a crazy 5 weeks and it’s not over just yet. Unfortunately we […]
Whats up lads. Just said I would let you know that I’m currently finalizing my WP7 game and would be happy to share solutions to some of the problems encountered along the way.
Absolutely! I’ll give you an email later on.
| true | true | true |
Windows Phone 7 game development, of a game called petri paradise, using XNA with C# by James Kelly.
|
2024-10-12 00:00:00
|
2011-05-09 00:00:00
| null | null |
flax.ie
|
flax.ie
| null | null |
32,418,223 |
https://www.cnbc.com/2022/08/10/mark-cuban-buying-real-estate-in-the-metaverse-is-dumbest-idea-ever.html
|
Mark Cuban: Buying real estate in the metaverse is 'the dumbest' idea ever
|
Cheyenne DeVon
|
Buying digital land in the metaverse may not be the best use of your money, according to billionaire investor Mark Cuban.
Although Cuban is a well-documented cryptocurrency enthusiast, he called purchasing virtual real estate in the metaverse "the dumbest s--- ever" in a recent interview on the Altcoin Daily YouTube channel.
Despite being an investor in Yuga Labs, which owns popular NFT collections such as Bored Ape Yacht Club that has sold digital land plots, Cuban said buying virtual real estate is "dumb."
"It was great money for them, but that wasn't based off utility," he said.
In the physical world, real estate is valuable because land is a scarce resource. However, that scarcity doesn't necessarily apply to the metaverse.
In these virtual worlds, "there's unlimited volumes that you can create," Cuban said during the interview.
## The rise and fall of digital real estate
Last year, metaverse platforms experienced a virtual land rush as users collectively spent millions on digital real estate. Combined sales on four major platforms reached $501 million in 2021, according to MetaMetric Solutions.
In some cases, virtual real estate went for as much as a physical house. Republic Realm, an investment firm that owns and develops virtual real estate, dropped a massive $4.3 million on a digital property located within The Sandbox, one of the largest metaverse platforms, according to the Wall Street Journal.
A virtual plot next to Snoop Dogg's digital mansion within The Sandbox was purchased for $450,000 by an NFT collector who goes by the name "P-Ape" in 2021.
However, the virtual housing bubble may have popped.
As of August 7, the average sale price for a piece of virtual property on metaverse platform Decentraland was $14,385.27, according to WeMeta. That's down about 61% from a peak average sale price of $37,238.68 in November 2021, according to the site.
Given the unpredictable nature of the metaverse and cryptocurrency, financial advisors recommend only investing as much money as you're prepared to lose. There are no guarantees that you'll earn a profit from your investment.
| true | true | true |
Billionaire investor Mark Cuban says buying virtual land on metaverse platforms is "the dumbest s--- ever." But some companies have spent millions on plots.
|
2024-10-12 00:00:00
|
2022-08-10 00:00:00
|
article
|
cnbc.com
|
CNBC
| null | null |
|
6,811,291 |
http://techcrunch.com/2013/11/27/keen-on-the-future-of-money-kickstarter-and-the-bitcoin-climax/
|
Keen On... The Future of Money: Kickstarter and the Bitcoin Climax | TechCrunch
|
Andrew Keen
|
Having raised over $37,000 on Kickstarter to make a TV show about the future of money, Heather Schlegel knows a thing or two about both social and financial value. “I ate my own dogfood,” she explained why she used Kickstarter – which she intriguingly describes as an “ATM to tap our social capital” – to finance a pilot for her TV show.
So, as a current victim of digital disruption, I asked Schlegel if the financial establishment could be about to go through the same meltdown as the media industry.
Perhaps she says. Schlegel believes that we need “more currencies” and describes what she calls the “Bitcoin climax” as a “harbinger of the future.” So let’s hope she raises that next $350,000- $600,000 which will finance her six-part tv show. The future of money is a critically important subject for both startup entrepreneurs and consumers, and we need well-informed futurists like Heather Schlegel to make sense of what she rightly calls this great “paradigm” shift.
| true | true | true |
Having raised over $37,000 on Kickstarter to make a TV show about the future of money, Heather Schlegel knows a thing or two about both social and financial value.
|
2024-10-12 00:00:00
|
2013-11-27 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
34,896,298 |
https://phys.org/news/2023-02-seismic-reveal-distinct-layer-earth.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
793,531 |
http://www.everything2.com/index.pl?node=MIT%20Guide%20to%20Lockpicking
|
MIT Guide to Lockpicking
|
AntonZ
|
While the "MIT guide to lockpicking" is reproducible on a "non-profit basis", some concern has been expressed on behalf of the MIT 'hacking community'. See:
*http://www.lysator.liu.se/mit-guide/lame.html*
*http://web.mit.edu/afs/sipb/project/www/stock-answers/lockpicking-guide*
Comment or guidance from any current members of the MIT community would be welcome.
Please read the text at one of the above links for full details. I reproduce the Executive Summary and a bit of other context below:
The MIT Hacking community is saddened by the series of recent
events which have made the "MIT Guide To Lockpicking" available
electronically in a indiscriminate fashion. We would like to state,
once again, that we believe such distribution is inappropriate. Since we
clearly have no control over the guide's dissemination, we would, at the
least, like those distributing the guide to do the following:
- Add an integral section on Hacking Ethics (which see);
- Disassociate the MIT name from the distributed guide
The guide was originally written to pass on non-destructive methods of
entry to members of the MIT Hacking community.
"Roof and tunnel" hacking at MIT is concerned primarily with
non-intrusive exploration. ... The goal is to discover and learn, not to
steal, destroy, or invade anyone's privacy. ...
The "MIT Guide" was never intended to be distributed separate from the
oral tradition and indoctrination associated with the MIT Hacking
community.
The MIT Hacking community does not support the guide's distribution in
electronic form ... we feel it is inappropriate for the guide to be labelled as
an "MIT Guide". At this point, the guide is neither being distributed by
MIT nor with the blessing of the MIT Hacking community.
| true | true | true |
Guide to Lock Picking Ted the Tool September 1, 1991 Distribution Copyright 1987, 1991 Theodore T. Tool. All rights reserved. Permission to reproduce th...
|
2024-10-12 00:00:00
|
2000-10-24 00:00:00
| null | null |
everything2.com
|
Everything2.com
| null | null |
38,511,697 |
https://www.sensible.so/blog/history-of-the-pdf
|
History of the PDF
|
Nick Moore
|
When was the last time you clicked a link, found you were opening a PDF, and didn’t groan in pain?
The PDF is one of the most popular formats in the world – and has been for years – but it’s also one of the most reviled. If you’ve seen the arrival and dominance of Microsoft Word, the rise of Google Docs across schools and offices, and the tides of companies and formats – Evernote and Notion; XML, Markdown, and HTML – the PDF likely stands out.
On the one hand, it’s eerily well-supported. You can open PDFs in your browser with or without an Adobe product and send them to others via Slack, iMessage, and more.
On the other hand, the PDF is decidedly anachronistic. PDFs are hard to read and edit on mobile devices. Worse, PDFs bear the increasingly surreal mark of clearly being a digital version of a physical object despite it being 2023.
The history of the PDF is three histories in one: a history of the PDF file format itself; a history of Adobe, the company that created the PDF and eventually released it as an independent standard; and a history of the concept of the digital document – a concept that the PDF pioneered, exemplified, and eventually restricted.
The PDF is also a lens through which we can better understand how technologies evolve, die, and persist. This history is a jumping-off point for understanding how and why businesses always seem to lag behind technology advancements – from the paperless office that never really happened to the era of digital transformation that never seems to finish transforming.
By the end of this article, you’ll have a greater appreciation of the cockroach-like format the PDF has proven itself to be. As ugly as it often is, it’ll likely outlast all of us. And there’s a lot of lessons to take from understanding why.
## The PDF as an idea
We’re starting at the beginning not because it’s the beginning but because the first, grand promise of the PDF – made over three decades ago – has made the format last so long.
In the 1990s, business leaders were excited about a concept that was equal parts genuine innovation and buzzword: The paperless office. Enabling the paperless office, these businesses thought, would be the next major disruption, the next paradigm shift. And the company that heralded this shift would be rich.
The beachhead for this transformation was digital paper, and a crowd of companies were chasing it: DjVu, WordPerfect, Common Ground, and more. But Adobe, which announced the PDF format at a tech conference in 1992, eventually won (even though it turned out that this beachhead was more of an island and the office in 2023 still relies on paper).
The PDF won, among other reasons, because Adobe’s founders started the company to create physical documents, not digital ones, and this background lent them the advantage they needed to make the ideal digital document.
We complain about ink cartridge prices and absurd printer DRM, but modern printers can at least reliably print a paper version of what we see on the screen. But back then, people had to rely on a dot-matrix printer, including its screeching soundtrack and pixelated text, or enormously expensive typesetting machines.
Adobe’s initial innovation was PostScript: a series of protocols that each desktop printer would carry and could, miraculously, render what the user wanted to print. PostScript debuted in 1985 on Apple’s LaserWriter.
John Warnock, Adobe’s co-founder, had a clear strategy – he wanted to make PostScript a universal standard. Resisting the norm of secrecy and private development at the time, Warnock pushed for openness. Years later, he said, “We had to publish it. We had to make it very, very open — because the trick was to get both [software] application developers and operating system developers to support it.”
“The only way to make standards is to get them out and just compete,” Warnock later said. Adobe got out there, competed, and a few years later, PostScript became the standard.
“At one point,” Warnock said, “We had 22 PostScript competitors. There were 22 clones out there that were trying to undercut us in the market. And as far as I know, not one of them succeeded — including Microsoft’s. I think they produced exactly one printer, of which they sold zero. It was a disaster.”
By publishing a standard, Charles Geschke, Adobe’s other cofounder, said, “You’re taking the risk that someone will do a better job of implementing it. We had the self-confidence that we would always have the best implementation, and that has turned out to be true.”
This strategy – outcompeting and then standardizing – became the blueprint for the PDF.
The mission that carried Adobe from PostScript through to PDF was to create, as David Parmenter, director of engineering for Adobe Document Cloud, put it, an “interchange format that preserved author intent.”
“Author intent” is the key idea here. Before the PDF, Mac, Windows, UNIX, and MS-DOS all interpreted files differently. If you were the rebellious sort and wanted to create a file in Windows but then move it to a Mac, your file “would likely have looked like Jackson Pollock got a hold of it.”
The initial idea emerged in a paper written by Warnock called Project Camelot. “This project’s goal,” he wrote, “is to solve a fundamental problem that confronts today’s companies.” The problem, he explained, was the lack of a universal way to “communicate and view printed information electronically.”
If documents could become viewable across all displays and printable across all printers, Warnock wrote, “the fundamental way people work will change.”
The vision exceeded the PDF, including “utilities, applications, and system software.” But the core idea that made the vision possible was that the PDF would be “completely self-contained.” It didn’t matter whether the receiving computer didn’t have the fonts the sending computer did. The PDF rendered the information as the author intended, regardless.
Warnock imagined a few possibilities as a result of the PDF, including the ability to send newspapers, magazine articles, and technical manuals over email and the ability to maintain databases of documents that people could access and print remotely. He imagined companies saving “millions of dollars in document inventory costs.”
Adobe started with two price points: a PDF-making program that cost about $700 and a PDF-reading program (Acrobat Reader) that cost $50.
It was not a fast success. Reflecting, Warnock said, “When Acrobat was announced, the world didn’t get it. They didn’t understand how important sending documents around electronically was going to be.”
According to Warnock, someone from Gartner told them, “This is the dumbest idea I’ve ever heard in my life.”
IBM executives agreed. James Fritz, who attended the conference where the PDF debuted, wrote that to many in 1992, the promise the PDF made was “heresy.”
Even Adobe’s board wanted to kill the PDF. But Warnock knew he had something good, something beyond good: “No one has to say this is a good idea or a bad idea. We can just make it a fait accompli.”
And they did.
## The PDF as revolution
On the day of the PDF’s release, Adobe made the specs for the format freely available, and soon after, Adobe made its reader software free, too.
Rob Walker, senior writer for the business publication *Marker*, explained the strategy well, writing that the company was “focusing entirely on the creation product as a revenue stream — but gambling that the more people who could read the format, the more attractive it would be to the creator side.” The PDF format would become a standard, and though this rising tide would lift many boats, none would rise higher than Adobe’s.
Of course, many other tides pushed the PDF higher, too. From the mid-1990s to the early 2000s, the Web became mainstream, and download speeds improved – making the already accessible, already compact PDF even more accessible and compact.
Amidst these trends, however, there’s still a clear pivot point: in 1996, the IRS became Adobe’s star customer. Before the PDF, the IRS was mailing tax forms to hundreds of millions of households, and the whole endeavor was complex and expensive. With the PDF, the IRS could make these forms available to the entire country via the Internet, and people could download and print them as they saw fit.
The IRS brought PDFs to everyone – average people, business leaders, academics, law firms, and more. It was a shift both innovative and familiar:
The magic of the PDF was that it truly was digital paper, and users could get many of the benefits of the Internet without having to parse a fundamentally new format.
Other tax software was available, including TurboTax and MacInTax, but asking your computer to do your taxes was a big leap for many people at the time. But downloading and printing forms? That they could and that they did.
In 1996, an Albuquerque journalist reported on the phenomenon: “If you need a form, forget about dragging yourself to the IRS office. Just point and click on the form on the Internet.”
People could save a lot of time and energy – well, those with the Internet and desktop printers, at least – but this move was beneficial to the IRS, too. In a case study, IRS representatives write that “the agency saves millions of dollars annually by decreasing the money it spends on printing, storing, and mailing tax materials.”
From here on, much of the progress was feature by feature. When Adobe released a plug-in that enabled Netscape users to view PDF files in the browser, adoption boomed. And when Adobe added the ability to link PDF files to and from HTML pages, the boom continued.
In 2000, Adobe released Acrobat 4.05, and by then, it was hard for anyone to dispute that the PDF had achieved the level of standardization Warnock and Geschke had pursued.
By then, people had downloaded over 100 million copies of Acrobat Reader, and even the industries that cared the most about preserving authorial intent – such as the graphics art and preprint industries – grew to accept the PDF. And their acceptance carried weight.
In 2001, The *Wall Street Journal* reviewed Adobe Acrobat. Already, the PDF, which feels ancient today, left users with a sense of boredom that belied how impactful it had been and would be.
The reviewer wrote that the technology sounded about as exciting as a TV ad and that it “isn’t that breathtaking unless you have tried to design a Web page that looks the same regardless of the program it is viewed with, or you have sent a resume in Microsoft Word format to a potential employer only to discover that it doesn't look quite as glitzy as when it left your desktop.”
But the reviewer extolled the benefits, writing that whether users created documents with “whiz-bang graphics” or simple text, Acrobat and the PDF could handle it. “This doesn't sound like a huge leap for mankind,” he wrote, but it was a huge leap for businesses: “Indeed, most big companies already use Acrobat for exactly this purpose. But not enough do.”
This reviewer was right, but an even larger leap was coming soon.
## The PDF as standard
Through the 1990s and the early 2000s, the primary strategy of Warnock, Geschke, and Adobe was to make the PDF a de facto standard. But in 2008, the company took a big step forward by making it an *actual* standard.
Adobe released the PDF format’s specs to the independent nongovernmental organization International Organization for Standardization (ISO) and gave this body the royalty-free right to publish and control the patents and specs. Adobe maintained a seat on the ISO committee in charge, but otherwise, it stepped back from the PDF standard.
If the PDF was accepted before, it became undeniable afterward.
“Once we made it available to everybody, there was a big halo effect,” said Parmenter.
Adobe built a natural association between itself and the PDF, but by making it a standard, Adobe could stand on the collective efforts of others, too. Microsoft Word added the ability to save Word documents as PDFs, and a flurry of other PDF-creation and reading tools emerged.
Adobe wasn’t alone, but it didn’t need to be by then – it was at the top.
Over the years, with the PDF remaining an independent standard that the ISO gradually iterated on, Adobe evolved and profited. According to Walker, “There’s no question that the close association with the PDF has been vital to the long-running success of Acrobat, Adobe’s document software.”
The long run demonstrates success overall, but there were mistakes as well as victories.
As the Internet grew, Adobe both reaped some rewards and remained passive despite potential other rewards.
Better download speeds made the PDF more practical, but Adobe avoided working with HTML. According to Warnock, “The early versions of HTML — from a design point of view — were awful. There was nothing beautiful about it.” Geschke, the son and grandson of letterpress photo engravers, had similar sensibilities. And Adobe suffered for it.
But the victories were even greater.
There was “Liquid Mode” in 2020, an improvement that better adapted the format for readability on smartphones. Around the same time, Adobe made it easier for developers to embed PDFs into websites. These features pale in comparison, however, to how well Adobe survived the transition to the cloud and SaaS eras.
By 2020, Adobe’s Document Cloud offering – now central to Acrobat – had revenues of $1.5 billion. And the COVID-19 pandemic, which ruined or damaged so many other businesses, boosted Adobe. According to a Forrester study, companies increased their spending on digital document processes and tools by more than 50%, leading to the share price of Adobe rising from $333 to more than $500.
Warnock – who was CEO until 2000 and chairman of the board (along with Geschke) until 2017 – pushed a vital idea at Adobe, one that failed them when it came to HTML but helped when it came to smartphones and the cloud.
“Companies build antibodies,” Warnock said. “They build resistance to change. They get comfort zones where they want to work, and employees don’t want to try something new for fear that they are going to fail. So, they reject ideas. One of the hardest things about keeping a company innovative is killing off the antibodies and forcing change.”
But to the extent that Adobe successfully killed its internal antibodies, it profited mightily by – intentionally or not – introducing a format that came laced with its own antibodies, a format that has staved off change, challengers, and killers for decades.
## The PDF as zombie
In the intro, we shared a common reaction to opening a PDF in 2023: a groan, an eye-roll, a pained sigh. But this sentiment doesn’t seem to affect the company or the format.
Over the years, no one has taken the throne. Microsoft, a similar standard-bearer, has faced challenges from Google and Apple toward Word and PowerPoint, but a PDF challenger – much less killer – has yet to emerge.
Adobe reports that in 2020, about 303 billion PDFs were opened using its Document Cloud products. This popularity represented an annual increase of about 17%, and even then, this rate doesn’t reflect the total amount of PDF usage due to its ISO-based standardization.
The persistent success and growth of the format comes from its original design: The PDF was designed to be compact and forward-compatible and to reflect the intent of the author across devices.
Does the PDF feel anachronistic? Yes, of course. But is that a bad thing? The printed book has lasted from the 15th century, and the Adobe founders, directly inspired by book printing, created a format meant to have a similar legacy.
But of course, it frequently is bad for users and businesses. Comments on places like HackerNews refer to it as “one of the worst file formats ever produced,” “soul-crushing,” and something that “should really be destroyed with fire.”
This sentiment, however, isn’t a case of new users and developers not respecting their elders. In 1996, the research-based user experience group Nielsen Norman criticized the PDF format. They were not wholly against PDFs, but they wanted the PDF to stay in its lane as digital paper and not encroach on the Web, where HTML remained the better format.
“PostScript and Acrobat files should never be read online,” writes Jakob Nielsen in 1996. “PostScript viewers are fine for checking out the structure of a document in order to determine whether to print it, but users should not be tricked into the painful experience of actually spending an extended period of time with online PostScript.”
Nielsen restated the case in 2001, writing, “PDF is great for distributing documents that need to be printed. But that is all it's good for. No matter how tempting it might be, you should never use PDF for content that you expect users to read online.”
Nonetheless, the PDF kept growing in popularity, and few limited how and where it was used. In 2020, Nielsen made the case again, writing, “After 20 years of watching users perform similar tasks on a variety of sites that use either PDFs or regular web pages, one thing remains certain: PDFs degrade the user experience.”
He couldn’t be clearer – “PDF should never be used for on-screen reading. Don’t force your users to suffer and slog through PDFs!” – but the lesson went unheeded.
In another 2020 article, Nielsen captured user responses that likely reflect some of the experiences you’ve felt yourself:
- “Information is outdated in those PDFs. So you’re getting stuff that isn’t current. They just haven’t taken those links off.”
- “I don’t know if they [a PDF with email-signature templates] are updated. I can’t confidently share it. Sometimes there are multiple versions of PDFs.”
- “All of the PDFs are horrible. There are so many old forms and version control is so difficult. We’re starting to move them into a database but first have to audit them and track down people to ask them if they still need the form. We’re taking the top-used forms and tackling those first.”
- “We’ve come across problems with PDF forms. Others have to download the form in order to use it the way we want them to use it. You have to download it to get the features to work, so we always have to specify at the top of our documents that our partners are using these and they might not have the latest PDF readers.”
Ultimately, the article's title makes the most forceful case: “PDF: Still Unfit for Human Consumption, 20 Years Later.”
Of course, humans kept consuming PDFs, ranging from your average person trying to parse a PDF restaurant menu to the highest levels of Federal power.
In 2018, *Slate* reported that PDF usability was a significant reason why then Special Counsel Robert Mueller was able to indict Paul Manafort as part of the investigation into President Trump’s ties to Russia.
Manafort had tried to defraud a potential lender by altering a profit-and-loss statement. Manafort emailed the PDF to an associate and asked him to convert it to a Word document so Manafort could make the fraudulent changes. Once he made the changes, Manafort’s associate helped him convert the Word document back to a PDF.
But as the PDF Association – yes, that’s a thing – points out, Slate missed a detail: “Converting from PDF to Word for the purposes of surreptitiously altering text in the PDF document is a foolish way to commit fraud and break federal law at several levels” because the Word file won’t perfectly resemble the original file and because PDF files are already editable.
“Manafort could have readily altered the PDF himself,” the Association writes. “Had he done so, he would have avoided a key part of the paper trail that may land him in federal prison. He probably even had a PDF editor already on his computer.”
## The PDF as digital document
Over the decades, much of the frustration with the PDF has emerged because the format has subtly and slowly shifted from functioning as digital paper to functioning as digital documentation.
The PDF is a perfect format for paper made digital. Though we haven’t reached the paperless office future, the need for paper has diminished, and the need for documentation has increased.
We have much more to document – think of all the SaaS contracts a business maintains, all the regulatory compliance work that needs to be written down, and all the processes for hiring, working, and communicating across offices, co-working spaces, and home offices – but we need documents to do so much more.
Documents were once outputs. Originally, PDFs outputted authorial intent for the sake of reader consumption via printer and digital paper. But over time, PDFs took on the role of inputs, too. As perfect as PDFs were for display, they became bad ways to store information and terrible ways to facilitate the interface between different functions and parties.
The effort to programmatically extract information from PDFs demonstrates this format is poorly suited for its modern needs. FilingDB, a company later acquired by Insig AI, has written in-depth about the struggle of extracting information from a format that was never really meant to serve as a medium for storage or interface.
A few examples include:
- Read protection (PDFs often have several access permissions flags that limit how content can be copied).
- Hidden text (PDFs frequently contain text outside the page’s bounding box that’s invisible to most PDF viewers, but that will show up during extraction).
- Too many and not enough spaces (PDFs often have extra spaces between letters in a word or too few spaces – usually for the sake of kerning).
- Embedded fonts (PDFs, meant initially to ignore font restrictions, sometimes have custom encoding and fonts that look fine to human eyes but confuse machines).
- Layout confusion (PDFs, always designed for humans first, often have layouts that a human might find readable but that can leave a machine bewildered, such as footnotes, asides, and varying column layouts).
“The main problem,” FilingDB writes, “is that PDF was never really designed as a data input format, but rather, it was designed as an output format giving fine-grained control over the resulting document.”
At the deepest level, they write, “The PDF format consists of a stream of instructions describing how to draw on a page [...] As a result, most of the content semantics are lost when a text or word document is converted to PDF - all the implied text structure is converted into an almost amorphous soup of characters floating on pages.”
Hacker News commenters reacting to the article wrote about the PDF in a much blunter fashion. But as the discussion continued, commenters also circled the primary problem, with one commenter writing that “Parsing pdf to extract data is like using a rock as a hammer and a screw as a nail” and another writing that “Actually, parsing text data from a pdf is more like using the rock to unscrew a screw, in that it was not meant to be done that way at all” and another still writing that “It's closer to using a screwdriver to screw in a rock. The task isn't supposed to be done in the first place, but the tool is the least wrong one.”
The PDF has outlived itself in many ways, but the revolution it created on the digital paper and digital document levels has had staying power that outstrips Warnock, the ISO standard, and Adobe itself. The PDF was built as a way of preserving an author’s aesthetic intentions, but software has eaten the world, APIs have eaten software, and information demands to be programmable, not beautiful.
The great irony is that the software and API revolutions hardly touched digital documents, which remain among the most important ways to communicate, store, and act on information in businesses worldwide.
What is a legally binding document, if not an API that connects an entity to a deliverable? And yet, even though a user can sign up for a service online, the enterprise version of that service will likely be codified in a PDF.
## The PDF as opportunity
In 1991, a *New York Times* review of Adobe Acrobat – then called Carousel – touched on a future yet to come.
“If it succeeds,” the reviewer wrote, “Carousel will alter the way computers are used in offices. Today, these machines are used primarily to create documents in word processors and spreadsheets. In the future, computers will increasingly be used to search for and view information.”
“In the future, all documents might become information databases,” the reviewer continued, and Adobe “could create a new market for corporate information systems.”
This review was right when the PDF became a de facto standard; it was right when it became a real standard; it was right when PDFs became a mechanism for information storage; and it was right when businesses found themselves struggling to extract the information contained inside PDFs and turn digital documents into the interfaces and programs they needed them to be.
The future of the PDF remains unclear, but if the past decades have taught us anything, it’s to bet on its survival and not its defeat. But that doesn’t mean the PDF – the file format itself or the broader swath of digital documentation it represents – won’t face disruption.
Adobe estimates that there are more than 2.5 trillion PDFs in the world today. As hard as it is to imagine a new technology finally dislodging the PDF, it seems just as hard to imagine no one ever finding a way to capitalize on and transform this market.
| true | true | true |
Explore the remarkable history of the PDF. This ubiquitous format, which has shaped the way we share and view documents, has a story as compelling as its widespread use. Discover why the PDF, often criticized yet universally used, continues to be a vital part of our digital world in 2023.
|
2024-10-12 00:00:00
|
2023-11-30 00:00:00
|
website
| null |
Sensiblehq
| null | null |
|
27,077,064 |
https://github.com/apps/iacbot
|
Build software better, together
| null |
# iacbot
## GitHub App
Identify and fix configuration issues in **Terraform**, **CloudFormation**, and **Kubernetes**. Get rapid feedback directly in your pull requests.
Developer
**iacbot** is provided by a third-party and is governed by separate terms of service, privacy policy, and support documentation.
| true | true | true |
GitHub is where people build software. More than 100 million people use GitHub to discover, fork, and contribute to over 420 million projects.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null |
github.com
|
GitHub
| null | null |
|
13,311,390 |
http://www.twitlonger.com/show/n_1spgd0k
|
TwitLonger
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
4,190,906 |
http://www.lisperati.com/clojure-spels/casting.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
28,880,782 |
https://lwn.net/SubscriberLink/872869/0e62bba2db51ec7a/
|
A viable solution for Python concurrency
|
Jonathan Corbet October
|
# A viable solution for Python concurrency
Concerns over the performance of programs written in Python are often overstated — for some use cases, at least. But there is no getting around the problem imposed by the infamous global interpreter lock (GIL), which severely limits the concurrency of multi-threaded Python code. Various efforts to remove the GIL have been made over the years, but none have come anywhere near the point where they would be considered for inclusion into the CPython interpreter. Now, though, Sam Gross has entered the arena with a proof-of-concept implementation that may solve the problem for real.
The concurrency restrictions in the CPython interpreter are driven by its garbage-collection approach, which uses reference counts on objects to determine when they are no longer in use. These counts are busy; many types of access to a Python object require a reference-count increment and (eventually) decrement. In a multi-threaded program, reference-count operations must be performed in a thread-safe manner; the alternative is to risk corrupted counts on objects. Given the frequency of these operations, corruption in multi-threaded programs would be just a matter of time, and perhaps not much time at that. To avoid such problems, the GIL only allows one thread to be running in the interpreter (i.e. to actually be running Python code) at a time; that takes away almost all of the advantage of using threads in any sort of compute-intensive code.
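To get a feel for how busy those counts are, the standard library exposes them directly; this small snippet (ordinary CPython, nothing specific to Gross's branch) shows the count on a single object moving as references are created and dropped:

```python
import sys

x = object()
print(sys.getrefcount(x))   # typically 2: the name 'x' plus the call's own argument
refs = [x, x, x]
print(sys.getrefcount(x))   # three more, one per list slot
del refs
print(sys.getrefcount(x))   # back down again
```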
The reference-count problem can be trivially solved (for a relatively advanced value of "trivially") by using atomic operations to increment and decrement the counts. There is just one little problem with this solution, as outlined in this design document posted by Gross:
The simplest change would be to replace non-atomic reference count operations with their atomic equivalents. However, atomic instructions are more expensive than their non-atomic counterparts. Replacing Py_INCREF and Py_DECREF with atomic variants would result in a 60% average slowdown on the pyperformance benchmark suite.
Given that the vast majority of Python programs are single-threaded (and likely to remain so), it is not surprising that there has never been much appetite for solutions that impose this sort of cost.
#### Biases, immortals, and deferrals
Gross has taken three different approaches to the CPython reference-count problem, the first of which is called "biased reference counts" and is described in this paper by Jiho Choi et al. With this scheme, the reference count in each object is split in two, with one "local" count for the owner (creator) of the object and a shared count for all other threads. Since the owner has exclusive access to its count, increments and decrements can be done with fast, non-atomic instructions. Any other thread accessing the object will use atomic operations on the shared reference count.
Whenever the owning thread drops a reference to an object, it checks both reference counts against zero. If both the local and the shared count are zero, the object can be freed, since no other references exist. If the local count is zero but the shared count is not, a special bit is set to indicate that the owning thread has dropped the object; any subsequent decrements of the shared count will then free the object if that count goes to zero.
This algorithm improves reference-count performance because, of all the objects that any thread will create, few will be shared with other threads. So, most of the time, the shared reference count will be unused and the cost of using atomic operations to manipulate that count will be avoided.
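As a rough illustration of the split just described, here is a pure-Python model of the decision logic. It is an assumption-laden sketch, not Gross's C implementation: a lock stands in for the atomic instructions on the shared count, and `_free()` is a placeholder.

```python
import threading

class BiasedRefCounted:
    """Toy model of an object with a biased (owner/shared) reference count."""

    def __init__(self):
        self.owner_tid = threading.get_ident()  # the creating thread "owns" the object
        self.local_count = 1        # only ever touched by the owner: no atomics needed
        self.shared_count = 0       # touched by every other thread
        self.owner_dropped = False  # set once the owner has released its last reference
        self._shared_lock = threading.Lock()    # stands in for atomic instructions

    def incref(self):
        if threading.get_ident() == self.owner_tid:
            self.local_count += 1               # fast path: plain, non-atomic update
        else:
            with self._shared_lock:             # slow path: "atomic" update
                self.shared_count += 1

    def decref(self):
        if threading.get_ident() == self.owner_tid:
            self.local_count -= 1
            if self.local_count == 0:
                with self._shared_lock:         # the real scheme uses atomics here
                    if self.shared_count == 0:
                        self._free()            # no other thread ever held a reference
                    else:
                        self.owner_dropped = True
        else:
            with self._shared_lock:
                self.shared_count -= 1
                if self.shared_count == 0 and self.owner_dropped:
                    self._free()                # last shared reference is gone

    def _free(self):
        pass  # placeholder for actual deallocation
```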
There are, naturally, some subtleties in how the reference counts are handled. One of those is that, for reasons to be described next, the two least-significant bits of the local reference count are reserved. An increment to the local reference count, thus, adds four to that count. These details are hidden in the `Py_INCREF()` and `Py_DECREF()` macros, so most code need not be aware of them.
Some objects are heavily shared between threads, though; these include singletons like `True`, `False`, and `None`, as well as small integer values, some type objects, and more. These objects will also never go away during the execution of the program — they are "immortal" objects for which reference counting is a waste. Gross's CPython interpreter marks these objects by setting the lowest significant bit in the local reference count. If that bit is set, the interpreter doesn't bother tracking references for the relevant object at all. That avoids contention (and cache-line bouncing) for the reference counts in these heavily-used objects. This "optimization" actually slows single-threaded accesses down slightly, according to the design document, but that penalty becomes worthwhile once multi-threaded execution becomes possible.
Other objects in Python programs may not be immortal, but they are still long-lived; functions and modules fall into this category. Here, too, it can make sense to avoid the cost of reference counting. The idea makes even more sense when one realizes that many function and module objects, by virtue of appearing in the `globals` dictionary, essentially form reference-count cycles anyway and their counts will never go to zero. For these objects, a technique called "deferred reference counting" is used; the second-least-significant bit in the local reference count is set, and (most) reference counting is skipped. Instead, a garbage-collection pass is used to find and free unused objects.
"Most" reference counting is skipped because the CPython interpreter does not, on its own, have a complete picture of whether an object using deferred reference counting is truly unused. Specifically, extension code written in C could be holding references that the interpreter cannot see. For this reason, reference counting is only skipped within the interpreter itself; any other code will manipulate the reference counts as usual.
#### Other changes and results
The reference-counting changes are a key part of Gross's work, but not all of it. The interpreter's memory allocator has been replaced with mimalloc, which is thread-safe, fast, and is able to easily support garbage-collection operations. The garbage collector itself has been modified to take advantage of mimalloc, but is still "a single-threaded, stop-the-world implementation". A lot of work has gone into the list and dict implementations to make them thread-safe. And so on.
Gross has also put some significant work into improving the performance of the CPython interpreter in general. This was done to address the concern that has blocked GIL-removal work in the past: the performance impact on single-threaded code. The end result is that the new interpreter is 10% *faster* than CPython 3.9 for single-threaded programs. That will certainly sweeten the pot when it comes to acceptance of this work, though, as Guido van Rossum noted, the Python developers could always just take the performance improvements without the concurrency work and be even faster yet.
That seems like an unlikely outcome, though, if this work stands up to closer scrutiny. When pointed to the "ccbench" benchmark, Gross reported speedups of 18-20x when running with 20 threads. That is the kind of concurrency speedup that Python developers have been wanting for a long time, so it is unsurprising that this work has seen an enthusiastic reception. As an added bonus, almost all Python programs will run on the modified interpreter without changes. The biggest source of problems might be multi-threaded programs with concurrency-related bugs that have been masked by the GIL until now. Extensions written in C will need to be recompiled, but most of them will not need to be changed unless they (as some evidently do) access reference counts directly rather than using the provided macros.
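To see what that kind of speedup means in practice, here is the sort of plain-CPython timing sketch anyone can run (nothing in it is specific to Gross's branch). On a stock GIL build the four-thread run takes roughly four times as long as the single-thread run, because the threads cannot execute bytecode in parallel; a free-threaded interpreter with enough cores can bring the two wall-clock times close together.

```python
import threading
import time

def burn(n: int) -> int:
    # Pure-Python, CPU-bound work that never releases the GIL voluntarily.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(num_threads: int, n: int = 2_000_000) -> float:
    threads = [threading.Thread(target=burn, args=(n,)) for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("1 thread :", timed(1))
    print("4 threads:", timed(4))  # ~4x slower with the GIL; ~1x on an ideal no-GIL build
```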
The end result thus appears to be a GIL-removal effort that has a rather better-than-average chance of making it into the CPython interpreter. That would be cause for a lot of rejoicing among Python developers. That said, a change this fundamental is unlikely to be rushed into the CPython mainline; it will take a lot of testing to convince the community that it is ready for production use. Interested developers may be able to hasten that process by testing this work with their programs and reporting the results.
| Index entries for this article | |
|---|---|
| Python | Global interpreter lock (GIL) |
Posted Oct 14, 2021 15:57 UTC (Thu)
by
For example, say you have an extension that has a function foo() that takes a parameter that's a dict. Most extension functions don't drop the GIL by themselves, which means that while they are being called, no other thread can alter the dict that has been passed in to the function. However, if the GIL disappears, it may be possible that the dict is altered from another thread while the function is still accessing it. And even if all the PyDict_* functions are changed to be thread-safe themselves, people will typically call various PyDict functions in succession, and if the object changes in between calls, this can easily cause C extensions to misbehave.
This type of problem will be even more prominent when it comes to custom classes that extensions create, because no extension is currently performing any locking on their own objects right now (why would they?). On the other hand, certain types of concurrent access to objects from different threads might even be desirable -- if the structure doesn't change, I think different threads should be able to simultaneously access numpy arrays, for example, so the CPython API should provide some means of solving this.
I still think it's going to be worth-while of getting rid of the GIL and making Python much more useful when it comes to current CPUs (where core counts have increased in the past years much more than single-core processing speeds). And I welcome the efforts made here to walk in that direction. But it's clear that extension compatibility will go beyond just recompiling them -- the Python project will have to issue clear guidance for extension developers as to how to properly handle this (maybe by locking individual objects -- but if multiple objects are involved, e.g. multiple parameters passed to a function, you need to start thinking about deadlocks).
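A Python-level cousin of the hazard described above can be sketched directly (a deliberately racy toy, not anything from the patch set): one thread walks a shared dict while another inserts into it. Depending on scheduling this can raise `RuntimeError: dictionary changed size during iteration` even on today's GIL builds; C extensions that hold the GIL across a whole call currently never see such mid-operation changes, which is the implicit guarantee the comment is worried about losing.

```python
import threading

shared = {i: i for i in range(100_000)}

def reader():
    try:
        for key in shared:            # assumes the dict's structure stays put
            _ = shared[key]
    except RuntimeError as exc:
        print("reader saw:", exc)     # "dictionary changed size during iteration"

def writer():
    for i in range(100_000, 200_000):
        shared[i] = i                 # structural modification from another thread

threads = [threading.Thread(target=reader), threading.Thread(target=writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```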
Posted Oct 14, 2021 17:50 UTC (Thu)
by
We fixed many of the problems by having objects owned by a single thread at a time, but then all code needs to know about transfering objects around, which is very painful for highly recursive objects.
Posted Oct 14, 2021 19:13 UTC (Thu)
by
Posted Oct 14, 2021 19:21 UTC (Thu)
by
This is not limited to C extensions. Consider for example:
Assuming that l is a list, this function atomically appends x to it. We know that it must be atomic, because INPLACE_ADD is a single bytecode instruction, the GIL must be held for the entire execution of that instruction, and lists are implemented in C, so this instruction does not create an additional stackframe of Python bytecode. The STORE_FAST is a red herring; it just rebinds a local variable in a scope which we immediately destroy (so the interpreter could have optimized it out without altering the semantics of the function).
The problem is that I don't see a reasonable way to preserve that atomicity:
Posted Oct 15, 2021 16:25 UTC (Fri)
by
The design uses "automatic fine-grained locking". The locks are acquired around mutating operations, but not read-only operations. I estimate that the locks add about 1.5% overhead. (See the sections "Collection thread-safety" and "Performance")
https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfz...
Posted Oct 16, 2021 2:40 UTC (Sat)
by
1. Load the version counter from the collection
2. Load the “backing array” from the collection
3. Load the address of the item (from the “backing array”)
4. Increment the reference count of the item, if it is non-zero (otherwise retry)
5. Verify that the item still exists at the same location in the collection (otherwise retry)
6. Verify that the version counter did not change (otherwise retry)
7. Return the address of the item
Suppose you've got a shared dict foo, and thread A is constantly fiddling with it, adding stuff, removing stuff, changing stuff. But your thread B just reads foo['myValue'] which you're sure thread A has no reason to change. Today under the GIL this seems fine, but the program is slow because the GIL forbids threads A and B from both getting work done. With the 7 step procedure, often reading foo['myValue'] will fail at step 6 because thread A meanwhile incremented the version counter. I believe you would take a lock on the container then, to avoid starvation even though you're a "read-only" operation (you actually write to the reference counter)?
Also, all this bother is to allow for concurrent writes to a shared data structure you're reading, which seems like a bad idea?
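For readers trying to picture the retry loop in that procedure, here is a heavily simplified Python model of the version-counter idea. It is an illustration only: the real design works at the C level with atomic operations and per-item reference counts, while this sketch just guards a whole dict with a seqlock-style counter.

```python
import threading

class VersionedDict:
    """Writers bump a version counter around mutations; readers retry on change."""

    def __init__(self):
        self._data = {}
        self._version = 0                  # even: stable, odd: mutation in progress
        self._write_lock = threading.Lock()

    def write(self, key, value):
        with self._write_lock:
            self._version += 1             # now odd: readers will retry
            self._data[key] = value
            self._version += 1             # even again: safe to read

    def read(self, key, default=None):
        while True:
            before = self._version
            if before % 2 == 0:            # no mutation in progress when we started
                value = self._data.get(key, default)
                if self._version == before:
                    return value           # nothing changed while we looked
            # a writer raced with us (or was mid-write): try again
```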
Posted Oct 17, 2021 2:35 UTC (Sun)
by
Well, it depends on how you look at it. If you're, say, an application developer, it's easy enough to say "that seems like a bad idea, let's not do it." And in that case (i.e. the case where there's no actual mutation of shared state), the performance penalty seems pretty minor to me (if I'm understanding the process you describe correctly, and in comparison to the existing CPython overhead, which is quite large to start with). On the other hand, if you're a language developer, you don't really have the luxury of saying "this seems like a bad idea, let's not support it." Somebody may have already deployed a real-world application which depends on this behavior, so you can't unilaterally break it unless you want another backcompat flag day a la Python 3. Nobody wants to do that again.
Posted Oct 19, 2021 11:12 UTC (Tue)
by
In this case the extra reads are pretty cheap, because the relevant cache lines can be kept loaded in "shared" state (in the MESI sense), and the branches are predictable. OTOH if reads had to acquire a lock, that would trigger cache line bouncing and be *very* expensive.
Python modules and classes are mutable data structures that are almost always read, rather than mutated. So this pattern of heavy cross-thread read-traffic to a mostly-immutable object is ubiquitous in python, so it makes sense to optimize for it.
Posted Oct 14, 2021 21:07 UTC (Thu)
by
At least a few years ago, the GIL was not specific to sub-interpreters. People have actually patched glibc to be able to load libpython as many times as they like to get more independent interpreters, each with its own global data structures and locks. CPython is very different from (for example) the Lua reference implementation in this regard.
Posted Oct 14, 2021 21:36 UTC (Thu)
by
Posted Oct 14, 2021 23:56 UTC (Thu)
by
https://github.com/ruby/ruby/blob/master/doc/ractor.md
I wonder if there should be a more socially encouraged expectation of cross pollination of ideas like this between the language communities?
Posted Oct 15, 2021 16:52 UTC (Fri)
by
Nathaniel already wrote below about the similarity between Ractors and sub-interpreters, but I'll give a few more examples specific to this project of ideas (or code) taken from other communities:
- Biased reference counting (originally implemented for Swift)
- mimalloc (originally developed for Koka and Lean)
- The design of the internal locks is taken from WebKit (https://webkit.org/blog/6161/locking-in-webkit/)
- The collection thread-safety adapts some code from FreeBSD (https://github.com/colesbury/nogil/blob/nogil/Python/qsbr.c)
- The interpreter took ideas from LuaJIT and V8's ignition interpreter (the register-accumulator model from ignition, fast function calls and other perf ideas from LuaJIT)
- The stop-the-world implementation is influenced by Go's design (https://github.com/golang/go/blob/fad4a16fd43f6a72b6917ef...)
Posted Oct 15, 2021 1:18 UTC (Fri)
by
I don't really see the point – subinterpreters aren't meaningfully more efficient than subprocesses, and they're way more complicated and fragile. But some people are more hopeful.
Posted Oct 15, 2021 1:35 UTC (Fri)
by
For python code, the VM itself can take care of isolating subinterpreters from each other – for python code this is "free".
But for C code, each C extension has to manually implement subinterpreter isolation for its own internal state. So here, the subinterpreter model doesn't really simplify anything – in fact it makes it more complicated.
Posted Oct 15, 2021 3:19 UTC (Fri)
by
As far as I can tell, the most likely problem for "well designed" C extensions (those which do not have substantial global mutable state) is the PyGILState_* API, which doesn't (currently) support sub-interpreters (and by my read, it is unlikely to gain such support without an API break, because there's no plausible way for it to infer which PyInterpreterState* you want it to use). You mostly only need to use those functions if you're calling into the Python interpreter from foreign threads (i.e. threads which were not created by Python). The simplest solution is probably for the Python people to provide a version of those functions which accepts an explicit PyInterpreterState* argument. Currently, for example, PyGILState_Ensure() just grabs "the" interpreter state out of a global variable.[1] This still would require some plumbing on the C extension side, of course, because extensions would still need to actually pass those arguments. In the meantime, you can use PyThreadState_New() and friends to do the same job by hand, but it's a lower-level API and more cumbersome to work with (especially if you want to tackle the reentrancy issues which PyGILState_* is intended to solve for you).
[1]: https://github.com/python/cpython/blob/main/Python/pystat...
Posted Oct 14, 2021 22:42 UTC (Thu)
by
Posted Oct 15, 2021 7:47 UTC (Fri)
by
Posted Oct 15, 2021 19:49 UTC (Fri)
by
Python 4 - No new Python features, but a fast concurrent Python VM for the next 30-40 years.
Posted Oct 15, 2021 10:43 UTC (Fri)
by
I'm sure there will be corner cases that need to be fixed in a lot of places. But then it'll be only in situations where people use multiple threads before they hit the corner cases, and given the current multithreading state of things, that should be quite a filter.
P.S.: Regarding alternative approaches: In the PyTorch context, people have been exploring quite a few aspects of "how to get Python-based DL models to run efficiently" from the TorchScript JIT/interpreter for a subset of Python, to multiprocessing to subinterpreter-style multithreading, so to me this looks like part of this larger picture.
Posted Oct 14, 2021 23:55 UTC (Thu)
by
> We also aim for similar safety guarantees to Java and Go -- the language doesn’t prevent data races, but data races in user code do not corrupt the VM state.
This is confusing. Java and Go have, as I think I may have written on LWN before, quite different safety guarantees under concurrency.
Java doesn't just promise that your data race won't blow up the VM, it constrains the results of the race to just causing values touched by the race to have unpredictable but still legal values. Your Java program with a race may have strange behaviour, perhaps too strange to reasonably debug, but it's still well defined.
Go is quite different, I don't even know if you can say it won't blow up the VM. If your race touches a complex Go object such as a container, all bets are off, the program behaviour is undefined. Which very much seems like to me it could include "corrupt VM state".
Posted Oct 15, 2021 16:14 UTC (Fri)
by
Yes, the sentence is confusing. It's difficult to write about memory models precisely in a short amount of space. Go doesn't have "undefined behavior", but you're right that races (on multiword structures) can lead to memory corruption. I'll work on updating this section.
Russ Cox has a great article on Go's memory model: https://research.swtch.com/gomm#overview
Russ describes Go's approach to data races as a middle ground between Java's and C++'s approaches. The no-GIL CPython also occupies a middle ground, but it's a different one from Go. The motivation is similar: to make "errant programs more reliable and easier to debug."
The no-GIL CPython needs some behavior that is stronger than the Java's guarantees. For example, "dict" and "list" need to behave reasonably even with racy accesses. (In Java, racy accesses to HashMap can lead to infinite loops.) Other behavior is weaker. For example, CPython doesn't have the sandbox security of the JVM.
Posted Oct 16, 2021 1:26 UTC (Sat)
by
I'm also dubious about this "need to behave reasonably" with an infinite loop given as an example of Java weakness. Infinite loop is a completely safe and reasonable outcome from a race, even though of course it's undesirable and a bug in your program.
My misfortunate::Always type in Rust (a type which claims to be totally ordered and yet instances stubbornly insist on always giving the same result for every comparison even to themselves) can induce infinite loops in some completely reasonable algorithms, and that's totally safe it's just annoying, or it would be if you did it by mistake.
Anyway as I also concluded previously for Java, "easier to debug" turned out to be a foolish hope. I believe Ocaml has an improvement on Java's approach (bounding the inconsistency in time) but I don't hold out much more hope for that either, and clearly non-GIL Python is not going to be better. Assume that programs with data races are just broken and unsalvageable and the reality won't disappoint you.
Posted Oct 16, 2021 6:05 UTC (Sat)
by
You could certainly call it "unpredictable behavior," but it's likely more precise and informative to just call it "heap corruption" and be done with it.
Posted Oct 15, 2021 6:05 UTC (Fri)
by >
This left me wondering how a thread can efficiently determine if it is the creator of the object. Won't this require an additional field for the thread id, and checking it?
Posted Oct 15, 2021 10:42 UTC (Fri)
by
Posted Oct 15, 2021 15:38 UTC (Fri)
by
Yes, there's an additional field in the object header.
https://github.com/colesbury/nogil/blob/84c1b14af40d406a7...
The comparison is inexpensive (but not free). In the interpreter, the current thread id is kept in a local variable so the comparison compiles to something like:
cmp %r10, (%rax)
jne .L316
Posted Oct 18, 2021 11:06 UTC (Mon)
by
Posted Oct 18, 2021 14:07 UTC (Mon)
by
Posted Oct 26, 2021 3:56 UTC (Tue)
by
The amount of state maintained by a CPU branch predictor is finite. If the CPU is predicting this branch, it's *not* predicting some other branch.
Posted Oct 17, 2021 3:22 UTC (Sun)
by
And the same thing goes for CPython; it seems most atomic operations are using compiler intrinsics, which is nice to see, but there's at least one spot in https://github.com/colesbury/nogil/blob/nogil/Include/pya... that's using inline asm to implement _Py_atomic_uintptr_is_zero. Even if only as an optimization, it's harder to read and understand, and it's not a good thing to add into python code base, IMO.
And since we are talking memory allocators,
> The interpreter's memory allocator has been replaced with mimalloc, which is thread-safe, fast, and is able to easily support garbage-collection operations.
Any malloc worth their salt (and usable in the real world) is at least thread safe.
Furthermore, one concern that seems to permeate the mimalloc issue tracker is incorrect overloading of the libc malloc, making applications crash because of invalid pointers. Is there a chance memory allocated by python could be freed by a C extension or something like that, or vice versa? If python ended up using the version of mimalloc's free that doesn't crash on invalid pointers because of that, it'd be a huge loss in terms of security/hardening.
Posted Oct 18, 2021 8:38 UTC (Mon)
by
In general, Python objects have their allocation and deallocation behavior determined by their associated type object, which is itself a Python object (at the REPL, you can retrieve the type of an object with type(foo) - at the C level, it's a struct field on the object). Type objects have struct fields which point to allocation and deallocation functions, which are called when instances are allocated and deallocated. See https://docs.python.org/3/c-api/typeobj.html#pytypeobject.... Normally, these are called automatically by PyObject_New and PyObject_Del (or Py_DECREF() when the reference count hits zero), so a C extension running around calling malloc and free explicitly would be very weird and well outside how the API is meant to be used.
Posted Oct 19, 2021 4:47 UTC (Tue)
by
Posted Oct 22, 2021 17:40 UTC (Fri)
by
Posted Oct 26, 2021 18:27 UTC (Tue)
by
Posted Nov 1, 2021 0:39 UTC (Mon)
by
It can't be more painful surely?
BTW I get a choice (and it's one that must be made) between:
Python 3.4.3
and
Python 2.7.6
I am aware these are both ancient, this means I can't use half the new stuff anyway, so maybe that'd cause pain?
But yeah I'm used to python/python3 - why not python4....
It really couldn't be worse and this is worth getting right
## A viable solution for Python concurrency
**chris_se** (subscriber, #99706)
[Link] (14 responses)
## A viable solution for Python concurrency
**azumanga** (subscriber, #90158)
[Link]
## A viable solution for Python concurrency
**iabervon** (subscriber, #722)
[Link]
## A viable solution for Python concurrency
**NYKevin** (subscriber, #129325)
[Link] (4 responses)
>>> def foo(l, x):
... l += [x]
...
>>> dis.dis(foo)
2 0 LOAD_FAST 0 (l)
2 LOAD_FAST 1 (x)
4 BUILD_LIST 1
6 INPLACE_ADD
8 STORE_FAST 0 (l)
10 LOAD_CONST 0 (None)
12 RETURN_VALUE
*smart* fine-grained locking, where the list is only locked if it's shared by multiple threads, is less likely to help than you might expect, because Python has a lot of shared mutable dictionaries in its internal implementation (basically, every scope is a dict or dict-like-thing), which are subject to the same problem. So you still end up with a lot of unnecessary (or "probably" unnecessary) locking in the multi-threaded case.
## A viable solution for Python concurrency
**colesbury** (subscriber, #137476)
[Link] (3 responses)
## A viable solution for Python concurrency
**tialaramex** (subscriber, #21167)
[Link] (2 responses)
## A viable solution for Python concurrency
**NYKevin** (subscriber, #129325)
[Link]
## A viable solution for Python concurrency
**njs** (guest, #40338)
[Link]
## A viable solution for Python concurrency
**fw** (subscriber, #26023)
[Link] (6 responses)
## A viable solution for Python concurrency
**atnot** (subscriber, #124910)
[Link]
## A viable solution for Python concurrency
**ms-tg** (subscriber, #89231)
[Link] (1 responses)
## A viable solution for Python concurrency
**colesbury** (subscriber, #137476)
[Link]
## A viable solution for Python concurrency
**njs** (guest, #40338)
[Link] (2 responses)
## A viable solution for Python concurrency
**njs** (guest, #40338)
[Link] (1 responses)
## A viable solution for Python concurrency
**NYKevin** (subscriber, #129325)
[Link]
## A viable solution for Python concurrency
**Paf** (subscriber, #91811)
[Link] (3 responses)
## A viable solution for Python concurrency
**tchernobog** (subscriber, #73595)
[Link] (1 responses)
## A viable solution for Python concurrency
**renejsum** (guest, #124634)
[Link]
## A viable solution for Python concurrency
**t-v** (guest, #112111)
[Link]
## Java and Go
**tialaramex** (subscriber, #21167)
[Link] (3 responses)
## Java and Go
**colesbury** (subscriber, #137476)
[Link] (2 responses)
## Java and Go
**tialaramex** (subscriber, #21167)
[Link] (1 responses)
## Java and Go
**NYKevin** (subscriber, #129325)
[Link]
## A viable solution for Python concurrency
**eru** (subscriber, #2753)
[Link] (5 responses)
*With this scheme, the reference count in each object is split in two, with one "local" count for the owner (creator) of the object and a shared count for all other threads. Since the owner has exclusive access to its count, increments and decrements can be done with fast, non-atomic instructions. Any other thread accessing the object will use atomic operations on the shared reference count.*
## A viable solution for Python concurrency
**Cyberax** (**✭ supporter ✭**, #52523)
[Link]
## A viable solution for Python concurrency
**colesbury** (subscriber, #137476)
[Link] (3 responses)
## A viable solution for Python concurrency
**winden** (subscriber, #60389)
[Link] (2 responses)
## A viable solution for Python concurrency
**jreiser** (subscriber, #11027)
[Link]
## A viable solution for Python concurrency
**dancol** (guest, #142293)
[Link]
## A viable solution for Python concurrency
**ericonr** (guest, #151527)
[Link] (2 responses)
## A viable solution for Python concurrency
**NYKevin** (subscriber, #129325)
[Link] (1 responses)
## A viable solution for Python concurrency
**ericonr** (guest, #151527)
[Link]
I've got to second this reaction. Never thought I'd see the day!
## A viable solution for Python concurrency
**flussence** (guest, #85566)
[Link]
## A viable solution for Python concurrency
**t-v** (guest, #112111)
[Link]
https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6...
## A viable solution for Python concurrency
**SomeOtherGuy** (guest, #151918)
[Link]
| true | true | true | null |
2024-10-12 00:00:00
|
2021-10-14 00:00:00
| null | null | null | null | null | null |
10,503,558 |
http://www.rollingstone.com/politics/news/outrageous-hsbc-settlement-proves-the-drug-war-is-a-joke-20121213
|
Outrageous HSBC Settlement Proves the Drug War Is a Joke
|
Matt Taibbi
|
# Outrageous HSBC Settlement Proves the Drug War Is a Joke
If you’ve ever been arrested on a drug charge, if you’ve ever spent even a day in jail for having a stem of marijuana in your pocket or “drug paraphernalia” in your gym bag, Assistant Attorney General and longtime Bill Clinton pal Lanny Breuer has a message for you: Bite me.
Breuer this week signed off on a settlement deal with the British banking giant HSBC that is the ultimate insult to every ordinary person who’s ever had his life altered by a narcotics charge. Despite the fact that HSBC admitted to laundering billions of dollars for Colombian and Mexican drug cartels (among others) and violating a host of important banking laws (from the Bank Secrecy Act to the Trading With the Enemy Act), Breuer and his Justice Department elected not to pursue criminal prosecutions of the bank, opting instead for a “record” financial settlement of $1.9 billion, which as one analyst noted is about five weeks of income for the bank.
The bank’s laundering transactions were so brazen that the NSA probably could have spotted them from space. Breuer admitted that drug dealers would sometimes come to HSBC’s Mexican branches and “deposit hundreds of thousands of dollars in cash, in a single day, into a single account, using boxes designed to fit the precise dimensions of the teller windows.”
This bears repeating: in order to more efficiently move as much illegal money as possible into the “legitimate” banking institution HSBC, drug dealers specifically designed boxes to fit through the bank’s teller windows. Tony Montana’s henchmen marching dufflebags of cash into the fictional “American City Bank” in Miami was actually *more* subtle than what the cartels were doing when they washed their cash through one of Britain’s most storied financial institutions.
Though this was not stated explicitly, the government’s rationale in not pursuing criminal prosecutions against the bank was apparently rooted in concerns that putting executives from a “systemically important institution” in jail for drug laundering would threaten the stability of the financial system. The *New York Times* put it this way:
Federal and state authorities have chosen not to indict HSBC, the London-based bank, on charges of vast and prolonged money laundering, for fear that criminal prosecution would topple the bank and, in the process, endanger the financial system.
It doesn’t take a genius to see that the reasoning here is beyond flawed. When you decide not to prosecute bankers for billion-dollar crimes connected to drug-dealing and terrorism (some of HSBC’s Saudi and Bangladeshi clients had terrorist ties, according to a Senate investigation), it doesn’t protect the banking system, it does exactly the opposite. It terrifies investors and depositors everywhere, leaving them with the clear impression that even the most “reputable” banks may in fact be captured institutions whose senior executives are in the employ of (this can’t be repeated often enough) murderers* *and terrorists. Even more shocking, the Justice Department’s response to learning about all of this was to do exactly the same thing that the HSBC executives did in the first place to get themselves in trouble – they took money to look the other way.
And not only did they sell out to drug dealers, they sold out cheap. You’ll hear bragging this week by the Obama administration that they wrested a record penalty from HSBC, but it’s a joke. Some of the penalties involved will literally make you laugh out loud. This is from Breuer’s announcement:
As a result of the government’s investigation, HSBC has . . . “clawed back” deferred compensation bonuses given to some of its most senior U.S. anti-money laundering and compliance officers, and agreed to partially defer bonus compensation for its most senior officials during the five-year period of the deferred prosecution agreement.
Wow. So the executives who spent a decade laundering billions of dollars will have to *partially* defer their bonuses during the five-year deferred prosecution agreement? Are you fucking kidding me? That’s the punishment? The government’s negotiators couldn’t hold firm on forcing HSBC officials to *completely* wait to receive their ill-gotten bonuses? They had to settle on making them “partially” wait? Every honest prosecutor in America has to be puking his guts out at such bargaining tactics. What was the Justice Department’s opening offer – asking executives to restrict their Caribbean vacation time to nine weeks a year?
So you might ask, what’s the appropriate financial penalty for a bank in HSBC’s position? Exactly how much money should one extract from a firm that has been shamelessly profiting from business with criminals for years and years? Remember, we’re talking about a company that has admitted to a smorgasbord of serious banking crimes. If you’re the prosecutor, you’ve got this bank by the balls. So how much money should you take?
How about *all of it*? How about every last dollar the bank has made since it started its illegal activity? How about you dive into every bank account of every single executive involved in this mess and take every last bonus dollar they’ve ever earned? Then take their houses, their cars, the paintings they bought at Sotheby’s auctions, the clothes in their closets, the loose change in the jars on their kitchen counters, every last freaking thing. Take it all and don’t think twice. And *then* throw them in jail.
Sound harsh? It does, doesn’t it? The only problem is, that’s exactly what the government does just about every day to ordinary people involved in ordinary drug cases.
It’d be interesting, for instance, to ask the residents of Tenaha, Texas what they think about the HSBC settlement. That’s the town where local police routinely pulled over (mostly black) motorists and, whenever they found cash, offered motorists a choice: They could either allow police to seize the money, or face drug and money laundering charges.
Or we could ask Anthony Smelley, the Indiana resident who won $50,000 in a car accident settlement and was carrying about $17K of that in cash in his car when he got pulled over. Cops searched his car and had drug dogs sniff around: The dogs alerted twice. No drugs were found, but police took the money anyway. Even after Smelley produced documentation proving where he got the money from, Putnam County officials tried to keep the money on the grounds that he *could *have used the cash to buy drugs in the future.
Seriously, that happened. It happens all the time, and even Lanny Breuer’s own Justice Deparment gets into the act. In 2010 alone, U.S. Attorneys’ offices deposited nearly $1.8 billion into government accounts as a result of forfeiture cases, most of them drug cases. You can see the Justice Department’s own statistics right here: If you get pulled over in America with cash and the government even thinks it’s drug money, that cash is going to be buying your local sheriff or police chief a new Ford Expedition tomorrow afternoon.
And that’s just the icing on the cake. The real prize you get for interacting with a law enforcement officer, if you happen to be connected in any way with drugs, is a preposterous, outsized criminal penalty. Right here in New York, one out of every seven cases that ends up in court is a marijuana case.
Just the other day, while Breuer was announcing his slap on the wrist for the world’s most prolific drug-launderers, I was in arraignment court in Brooklyn watching how they deal with actual people. A public defender explained the absurdity of drug arrests in this city. New York actually has fairly liberal laws about pot – police aren’t supposed to bust you if you possess the drug in private. So how do police work around that to make 50,377 pot-related arrests in a single year, just in this city? (That was 2010; the 2009 number was 46,492.)
“What they do is, they stop you on the street and tell you to empty your pockets,” the public defender explained. “Then the instant a pipe or a seed is out of the pocket – boom, it’s ‘public use.’ And you get arrested.”
People spend nights in jail, or worse. In New York, even if they let you off with a misdemeanor and time served, you have to pay $200 and have your DNA extracted – a process that you have to pay for (it costs 50 bucks). But even beyond that, you won’t have to search very far for stories of draconian, idiotic sentences for nonviolent drug crimes.
Just ask Cameron Douglas, the son of Michael Douglas, who got five years in jail for simple possession. His jailers kept him in solitary for 23 hours a day for 11 months and denied him visits with family and friends. Although your typical non-violent drug inmate isn’t the white child of a celebrity, he’s usually a minority user who gets far stiffer sentences than rich white kids would for committing the same crimes – we all remember the crack-versus-coke controversy in which federal and state sentencing guidelines left (predominantly minority) crack users serving sentences up to 100 times harsher than those meted out to the predominantly white users of powdered coke.
The institutional bias in the crack sentencing guidelines was a racist outrage, but this HSBC settlement blows even that away. By eschewing criminal prosecutions of major drug launderers on the grounds (the patently absurd grounds, incidentally) that their prosecution might imperil the world financial system, the government has now formalized the double standard.
They’re now saying that if you’re not an important cog in the global financial system, you can’t get away with anything, not even simple possession. You will be jailed and whatever cash they find on you they’ll seize on the spot, and convert into new cruisers or toys for your local SWAT team, which will be deployed to kick in the doors of houses where more such inessential economic cogs as you live. If you don’t have a systemically important job, in other words, the government’s position is that your assets may be used to finance your own political disenfranchisement.
On the other hand, if you are an important person, and you work for a big international bank, you won’t be prosecuted even if you launder nine billion dollars. Even if you actively collude with the people at the very top of the international narcotics trade, your punishment will be far smaller than that of the person at the very bottom of the world drug pyramid. You will be treated with more deference and sympathy than a junkie passing out on a subway car in Manhattan (using two seats of a subway car is a common prosecutable offense in this city). An international drug trafficker is a criminal and usually a murderer; the drug addict walking the street is one of his victims. But thanks to Breuer, we’re now in the business, officially, of jailing the victims and enabling the criminals.
This is the disgrace to end all disgraces. It doesn’t even make any sense. There is no reason why the Justice Department couldn’t have snatched up everybody at HSBC involved with the trafficking, prosecuted them criminally, and worked with banking regulators to make sure that the bank survived the transition to new management. As it is, HSBC has had to replace virtually all of its senior management. The guilty parties were apparently not so important to the stability of the world economy that they all had to be left at their desks.
So there is absolutely no reason they couldn’t all face criminal penalties. That they are not being prosecuted is cowardice and pure corruption, nothing else. And by approving this settlement, Breuer removed the government’s moral authority to prosecute anyone for any other drug offense. Not that most people didn’t already know that the drug war is a joke, but this makes it official.
| true | true | true |
If you're suspected of drug involvement, U.S. takes your house; HSBC admits to laundering cartel billions, loses five weeks' income and execs have to partially defer bonuses.
|
2024-10-12 00:00:00
|
2012-12-13 00:00:00
|
article
|
rollingstone.com
|
Rolling Stone
| null | null |
|
27,253,171 |
https://www.cnn.com/2021/05/22/china/china-runners-deal-intl-hnk/index.html
|
Extreme weather kills 21 ultra-marathon runners in China | CNN
|
Jenni Marsh; Eric Cheung
|
Twenty one ultra-marathon runners have died after extreme weather conditions hit a 100-kilometer (62-mile) mountain race in northwest China.
The high-altitude Huanghe Shilin Mountain Marathon began on Saturday morning in sunny conditions. But by 1 p.m. local time weather conditions had turned, with freezing rain, hail stones and gale winds lashing runners in Gansu Province, according to the state-run Global Times.
Liang Jing, one of China’s well known ultra-marathon runners, was among those who died, a Hong Kong marathon group called Hong Kong 100 Ultra Marathon confirmed via a statement released on Sunday.
The marathon group said Liang had been a “favorite” member of the Hong Kong trail-racing community. He regularly participated in the annual Hong Kong 100-kilometer trail race, and was the runner-up in the last two years, it added.
It also described him as “one of the best ultra-endurance athletes in the world” and expressed condolences to his family.
As temperatures dropped in the Yellow River Stone Forest, runners started reporting hypothermia, while others went missing.
The marathon organizers called off the race and launched a search party of 1,200 people to scour the complicated terrain. The search operation continued after dark.
Most competitors were wearing thin shorts and T-shirts.
Janet Ng, a race director of the Hong Kong 100 Ultra Marathon, told CNN on Sunday that she was not in a position to comment on the importance or safety of the Gansu marathon, but noted that the trail-running community is mourning with great sadness.
One participant told local publication Red Star News: “At one point, I couldn’t feel my fingers (because it was so cold). At the same time my tongue felt frozen, too.”
He said he decided to abandon the race. “I retreated back to halfway down the mountain, and entered a wooden cabin at the direction of a rescuer. There were already about 10 more runners who came down earlier and we waited for rescue in the cabin for about an hour. Eventually about 50 runners came and took shelter in the cabin.”
By Sunday morning, 151 of the 172 race participants had been confirmed safe, with eight in hospital. Another 21 were found dead, according to the state-run People’s Daily.
The race’s distance of 100 kilometers was more than double that of a standard marathon.
| true | true | true |
Twenty one ultra-marathon runners have died after extreme weather conditions hit a 100-kilometer (62-mile) mountain race in northwest China.
|
2024-10-12 00:00:00
|
2021-05-22 00:00:00
|
article
|
cnn.com
|
CNN
| null | null |
|
16,627,885 |
https://opensource.com/article/18/3/how-11-open-source-projects-got-their-names
|
How 11 open source projects got their names
|
Jeff Macharyas
|
What is the meaning of "life"?
Well, it's the condition that distinguishes animals and plants from inorganic matter, of course. So, what is the meaning of "open source life"? Leo Babauta, writing for LifeHack, says:
"It can apply to anything in life, any area where information is currently in the hands of few instead of many, any area where a few people control the production and distribution and improvement of a product or service or entity."
Phew! Now that we have that figured out, what is the meaning of "Kubernetes"? Or, "Arduino"?
Like many well-known brand names we take for granted, such as "Kleenex" or "Pepsi," the open source world has its own unique collection of strange names that meant something to someone at some time, but that we simply accept (or mispronounce) without knowing their true origins.
**Let's take a look at the etymology of 11 such open source names.**
## Arduino
"So, two open source developers walk into a bar..." Arduino derives its name from one of co-founder Massimo Banzi's favorite bars in Ivrea, Italy, where the founders of this "hardware and software ecosystem" used to meet. The bar was named for Arduin of Ivrea, who was king of Italy a bit more than 1,000 years ago.
## Debian
First introduced in 1993 by Ian Murdock, Debian was one of the first operating systems based on the Linux kernel. First released as the "Debian Linux Release," Debian's name is a portmanteau (a word created by combining two other words, such as "[mo]dulator [dem]odulator"—so that's what "modem" means!). By combining the first name of Murdock's then-girlfriend, Debra Lynn, and his own name, Ian, they formed "Debian."
## Kubernetes
The open source system for automating deployment, scaling, and management of containerized applications, also called "K8s," gets its moniker from the Greek for "helmsman" or "pilot." Kubernetes traces its lineage to Google's Borg system and was originally codenamed "Project Seven," a reference to *Star Trek Voyager*'s previously assimilated Borg, Seven of Nine. The seven spokes in Kubernetes' logo—a helmsman's wheel—are a visual reference to Seven.
## openSUSE
openSUSE gets its name from Germany. SUSE is an acronym for "Software und System-Entwicklung" or "software and system development." The "open" part was appended after Novell acquired SUSE in 2003 and when they opened distribution development to the community in 2005.
## PHP
PHP started as a simple set of CGI binaries written in C for helping its creator, Rasmus Lerdorf, maintain his personal homepage, thus the project was abbreviated "PHP." This later became an acronym for what the project became—a hypertext preprocessor—so "PHP: hypertext preprocessor" became the new meaning of "PHP" (yes, a recursive backronym).
## PostgreSQL
Originally just "postgres," PostgreSQL was created at the University of California-Berkeley by Michael Stonebraker in 1986 as a follow-up to the "Ingres" database system. Postgres was developed to break new ground in database concepts, such as object-relational technologies. Its pronunciation causes a lot of debate, as seen in this Reddit thread.
## Python
When he began implementing the Python programming language, Guido van Rossum was a fan of *Monty Python's Flying Circus*. Van Rossum thought he needed a short name that was unique and slightly mysterious, so he settled on Python.
## Raspberry Pi
Raspberry Pi co-founder Eben Upton explains: "Raspberry is a reference to a fruit-naming tradition in the old days of microcomputers," such as Tangerine Computer Systems, Apricot Computers, and Acorn. As the Raspberry Pi was intended to be a processor that booted into a Python shell, "Py" was added, but changed to "Pi" in reference to the mathematical constant.
## Red Hat
Red Hat was founded out of a sewing room in Connecticut and a bachelor pad in Raleigh, N.C., by co-founders Bob Young and Marc Ewing. The "red hat" refers to a red Cornell University lacrosse cap, which Ewing wore at his job helping students in the computer lab at Carnegie Mellon. Students were told: "If you need help, look for the guy in the red hat."
## Ubuntu
Ubuntu's About page explains the word's meaning: "Ubuntu is an ancient African word meaning 'humanity to others.'" It also means "I am what I am because of who we all are," and the operating system intends to bring "the spirit of Ubuntu to the world of computers and software." The word can be traced to the Nguni languages, part of the Bantu languages spoken in Southern Africa, and simply means "humanity."
## Wikipedia
To get the answer to this one, let's turn to Wikipedia! In 1995, Howard G. "Ward" Cunningham developed WikiWikiWeb, "the simplest online database that could possibly work." The word "wiki" is Hawaiian and means "quick" and "pedia" means, ummm, "pedia."
Acronyms, portmanteaus, pubs, foreign words—these are just some examples of the etymology of open source labels. There are many others. What other strange and alien words have you encountered in the open source universe? Where do they come from? What do they mean? Let us know in the comments section below.
*Thanks to Ben Nuttall, community manager for the Raspberry Pi Foundation, for providing definitions for PHP, Python, and Raspberry Pi.*
| true | true | true |
Learn how 11 open source projects got their names: Python, Raspberry Pi, and Red Hat to name a few.
|
2024-10-12 00:00:00
|
2018-03-20 00:00:00
| null |
opensource.com
|
Opensource.com
| null | null |
|
32,497,844 |
https://www.nordicsemi.com/Products/nRF7002
|
nRF7002 - Low-power, advanced security, seamless coexistence
| null |
### Learn from our experts
#### Watch our webinar to learn more about low-power Wi-Fi and our nRF70 Series.
Wi-Fi is the next big step in our product portfolio. Grab a coffee and enjoy!
### Introducing the nRF7002
### Nordic's first Wi-Fi product
#### nRF7002 Wi-Fi 6 companion IC
In August 2022 we announced our entrance into the Wi-Fi wireless IoT market with the introduction of our eagerly-awaited nRF7002 Wi-Fi 6 IC.
The nRF7002 is a ’companion IC’ which means it is designed to provide seamless Wi-Fi connectivity and Wi-Fi-based locationing (SSID sniffing of local Wi-Fi hubs) when used alongside Nordic’s existing products. These include the nRF52® and nRF53® Series Bluetooth Systems-on-Chip (SoCs), and Nordic’s nRF91® Series cellular IoT Systems-in-Package (SiPs). The nRF7002 can also be used in conjunction with non-Nordic host devices.
“This is a dream come true for Nordic and its customers.” said CTO, Svein-Egil Nielsen, “We were able to bring our first Wi-Fi chip to market very quickly as a result of acquiring an extremely capable Wi-Fi team alongside a portfolio of Wi-Fi assets that team had already developed.”
| true | true | true |
nRF7002 - Low-power, advanced security, seamless coexistence
|
2024-10-12 00:00:00
|
2022-08-01 00:00:00
| null |
nordicsemi.com
|
nordicsemi.com
| null | null |
|
19,061,034 |
https://www.joyfulbikeshedding.com/blog/2019-01-31-full-system-dynamic-tracing-on-linux-using-ebpf-and-bpftrace.html
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
13,704,957 |
https://www.bloomberg.com/news/articles/2017-02-22/humans-don-t-want-robots-to-help-them-shop
|
Bloomberg
| null |
To continue, please click the box below to let us know you're not a robot.
Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy.
For inquiries related to this message please contact our support team and provide the reference ID below.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
39,264,790 |
https://www.ft.com/content/2dff823d-6821-4519-8ca2-05bda288c574
|
Can Elon Musk derail Delaware?
| null |
Can Elon Musk derail Delaware?
was $468 now $279 for your first year, equivalent to $23.25 per month. Make up your own mind. Build robust opinions with the FT’s trusted journalism. Take this offer before 24 October.
Then $75 per month. Complete digital access to quality FT journalism. Cancel anytime during your trial.
Complete digital access to quality FT journalism with expert analysis from industry leaders. Pay a year upfront and save 20%.
FT newspaper delivered Monday-Saturday, plus FT Digital Edition delivered to your device Monday-Saturday.
Terms & Conditions apply
See why over a million readers pay to read the Financial Times.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null |
website
| null |
Financial Times
| null | null |
38,846,667 |
https://podcasts.apple.com/us/podcast/things-web-devs-can-learn-from-game-devs-with-casey-muratori/id1602572955?i=1000637228476
|
Things Web Devs Can Learn from Game Devs with Casey Muratori
| null |
Richard talks with Casey Muratori, a game engine programmer who's known for creating the term Immediate Mode GUIs, for his Twitch series Handmade Hero, and most recently for his excellent Performance Aware Programming course. They talk about performance and the programming culture around it, how memory safety relates to program architecture, what Web development can learn from game development, and even some concrete improvements that could be made to, you guessed it...CSS!
Hosted on Acast. See acast.com/privacy for more information.
## Information
- Show: Software Unscripted
- Frequency: Updated Weekly
- Published: December 1, 2023 at 9:38 PM UTC
- Length: 2h 7m
- Season: 1
- Episode: 78
- Rating: Clean
| true | true | true |
Podcast Episode · Software Unscripted · 12/01/2023 · 2h 7m
|
2024-10-12 00:00:00
|
2023-12-01 00:00:00
|
website
|
apple.com
|
Apple Podcasts
| null | null |
|
17,921,504 |
https://towardsdatascience.com/assembling-an-entry-level-high-frequency-trading-hft-system-e7538545b2a9
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,441,070 |
https://backchannel.com/how-to-build-a-hard-tech-startup-4028d22f2c91#.rnigssam5
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,058,761 |
https://ondras.github.io/dragons/
|
Dragons
| null |
# Dragons
- This is a web-based realization of the Twenty generations of dragons project by Robert Fathauer.
- The pattern is a variation of the Dragon curve.
- You will need a modern browser that supports
*ES6 modules*. If you see no visuals, please update your browser.
- Change various appearance parameters using controls below.
- The final image is always scaled to fit the window. Right-click it to save as a full-size image.
- © 2018 Ondřej Žára, GitHub
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null | null | null | null | null | null |
28,903,419 |
https://github.com/SimonBrazell/privacy-redirect
|
GitHub - SimonBrazell/privacy-redirect: A simple web extension that redirects Twitter, YouTube, Instagram & Google Maps requests to privacy friendly alternatives.
|
SimonBrazell
|
**FIRO**`aEyKPU7mwWBYRFGoLiUGeQQybyzD8jzsS8`
**BTC:**`3JZWooswwmmqQKw5iW6AYFfK5gcWTrvueE`
**ETH:**`0x90049dc59365dF683451319Aa4632aC61193dFA7`
A web extension that redirects *Twitter, YouTube, Instagram, Google Maps, Reddit, Google Search, & Google Translate* requests to privacy friendly alternative frontends for those sites - Nitter, Invidious, FreeTube, Bibliogram, OpenStreetMap, SimplyTranslate & Private Search Engines like DuckDuckGo and Startpage.
It's possible to toggle all redirects on and off. The extension will default to using random instances if none are selected. If these instances are not working, you can try and set a custom instance from the list below.
Privacy Redirect allows setting custom instances, instances can be found here:
- Nitter instances
- Invidious instances
- Bibliogram instances
- SimplyTranslate instances
- OpenStreetMap tile servers
- Reddit alternatives:
- Libreddit
- Teddit
- Snew
- Old Reddit & Mobile Reddit, purported to be more privacy respecting than the new UI.
- Google Search alternatives:
- Node.js >=10.0.0 installed
`npm install`
`npm run build`
`open web-ext-artifacts/`
`npm run test`
Please note, access to all website navigation events ( all URLs), not just the target domains, is required to allow embedded video redirects to occur. At this time I know of no other way to achieve iframe redirects, happy to hear some suggestions on this though 🙂
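To make the mechanism concrete, the core of any such redirect is a hostname-to-frontend mapping plus a URL rewrite. The sketch below is purely illustrative Python (the extension itself is written in JavaScript against the WebExtensions API), and the instance hostnames are placeholders rather than the extension's defaults:

```python
from urllib.parse import urlparse, urlunparse

# Illustrative mapping from tracked hostnames to privacy-friendly frontends.
# Instance hostnames are placeholders; the extension picks random or
# user-configured instances at runtime.
FRONTENDS = {
    "twitter.com": "nitter.example",
    "www.youtube.com": "invidious.example",
    "www.instagram.com": "bibliogram.example",
}

def rewrite(url: str) -> str:
    """Return the privacy-frontend equivalent of url, or url unchanged."""
    parts = urlparse(url)
    target = FRONTENDS.get(parts.netloc)
    if target is None:
        return url
    return urlunparse(parts._replace(netloc=target))

print(rewrite("https://twitter.com/user/status/123"))
# https://nitter.example/user/status/123
```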
See the Project Wiki.
| true | true | true |
A simple web extension that redirects Twitter, YouTube, Instagram & Google Maps requests to privacy friendly alternatives. - SimonBrazell/privacy-redirect
|
2024-10-12 00:00:00
|
2019-09-20 00:00:00
|
https://opengraph.githubassets.com/d392221d739ad94b31f6c8bf711942f4b6f8a9be09e710e0897ec24ecdfafda6/SimonBrazell/privacy-redirect
|
object
|
github.com
|
GitHub
| null | null |
9,521,080 |
https://marc-stevens.nl/research/sha1freestart/
|
Marc Stevens - Research
| null |
Return to Marc Stevens' research page
Our rump session presentation can be found here.
We have found an example colliding message pair (do your own verification using this software):
Unrolled Internal State Words Q-4,...,Q76: -4 11110001010000100001111000100011 -3 11101001010000001010001101010110 -2 000011111010011010011110001111+0 -1 0100000110111000001110110101110+ 0 10000001101111110010001100000110 1 100100011001100111100000000+0110 2 00111-1101101101111100+110111-11 3 10000-01011110010+000011-1001-00 4 0-0100111001+101100-1111-10010+1 5 1+001010001110-0101-000111+0+110 6 000-1-1-1--++++001+0-1--1110100- 7 1+0-1+-0-+10+-00++-+11100+010111 8 +-1100+-------------1-++110-11+0 9 --+001101011000010+-001100-11101 10 -1001011100011101100100111011001 11 111-1111100111001001101000011100 12 +0+10101011101101100111101011011 13 01+01011111111010001011000-10000 14 00+00001101101110001101001000101 15 1-110100100001101111110111011011 16 +0010001010001100111101000011110 17 +1-11001101101100001000010111100 18 +0001011010100101001111010000100 19 1-011001100001101010111101000001 20 -0010001100110011000110111011110 21 -1-10010100100001000000011101110 22 -1000001101101000111110101011101 23 -0+10101011010110111000111101010 24 00111101110100010111001101111101 25 11-00111011000110011001100000011 26 -0000100110110110101010100101001 27 0--10011111000100100010101011001 28 11011110100011000011001011100001 29 00010001011001010111011110111010 30 -0010000010001110100000000111000 31 -0011001111000101110101110011101 32 01010000010001111100100011000100 33 11111011101100101110011101100000 34 01010110110001101001100011001101 35 10101000100100010011010010110011 36 00000001100001110010100111111111 37 -0000111011010010010110111000011 38 01101100101010001001000111110100 39 +0110110010010000101101000000111 40 11101011011111010111111100101001 41 -1000111100011110100000101001010 42 00000110000100100010000111011011 43 1+100100000010110000110101110100 44 10011010110000111011001111111101 45 10010100001011110100100001111110 46 +1011111110011011000111001011010 47 01010011101111011100010111101101 48 00111101011100001010111101010111 49 10101111110000001100100011101010 50 00101010111111010010111110001101 51 +1010000001010000101111000000100 52 01111100100001110000101100011010 53 11101010100110111011010111101110 54 00011101001010111100101101110101 55 00001110000000101010000111110101 56 01111100110101111010001111001110 57 -1000000110100011011110011100011 58 00111000101000101001011111011010 59 -0010110110101001001001111011101 60 01110110001111110000100100010010 61 10101100011111010000001101100010 62 11001111110101111110110100110000 63 10001001111101110110000011100110 64 10110101110101101011110001101111 65 00111011101100110101110011101011 66 11010011001111101001011111110111 67 10110011010001000001100111001000 68 01001010111111110010110100001010 69 11000010111001011010111100101011 70 01110111111011010101000100001011 71 0011100100111110110101000110101+ 72 01101010001010000110111001110011 73 10001010010111110010101101111000 74 100000101101001001111011101001-1 75 0001000011001001111110011010011- 76 00101101100010100011101000001010 Output Chaining Value: 0 10101111010010010101110100010000 1 01010010100000100011010100000011 2 11100100100111100100011001111000 3 11011100111001111111001110110011 4 11010110110110101010001100100100
`marc`
AT `marc-stevens.nl`
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
3,821,645 |
http://lycos.com/
|
Lycos.com
| null |
Lycos.com
| true | true | true |
Lycos, Inc., is a web search engine and web portal established in 1994, spun out of Carnegie Mellon University. Lycos also encompasses a network of email, webhosting, social networking, and entertainment websites.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null | null | null | null | null | null |
16,941,293 |
https://medium.com/@sbfcant/lo-and-behold-internet-anonymity-dehumanization-1e0f7eeb3532
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,192,063 |
https://www.autoevolution.com/news/nhtsa-complain-allegedly-reports-first-fsd-beta-crash-173973.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,889,247 |
https://www.popularmechanics.com/science/energy/a34096117/nasa-nuclear-lattice-confiment-fusion/
|
NASA Found Another Way Into Nuclear Fusion
|
Caroline Delbert
|
- NASA has made tiny, but promising steps toward lattice confinement nuclear fusion.
- Magnetic fusion requires massive heat and is still not sustainable for energy use.
- Deuterium is crammed into all the empty spaces in an existing metal structure.
NASA has unlocked __nuclear fusion on a tiny scale__, with a phenomenon called lattice confinement fusion that takes place in the narrow channels between atoms. In the reaction, the common nuclear fuel deuterium gets trapped in the “empty” atomic space in a solid metal. What results is a Goldilocks effect that’s neither supercooled nor superheated, but where atoms reach fusion-level energy.
“Lattice confinement” may sound complex, but it's just a mechanism—by comparison, tokamaks like ITER and stellarators use “magnetic confinement.” These are the ways scientists plan to condense and then corral the fantastical amount of energy from the fusion reaction.
In a traditional magnetic fusion reaction, extraordinary heat is used to combat atoms’ natural reaction forces and keep them confined in a plasma together. And in another method called “inertial confinement,” NASA explains, “fuel is compressed to extremely high levels but for only a short, nano-second period of time, when fusion can occur.”
By contrast, the lattice is neither cold nor hot:
“In the new method, conditions sufficient for fusion are created in the confines of the metal lattice that is held at ambient temperature. While the metal lattice, loaded with deuterium fuel, may initially appear to be at room temperature, the new method creates an energetic environment inside the lattice where individual atoms achieve equivalent fusion-level kinetic energies.”
The fuel is also far more dense, because that's how the reaction is triggered. "A metal such as erbium is 'deuterated' or loaded with deuterium atoms, 'deuterons,' packing the fuel a billion times denser than in magnetic confinement (tokamak) fusion reactors. In the new method, a neutron source 'heats' or accelerates deuterons sufficiently such that when colliding with a neighboring deuteron it causes D-D fusion reactions."
With atoms packed so densely *within* the atomic lattice of another element, the required energy to induce fusion goes way, way down. It’s aided by the lattice itself, which works to filter which particles get through and pushes the right kinds even closer together. But there’s a huge gulf between individual atoms at energy rates *resembling* fusion versus a real, commercial-scale application of nuclear fusion.
But, NASA says, this is an important first step and one that offers an alternative to the spectacular scale of major tokamak and stellarator projects around the world. Even the smallest magnetic confinement fusion reactors require sun-hot fusion temperatures that have continued to create logistical problems. There will always be use cases where that isn’t feasible to install or maintain, even after scientists finally make it work on a practical scale.
Scientists are doing cutting-edge work on all these kinds of reactors, but a way that didn’t require heating to and maintaining millions of degrees could be a lot simpler. At the very least, it could be suited to applications where a magnetic fusion reactor isn’t feasible. Before then, scientists will need to find a way to increase the rate of atomic reactions manyfold, and they say they have several ideas for how to try to do that.
Caroline Delbert is a writer, avid reader, and contributing editor at Pop Mech. She's also an enthusiast of just about everything. Her favorite topics include nuclear energy, cosmology, math of everyday things, and the philosophy of it all.
| true | true | true |
Magic happens in the Goldilocks Zone.
|
2024-10-12 00:00:00
|
2020-09-21 00:00:00
|
article
|
popularmechanics.com
|
Popular Mechanics
| null | null |
|
33,854,285 |
https://wondery.com/shows/how-i-built-this/
|
How I Built This with Guy Raz
| null | null | true | true | false |
Guy Raz dives into the stories behind some of the world's best known companies. How I Built This weaves a narrative journey about innovators, entrepreneurs and idealists—and the movements they built. Order the How I Built This book at https://www.guyraz.com
|
2024-10-12 00:00:00
|
2023-11-09 00:00:00
|
article
|
wondery.com
|
Wondery | Premium Podcasts
| null | null |
|
1,463,833 |
http://flashmobile.scottjanousek.com/2010/06/25/html5rocks-com-i-dont-see-it-but-im-hopeful-that-will-change-in-a-few-years-maybe/comment-page-1/#comment-136314
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
35,989,462 |
http://databasearchitects.blogspot.com/2023/04/the-great-cpu-stagnation.html
|
The Great CPU Stagnation
|
Viktor Leis
|
For at least five decades, Moore's law consistently delivered increasing numbers of transistors. Equally significant, Dennard scaling led to each transistor using less energy, enabling higher clock frequencies. This was great, as higher clock frequencies enhanced existing software performance automatically, without necessitating any code rewrite. However, around 2005, Dennard scaling began to falter, and clock frequencies have largely plateaued since then.
Despite this, Moore's law continued to advance, with the additional available
transistors being channeled into creating more cores per chip. The following graph displays the number of cores for the largest available x86 CPU at the time:
Notice the logarithmic scale: this represents the exponential trend we had become accustomed to, with core counts doubling roughly every three years. Regrettably, when considering cost per core, this impressive trend appears to have stalled, ushering in an era of CPU stagnation.
To demonstrate this stagnation, I gathered data from wikichip.org on AMD's Epyc single-socket CPU lineup, introduced in 2017 and now in its fourth generation (Naples, Rome, Milan, Genoa):
| Model | Gen | Launch | Cores | GHz | IPC | Price |
|-------|-----|--------|-------|-----|-----|-------|
| 7351P | Naples | 06/2017 | 16 | 2.4 | 1.00 | $750 |
| 7401P | Naples | 06/2017 | 24 | 2.0 | 1.00 | $1,075 |
| 7551P | Naples | 06/2017 | 32 | 2.0 | 1.00 | $2,100 |
| 7302P | Rome | 08/2019 | 16 | 3.0 | 1.15 | $825 |
| 7402P | Rome | 08/2019 | 24 | 2.8 | 1.15 | $1,250 |
| 7502P | Rome | 08/2019 | 32 | 2.5 | 1.15 | $2,300 |
| 7702P | Rome | 08/2019 | 64 | 2.0 | 1.15 | $4,425 |
| 7313P | Milan | 03/2021 | 16 | 3.0 | 1.37 | $913 |
| 7443P | Milan | 03/2021 | 24 | 2.9 | 1.37 | $1,337 |
| 7543P | Milan | 03/2021 | 32 | 2.8 | 1.37 | $2,730 |
| 7713P | Milan | 03/2021 | 64 | 2.0 | 1.37 | $5,010 |
| 9354P | Genoa | 11/2022 | 32 | 3.3 | 1.57 | $2,730 |
| 9454P | Genoa | 11/2022 | 48 | 2.8 | 1.57 | $4,598 |
| 9554P | Genoa | 11/2022 | 64 | 3.1 | 1.57 | $7,104 |
| 9654P | Genoa | 11/2022 | 96 | 2.4 | 1.57 | $10,625 |
Over these past six years, AMD has emerged as the x86 performance per dollar leader. Examining these numbers should provide insight into the state of server CPUs. Let's first observe CPU cores per dollar:
This deviates significantly from the expected exponential improvement graphs. In fact, CPU cores are becoming slightly more expensive over time! Admittedly, newer cores outperform their predecessors. When accounting for both clock frequency and higher IPC, we obtain the following image:
This isn't much better. The performance improvement over a 6-year period is underwhelming when normalized for cost. Similar results can also be observed for Intel CPUs in EC2.
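To make the normalization explicit, here is a minimal Python sketch (my own illustration, not the author's script) that recomputes both metrics for the flagship part of each generation from the table above, under the assumption that relative per-core performance scales with clock times IPC:

```python
# Recompute cores-per-dollar and performance-per-dollar from the table above.
# Assumption: relative performance of a CPU ~ cores * GHz * IPC
# (IPC is given relative to Naples).
cpus = [
    # (model, generation, cores, ghz, ipc, price_usd)
    ("7551P", "Naples", 32, 2.0, 1.00, 2100),
    ("7702P", "Rome",   64, 2.0, 1.15, 4425),
    ("7713P", "Milan",  64, 2.0, 1.37, 5010),
    ("9654P", "Genoa",  96, 2.4, 1.57, 10625),
]

for model, gen, cores, ghz, ipc, price in cpus:
    cores_per_kusd = cores / price * 1000
    perf_per_kusd = cores * ghz * ipc / price * 1000
    print(f"{model} ({gen}): {cores_per_kusd:.1f} cores/k$, "
          f"{perf_per_kusd:.1f} perf/k$")
```

For these flagship parts, cores per thousand dollars drifts downward across the four generations while performance per thousand dollars stays roughly flat, which is exactly the stagnation the graphs illustrate.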
Lastly, let's examine transistor counts, only taking into account the logic transistors. Despite improved production nodes from 14nm (Naples) over 7nm (Rome/Milan) to 5nm (Genoa), cost-adjusted figures reveal stagnation:
In conclusion, the results are disheartening. Rapid and exponential improvements in CPU speed seem to be relics of the past. We now find ourselves in a markedly different landscape compared to the historical norm in computing. The implications could be far-reaching. For example, most software is extremely inefficient when compared to what hardware can theoretically achieve, and maybe this needs to change. Furthermore, specialized chips have historically enjoyed only limited success due to the rapid advancement of commodity CPUs. Perhaps custom chips will have a much bigger role in the future.
P.S. Due to popular demand, here's how the last graph looks like after adjusting for inflation:
Comparing core performance per price of CPU is interesting; however, we need to compare it also to OpEx, how much energy we will consume for the same cores. So cores become more efficient, and overall price per core (not only CapEx, like overall CPU price, but OpEx as watts / cores) became lower.

So, it isn't stagnation. Maybe it is just one more interesting law for a decade ;-) And we should focus on performance of a single instance rather than cluster setups.

The "efficiency" link from Carmack is about organizational efficiency, not compute efficiency, so I don't think it really helps your argument. Other than that, excellent article, and I completely agree.

I also think there is something to be said about Intel and AMD's 4S vs 2S vs 1S prices, and the ARM ecosystem as a competitive threat...

First, Moore dies, and now this happens :(

If you have not, you should probably adjust for inflation. While its effects have been most pronounced in the last two years, over longer time periods it can have a large impact.

Very nice conclusion. Thank you for this enjoyable read!

Add in the Ampere CPUs. 192 cores now!

Is the cost adjusted for inflation over years in the charts above?
| true | true | true |
A blog by and for database architects.
|
2024-10-12 00:00:00
|
2023-04-09 00:00:00
|
https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_swKf6SY39uqbTKqR-25HgWJrsycTsdyxp50ONTUA8WeicNAbsZQj7DSPFS6e8LRJUAQ1zwyrfsj269ciqI_xLrxrQxpQTW6PEb_6Z1lTRLPQXlNg=w1200-h630-p-k-no-nu
| null |
blogspot.com
|
databasearchitects.blogspot.com
| null | null |
15,860,768 |
https://github.com/myliang/fish-ui
|
GitHub - myliang/fish-ui: A Vue.js 2.0 UI Toolkit for Web
|
Myliang
|
A Vue.js 2.0 UI Toolkit for Web.
```
npm install less less-loader -S
npm install fish-ui -S
```
```
<link rel="stylesheet" href="https://cdn.bootcss.com/font-awesome/4.7.0/css/font-awesome.css"/>
<link rel="stylesheet" href="https://fonts.proxy.ustclug.org/css?family=Lato:400,700,400italic,700italic&subset=latin"/>
```
```
import Vue from 'vue'
import FishUI from 'fish-ui'
Vue.use(FishUI)
```
```
import 'fish-ui/styles/button.less'
import Button from 'fish-ui/src/components/Button.vue'
Vue.component(Button.name, Button)
```
And if you start with vue-webpack-boilerplate by vue-cli
https://myliang.github.io/fish-ui/
- Equip with Vue.js, Moment, Vue-Router, ES6 & Babel 6
- Cool with Webpack 2.0 & Vue Loader
- Semantic CSS Components
- Stylesheets in Less
- BackTop
- Button
- Buttons
- Calendar
- Card
- Carousel
- CarouselItem
- Cascader
- Checkbox
- Checkboxes
- Col
- DatePicker
- Dropdown
- Field
- Fields
- Form
- Input
- InputNumber
- Layout
- Menu
- Message
- Modal
- Option
- Pagination
- Radio
- Radios
- Row
- Select
- Steps
- Step
- Submenu
- Table
- TabPane
- Tabs
- Tag
- Tags
- TimePicker
- Upload
- Tree
- Tree Select
- Transfer
- Divider
- Image
- Timeline
Modern browsers and Internet Explorer 9+(no test).
MIT
| true | true | true |
A Vue.js 2.0 UI Toolkit for Web. Contribute to myliang/fish-ui development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2017-09-30 00:00:00
|
https://opengraph.githubassets.com/98d5ee1f342a5dc9f3ee21c7b9c5be673f103ee2274797d0ced34a7d6e3fbd18/myliang/fish-ui
|
object
|
github.com
|
GitHub
| null | null |
17,531,642 |
https://www.scmp.com/magazines/style/tech-design/article/2155079/here-are-5-best-laptops-travellers-and-apple-not-list
|
Sorry Apple: here are the 5 best laptops for travellers
|
Bloomberg
|
# Here are the 5 best laptops for travellers – and Apple is not on the list
The Google Pixelbook, the Microsoft Surface Laptop, Dell XPS 13, Lenovo ThinkPad Carbon X1 and Huawei Matebook X Pro are convenient, slim and powerful
Is the world’s best travel laptop dead? Ten years after Steve Jobs introduced the MacBook Air to the world, the laptop is on Apple’s back burner – and some fear that it is being phased out entirely.
Rather than redesigning and upgrading the hardware like all of the tech giant’s other marquee products, Apple has left the Air to collect dust, and now the MacBook and MacBook Pro are taking the spotlight. While they are more powerful, they are not as convenient for frequent travellers.
On the surface, the MacBook and MacBook Pro measure up to the Air. They are comparable in size and weight, though they lack the superskinny, sloping gradient design that makes the Air so easy to slide in and out of carry-ons. They also lack the Air's "chiclet" keyboard, with its silent and spacious keys. The replacement "butterfly" design has been so prone to malfunction and sticky keys that Apple overhauled its warranty coverage for certain MacBooks. That does not take into account these models' smaller screens, shorter battery life, higher prices and designs that have barely changed in more than a decade.
Add it all up, and it is no surprise that Mac-loyal warriors around the world are being seduced by lighter, sleeker, sexier, and more powerful laptops – ones that run Windows and Chrome OS.
These five MacBook replacements are guaranteed to meet your work and play needs, whether you are bored in a business-class suite, dashing off PowerPoint slides in a hotel room, or banking on the Shinkansen. Based on a road test that took us from New York to Los Angeles and Tokyo to Paris, these were the best of roughly a dozen new options – standing out for their excellent portability, keyboard comfort, battery life and computing power.
**If efficiency Is your middle name …**
… get the Google Pixelbook.
Why we like it: The supersexy, two-toned body, which features Gorilla Glass and brushed metal, will not smudge no matter how many times you have to unpack and repack it at the airport. And at just 2.4 pounds (about 1 kilogram), you will not feel the Pixelbook in your carry-on. There is top-of-the-line hardware inside this laptop, including quad-core i7 processors and 16 GB of RAM, making for ultra-fast loading speeds and easy multitasking. In just 15 minutes, you can add two hours of juice to the nine-hour battery thanks to the Pixelbook’s quick charger. Dash off emails in laptop mode, use automatic tethering to your Pixel phone to work online sans Wi-fi, or flip around the screen to watch movies in tablet mode. Besides the free security software, Google gives you 1 TB of complimentary online storage.
| true | true | true |
The Google Pixelbook, the Microsoft Surface Laptop, Dell XPS 13, Lenovo ThinkPad Carbon X1 and Huawei Matebook X Pro are convenient, slim and powerful
|
2024-10-12 00:00:00
|
2018-07-14 00:00:00
|
article
|
scmp.com
|
South China Morning Post
| null | null |
|
15,129,532 |
https://techcrunch.com/2017/08/29/herb-seed-funding/
|
Cannabis website Herb raises $4.1M | TechCrunch
|
Anthony Ha
|
We get a lot of weed-related pitches at TechCrunch, but most of them don’t come with the pedigree of Herb‘s investors.
Herb is announcing today that it has raised $4.1 million in seed funding led by Lerer Hippeau Ventures, with participation from Slow Ventures, Buddy Media co-founder Michael Lazerow, Bullpen Capital, Shiva Rajarama, Liquid 2 Ventures (the firm led by football legend Joe Montana), Shopify CEO Tobi Lutke, Shopify COO Harley Finkelstein and Adam Zeplain.
“During our research into the cannabis industry, it became clear to both myself and our team at Liquid 2 Ventures that HERB was the most professionally run business for relevant, informative, cannabis content,” Montana said in the funding announcement.
Herb’s articles and videos cover the latest cannabis-related news, with plenty of how-to and educational content. The site started as something called The Stoner’s Cookbook before it was acquired and rebranded by Gray in 2015. Since then, the company has grown to 200 million video views per month, reaching 5.3 million unique viewers, according to Tubular Labs.
And while Herb currently looks like a digital media business, Gray said, “We don’t see ourselves as just a website. We were always setting out to build a technology platform.”
Eventually, he wants Herb to become a site that you can visit for “everything cannabis-related,” including buying weed from local businesses and getting it delivered to your home in just a few minutes.
Gray compared the company to Uber and Airbnb, both in the sense that they’re an intermediary between consumers and service providers, and because they’ve had to fight some big legal battles along the way. He said Herb will respect local laws around cannabis sales, but at the same time, “I think these laws are changing — it’s about time. And as they change, Herb wants to be right there.”
To be clear, Herb is still a ways off from launching a marketplace business, but Gray said the site is adding new features that bring it closer to that goal, like creating detailed profiles of local dispensaries.
“There’s a very real stigma that exists around cannabis today and our viewpoint on things here at Herb is that yesterday’s social stigmas become tomorrow’s social norms,” he added. “We’re trying to present the best face that we can for this industry and bring cannabis into the mainstream.”
| true | true | true |
We get a lot of weed-related pitches at TechCrunch, but most of them don't come with the pedigree of Herb's investors. Herb is announcing today that it
|
2024-10-12 00:00:00
|
2017-08-29 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
31,990,009 |
https://startupsthisishowdesignworks.com
|
Startups, This Is How Design Works
|
Wells Riley
|
Companies like Apple are making design impossible for startups to ignore. Startups like GitHub, Airbnb, Square, and Fitbit have design at the core of their business, and they're doing phenomenal work. But what is ‘design’ actually? Is it a logo? A Wordpress theme? An innovative UI?
It’s so much more than that. It’s a state of mind. It’s an approach to a problem. It’s how you’re going to kick your competitor’s ass. This handy guide will help you understand design and provide resources to help you find awesome design talent.
The simplest definition. Design is so many things, executed in many different ways, but the function is always the same. Whether it’s blueprints, a clever UI, a brochure, or a chair – design can help solve a visual or physical problem. 1
This definition is not so simple. The best designs are notorious for seeming not designed at all – or ‘undesigned’.
It’s easier if we break things down a bit. If you know what to look for, it’s easier to identify good design when you see it; or perhaps **when you can’t see it at all.**
**Dieter Rams** is a German industrial designer closely associated with the consumer products company Braun and functionalist industrial design.
According to Vitsœ: Back in the early 1980s, aware that his design was a significant contributor to the world, he asked himself an important question:
*"Is my design good design?"*
Since good design can't be measured in a finite way, he set about expressing the ten most important principles for what he considered was good design. (Sometimes they are referred as the ‘Ten commandments’.) Here they are. 3
“We designers, we don’t work in a vacuum. We need business people. We are not the fine artists we are often confused with. Today you find few companies that take design seriously, as I see it.”
— Dieter Rams
Good design can’t be achieved with glossy buttons or masterful wireframes alone. It’s a merger of all these principles into something that is meaningful and deliberate.
Just like a great business plan is nothing without expert execution, a great Photoshop mockup is nothing, for example, without careful consideration to UI or the user’s needs.
A documentary film that provides a look at the creativity behind everything from toothbrushes to tech gadgets. Watch the complete film here.
Take a look at your current product – is design contributing in an innovative way? Does it make the product useful, understandable, and aesthetic? Is it long-lasting, or will it look outdated or break in a few years?
These are really hard questions to answer. Designers enable you to work within these constraints to create a product customers will fall in love with. *Love is a really strong emotion*.
Dieter Rams and his contemporaries started a movement in 20th Century towards simple and beautiful products. Design was a strongly valued aspect of business, even 60 years ago. It totally has a place in business today – it’s a proven method.
This is a term that describes an array of different kinds of designers. Think of it like the term “entrepreneur”. It describes a wide variety of businesspeople - from founders to VC's to “Chief Ninjas” - but isn’t all-inclusive. Graphic designers work with graphical images, whether they be illustrations, typography, or images, and on a variety of media including print and web. Graphic design is typically rendered in 2D – printed on a physical surface or displayed on a screen.
A type of graphic designer that works exclusively with print media. Before the widespread adoption of computers, software, and the web, virtually all graphic designers worked on print media such as posters, magazines, billboards, and books. Print designers are typically masters of typography, illustration, and traditional printing processes like the Linotype machine or the letterpress machine, a 500-year-old printing method that has regained popularity in recent years for its handmade and traditional feel.
Interaction designers, on the other hand, focus on digital products and interactive software design. Some examples include web apps like Facebook and Pinterest, mobile apps like Tweetbot, and operating systems like OS X. While graphic design is meant to be observed, interaction design helps humans experience or manipulate software or interface with screen-based hardware in order to *achieve specific goals* – checking email, withdrawing money from an ATM, or "Liking" a webpage (such as this one!)
*"Interaction design is heavily focused on satisfying the needs and desires of the people who will use the product."*
User Interface (UI) design is the design of software or websites with the focus on the user's experience and interaction. *The goal of user interface design is to make the user's interaction as simple and efficient as possible*. Good user interface design puts emphasis on goals and completing tasks, and good UI design never draws more attention to itself than enforcing user goals.
"The design process must balance technical functionality and visual elements to create a system that is not only operational but also *usable and adaptable to changing user needs*." 7
User Experience (UX) design "incorporates aspects of *psychology, anthropology, sociology, computer science, graphic design, industrial design and cognitive science*. Depending on the purpose of the product, UX may also involve content design disciplines such as communication design, instructional design, or game design." 8
The goal of UX design is to create a seamless, simple, and useful interaction between a user and a product, whether it be hardware or software. As with UI design, user experience design focuses on creating interactions *designed to meet or assist a user's goals and needs*.
Industrial designers create physical products designated for mass-consumption by *millions of people.* Motorcycles, iPods, toothbrushes, and nightstands are all designed by industrial designers. These people are masters of physical goods and innovation within the constraints of production lines and machines.
"The objective is to study both function and form, and the connection between product, the user, and the environment." 9
I asked 78 CEOs, marketers, engineers, and designers about their opinions and definitions of design. Before I could come up with anything for this project, I had to check my assumptions at the door and get some legit data. *It seems that entrepreneurs / engineers and designers are thinking about the same things.*
Product design includes both digital and physical products. It represents not only the aesthetic qualities, but *what it does, how well a user thinks it's going to do it, and how easily & quickly they can complete a task.*
Think for a moment. How important is product design to *you?* How important do you think aesthetics and ease-of-use are to *your* customers?
Now we're getting somewhere. Great design is taking root in startup culture, and it seems like many people are open to change. Not only do many entrepreneurs, devs, and engineers see substantial room to improve their own products, *they overwhelmingly believe that designers belong on a founding team.*
For a long time, a pair of co-founders consisted of an executive and an engineer. **It worked for Facebook, Microsoft, and Apple**, just to name a few. These companies have excellent designers today, *because it’s a necessity they can’t afford to ignore*. It seems like design is becoming more and more prevalent in new startups as well – Square, Fitbit, Tapbots, and more are pushing the envelope.
*Design is becoming a key differentiator* for companies to acquire funding, press coverage, and loyal users.
According to The Designer Fund, startups with designer founders are generating *billions of dollars in growth*. 16 Below are profiles on five of the most influential designer founders and their incredibly hot startups.
Joe defines the Airbnb experience. He is dedicated to creating an inspiring and effortless user experience through sharp, intuitive design, and crafts the product roadmap to make it so. Joe values products that simplify life and have a positive impact on the environment, and ensures that the company adheres to these tenets.
Prior to Airbnb, Joe was employed by Chronicle Books, co-founded a green design website, and developed several consumer products. An alumnus of the Rhode Island School of Design, Joe earned dual degrees in Graphic Design and Industrial Design. 11
Alexa Andrzejewski is the Founder and CEO of Foodspotting, a website and mobile app for finding and recommending dishes, not just restaurants. As the UX designer behind Foodspotting, Alexa sees herself as the "chief storyteller," responsible for capturing the imagination of her team, partners and investors through metaphors, mantras, user stories, sketches and detailed designs. Foodspotting has received attention from The Today Show, The Cooking Channel, Travel + Leisure, iTunes and Google Play (repeat "App of the Week"), as well as Mark Zuckerberg in his 2011 f8 keynote. Alexa has been profiled in Financial Times Magazine, Inc Magazine’s "30 Under 30" and Gourmet Live's "50 Women Game-Changers." 10
Jessica Hische is a letterer and illustrator best known for her personal projects Daily Drop Cap and the Should I Work for Free? flowchart as well as her work for clients like Wes Anderson, Penguin Books, and Google. She’s been named one of Print Magazine’s New Visual Artists, an ADC Young Gun, and one of Forbes 30 under 30 in Art and Design two years in a row. She is currently serving on the Type Directors Club board of directors, has traveled the world speaking about lettering and illustration, and has probably consumed enough coffee to power a small nation. 12
Mike is a user interface designer and cofounder of Push Pop Press, a digital publishing company that worked with Al Gore to create the first full-length interactive book Our Choice. Recently Push Pop Press was acquired by Facebook where he is now working, giving people better tools to explore and share ideas.
Prior to starting Push Pop Press he worked at Apple where he designed user interfaces and artwork for the iPhone, the iPad, and Mac OS X. Before that he cofounded Delicious Monster, a software company that created Delicious Library. 13
Jeffrey Veen is a founder and the CEO of Small Batch, Inc. where he’s leading a team of developers and creating user-centered web products. Their current effort is Typekit — a widely praised subscription font service that is bringing real typography to the Web for the first time.
Jeffrey was also one of the founding partners of Adaptive Path and project lead for Measure Map, the well-received web analytics tool acquired by Google in 2006, where he managed the user experience group responsible for some of the largest web apps in the world. 14
It’s getting harder and harder to differentiate based on tech talent alone. Designers like Jonathan Ive at Apple, Joe Gebbia at Airbnb, and the rockstar design team at Dropbox (just to name a few) are changing the world today – not entirely because Apple, Airbnb, or Dropbox have better tech, but because they make their products more usable, aesthetic, and *human*.
Founders need to share passion, drive, and vision. Find someone who can solve problems and think critically about more than just designing a website. Someone who makes your founding team unstoppable.
The design community is small and nuanced. Many designers aren’t aware of their increasing demand within startups, but that doesn't make them impossible to find.
*Here are a few places where you can find excellent local designers right now.*
Meetups and events are a great way to break into the design culture and mingle with prospective talent face-to-face. I strongly recommend you attend at least one design meetup – it’s really important to have that perspective going into your designer search.
Slack at Work
Dribbble Job Board
Meetup.com – Search for “Design”
Eventbrite.com – Search for “Design”
There’s always something going on.
Think of Zerply as LinkedIn for designers, developers, and entrepreneurs. It’s an exquisitely designed platform that operates on a network connections and recommendations. Members can be “recommended” for excellence in a variety of disciplines and skills.
Zerply allows you to search for designers by location, skills, and talents. The system is free to use.
Dribbble is an exclusive online community of designers from around the world. Signup is by (rare) invite only, which helps cultivate some of the best design talent in the world.
Designers post works-in-progress (wip), completed projects, teasers, and fun work so designers and 'spectators' from anywhere can catch a glimpse of what they're working on.
The site allows you to search for designers by skill, availability, and location. You can also advertise on the dribbble job board to allow some of the world’s best designers to come to you.
Behance: The Platform to Showcase & Discover Creative Work
Behance is a great place for anyone to browse top creative works attributed to the actual designers who created them, not agencies. It lets designers showcase work as their own and on sites like LinkedIn, RISD, Zerply, and AIGA, and their own personal websites – enticing some of the world's best talent to join the network.
The JobList makes it easy to reach over 1,000,000 skilled designers, sorted by field, location, or even specific tools and skills. Tons of startups and big companies (like Apple!) are already using Behance to recruit top designers around the world.
In a smart article about finding designers on TNW, Sacha Greif tells a cautionary tale. "Instead of looking for a unicorn ["a magical designer that can solve all [of a company’s] problems," according to Braden Kowitz], think about hiring a web designer who will focus on design, and a front-end engineer who will focus on code. Like WePay’s Aberman states, “When looking for a designer, you can’t have it all. You need to prioritize visual design, product design, front-end development, etc.”
If your budget doesn’t let you hire both, another option is to hire a horse and let them grow a horn on the job: find a good visual designer who’s also willing to learn front-end coding, or a great front-end engineer who wants to get better at design." 17
This is just a primer on design for startups. There is so much information out there, and so many brilliant minds talking about great design.
Here are a few resources I highly recommend:
I love startups and design, and I want them to be best friends forever. I'm Wells Riley, and I'm the founding designer at Runway, formerly Head of Design at Envoy, and the co-founder of Hack Design – an easy to follow design course for people who do amazing things.
It’s so exciting to see design taking a stronger role in new companies. I hope this will be a valuable resource to help designers and entrepreneurs speak the same language.
If you have any feedback, please feel free to tweet me @wr.
Sources:
| true | true | true |
A guide to understanding digital & physical product design for startups
|
2024-10-12 00:00:00
|
2012-03-31 00:00:00
|
website
|
startupsthisishowdesignworks.com
|
startupsthisishowdesignworks.com
| null | null |
|
8,734,418 |
http://techcrunch.com/2014/12/11/twitter-getting-pushy/
|
Twitter Pushes Its Message-Any-Of-Your-Followers Feature With Annoying Promo Overlay | TechCrunch
|
Natasha Lomas
|
**Update**: Turns out this is a rare case — for the ever-changing, A/B testing Twitter — of ‘it’s a bug not a feature’. TechCrunch understands the overlay should only be appearing for new users, not in the inboxes of multi-year veterans. So although Twitter is promoting its DM any follower feature, it’s only pushing this to newbies. Original story follows below.
Twitter’s push to expand its appeal and bump up usage of existing features often sees the company tweak its layout or shuffle feature furniture around in A/B tests.
The latest bit of tweaking, currently appearing sporadically on a few TechCrunch users’ Twitter accounts, is a promo that appears in the messages window on the Twitter desktop client, temporarily replacing any existing missives in your inbox.
The promo reminds users of a feature the company launched in October last year — enabling the sending of private messages to anyone who is following you, without you having to follow them back. It’s unclear why Twitter is seeing fit to nag users about this feature now. Presumably it’s hoping to encourage more private messaging, or more engagement on Twitter generally.
The promo overlay urges users to “Start a private conversation with anyone who follows you”, and includes a selection of follower accounts directly underneath, so could potentially be used as a space to promote specific accounts — given it’s relatively prime real estate, as it’s the place a Twitter web client user clicks if they want to go read their private messages. But instead of seeing those, they sometimes get this promo instead. Which is, frankly, pretty annoying.
Another unwelcome recent change to Twitter is the far more drastic decision to inject tweets into users’ streams from accounts they don’t follow — thereby eroding the fundamental usefulness of a service powered by human filters by polluting specifically selected signals with algorithmically generated noise.
Why so fickle Twitter? The company is evidently continuing to feel the pressure when it comes to slowing user growth in a fiercely competitive space. Earlier this week Facebook-owned social photo sharing service Instagram, which competes with Twitter in both social messaging stakes and also in photo sharing, surpassed the latter’s monthly active user count, reporting more than 300 million MAUs vs Twitter’s last count of around 284 million.
Onboarding new users and signposting existing features to try to drive more usage — such as with this promo overlay — are clear priorities for Twitter right now. On the former front the company announced another new feature, called Instant timeline, in November to try to make it easier for Twitter noobs to get started.
| true | true | true |
Twitter's push to expand its appeal and bump up usage of existing features often sees the company tweak its layout or shuffle feature furniture around in A/B tests. The latest bit of tweaking, currently appearing sporadically on a few TechCrunch users' Twitter accounts, is a promo that appears in the messages window on the Twitter desktop client, temporarily replacing any existing missives in your inbox.
|
2024-10-12 00:00:00
|
2014-12-11 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
17,256,935 |
https://lifehacker.com/we-trained-an-ai-to-generate-lifehacker-headlines-1826616918
|
How to Train Your Own Neural Network
|
Beth Skwarecki
|
Artificial intelligence (AI) seems poised to run most of the world these days: it’s detecting skin cancer, looking for hate speech on Facebook, and even flagging possible lies in police reports in Spain. But AIs aren’t all run by mega-corporations and governments; you can download some algorithms and play with them yourself, with often hilarious results.
There’s the faux Coachella poster full of fake band names, created by feeding a bunch of real band names into a neural network and asking it to come up with some of its own. There are the recipes created in a similar way, where “barbecue beef” calls for “1 beer - cut into cubes.” And then there’s my favorite, Janelle Shane’s AI-generated paint colors (tag yourself, I’m Dorkwood).
These were all made with neural networks, a type of AI modeled on the network-like nature of our own brains. You train a neural network by giving it input: recipes, for example. The network strengthens some of the connections between its neurons (imitation brain cells) more than others as it learns. The idea is that it’s figuring out the rules of how the input works: which letters tend to follow others, for example. Once the network is trained, you can ask it to generate its own output, or to give it a partial input and ask it to fill in the rest.
But the computer doesn’t actually understand the rules of, say, making recipes. It knows that beer can be an ingredient, and that things can be cut into cubes, but nobody has ever told it that beer is not one of those things. The outputs that look almost right, but misunderstand some fundamental rule, are often the most hilarious.
I was happy to just watch these antics from afar, until Shane mentioned on Twitter that a middle school coding class had generated better ice cream names than she had. And I thought, if *kids* can do this, I can do this.
### How to Train Your First Neural Net
I started with the same toolkit Shane used for ice cream flavors: a python module called textgenrnn, by Max Woolf of Buzzfeed. You’ll need a basic knowledge of the command line to work with it, but it works on any system (Mac, Linux, Windows) where you’ve installed the programming language/interpreter python.
Before you can train your own neural net, you’ll need some input to start with. The middle school class started with a list of thousands of ice cream flavors, for example. Whatever you choose, you’ll want at least a few hundred examples; thousands would be better. Maybe you’d like to download all your tweets, and ask the network to generate you some new tweets. Or check out Wikipedia’s list of lists of lists for ideas.
Whatever you choose, get it into a text file with one item per line. This may take some creative copy-and-paste or spreadsheet work, or if you’re an old hand at coding, you can write some ugly perl scripts to munge the data into submission. I’m an ugly perl script kind of girl, but when I ended up wanting Lifehacker headlines for one of my data sets, I just asked our analytics team for a big list of headlines and they emailed me exactly what I needed. Asking nicely is an underrated coding skill.
(If you’d like to feed Lifehacker headlines into your own neural net, here is that list. It’s about 10,000 of them.)
Create a folder for your new project, and write two scripts. First, one called train.py:
    from textgenrnn import textgenrnn
    t = textgenrnn()
    t.train_from_file('input.txt', num_epochs=5)
This script will get the neural net reading your input and thinking about what its rules must be. The script has a couple things you can modify:
- `t = textgenrnn()` is fine the first time you run the script, but if you'd like to come back to it later, enter the name of the .hdf5 file that magically appeared in the folder when you ran it. In that case, the line should look like this: `t = textgenrnn('textgenrnn_weights.hdf5')`
- `'input.txt'` is the name of your file with one headline/recipe/tweet/etc per line.
- `num_epochs` is how many times you'd like to process the file. The neural network gets better the longer you let it study, so start with 2 or 5 to see how long that takes, and then go up from there.
It takes a while to train the network. If you’re running your scripts on a laptop, one epoch might take 10 or 15 minutes (bigger data sets will take longer). If you have access to a beefy desktop, maybe your or a friend’s gaming computer, things will go faster. If you’ve got a big data set, you may want to ask it for a few dozen or even hundreds of epochs, and let it run overnight.
Next, write another script called spit_out_stuff.py (you’re free to give these better names than I did):
    from textgenrnn import textgenrnn
    t = textgenrnn('textgenrnn_weights.hdf5')
    t.generate(20, temperature=0.5)
This is the fun part! The script above will give you 20 fun new things to look at. The important parts of that last line are:
- The number of things to generate: here, 20.
- The temperature, which is like a creativity dial. At 0.1, you'll get very basic output that's probably even more boring than what you fed in. At 1.0, the output will get so creative that often what comes out isn't even real words. You can go higher than 1.0, if you dare.
When you ran the training script, you’ll have noticed that it shows you sample output at different temperatures, so you can use that to guide how many epochs you run, and what temperature you’d like to use to generate your final output.
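If you want to compare settings side by side, a small variation on the generation script (a sketch that only uses the calls already shown above) can sweep the temperature dial in one run:

```python
from textgenrnn import textgenrnn

# Load the weights file that train.py saved in the project folder.
t = textgenrnn('textgenrnn_weights.hdf5')

# Print a few samples at several creativity settings so you can
# eyeball which temperature produces the funniest output.
for temperature in (0.2, 0.5, 0.8, 1.0):
    print(f"--- temperature {temperature} ---")
    t.generate(5, temperature=temperature)
```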
Not every idea your neural network comes up with will be comedy gold. You’ll have to pick out the best ones yourself. Here are some of the better Lifehacker headlines that my AI came up with:
The Best Way to Make a Baby Laptop
How to Survive a Backspace Drinking Game
The Best Way to Buy a Job Interview
How to Get the Best Bonfire of Your Life With This Handy Graphic
How to Make Your Own Podcast Bar
How to Get a New iPhone X If You’re an Arduino
How to Clean Up Your Own Measurements in a Museum
How to Get Started With Your Stories and Anxiety
The Best Way to Make Your Own Ink Out of the Winter
How to Keep Your Relationship With an Imaginary Concept
The Best Way to Make a Perfect Cup of Wine With a Raspberry Pi
The Best Way to Eat a Toilet Strawberry
How to Get a Better Job on Your Vacation
The Best Way to Eat a Stubborn Jar
I got these by playing with the temperature and the number of training epochs, and every time I saw something I liked I copied it into a text file of my favorites. I also experimented with the word-by-word version of the algorithm; the scripts above use the default character-by-character model. My final list of headlines includes results from both.
If you’re curious about some of the rejects, here’s what I get with a 0.1 temperature:
The Best Way to Stay Streaming to Stop More Alternative to Make Your Phone
The Best Way to Stream the Best Power When You Don’t Need to Know About the World
The Best Way to Stay Started to Stay Started to Your Common Ways to Stop Anyone
How to Get the Best Way to See the Best Popular Posts
The Best Way to Stay Started to Make Your Phone
And if I crank it up to 1.5 (dangerously creative):
Remains of the Day: How to Ad-Finger the Unsubual
Renew Qakeuage to Travel History, Ovenchime, or “Contreiting Passfled
The Risk-Idelecady’t Two-Copyns, Focusing Zoomitas
Ifo Went Vape Texts Battery Oro crediblacy Supremee Buldsweoapotties
DIY Grilling Can Now Edt My Hises Uniti to Spread Your Words
Clearly, human help is needed.
### Become Your AI’s Buddy
Even though neural nets can learn from data sets, they don't truly understand what's going on. That's why some of the best results come from partnerships between people and machines. "I know it is a tool that I use," says Janelle Shane, "but it is hard not to think of it as—'come on little neural network, you can do it' and 'Oh, that was clever' or 'You're getting confused, poor little thing.'"
To make the most of your relationship, you’ll have to guide your AI buddy. Sometimes it might get so good at guessing the rules of your data set that it just recreates the same things you fed it—the AI version of plagiarism. You’ll have to check that its funny output is truly original.
Botnik studios pairs people with machines by training predictive-text keyboards. Imagine if you picked up your friend’s phone, and typed messages by just using the predictive text on their keyboard. You’d end up writing your own message, but in a style that reads like your friend’s. In the same way, you can train a Botnik keyboard with any data source you’d like, and then write with the words supplied by the keyboard. That’s where this amazing advice column duel came from: two Botnik keyboards trained on Savage Love and Dear Abby.
If you’d prefer to work against, rather than with, your algorithmic buddy, check out how Janelle Shane pranked a neural net that at first appeared to be good at recognizing sheep grazing in a meadow. She photoshopped out the sheep, and realized the AI was just looking for white blobs in grass. If she colored the sheep orange, the AI thought they were flowers. So she asked her Twitter followers for sheep in unusual places and found that the AI thinks a sheep in a car must be a dog, goats in a tree must be birds, and a sheep in a kitchen must be a cat.
Serious AIs can have similar problems, and playing with algorithms for fun can help us understand why they’re so error-prone. For example, one early skin-cancer-detecting AI accidentally learned the wrong rules for telling the difference between cancerous and benign skin lesions. When a doctor finds a large lesion, they often photograph it next to a ruler to show the size. The AI accidentally taught itself that it’s easy to spot cancerous tumors: just look for rulers.
Another lesson we can learn is that an algorithm’s output is only as good as the data you feed in. ProPublica found that one algorithm used in sentencing was harsher on black defendants than white ones. It didn’t consider race as a factor, but its input led it to believe, incorrectly, that the crimes and backgrounds common to black defendants were stronger predictors of repeat offenses than the crimes and backgrounds associated with white defendants. This computer had no idea of the concept of race, but if your input data reflects a bias, the computer can end up perpetuating that bias. It’s best that we understand this limitation of algorithms, and not assume that because they aren’t human they must be impartial. (Good luck with your hate speech AI, Facebook!)
### Mix Up Your Data Sets
There’s no need to stop at one data set; you can mix up two of them and see what results. (I combined the product listings from the Goop and Infowars stores, for example. Slightly NSFW.)
You can also train a classifying algorithm. Shane says she already had a list of metal bands and a list of My Little Pony names, so she trained a classifier to tell the difference. (Pinky Doom: 99 percent metal.) Once you have a classifier trained, you can feed anything into it and get a reading. Benedict Cumberbatch: 96 percent metal.
You can also feed anything you like into a trained textgenrnn network. When you specify how many items you want and what temperature (creativity) the network should use, you can also give it a prefix. It will then try to come up with words that should follow that prefix. After I trained the Lifehacker headlines, I asked the AI to give me headlines beginning with “3 Ingredient Happy Hour.” It responded with some wonderful fictional cocktails (again, these are my picks out of a longer list):
3 Ingredient Happy Hour: The Herb Stressful Upgrade
3 Ingredient Happy Hour: A Cake’s Strawbreak
3 Ingredient Happy Hour: The Darkled Pot
3 Ingredient Happy Hour: The Pizza and Beverage For They Are Trader Wings
3 Ingredient Happy Hour: The Ferrent Pot
3 Ingredient Happy Hour: The Throat Into a Refreshing
3 Ingredient Happy Hour: The Best Bar Order
3 Ingredient Happy Hour: The Leftover Party Controci
3 Ingredient Happy Hour: A Summer Rum Cutting
3 Ingredient Happy Hour: The Best Coconati
3 Ingredient Happy Hour: The Beautiful Shicline
3 Ingredient Happy Hour: The Cheekey Candy
Don’t be surprised if you see these in a future Lifehacker post; Claire Lower, our food and beverage editor, says she wants to attempt making some of these.
But instead of waiting for her expert recipes, I decided to feed *these* into a neural network as well. I gathered some cocktail recipes from Chris Lowder’s cocktail guide and the WikiBooks cocktails glossary, and arranged them so that each cocktail took up one line of a text file, with the title of the cocktail as the first few words. That means I could choose a cocktail name and ask my cocktail-trained neural net to provide the recipe that follows. Here are a few of the results:
The Best Coconati - oz. Benedictine e. 1 dash Aromatic b.
The Cheekey Candy i. 1 oz. Blendey Sherry b. 1.5 oz. Fresh Pineapple d. Lonstine Brandy Bowl De there at large Jamaic c. 2 Dashes Pineapple d. 1 dash Aromatic Bitters e. 1 dash Aromatic Gin ii. 1 oz. Vodka ii. .5 oz. Aged Rum c. 2 dashes of Angostura Bitters i. Stir/Strain/Nick & Nora glass/Ice/1
The Ferrent Pot - - 1.25 oz. Green Chartreuse 1.5 oz. London Dry Gin b. .75 oz. Fill Whiskey b. Orange half whiskey
You can ask it for anything, of course:
The Beth Skwarecki - 1 oz. Blended Scotch (Juice) Water b. 1 oz. Egg White in large rocks glass with dets 1934 or makes Babbino
The Lifehacker c. 14 Vodka Martini i. .75 oz. Campari i. Shake/Fine strain/Coupe/Lemon twist
The input data was only a few hundred cocktail recipes, so I had to turn the temperature *way* up to get anything interesting. And at a high temperature (1.0, in this case), sometimes you get words that aren’t really words. Good luck finding any Lonstine Brandy or Blendey Sherry in a store—but if you do, my pet AI will be very happy.
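If you want to try the prefix-and-temperature trick yourself, the textgenrnn library the author used keeps it to a few lines. A minimal sketch, assuming you have a plain-text file with one headline or one title-first recipe per line (the filename here is a placeholder):

```python
from textgenrnn import textgenrnn

textgen = textgenrnn()

# Train on one headline (or one title-first cocktail recipe) per line.
textgen.train_from_file("lifehacker_headlines.txt", num_epochs=5)

# Ask for completions of a chosen prefix; higher temperature means weirder output.
textgen.generate(n=10, prefix="3 Ingredient Happy Hour: ", temperature=1.0)
```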
| true | true | true |
Artificial intelligence (AI) seems poised to run most of the world these days: it’s detecting skin cancer , looking for hate speech on Facebook , an
|
2024-10-12 00:00:00
|
2018-06-07 00:00:00
|
article
|
lifehacker.com
|
Lifehacker
| null | null |
|
4,485,547 |
http://techcrunch.com/2012/09/06/pose-ipad-commerce-revenue-sharing/
|
Pose, 1M Users Strong, Brings Its Fashion Photo App To The iPad -- And Starts To Make (And Share) Revenue | TechCrunch
|
Colleen Taylor
|
Pose has made some serious traction since it first launched its iPhone app in early 2011. Its mobile photo-sharing platform that lets people share snapshots of their outfits and tag them with brand and designer details now has one million users who have shared more than two million “poses” on the app. Right now, more than 10,000 poses are added to the platform each day from users all over the world, from Russia to Brazil to the US and far beyond — which CEO Dustin Rosen likes to put in perspective as “more fashion content than Vogue magazine produces in a year.”
And now, the Los Angeles-based startup is set to debut new features that could very well make its platform much more popular — for new user growth and current user stickiness.
Today Pose will launch its first ever native app made especially for the iPad. And along with the iPad app, Pose is launching its first ever revenue generating component, which will be active across all its platforms: “Shoppable content,” linking items tagged in Pose photos directly to e-commerce sites that sell them, letting users complete a purchase without ever leaving the Pose app. Pose will collect affiliate fees through each sale that has originated on its platform.
#### Sharing the spoils with users
Those may seem like pretty standard moves for Pose — but the real twist is how exactly the company is going to deploy that cash it will start making. Pose will take a “small piece” of each affiliate fee it collects when an item is sold, but will give the vast majority of the money to the user who originally shared the item in a photo. This money will be deposited through PayPal; Pose will also show each user detailed analytics on the purchasing activity around the items they’ve tagged and shared on Pose.
“We decided we wanted our bloggers to make a majority of the commission that we collect on each purchase because we think it’s really important that we let them really own their content and those links,” Rosen told me in a recent interview. “We just want to further incentivize them to share the best content. They’ve been asking for this [e-commerce] feature, and we really wanted to do it right.”
#### A different spin on user-generated content
It’s a very unique move. Most sites that are built on user-generated content — aka, pretty much all of social media — do not share the money they make around that content with the users who actually created it. At first it might seem crazy for a startup to give away the bulk of the cash it brings in on its very first revenue generating feature, but it shows that Pose is really thinking about the long-term. It's hard to think of something that would make users happier than receiving a check each month for doing what they do anyway — snap photos of their most stylish outfits. It'll be really interesting to see how this impacts Pose's growth trajectory going forward; it seems like something that could give the app a major boost in user growth and engagement.
#### More money will follow
Pose has raised some $4.6 million in venture capital from investors that include GRP Partners, True Ventures, Mousse Partners, and celebrity fashion designer and stylist Rachel Zoe. The company says that those investors are happy with its revenue-sharing strategy right now because they see the potential for Pose to make even more serious money down the line.
"We certainly believe in this grand vision of our company, where content meets commerce. We think of ourselves like a shopping funnel, and within that there are so many opportunities to make money at different points along the line," Pose’s co-founder and VP of creative and partnerships Alisa Gould-Simon told me. "Great content starts with our users, and we just want to be a platform to facilitate that. First and foremost, it's about building an audience."
| true | true | true |
Pose has made some serious traction since it first launched its iPhone app in early 2011. Its mobile photo-sharing platform that lets people share snapshots of their outfits and tag them with brand and designer details now has one million users who have shared more than two million "poses" on the app. Right now, more than 10,000 poses are added to the platform each day from users all over the world, from Russia to Brazil to the US and far beyond -- which CEO Dustin Rosen likes to put in perspective as "more fashion content than Vogue magazine produces in a year." And now, the Los Angeles-based startup is set to debut new features that could very well make its platform much more popular -- for new user growth and current user stickiness.
|
2024-10-12 00:00:00
|
2012-09-06 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
22,326,599 |
https://www.cypress.io/blog/2020/02/06/introducing-firefox-and-edge-support-in-cypress-4-0/
|
Introducing Firefox and Edge Support in Cypress 4.0
| null |
Today, we're excited to release the highly-anticipated support for Firefox and the new Microsoft Edge browsers in Cypress 4.0. Adding the capability to run Cypress tests in Firefox has been one of the most frequently requested features by the community. So our team has been working hard to provide this feature with the same great developer experience users have come to enjoy and expect from Cypress.
With the power of testing in multiple browsers, comes the responsibility of implementing the right CI strategy to achieve an optimal balance of confidence, performance, and cost. To guide these crucial decisions, we're launching a new Cross Browser Testing Guide with various recommendations to help you implement the right CI strategy for your project and team.
Cypress 4.0 marks a significant milestone in the development of Cypress and sets the stage for an exciting pipeline of capabilities that will continue to elevate the testing experience for everyone.
Install or upgrade (migration guide) to version 4.0 today. Get started by checking out the new Cross Browser Testing Guide. If you're entirely new to Cypress, check out our Getting Started Guide.
| true | true | true |
We're excited to release the highly-anticipated support for Firefox and the new Microsoft Edge browsers in Cypress 4.0.
|
2024-10-12 00:00:00
|
2019-07-26 00:00:00
|
website
|
cypress.io
|
Cypress
| null | null |
|
37,207,450 |
https://github.com/binpash/pash
|
GitHub - binpash/pash: PaSh: Light-touch Data-Parallel Shell Processing
|
Binpash
|
A system for parallelizing POSIX shell scripts. Hosted by the Linux Foundation.
*(The original README shows a table of CI status badges here: Tests, Build, and Pages rows for the `main` and `develop` branches.)*
Quick Jump: Running PaSh | Installation | Testing | Repo Structure | Community & More | Citing
To parallelize, say, `./evaluation/intro/hello-world.sh`
with parallelization degree of 2× run:
`./pa.sh ./evaluation/intro/hello-world.sh`
Run `./pa.sh --help`
to get more information about the available commands.
Jump to docs/tutorial for a longer tutorial.
On Ubuntu, Fedora, and Debian run the following to set up PaSh.
```
wget https://raw.githubusercontent.com/binpash/pash/main/scripts/up.sh
sh up.sh
export PASH_TOP="$PWD/pash/"
## Run PaSh with echo hi
"$PASH_TOP/pa.sh" -c "echo hi"
```
For more details, manual installation, or other platforms see installation instructions.
This repo hosts the core `pash`
development. The structure is as follows:
- compiler: Shell-dataflow translations and associated parallelization transformations.
- docs: Design documents, tutorials, installation instructions, etc.
- evaluation: Shell pipelines and example scripts used for the evaluation.
- runtime: Runtime component — e.g., `eager`, `split`, and associated combiners.
- scripts: Scripts related to continuous integration, deployment, and testing.
Chat:
- Discord Server (Invite)
Mailing Lists:
- pash-devs: Join this mailing list for discussing all things
`pash`
- pash-commits: Join this mailing list for commit notifications
Development/contributions:
- Contribution guide: docs/contributing
- Continuous Integration Server: ci.binpa.sh
If you used PaSh, consider citing the following paper:
```
@inproceedings{pash2021eurosys,
author = {Vasilakis, Nikos and Kallas, Konstantinos and Mamouras, Konstantinos and Benetopoulos, Achilles and Cvetkovi\'{c}, Lazar},
title = {PaSh: Light-Touch Data-Parallel Shell Processing},
year = {2021},
isbn = {9781450383349},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3447786.3456228},
doi = {10.1145/3447786.3456228},
pages = {49–66},
numpages = {18},
keywords = {POSIX, Unix, pipelines, automatic parallelization, source-to-source compiler, shell},
location = {Online Event, United Kingdom},
series = {EuroSys '21}
}
```
| true | true | true |
PaSh: Light-touch Data-Parallel Shell Processing. Contribute to binpash/pash development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2019-06-30 00:00:00
|
https://opengraph.githubassets.com/a3189a78352136c6eb6e58e7b2e8df5957f01305efb9190a75b66d1e4812a6b7/binpash/pash
|
object
|
github.com
|
GitHub
| null | null |
11,673,731 |
http://tympanus.net/Development/DistortedButtonEffects/
|
Distorted Button Effects Using SVG Filters
|
Adrien Denat
|
Distorted
Button Effects
Using SVG Filters
Sorry, but these effects are very experimental and currently not supported in your browser.
01
Click me
02
Click me
03
Click me
04
click
05
Click me
06
Click me
07
Click me
08
Click me
Based on
Blake Bowen's
code.
09
Click
10
Play
If you enjoyed this demo you might also like:
Button Styles and Effects
Progress Button Styles
| true | true | true |
A set of inspirational distorted button effects using SVG filters
|
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
13,316,148 |
http://bioinformatics.oxfordjournals.org/content/29/1/1.full.pdf
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,228,612 |
http://www.jpl.nasa.gov/news/news.php?release=2013-253
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
27,824,479 |
https://www.deconstructconf.com/2019/dan-abramov-the-wet-codebase
|
The Wet Codebase by Dan Abramov
| null |
### Transcript
(Editor's note: transcripts don't do talks justice.
This transcript is useful for searching and reference, but we recommend watching the video rather than reading the transcript alone!
For a reader of typical speed, reading this will take 15% less time than watching the video, but you'll miss out on body language and the speaker's slides!)
[APPLAUSE] Hi. I learned to drink a lot of water. Hi, my name is Dan Abramov. I work on a JavaScript library called React. This is actually the first conference that I speak at that is not specific to React with JavaScript. So I'm just curious, have any of you ever used React at all. OK. Yeah, a lot of people use React. That's cool. This talk is not about React. You can say it's a talk about something that, if I had a time machine and could come back to my past self, I would tell myself that talk. So it's a talk about the code base far, far away, deep under the sea.
And it's a code base that I worked on a long time ago. And in the code base, there were two different modules, two files. And my colleague and friend was working on a new feature in one of those files. And they noticed that actually that feature, something very similar was already implemented in another file. So they thought, well, why don't I just copy and paste that code because it's pretty much the same thing?
And they ask me to review the code. And I just read all the books about the best practices. Pragmatic Programmer, Clean Coder, Well Groomed Coder, and I knew that I needed to-- you're not supposed to copy and paste code because it creates a maintenance burden, it's pretty hard to work with. I just learned this acronym DRY, which stands for don't repeat yourself. And I was like this looks like a copy paste, so can we DRY it up a little bit?
And so my colleague was like, yeah, sure, I can totally extract that code to a separate module and make those two files depend on that new code. And so an abstraction was born. OK. So when I say abstraction, I mean it doesn't matter which language you're using. It could be a function or a class, a module, a package, something reusable that you can use from different places in your code base.
And so it seems like, this is great. And they live happily ever after. So let's see let's see how that abstraction evolved. So the next thing that happened, we hadn't looked at that code for a while but then we were working on a new feature and it actually needed something very similar. So let's say that the original abstraction was asynchronous, but we needed something that had pretty much the same exact shape, except it was synchronous.
So we couldn't directly reuse that code anymore, but it also felt really bad to copy and paste it because it's pretty much exactly the same code except it's slightly different. And, well, it looks like we shouldn't repeat ourselves so let's just unify those two parts and make our abstraction a bit fancier so that it can handle the case as well. And we felt really good about it. It is a bit unorthodox, but that's what happens when code meets real life, right? You make some compromises, and at least we didn't have to duplicate the code, because that would be bad, right?
So what happened next is we found out that actually, this new code, this new feature, had a bug in it, and that bug was because we thought that it needs exactly some the same code as we have. But actually it needed something slightly different. But we can fix that bug, of course, by adding a special case. So our abstraction, we can have an if statement. If it's like this particular case, then do something slightly different. Sure. Ship it. Because that happens to every abstraction, right?
And so as we were working with that code, we actually noticed that the original code also had a bug. So those two cases that we thought were the same, they were also slightly different, we just didn't realize it at the time. And so we added another special case. And at this point, this abstraction looks a bit weird and intimidating. So maybe lets make it more generic. Why do we have all those special cases in the abstraction?
Let's pull them out from the abstraction where they belong in our concrete use cases. So looks like this. So now our abstraction doesn't know about any concrete cases. It is very generic, very beautiful. Nobody really understands what it represents anymore. Oh, by the way, we need to add, now that it's parametrized from different places, we need to make sure that all code size are parametrized.
But it was such a gradual progression that at each step it makes sense to the people writing and reviewing the code, so we just left it at that. And some time passed. And so, during that time, some people have left the team, some people have joined the team. There were many fixes. Somebody needed to just do this one small fix here. I don't really know what this thing is supposed to be doing but just fix it up a little bit, add this new feature, improve the metrics. So we ended up with something like this, right?
And again, each of those individual steps kind of made sense. But if you lose track of what you were trying to do originally, you don't really know that you have a cyclical dependency or this weird thing that is growing somewhere to the side just because you don't see the whole picture anymore. And, of course, in real life, that's actually where the story ends because nobody wanted to touch the part of the code base and it just was stagnant for a long time and then somebody rewrote it. And maybe got a promotion. I don't know.
But if we could go back in time, because it's a talk, it's not real life, if we had a time machine we could go back and fix it, right? So I want to go back to the point where the abstraction still made sense. But if we had this third case and we really didn't want to duplicate that code even though it needed something slightly different. And they were like, yeah, sure, let's compromise on our abstraction. Make it funny. So this is if I from today was there, what I would've told myself is, please inline this abstraction.
And so what I mean by inline, I mean literally take that code and just copy and paste it back to the places that use it. And that creates some duplication but that destroys that potential monster we were in the process of creating. And of course duplication isn't perfect in long term, but wrong abstraction is also not perfect in long term. So we need to balance these two problems. And so the way this helps us is that now if we have a bug here and we realized actually this thing is supposed to do something different, we can just change it. And it doesn't affect any of the other places because it's isolated. And similarly, maybe we get a different bug here and we also change it.
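(Editor's note: the talk itself contains no code, but a tiny hypothetical example, written here in Python rather than the JavaScript Dan works in, may make "inlining the abstraction" concrete.)

```python
# Before: one "shared" helper that has started accumulating special cases.
def format_name(user, uppercase=False, include_title=False):
    name = f"{user['first']} {user['last']}"
    if include_title:   # special case added for invoices
        name = f"{user['title']} {name}"
    if uppercase:       # special case added for shipping labels
        name = name.upper()
    return name

# After inlining: each call site owns a small copy it can change (or fix) alone.
def invoice_name(user):
    return f"{user['title']} {user['first']} {user['last']}"

def shipping_label_name(user):
    return f"{user['first']} {user['last']}".upper()
```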
And I'm not suggesting that you should always copy paste things. In longer term, maybe you realize that these pieces really stabilized and they make sense. And maybe you pull something out and it might not be the thing that you originally thought was a good abstraction. Might be something different. And a thing like this is as good as it gets in practice. And if I heard this when I was a sweet summer child, I would have said that that's not what they tell us. I heard that copy pasting is really bad.
And I think it's actually a self-perpetuating loop. So what happens is that developers learn best practices from the previous generation and they try to follow them. Because there were concrete problems and concrete solutions that were born out of experience. And so the next generation tries to pass them on. But it's hard to explain all this context and all this trade off, so they just get flattened into these ideas of best practices and anti-patterns.
And so they get taught to the new generation. But if the new generation doesn't understand the trade offs and the reasons they came to these conclusions, they don't have the context to decide when it's actually a bad idea and how far can you stretch this. So they run into their own problems from trying to take these best practices and anti-patterns to extreme. And so they teach the next generation. And maybe this is just you can't break out of this loop and it's just bound to happen over and over again, which is maybe fine.
I think one way to try to break this loop is just when we teach something to the next generation, we shouldn't just be two-dimensional and say here's best practices and anti-patterns. But we should try to explain what is it that you're actually trading away. What are the benefits and what are the costs of this idea? And so when we talk about the benefits of abstraction, of course it has benefits. The whole computer is a huge stack of abstractions. And I think concrete benefits are-- abstractions let you focus on a specific intent, right? So if you have this thing and they have to keep it all in their head.
But it's actually really nice to be able to focus on a specific layer. Maybe you have several places of code where you send an email and you don't want to know how an email is-- I don't know how emails are being sent. It's a mystery to me that they even arrive. But I can call a function called send email and well, it works most of the times. And it's really nice to be able to focus on it. And of course another benefit is just being able to reuse code written by you or other people and not remember how it actually works.
So if we need something, exactly the same thing that we already use from different places, it's very nice to be able to reuse it. So that's a benefit of abstraction. And abstraction also helps us avoid some some bugs. So in the example where we have a bug, maybe we copy pasted something. And that's an argument against copy paste, is we copy pasted something and then we found the bug in one version and we fix it, but then the other version stays broken because we forgot about the copy paste. So that's a good argument for why you'd want to extract something and pull it away.
But when we talk about benefits we should also talk about costs. And so one of these costs is that abstraction creates accidental coupling. And what I mean by that is, so we have these two modules using some abstraction, and then we realize that one of them has a bug. And we have to fix it in the abstraction because that's literally where the code is. But now it's your responsibility to consider all of the other call sites of this abstraction and whether you might have actually introduced a fix in another, introduced the bug in another part of the code base. So that's one cost. Maybe you can live with it. Most of us live with it. But it's a real cost.
And I think an even more dangerous cost is the extra indirection an abstraction can create. So what I mean by that is that the promise was that I would just be able to focus on this specific layer in my code and not actually care about all the layers. Is that really what happens? I'm sure most of you probably had this bug where you started one layer, oh, it goes here. And it's like, well, actually, no. You need to understand this layer and this other layer because the bug, it goes across all of those layers. And we have a very limited stack in our heads.
And so what happens is you just get a stack overflow, which is probably why the site was called that way. And so what I see happen a lot is that we try so hard to avoid the spaghetti code that we create this lasagna code where there are so many layers that you don't know what's going on anymore at all. So that's extra indirection. And all of them wouldn't be that bad if they didn't entrench themselves.
So abstraction also creates inertia in your code base. And that's a social factor more than technical. What I've seen happen many times is you start with an abstraction that looks really promising and makes sense to you. And then with time it gets more and more complex. But nobody really has time to refactor or unwind this abstraction, especially if you're a new person on the team. You might think that it would be easier to copy and paste it, but first you don't really know how to do that anymore because you're not familiar with that code. And second you don't want to be the person who just suggests worst practices. Who wants to be the person who says, let's use copy paste here? How long do you think you're going to be on that team?
So you just accept the reality for what it is and keep doing it and hope that this code is not going to be your responsibility anymore soon. And the problem is that even if your team actually agrees that the abstraction is bad and it should be inlined, it might just be too late. So what might happen is that you're familiar with just this concrete usage and you know how to test it. If you unwind the abstraction, you can understand how to verify that change didn't break anything. But maybe there is another team who uses it here and another team who uses it there, and maybe this team has been reorged so there is no team that maintains that code, and you don't really know how to test it anymore. So you just can't make that change even if you want to.
So I really like this tweet. It's a bit hard to read. Easy-to-replace systems tend to get replaced with hard-to-replace systems, which is kind of like the Peter Principle. There's this Peter Principle that everybody in the organization continues rising until they become incompetent and then they can't rise anymore. And it's similar that if something is easy to replace, it will probably get replaced. And then at some point you hit the limit where it's just a mess and nobody understands how it works.
So I'm not saying that you shouldn't create abstractions. That would be a very two-dimensional or one-dimensional takeaway. I'm saying that there are things that, we're going to make mistakes. So how can we actually try to mitigate or reduce the risks from those mistakes? And so one of them that I learned on the React team in particular is to test code that has concrete business value. So what I mean by that is, say we have this a little bit wonky abstraction, but we finally got some time to write some proper tests, because we fixed some bugs and we have a gap before the new half of the year starts and we can fix some things.
So we want to write some unit test coverage for that part. And intuitively, where I would put unit test is, well, here's the abstraction where the complex code lies. So let's put unit test to cover that code. And that's actually a bad idea in my opinion, because what happens is that if later you decide that this abstraction was bad and you try to turn it into copy paste, well, guess what happens through your tests? They all fail. And now you're like, well, I guess I'll have to revert that because I don't want to rewrite all my tests. And I don't want to be the person who suggested to decrease the code coverage. So you don't do that.
But if you have a time machine you can go back and you can write your unit tests or integration tests or whatever you want to call them, fad of the day tests, against the code that we actually care about, that this code works against concrete features. And then there's this test that don't care about your abstraction. So you can inline the abstraction back. You can create five layers of abstraction. The test will tell you whether this code works. So actually they will guide you to refactor it because they can tell you that your refactoring is in fact a correct one. So testing concrete code is a good strategy.
Another one is just to restrain yourself. You see this full request. You get this itch, like, this looks duplicate. And you're like, no, take a walk. Because if you have this, you might have a high school crush and they are really into the same obscure bands on Last.fm that you're into. That doesn't mean that you have a lot in common and they're going to be a good life partner. So maybe you shouldn't do the same to the code. Just because the structure of these two snippets looks similar, it might just mean that you don't really understand the problem yet. And give it some time to actually show that this is the same problem and not just accidentally similar code.
And finally, I think it's just important that if that happens, if you make a mistake, it should be part of your team culture to be OK with, this abstraction is bad. We need to get rid of it. You should not only add abstraction, but you should also delete them as part of your healthy development process. So that means that it should be OK to leave a comment like this and say, hey, this is getting out of control. Let's spend some time to copy and paste this and later we'll figure out what to do with it.
But there is also a technical component to this. So if your dependency tree looks like this, it might actually be really challenging to inline anything because you're like, well, I have this thing I want to inline but, OK, I can copy it, but there's some mutable shared state that is now being duplicated. And I need to figure out how to rewire all of those dependencies together. And it might not even be feasible. So you just give up. And I don't really have a good solution for this. What I've noticed is that, for some code, you can't really avoid it. For example, in the source code of React itself, we do have a problem like this. Because we try to mutate things for you so you don't have to mutate them. So we have all this interdependencies between modules that can be a bit difficult to think about.
But then what's cool about React, in my opinion, is that it lets you write apps with dependency trees that are more like this. So you have a button component that's used from form, and that form is used from app. And so on like this. And it follows this tree shape. And we have these constraints for data flows only in one direction. So you don't really expect things to get weird circular. And what it means is that you're going to make mistakes, you're going to create bad abstractions, but does your technology make it easier for you to get rid of them?
Because I think with React components and some other constrained forms of dependency, like management, you have this nice property where it's usually a matter of copy and pasting things in order to inline them. And so even if you make a bad decision, you can actually undo it before it gets too late. So this is something to consider in both social and technology part of it. So don't repeat yourself. DRY is just one of those principles that are probably pretty good ideas.
And there are many good ideas that you might hear about as a developer and entering this industry. Or even as somebody who's been doing it for 15 years and then stepping outside for a few months. And we see a lot of evangelism around those things. And that is fine. But I think it's important that when we try to explain what those things do or why they're a good idea, we should always explain what exactly are you trading away and which things led us to that to that principle or idea. And what is the expiration date for those problems? Because sometimes there is some context that is assumed and that context actually changes but you don't realize that. And so the next generation needs to understand what exactly was traded off and why.
And so my challenge for you is to pick some best practices and anti-patterns that you strongly believe are true, whether from your experience or because somebody told you or because you came up with them, and really try to break it down and deconstruct why you believe these things and what exactly is being traded away. And if you found this talk interesting, you might like these other talks. So All the Little Things by Sandi Metz is an amazing talk that goes into way more detail on these ideas and many others. Minimal API Surface Area is a talk by my colleague, Sebastian, who I learn all of this stuff from. And On the Spectrum of Abstraction is an interesting talk by Cheng Lou, who goes into how abstractions help us trade the power and expressiveness for constraints and how those constraints can actually limit us, but let us do things we wouldn't be able to do otherwise. It's a good talk. And thank you for having me. That's all I have.
[APPLAUSE]
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null | null | null | null | null | null |
33,008,691 |
https://reddio.medium.com/reddio-announces-mainnet-launch-d91c7f6f0276
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,896,548 |
http://blog.diy.org/post/21854504159/introducing-diy-we-started-building-diy-a-few
|
Introducing DIY
|
Diy
|
**Introducing DIY**
We started building **DIY** a few months ago and now we’re sharing the first thing we’ve made. This is a company that we hope to spend decades crafting, but it’s important for us to do it out in the open, bit by bit, to encourage our community of kids and parents to share feedback with us continuously. From Zach’s experience making Vimeo, we understand that this sort of culture fosters collaboration and admiration between a company and its community, and ultimately leads to something that is loved.
Our ambition is for DIY to be the first app and online community in every kid’s life. It’s what we wish we had when we were young, and what we’ll give to our kids. Today we’re releasing **a tool to let kids collect everything they make as they grow up**.
We’ve all seen how kids can be like little MacGyvers. They’re able to take anything apart, recycle what you’ve thrown away – or if they’re **Caine**, build their own cardboard arcade. This is play, but it’s also creativity and it’s a valuable skill. Our idea is to encourage it by giving kids a place online to show it off, so family, friends and grandparents can see it and easily respond. Recognition makes a kid feel great, and motivates them to keep going. We want them to keep making, and by doing so learn new skills, use technology constructively, begin a lifelong adventure of curiosity, and hopefully spend time offline, too.
We’re looking to you parents as partners to make it all work. It used to be that you hang your kids’ work on the fridge to let them know you’re proud. Now the Web is becoming a part of their life at home and school — and there’s a new opportunity to connect you to their creations and cheer them on.
When you help your kid join DIY, you’re helping to recognize creativity as an essential part of every kid’s education, and possibly a requirement for their satisfaction as an adult. Sadly, most adults don’t believe they’re creative although we’re all capable of it at any age! We believe that to accept yourself as a creative adult you must start as a kid who is fearless of learning new skills and doing it yourself. Encouraging your kids to be inventive and self-reliant now will better prepare them to participate in a world that keeps changing.
**Here’s how it works today:**
- DIY kids sign up and get their own Portfolio, a public web page to show off what they make.
- They upload pictures of their projects using **diy.org** or our **iOS app**.
- Kids’ projects are online for everyone to see, you can add Stickers to show support.
- You also have your own dashboard to follow their activity and to make sure they’re not sharing anything that should be private.
Kids are ready for this. They’re instinctively scientists and explorers. They’re quick to build using anything at their disposal. They transform their amazement of the world into games. They’re often drawn to learning that’s indistinguishable from play (think about bug collecting!). And, most important, they embrace technology.
We’re grateful for your help to make this company, and grow the next – hopefully larger – generation of creative kids.
- Zach Klein, Isaiah Saxon, Andrew Sliwinski, Daren Rabinovitch
(and Dave, Brian, Mike, Courtney, David, Lucas, Shawn, and Sean!)
PS. See our **Parents page** for more information. Or you can follow **@DIY** to see important updates.
| true | true | true |
We started building DIY a few months ago and now we're sharing the first thing we've made. This is a company that we hope to spend decades crafting, but it's important for us to do it out in the open,...
|
2024-10-12 00:00:00
|
2012-04-26 00:00:00
|
article
|
blog.diy.org
|
Tumblr
| null | null |
|
14,394,980 |
https://searchingforsyria.org/en
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,168,674 |
http://www.9to5mac.com/99-mac-app-store-coming-3546093465
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,383,800 |
https://www.bbc.co.uk/news/business-62402874
|
Tinder: CEO Renate Nyborg to leave dating app after one year
|
Peter Hoskins
|
# Tinder: CEO Renate Nyborg to leave dating app after one year
**Tinder chief executive Renate Nyborg is leaving the firm less than a year after becoming the boss of the dating app.**
Her exit was one of a number of management changes at Tinder announced by parent company Match Group.
Tinder's plans to adopt new technology, including virtual currencies and metaverse-based dating, are also being reviewed in the strategic shake-up.
The announcements came as Match reported second-quarter results that missed Wall Street expectations.
"Today we're announcing the departure of Tinder CEO Renate Nyborg, and I have made some changes to the management team and structure that I am confident will help deliver Tinder's full potential," Match Group chief executive Bernard Kim said in a letter to shareholders, external.
Mr Kim will take up the role vacated by Ms Nyborg while the company looks for a permanent chief executive for Tinder.
"I have loved every moment of the last two years, working with an I.N.C.R.E.D.I.B.L.E team on the magic of human connection," Ms Nyborg said in a LinkedIn post., external
The announcement also included the reorganisation of Tinder's top management team as well as a review of plans for rolling out new technology.
"After seeing mixed results from testing Tinder Coins, we've decided to take a step back and re-examine that initiative... we also intend to do more thinking about virtual goods," Mr Kim added.
It came as Match - which also owns the dating apps OkCupid, Hinge and Plenty of Fish - said it expects sales in the three months to the end of September to be between $790m to $800m.
That was well below Wall Street expectations and would mean the company would see no sales growth for the period.
Mr Kim pointed to the impact of the pandemic on people's willingness to start using dating apps and currency fluctuations for the disappointing figures.
"While people have generally moved past lockdowns and entered a more normal way of life, their willingness to try online dating products for the first time hasn't yet returned to pre-pandemic levels," he said.
Match Group shares fell by more than 20% in after-hours trade in New York on Tuesday.
Ms Nyborg became Tinder's first female chief executive in September last year.
Mr Kim was appointed as Match's chief executive in May after his predecessor Shar Dubey stepped down after just over two years in the role.
| true | true | true |
Renate Nyborg's exit is part of a major shake-up of the dating app's management and strategy.
|
2024-10-12 00:00:00
|
2022-08-03 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
12,173,962 |
https://pixelastic.github.io/css-flags/
|
CSS Flags
| null |
CSS Flags
Code available on GitHub
Afghanistan
Åland Islands
Albania
Algeria
American Samoa
Andorra
Angola
Anguilla
Antigua and Barbuda
Argentina
Armenia
Aruba
Ascension
Australia
Austria
Azerbaijan
Bahamas
Bahrain
Bangladesh
Barbados
Belarus
Belgium
Belize
Benin
Bermuda
Bhutan
Bolivia, Plurinational State of
Bonaire, Sint Eustatius and Saba
Bosnia and Herzegovina
Botswana
Brazil
British Indian Ocean Territory
Brunei Darussalam
Bulgaria
Burkina Faso
Burundi
Cambodia
Cameroon
Canada
Canary
Cape Verde
Cayman Islands
Central African Republic
Chad
Chile
China
Christmas Island
Cocos (Keeling) Islands
Colombia
Comoros
Congo
Congo, the Democratic Republic of the
Cook Islands
Costa Rica
Côte d'Ivoire
Croatia
Cuba
Curaçao
Cyprus
Czech Republic
Denmark
Djibouti
Dominica
Dominican Republic
Ecuador
Egypt
El Salvador
England
Equatorial Guinea
Eritrea
Estonia
Ethiopia
Falkland Islands (Malvinas)
Faroe Islands
Fiji
Finland
France
French Guiana
French Polynesia
French Southern Territories
Gabon
Gambia
Georgia
Germany
Ghana
Gibraltar
Greece
Greenland
Grenada
Guadeloupe
Guam
Guatemala
Guernsey
Guinea
Guinea-Bissau
Guyana
Haiti
Holy See (Vatican City State)
Honduras
Hong Kong
Hungary
Iceland
India
Indonesia
Iran, Islamic Republic of
Iraq
Ireland
Isle of Man
Israel
Italy
Jamaica
Japan
Jersey
Jordan
Kazakhstan
Kenya
Kiribati
Korea (North)
Korea (South)
Kuwait
Kyrgyzstan
Laos
Latvia
Lebanon
Lesotho
Liberia
Libya
Liechtenstein
Lithuania
Luxembourg
Macao
Macedonia
Madagascar
Malawi
Malaysia
Maldives
Mali
Malta
Marshall Islands
Martinique
Mauritania
Mauritius
Mayotte
Mexico
Micronesia, Federated States of
Moldova, Republic of
Monaco
Mongolia
Montenegro
Montserrat
Morocco
Mozambique
Myanmar
Namibia
Nauru
Nepal
Netherlands
New Caledonia
New Zealand
Nicaragua
Niger
Nigeria
Niue
Norfolk Island
Northern Mariana Islands
Norway
Oman
Pakistan
Palau
Palestine, State of
Panama
Papua New Guinea
Paraguay
Peru
Philippines
Pitcairn
Poland
Portugal
Puerto Rico
Qatar
Réunion
Romania
Russian Federation
Rwanda
Saint Barthélemy
Saint Helena
Saint Kitts and Nevis
Saint Lucia
Saint Martin (French part)
Saint Pierre and Miquelon
Saint Vincent and the Grenadines
Samoa
San Marino
Sao Tome and Principe
Saudi Arabia
Scotland
Senegal
Serbia
Seychelles
Sierra Leone
Singapore
Sint Maarten (Dutch part)
Slovakia
Slovenia
Solomon Islands
Somalia
South Africa
South Georgia and the South Sandwich Islands
South Sudan
Spain
Sri Lanka
Sudan
Suriname
Svalbard and Jan Mayen
Swaziland
Sweden
Switzerland
Syrian Arab Republic
Taiwan
Tajikistan
Tanzania, United Republic of
Thailand
Timor-Leste
Togo
Tokelau
Tonga
Trinidad and Tobago
Tristan da Cunha
Tunisia
Turkey
Turkmenistan
Turks and Caicos Islands
Tuvalu
Uganda
Ukraine
United Arab Emirates
United Kingdom
United States
United States Minor Outlying Islands
Uruguay
Uzbekistan
Vanuatu
Venezuela, Bolivarian Republic of
Viet Nam
Virgin Islands, British
Virgin Islands, U.S.
Wallis and Futuna
Western Sahara
Yemen
Zambia
Zimbabwe
| true | true | true |
Flags of the world in just one div
|
2024-10-12 00:00:00
| null | null |
github.io
|
CSS Flags
| null | null |
|
97,515 |
http://www.cs.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,869,652 |
http://medcitynews.com/2017/07/technology-pioneers-cray-convergence-supercomputing-life-sciences/?utm_campaign=Contact+Quiboat+For+More+Referrer&utm_medium=twitter&utm_source=quiboat
|
Technology Pioneer And The Convergence Of Supercomputing And The Life Sciences - MedCity News
|
Brian Dalton
|
Recently, MedCity News spoke with a pair of top executives at global supercomputing company Cray: Per Nyberg, Senior Director, Artificial Intelligence and Analytics and Ted Slater, Global Head of Healthcare and Life Sciences.
Cray provides powerful computational tools to address a range of challenging workloads from modeling and simulation, to artificial intelligence and analytics. “We strive to deliver the tools that our customers need, whether on-premise or hosted in the cloud as a service,” Slater said.
The company’s long-term vision is built around the convergence of simulation and data analytics, combining multiple technologies into a single integrated system. With more than forty years of experience, Cray is focused on solving the most complex computing and analytics challenges.
**AI and life sciences**
The application of machine learning, and in particular deep learning, to biological problems will continue to develop and require ever more data and ever larger, more complex models. Training these models will require much more classic HPC computation.
In the life sciences, “Machine learning is applied to a wide variety of problems, including biomedical image analysis, molecular structure analysis, protein function prediction, and characterization of genotype-phenotype relationships, to name just a few,” reports Slater. Life sciences researchers working on problems like these routinely generate vast amounts of data. “This is where big data comes into play and where strong supercomputing capabilities flourish,” Slater explained.
**Convergence of supercomputing and big data**
Slater points to cancer research as one area where there is an enormous amount of data being studied as scientists try to better understand the disease. Cray has provided high-performance computing to many organizations that have cancer research as a major focus, including the Broad Institute of MIT and Harvard where some researchers are conducting genome-wide variant analysis at scale.
Cray’s approach is especially well suited to artificial intelligence. “For several years we have been anticipating and preparing for a general convergence of data analytics and simulation and modeling HPC problems,” said Nyberg.
As data markets mature, Nyberg said the convergence of supercomputing and big data means that more businesses are going to see higher value in their data. Cray has built a suite of big data and AI software called Urika-XC for its flagship XC Series supercomputers. Urika-XC powers analytics and AI workloads that run alongside scientific modeling and simulations on the supercomputer, eliminating the need to move data between differing systems. Cray can run converged analytics and simulation workloads across a variety of scientific and commercial initiatives, such as precision medicine.
“At our core, we are technology pioneers who strive to deliver tools into the hands of researchers,” Slater said. “We work with organizations to understand what they’re trying to do and what problems they aren’t able to solve with existing technologies. Then we develop complete supercomputing technologies that help them address their otherwise unanswerable questions.”
| true | true | true |
Recently, MedCity News spoke with a pair of top executives at global supercomputing ...
|
2024-10-12 00:00:00
|
2017-07-27 00:00:00
|
article
|
medcitynews.com
|
MedCity News
| null | null |
|
22,972,074 |
https://aws.amazon.com/blogs/machine-learning/amazon-a2i-is-now-generally-available/
|
Amazon A2I is now generally available | Amazon Web Services
| null |
## AWS Machine Learning Blog
# Amazon A2I is now generally available
AWS is excited to announce the general availability of Amazon Augmented AI (Amazon A2I), a new service that makes it easy to implement human reviews of machine learning (ML) predictions at scale. Amazon A2I removes the undifferentiated heavy lifting associated with building and managing expensive and complex human review systems, so you can ensure your ML models produce accurate predictions. Amazon A2I enables humans and machines to do what they do best by easily inserting human judgment into the ML pipeline.
Amazon A2I provides built-in human review workflows for common ML tasks such as content moderation and text extraction from documents, in combination with Amazon Rekognition and Amazon Textract. You can also create your own human review workflows for ML models built with Amazon SageMaker or with any on-premises or cloud tools via its API.
Amazon A2I also gives you the ability to work with your choice of human reviewers. You can use your own reviewers or choose from a workforce of over 500,000 independent contractors who already do ML-related tasks through Amazon Mechanical Turk. If your ML application requires confidentiality or special skills, you can use workforce vendors that are experienced and pre-screened by AWS for quality and security procedures.
Amazon A2I gives you the flexibility to incorporate human reviews based on your specific requirements. You simply set the business rules (how confident an ML model is in its predictions) to decide which predictions to use automatically or route to a human for validation.
When data comes through your ML pipeline, you can decide, based on your requirements, if the prediction meets your minimum confidence threshold. For example, if you want to extract a unique identifier like a Social Security Number (SSN) from numerous documents, they must be absolutely correct for your downstream application to be successful. You might set your threshold high (such as 99%) to achieve desired model accuracy.
Other data in your application might be secondary to the SSN and not require the same level of scrutiny. Therefore, you might set the threshold lower than SSN extraction. Amazon A2I gives you the flexibility to set different threshold levels based on your needs. Additionally, you can choose to randomly sample ML model outputs for human review so you can regularly evaluate if the model is still performing the way you intended.
Amazon A2I also works well with other services. You can use direct integrations with Amazon Textract and Amazon Rekognition, or use a custom workflow in Amazon A2I for human-in-the-loop validation with Amazon Comprehend, Amazon Translate or other AWS AI services. You can also use the Amazon A2I API to add human reviews to any ML application that uses a custom ML model built with Amazon SageMaker or any other on-premises or cloud tool.
## How Amazon A2I works with Amazon Textract
The following diagram shows how Amazon A2I integrates with Amazon Textract. Documents go through Amazon Textract and, based on your business rules, Amazon A2I sends low-confidence predictions to humans to review. You can store these results in an Amazon S3 bucket for your client application to use and be confident of their accuracy.
For example, a mortgage application has hundreds of documents associated with the application process. Each page has different information, signatures, and dates, which are scanned and uploaded into your system. At times, these document scans can be low-quality and require humans to review them to make sure the data is accurate and complete. With Amazon A2I, you can process these documents through Amazon Textract and send the low-confidence or hard-to-read documents to a human reviewer.
Other times, the prediction is at an acceptable confidence level for your application. When this occurs, Amazon A2I returns the prediction to the client application immediately, without a human review. You set the thresholds as needed for your use case.
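A minimal sketch of that routing logic with boto3 is below. The bucket, document key, human-loop name, and flow definition ARN are placeholders, and a real application would inspect the Textract blocks far more carefully; the point is just where the confidence threshold sits.

```python
import json
import boto3

textract = boto3.client("textract")
a2i = boto3.client("sagemaker-a2i-runtime")

CONFIDENCE_THRESHOLD = 99.0  # strict, e.g. for SSN-style fields
FLOW_DEFINITION_ARN = "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/doc-review"  # placeholder

# Extract form data from a scanned document stored in S3 (placeholder bucket/key).
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-doc-bucket", "Name": "mortgage/page-1.png"}},
    FeatureTypes=["FORMS"],
)

# Business rule: anything below the threshold goes to a human reviewer.
low_confidence = [
    block for block in response["Blocks"]
    if block.get("Confidence", 100.0) < CONFIDENCE_THRESHOLD
]

if low_confidence:
    a2i.start_human_loop(
        HumanLoopName="mortgage-page-1-review",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={"InputContent": json.dumps({"blocks": low_confidence})},
    )
else:
    print("All fields above threshold; no human review needed.")
```

The built-in Textract integration can also trigger loops for you, but spelling the check out like this makes it clear that the threshold is just a business rule you control.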
## Using Amazon Textract and Amazon A2I
Belle Fleur Technologies, an AWS Partner Network (APN) Advanced Consulting Partner, believes the ML revolution is altering the way we live, work, and relate to one another, and will transform the way every business in every industry operates.
Belle Fleur knew Amazon Textract was the right solution when working with financial institutions. Amazon Textract allowed them to go through vast quantities of documents and extract the relevant data their clients needed. However, they spent significant time reviewing the more nuanced and critical data manually.
Adding Amazon A2I to the equation was a good fit for their customers. Amazon A2I decreased the time spent building human validation and pulled all the relevant extracted data into one place in an easy-to-understand workflow so reviewers could quickly and easily review ML outputs. Tia Dubuisson, President at Belle Fleur, says, “Amazon A2I not only provides us and our customers peace of mind that the more nuanced data extracted is reviewed by humans, but it also helps train and improve our ML models over time through continuous auditing and improvement.”
For more information about other ways our customers are using Amazon A2I in their ML workflows, see the Augmented AI Customer page.
## Getting started
To get started, sign in to the Amazon A2I console or search for Amazon Augmented AI on the AWS Management Console. For more information about creating a human review workflow, see Create a Flow Definition.
Amazon A2I is now available in 12 Regions. For more information about regions see AWS Region Table. For more information about getting started for free, see Amazon Augmented AI pricing.
### About the Authors
Andrea Morton-Youmans is a Product Marketing Manager on the AI Services team at AWS. Over the past 10 years she has worked in the technology and telecommunications industries, focused on developer storytelling and marketing campaigns. In her spare time, she enjoys heading to the lake with her husband and Aussie dog Oakley, tasting wine and enjoying a movie from time to time.
**Anuj Gupta** is the Product Manager for Amazon Augmented AI. He focuses on delivering products that make it easier for customers to adopt machine learning. In his spare time, he enjoys road trips and watching Formula 1.
| true | true | true |
AWS is excited to announce the general availability of Amazon Augmented AI (Amazon A2I), a new service that makes it easy to implement human reviews of machine learning (ML) predictions at scale. Amazon A2I removes the undifferentiated heavy lifting associated with building and managing expensive and complex human review systems, so you can ensure your […]
|
2024-10-12 00:00:00
|
2020-04-24 00:00:00
| https://d2908q01vomqb2.c…[email protected]
|
article
|
amazon.com
|
Amazon Web Services
| null | null |
3,007,180 |
http://www.amazedsaint.com/2011/09/creating-10-minute-todo-listing-app-on.html
|
Account Suspended
| null |
Your account might be suspended for any one of the following reasons
Important Note:
Once the Suspension period is over, the website will be terminated and become inaccessible. So please take immediate action on this.
For More Information, please contact the Support Department.
101
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
17,132,031 |
https://www.monterail.com/blog/developing-a-skill-for-amazons-alexa-the-conf-room-manager
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,346,197 |
https://longreads.com/2018/02/05/a-teen-and-a-toy-gun/
|
A Teen and a Toy Gun - Longreads
|
Mikedang
|
This is the story of the last day of 17-year-old Quanice Hayes’s life. It involves a police department that says they have no good way of deciphering between real guns and fake ones, and a family still searching for answers.
# A Teen and a Toy Gun
*Leah Sottile | Longreads | February 2018 | 33 minutes (8,200 words)*
## I.
The night before Quanice Hayes was shot in the head by a police officer, the skinny 17-year-old was snapping selfies with his girlfriend in a seedy Portland, Oregon, motel room.
Bella Aguilar held her phone close when she clicked off the photos: In one, the 18-year-old girl pushes her tongue out through a smile, her boyfriend leaning over her right shoulder, lips pressed to her cheek, his dreads held back with one hand.
In another, Aguilar cradles her cheek against a black-and-sand-colored gun. It’s fake — the kind of air-powered toy that kids use to pop each other with plastic pellets in indoor arenas. Hayes peeks into the frame behind her.
If you know that the gun is fake, you see a snapshot of two kids playing tough; if you don’t, those photos look like the beginning of a story about to go terribly wrong.
A few hours later, it did.
It was a cold night in February — a Wednesday. Aguilar and Hayes snapped photos and danced when friends came by the motel room where the couple had been crashing. They drank cough syrup and booze. There were pills and pot and a bag of coke.
They fired the toy gun at the motel’s dirty bathroom mirror, laughing when they couldn’t get the glass to break.
When the long night caught up with Aguilar and she lay down to pass out on the room’s queen-size bed, Hayes yanked on her arm, nagging her to stay awake. Two friends crashed on a pullout couch; two more were on the floor. But Hayes didn’t want to sleep. He walked outside.
Hours passed. The sun came up. Aguilar jolted awake and felt the bed next to her, but her boyfriend wasn’t there. His phone was — it sat on the table next to the bed. She felt frantic. Panicked. Confused. “I don’t know why, but it was that moment. I just felt really, really bad,” she said last summer, sitting outside a Portland Starbucks where she took drags from a Black and Mild.
She couldn’t remember why Hayes had left. She couldn’t remember so much of the night.
She frantically tapped out a text to her boyfriend’s mother, Venus: *Do you know where Quanice is?*
| true | true | true |
This is the story of the last day of 17-year-old Quanice Hayes’s life. It involves a police department that says they have no good way of deciphering between real guns and fake ones, and a family still searching for answers.
|
2024-10-12 00:00:00
|
2018-02-05 00:00:00
| null |
article
|
longreads.com
|
Longreads
| null | null |
32,753,136 |
https://timemachiner.io/2022/06/07/this-suckers-electrical/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
35,986,278 |
https://www.tweag.io/blog/2023-05-17-nickel-1.0-release/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
40,399,908 |
https://iopscience.iop.org/article/10.1088/1361-6382/ad26aa
|
We apologize for the inconvenience...
| null |
To ensure we keep this website safe, please can you confirm you are a human by ticking the box below.
If you are unable to complete the above request please contact us using the below link, providing a screenshot of your experience.
https://ioppublishing.org/contacts/
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
23,850,430 |
https://yetiops.net/posts/prometheus-service-discovery-openstack/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
3,772,121 |
http://mashable.com/2012/03/03/invisible-mercedes/
|
Mercedes Rolls Out Invisible Car [VIDEO]
|
Charlie White
|
When Mercedes wanted to promote its new fuel cell vehicle, instead of placing it squarely in front of everyone in the world, the company decided to make the car invisible. We have video.
In this clever publicity stunt, Mercedes wanted to emphasize that its F-Cell vehicle has no exhaust emissions, making it virtually invisible to the environment. If you take a look at the gallery below, you'll see how these clever dudes did it: by placing a mat of LEDs across one side of the vehicle and mounting a video-shooting Canon 5D Mark II digital SLR camera on the other side.
We saw a Halloween costume like this once. Mount an iPad on your belly, surround it with costuming that looks like a hole, place a webcam on your back shooting backward, and then feed that video into the iPad. Voilà! It looks like you have a gory hole going all the way through you:
Mercedes is doing basically the same trick. As you can see in the Mercedes video, even though people could still tell there was a car going by, they seemed impressed by the "invisible" fuel-cell vehicle.
Mercedes says its hydrogen-powered drive system is "ready for series production," but other reports have its commercialization set for 2014. However, fuel-cell technology is still notoriously expensive, partly because hydrogen is a difficult fuel to store and transport. The materials needed to create a viable fuel-cell are still hovering in the pricey stratosphere.
Practicality aside, we applaud Mercedes and its efforts to create a vehicle with zero emissions and less impact on the environment, and admire the lengths to which these artists went to bring home that point.
By the way, with all the ultra-cool cars in the Mercedes stable, why did the company pick a minivan for this showy demo? Oh, we get it: more surface area to mount that video screen.
| true | true | true |
Mercedes Rolls Out Invisible Car [VIDEO]
|
2024-10-12 00:00:00
|
2012-03-03 00:00:00
|
article
|
mashable.com
|
Mashable
| null | null |
|
14,425,851 |
http://www.seattletimes.com/business/real-estate/seattle-zestimates-are-off-by-40000-now-hundreds-of-data-crunchers-vie-to-improve-zillows-model/
|
Seattle Zestimates are off by $40,000; now hundreds of data crunchers vie to improve Zillow’s model
|
Mike Rosenberg
|
Zillow says the gap between man and machine is narrowing in the quest to predict home values, but its Zestimates are still off by an average of nearly $40,000 for a house sold in Seattle today. Now a $1 million prize is being dangled to whoever invents a better housetrap.
If you have a house or you’ve looked for one, you’ve probably checked its Zestimate — Zillow’s best guess as what a house is worth today.
The number might help you determine whether you should sell your house, or if the home you’re trying to buy is a rip-off or a bargain. Or you might just use it to gawk at what your friend or neighbor’s home is worth.
But how accurate is that number?
Long a source of complaints from real estate agents, Zillow’s online valuations are far from perfect, the company acknowledges: For instance, according to its own data, the Zestimate on the typical single-family house sold in Seattle today is off by an average of nearly $40,000.
### Most Read Business Stories
More than 200 teams have started working on new home value algorithms in the 48 hours since the Seattle-based company launched a new contest to improve its Zestimates model — with a top prize of $1 million.
To win, teams will use Zillow data to create an algorithm that better predicts sales prices for about 110 million U.S. homes listed on the site. Each team’s members and their scores are publicly available on the Kaggle platform, which is hosting the contest: the leading teams’ models have improved slightly just from making tweaks over a 24-hour period, but still lag behind Zillow’s predictions.
One hundred finalists will be picked next January, and the winners will be awarded in January 2019.
Zillow says its Zestimates have improved significantly since launching 11 years ago and now have a median error rate of 5.6 percent. So half of homes end up selling within 5 to 6 percent of what the Zestimate said it was worth.
But that means half are sold for a price that’s not so close to what the Zestimate predicted. In some cases, the error can be huge.
Most notably, and most embarrassingly for the company, Zillow CEO Spencer Rascoff sold his Seattle home last year for $1.05 million — 40 percent less than what the Zestimate said it was worth. Then a couple months later, Rascoff bought a house in Los Angeles for $20 million — or $1.6 million more than the Zestimate called for.
The error rate varies by location, but even the average error can amount to a lot of money in a pricey market like ours. The median single-family house in Seattle now goes for $722,000, and the local Zestimate error rate here is 5.4 percent. That means the Zestimate on a Seattle house selling today will be off by an average of about $39,000.
Across the wider Seattle metro area, Zillow says more than one-fourth of homes have a Zestimate that’s off by more than 10 percent. For 1 in every 13 homes, the Zestimate is wrong by more than 20 percent.
“That’s really not that accurate, especially in this market,” said Mark Corocoran, a local broker for Windermere.
He says about 90 percent of his clients will quote the Zestimate when they go to set the listing price, though he doesn’t take it into consideration. “It is something that everybody is looking at. But they don’t go into the homes. I just don’t trust it.”
Zillow acknowledges that real estate agents are still slightly better at predicting sales prices than its computers, but says the gap between man and machine is narrowing.
Data from Redfin shows that the listing price put together by agents is usually only about 1 percent different than the final sales price in the Seattle region, though in the last couple of months, homes have sold for an average of 4 percent above the list price. (There is a theory, however, that some agents deliberately list homes for less than they think it’s worth to draw in more buyers and drive up competition).
Zillow says it has the best online home value estimations, though other companies with similar tools, like Seattle-based Redfin, disagree.
Zillow chief analytics officer Stan Humphries, who created the Zestimate, said its team of 15 data scientists and machine learning engineers have been able to drop the home value predictor’s error rate significantly, from 14 percent when it launched in 2006.
They now run about 7.5 million statistical models nightly to update the Zestimates, using things like comparable home sales, local assessed values, home size and features, and hundreds of other data points. Lately, they’ve used machine learning to analyze home pictures from listings to determine how fancy various parts of the home are, taking the analysis beyond data like size and listing descriptions.
Humphries says the way to make the Zestimate better now is by using as much hyper-local data as possible, which lends itself better to a nationwide contest than to further research and tweaks from its Seattle-based team.
“At this point we think that future gains will come from widening the circle of ideas to the global data scientist community,” Humphries said.
“I believe we’ll be able to drive (the error rate) lower, I think a lot lower,” he said. “Neither a human nor a computer will get to zero. People will disagree with human and computer opinions, and that’s OK. But we think more opinions make consumers more comfortable.”
Although Zillow was the first to set up an online home estimate tool, in 2006, nowadays the Zestimate is just one of many ways to get a sense of your home value. Sites like Redfin and homes.com have their own algorithms, while local banks and real estate agencies can give you quotes almost instantly.
It’s also easy now to look up actual home sales data from comparable properties online — something that, just a decade ago, was information kept only by real estate professionals.
But the Zestimate is probably still the most-used computer estimate out there. It’s the first thing that shows up when you Google “How much is my home worth,” and Zillow is the most trafficked real estate information site in the United States, with 166 million average monthly users.
The algorithm has also been the subject of lawsuits. Just last week, the company was sued by suburban Chicago home builders who argue the numbers are being illegally passed off as an official appraisal, which are typically done by licensed professionals who visit homes up for sale. Zillow says the home value should be used as a starting point — just one piece of data to consider — and not an official appraisal.
| true | true | true |
Zillow says the gap between man and machine is narrowing in the quest to predict home values, but its Zestimates are still off by an average of nearly $40,000 for a house sold in Seattle today. Now a $1 million...
|
2024-10-12 00:00:00
|
2017-05-26 00:00:00
|
article
|
seattletimes.com
|
The Seattle Times
| null | null |
|
18,492,311 |
https://www.bbc.co.uk/news/world-australia-46258616
|
Wombat poop: Scientists reveal mystery behind cube-shaped droppings
| null |
# Wombat poop: Scientists reveal mystery behind cube-shaped droppings
**Scientists say they have uncovered how and why wombats produce cube-shaped poo - the only known species to do so.**
The Australian marsupial can pass up to 100 deposits of poop a night and they use the piles to mark territory. The shape helps it stop rolling away.
Despite having round anuses like other mammals, wombats do not produce round pellets, tubular coils or messy piles.
Researchers revealed on Sunday the varied elasticity of the intestines help to sculpt the poop into cubes.
"The first thing that drove me to this is that I have never seen anything this weird in biology. That was a mystery," Georgia Institute of Technology's Patricia Yang said.
After studying the digestive tracts of wombats put down after road accidents in Tasmania, a team led by Dr Yang presented its findings, external at the American Physical Society Division of Fluid Dynamics' annual meeting in Atlanta.
"We opened those intestines up like it was Christmas," said co-author David Hu, also from Georgia Tech, according to Science News.
The team compared the wombat intestines to pig intestines by inserting a balloon into the animals' digestive tracts to see how it stretched to fit the balloon.
In wombats, the faeces changed from a liquid-like state into a solid state in the last 25% of the intestines - but then in the final 8% a varied elasticity of the walls meant the poop would take shape as separated cubes.
This, the scientists explain, resulted in 2cm (0.8in) cube-shaped poops unique to wombats and the natural world.
The marsupial then stacks the cubes - the higher the better so as to communicate with and attract other wombats.
"We currently have only two methods to manufacture cubes: We mould it, or we cut it. Now we have this third method," Dr Yang said.
"It would be a cool method to apply to the manufacturing process," she suggested, "how to make a cube with soft tissue instead of just moulding it."
| true | true | true |
Scientists discover how the marsupials are the only known species producing cube-shaped faeces.
|
2024-10-12 00:00:00
|
2018-11-19 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
4,987,660 |
http://katemats.com/becoming-a-manager/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,870,138 |
https://www.youtube.com/watch?v=xCGu5Z_vaps&list=PLw0jj21rhfkMY8jE99M0Um85-l8MdOeAA&index=6
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,349,649 |
https://m.youtube.com/watch?list=PL2FF649D0C4407B30&v=AD4b-52jtos
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,189,241 |
https://drewdevault.com/2019/10/07/HDCP-in-Weston.html
|
Why Collabora really added Digital Restrictions Management to Weston October 7, 2019 on Drew DeVault's blog
| null |
A recent article from Collabora, Why HDCP support in Weston is a good thing, purports to offer a lot of insight into why HDCP - a Digital Restrictions Management (DRM) related technology - was added to Weston - a well-known basic Wayland compositor which was once the reference compositor for Wayland. But this article is gaslighting you. There is one reason and one reason alone that explains why HDCP support landed in Weston.
Q: Why was HDCP added to Weston?
A: $$$$$
Why does Collabora want you to *believe* that HDCP support in Weston is a good
thing? Let’s look into this in more detail. First: *is* HDCP a bad thing?
DRM (Digital Restrictions Management) is the collective term for software which attempts to restrict the rights of users attempting to access digital media. It’s mostly unrelated to Direct Rendering Manager, an important Linux subsystem for graphics which is closely related to Wayland. Digital Restrictions Management is software used by media owners to prevent you from enjoying their content except in specific, pre-prescribed ways.
There is universal agreement among the software community that DRM is
ineffective. Ultimately, these systems are defeated by the simple fact that no
amount of DRM can stop you from pointing your camera at your screen and pushing
record. But in practice, we don’t even need to resort to that - these systems
are far too weak to demand such measures. Here’s a $100 device on Amazon which
can break HDCP. DRM is shown to be impossible even in *theory*, as the
decryption keys have to live somewhere in your house in order to watch movies
there. Exfiltrating them is just a matter of putting forth the effort. For most
users, it hardly requires any effort to bypass DRM - they can just punch “watch
[name of movie] for free” into Google. It’s well-understood and rather obvious
that DRM systems completely and entirely fail at their stated goal.
No reasonable engineer would knowingly agree to adding a broken system like that to their system, and trust me - the entire engineering community has been made well-aware of these faults. Any other system with these obvious flaws would be discarded immediately, and if the media industry hadn’t had their hands firmly clapped over their ears, screaming “la la la”, and throwing money at the problem, it would have been. But, just adding a broken system isn’t necessarily going to hurt much. The problem is that, in its failure to achieve its stated goals, DRM brings with it some serious side-effects. DRM is closely tied to nonfree software - the RIAA mafia wants to keep their garbage a secret, after all. Moreover, DRM takes away the freedom to play your media when and where you want. Why should you have to have an internet connection? Why can’t you watch it on your ancient iPod running Rockbox? DRM exists to restrict users from doing what they want. More sinisterly, it exists to further the industry’s push to end consumer ownership of its products - preferring to steal from you monthly subscription fees and lease the media to you. Free software maintainers are responsible for protecting their users from this kind of abuse, and putting DRM into our software betrays them.
The authors are of the opinion that HDCP support in Weston does not take away
any rights from users. It doesn’t *stop* you from doing anything. This is true,
in the same way that killing environmental regulations doesn’t harm the
environment. Adding HDCP support is handing a bottle of whiskey to an abusive
husband. And the resulting system - and DRM as a whole - is known to be
inherently broken and ineffective, a fact that they even acknowledge in their
article. This feature *enables* media companies to abuse *your* users. Enough
cash might help some devs to doublethink their way out of it, but it’s true all
the same. They added these features to help abusive companies abuse their users,
in the hopes that they’ll send back more money or more patches. They say as much
in the article, it’s no secret.
Or, let’s give them the benefit of the doubt: perhaps their bosses forced them
to add this[1]. There have been other developers on this ledge, and I’ve talked
them down. Here’s the thing: it worked. Their organizations didn’t pursue DRM
any further. You are not the lowly code monkey you may think you are. Engineers
have real power in the organization. You can say “no” and it’s your
responsibility to say “no” when someone asks you to write unethical code.
Some of the people I’ve spoken to about HDCP for Wayland, particularly for
Weston, are of the opinion that “a protocol for it exists, therefore we will
implement it”. This is reckless and stupid. We already know what happens when
you bend the knee to our DRM overlords: look at Firefox. In 2014, Mozilla
added DRM to Firefox after a year of fighting against its standardization in the
W3C (a captured organization which governs[2] web standards). They
capitulated, and it did absolutely nothing to stop them from being steamrolled
by Chrome’s growing popularity. Their market-share freefall didn’t even slow
down in 2014, or in any year since[3]. Collabora went down without a fight in
the first place.
Anyone who doesn’t recognize that self-interested organizations with a great
deal of resources are working against *our* interests as a free software
community is an idiot. We are at war with the bad actors pushing these systems,
and they are to be given no quarter.
Anyone who realizes this and turns a blind eye to it is a coward. Anyone who
doesn’t stand up to their boss, sits down, implements it in our free software
ecosystem, and cashes their check the next Friday - is not only a coward, but a
traitor to their users, their peers, and to society as a whole.
“HDCP support in Weston is a good thing”? It’s a good thing for *you*, maybe.
It’s a good thing for media conglomerates which want our ecosystem crushed
underfoot. It’s a bad thing for your users, and you know it, Collabora. Shame on
you for gaslighting us.
However… the person who *reverts* these changes is a hero, even in the face of
past mistakes. Weston, Collabora, you still have a chance to repent. Do what you
know is right and stand by those principles in the future.
P.S. To make sure I’m not writing downers all the time, rest assured that the next article will bring good news - RaptorCS has been working hard to correct the issues I raised in my last article.
1. This is just for the sake of argument. I’ve spoken 1-on-1 with some of the developers responsible and they stand by their statements as their personal opinions. ↩︎
2. Or at least attempts to govern. ↩︎
3. Source: StatCounter. Measuring browser market-share is hard, collect your grain of salt here. ↩︎
| true | true | true | null |
2024-10-12 00:00:00
|
2019-10-07 00:00:00
| null | null | null | null | null | null |
18,223,437 |
https://docs.python-guide.org/writing/gotchas/
|
Common Gotchas — The Hitchhiker's Guide to Python
| null |
# Common Gotchas¶
For the most part, Python aims to be a clean and consistent language that avoids surprises. However, there are a few cases that can be confusing for newcomers.
Some of these cases are intentional but can be potentially surprising. Some could arguably be considered language warts. In general, what follows is a collection of potentially tricky behavior that might seem strange at first glance, but are generally sensible, once you’re aware of the underlying cause for the surprise.
## Mutable Default Arguments¶
Seemingly the *most* common surprise new Python programmers encounter is
Python’s treatment of mutable default arguments in function definitions.
### What You Wrote¶
```
def append_to(element, to=[]):
    to.append(element)
    return to
```
### What You Might Have Expected to Happen¶
```
my_list = append_to(12)
print(my_list)
my_other_list = append_to(42)
print(my_other_list)
```
A new list is created each time the function is called if a second argument isn’t provided, so that the output is:
```
[12]
[42]
```
### What Actually Happens¶
```
[12]
[12, 42]
```
A new list is created *once* when the function is defined, and the same list is used in each successive call.
Python’s default arguments are evaluated *once* when the function is defined, not each time the function is called (as they are in, say, Ruby). This means that if you use a mutable default argument and mutate it, you *will* have mutated that object for all future calls to the function as well.
### What You Should Do Instead¶
Create a new object each time the function is called, by using a default arg to signal that no argument was provided (`None` is often a good choice).
```
def append_to(element, to=None):
    if to is None:
        to = []
    to.append(element)
    return to
```
Do not forget, you are passing a *list* object as the second argument.
### When the Gotcha Isn’t a Gotcha¶
Sometimes you can specifically “exploit” (read: use as intended) this behavior to maintain state between calls of a function. This is often done when writing a caching function.
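For instance, here is a minimal sketch of that caching pattern (the function name, the squaring stand-in, and the use of a dict as the shared default are illustrative assumptions, not part of the guide):

```
def compute_expensive(value, _cache={}):
    # The default dict is created once and shared across calls,
    # so results computed earlier are returned from the cache later.
    if value not in _cache:
        _cache[value] = value ** 2  # stand-in for an expensive computation
    return _cache[value]
```

Calling `compute_expensive(3)` twice performs the computation only once; the second call is served from the shared default dict.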
## Late Binding Closures¶
Another common source of confusion is the way Python binds its variables in closures (or in the surrounding global scope).
### What You Wrote¶
```
def create_multipliers():
    return [lambda x : i * x for i in range(5)]
```
### What You Might Have Expected to Happen¶
```
for multiplier in create_multipliers():
    print(multiplier(2))
```
A list containing five functions that each have their own closed-over `i` variable that multiplies their argument, producing:
```
0
2
4
6
8
```
### What Actually Happens¶
```
8
8
8
8
8
```
Five functions are created; instead all of them just multiply `x` by 4.
Python’s closures are *late binding*. This means that the values of variables used in closures are looked up at the time the inner function is called.
Here, whenever *any* of the returned functions are called, the value of `i` is looked up in the surrounding scope at call time. By then, the loop has completed and `i` is left with its final value of 4.
What’s particularly nasty about this gotcha is the seemingly prevalent misinformation that this has something to do with lambdas in Python. Functions created with a `lambda` expression are in no way special, and in fact the same exact behavior is exhibited by just using an ordinary `def`:
```
def create_multipliers():
    multipliers = []
    for i in range(5):
        def multiplier(x):
            return i * x
        multipliers.append(multiplier)
    return multipliers
```
### What You Should Do Instead¶
The most general solution is arguably a bit of a hack. Due to Python’s aforementioned behavior concerning evaluating default arguments to functions (see Mutable Default Arguments), you can create a closure that binds immediately to its arguments by using a default arg like so:
```
def create_multipliers():
    return [lambda x, i=i : i * x for i in range(5)]
```
Alternatively, you can use the functools.partial function:
```
from functools import partial
from operator import mul

def create_multipliers():
    return [partial(mul, i) for i in range(5)]
```
### When the Gotcha Isn’t a Gotcha¶
Sometimes you want your closures to behave this way. Late binding is good in lots of situations. Looping to create unique functions is unfortunately a case where they can cause hiccups.
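As a quick, hedged illustration (the `settings` dict and the greeter function below are made up for this example, not taken from the guide), late binding is convenient when a closure should always see the *current* value of something that may change later:

```
settings = {"greeting": "Hello"}

def make_greeter():
    # Late binding: settings["greeting"] is looked up at call time,
    # so later updates to settings are picked up automatically.
    return lambda name: "{}, {}!".format(settings["greeting"], name)

greet = make_greeter()
settings["greeting"] = "Hi"
print(greet("Ada"))  # prints "Hi, Ada!" because the lookup happens at call time
```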
## Bytecode (.pyc) Files Everywhere!¶
By default, when executing Python code from files, the Python interpreter will automatically write a bytecode version of that file to disk, e.g. `module.pyc`.
These `.pyc` files should not be checked into your source code repositories.
Theoretically, this behavior is on by default for performance reasons. Without these bytecode files, Python would re-generate the bytecode every time the file is loaded.
### Disabling Bytecode (.pyc) Files¶
Luckily, the process of generating the bytecode is extremely fast, and isn’t something you need to worry about while developing your code.
Those files are annoying, so let’s get rid of them!
```
$ export PYTHONDONTWRITEBYTECODE=1
```
With the `$PYTHONDONTWRITEBYTECODE` environment variable set, Python will no longer write these files to disk, and your development environment will remain nice and clean.
I recommend setting this environment variable in your `~/.profile`.
### Removing Bytecode (.pyc) Files¶
Here’s a nice trick for removing all of these files, if they already exist:
```
$ find . -type f -name "*.py[co]" -delete -or -type d -name "__pycache__" -delete
```
Run that from the root directory of your project, and all `.pyc` files will suddenly vanish. Much better.
### Version Control Ignores¶
If you still need the `.pyc` files for performance reasons, you can always add them to the ignore files of your version control repositories. Popular version control systems have the ability to use wildcards defined in a file to apply special rules.
An ignore file will make sure the matching files don’t get checked into the repository. Git uses `.gitignore` while Mercurial uses `.hgignore`.
At the minimum your ignore files should look like this.
```
syntax:glob # This line is not needed for .gitignore files.
*.py[cod] # Will match .pyc, .pyo and .pyd files.
__pycache__/ # Exclude the whole folder
```
You may wish to include more files and directories depending on your needs. The next time you commit to the repository, these files will not be included.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-01-01 00:00:00
|
article
|
python-guide.org
|
docs.python-guide.org
| null | null |
|
22,288,458 |
https://faunalytics.org/machine-learning-could-make-animal-tests-obsolete/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
17,155,230 |
https://developers.google.com/web/updates/2018/05/devtools
|
What's New In DevTools (Chrome 68) | Blog | Chrome for Developers
|
Kayce Basques
|
New to DevTools in Chrome 68:
- Eager Evaluation. As you type expressions, the Console previews the result.
- Argument hints. As you type functions, the Console shows you the expected arguments for that function.
- Function autocompletion. After typing a function call such as `document.querySelector('p')`, the Console shows you the functions and properties that the return value supports.
- ES2017 keywords in the Console. Keywords such as `await` are now available in the Console's autocomplete UI.
- Lighthouse 3.0 in the Audits panel. Faster, more consistent audits, a new UI, and new audits.
- `BigInt` support. Try out JavaScript's new arbitrary-precision integer in the Console.
- Adding property paths to the Watch pane. Add properties from the Scope pane to the Watch pane.
- "Show timestamps" moved to Settings.
Read on, or watch the video version of the release notes, below.
## Assistive Console
Chrome 68 ships with a few new Console features related to autocompletion and previewing.
### Eager Evaluation
When you type an expression in the Console, the Console can now show a preview of the result of that expression below your cursor.
**Figure 1**. The Console is printing the result of the `sort()`
operation before it has been
explicitly executed
To enable Eager Evaluation:
- Open the
**Console**. - Open
**Console Settings**. - Enable the
**Eager evaluation**checkbox.
DevTools does not eager evaluate if the expression causes side effects.
### Argument hints
As you're typing out functions, the Console now shows you the arguments that the function expects.
**Figure 2**. Various examples of argument hints in the Console
Notes:
- A question mark before an arg, such as
`?options`
, represents an optional arg. - An ellipsis before an arg, such as
`...items`
, represents a spread. - Some functions, such as
`CSS.supports()`
, accept multiple argument signatures.
### Autocomplete after function executions
After enabling Eager Evaluation, the Console now also shows you which properties and functions are available after you type out a function.
**Figure 3**. The top screenshot represents the old behavior, and the bottom screenshot represents
the new behavior that supports function autocompletion
### ES2017 keywords in autocomplete
ES2017 keywords, such as `await`
, are now available in the Console's autocomplete UI.
**Figure 4**. The Console now suggests `await`
in its autocomplete UI
## Faster, more reliable audits, a new UI, and new audits
Chrome 68 ships with Lighthouse 3.0. The next sections are a roundup of some of the biggest changes. See Announcing Lighthouse 3.0 for the full story.
### Faster, more reliable audits
Lighthouse 3.0 has a new internal auditing engine, codenamed Lantern, which completes your audits faster, and with less variance between runs.
### New UI
Lighthouse 3.0 also brings a new UI, thanks to a collaboration between the Lighthouse and Chrome UX (Research & Design) teams.
**Figure 5**. The new report UI in Lighthouse 3.0
### New audits
Lighthouse 3.0 also ships with 4 new audits:
- First Contentful Paint
- robots.txt is not valid
- Use video formats for animated content
- Avoid multiple, costly round trips to any origin
## BigInt support
Chrome 68 supports a new numeric primitive called `BigInt`
. `BigInt`
lets you represent
integers with arbitrary precision. Try it out in the Console:
**Figure 6**. An example of `BigInt`
in the Console
## Add property path to watch
While paused on a breakpoint, right-click a property in the Scope pane and select **Add property
path to watch** to add that property to the Watch pane.
**Figure 7**. An example of **Add property path to watch**
## "Show timestamps" moved to settings
The **Show timestamps** checkbox previously in **Console Settings**
has moved to Settings.
## Download the preview channels
Consider using the Chrome Canary, Dev or Beta as your default development browser. These preview channels give you access to the latest DevTools features, test cutting-edge web platform APIs, and find issues on your site before your users do!
## Getting in touch with the Chrome DevTools team
Use the following options to discuss the new features and changes in the post, or anything else related to DevTools.
- Submit a suggestion or feedback to us via crbug.com.
- Report a DevTools issue using the
**More options** > **Help** > **Report a DevTools issue** in DevTools.
- Tweet at @ChromeDevTools.
- Leave comments on our What's new in DevTools YouTube videos or DevTools Tips YouTube videos.
## What's new in DevTools
A list of everything that has been covered in the What's new in DevTools series.
- Network panel improvements
- Network filters reimagined
- HAR exports now exclude sensitive data by default
- Elements panel improvements
- Autocomplete values for text-emphasis-* properties
- Scroll overflows marked with a badge
- Performance panel improvements
- Recommendations in live metrics
- Navigate breadcrumbs
- Memory panel improvements
- New 'Detached elements' profile
- Improved naming of plain JS objects
- Turn off dynamic theming
- Chrome Experiment: Process sharing
- Lighthouse 12.2.1
- Miscellaneous highlights
- Recorder supports export to Puppeteer for Firefox
- Performance panel improvements
- Live metrics observations
- Search requests in the Network track
- See stack traces of performance.mark and performance.measure calls
- Use test address data in the Autofill panel
- Elements panel improvements
- Force more states for specific elements
- Elements > Styles now autocompletes more grid properties
- Lighthouse 12.2.0
- Miscellaneous highlights
- Console insights by Gemini are going live in most European countries
- Performance panel updates
- Enhanced Network track
- Customize performance data with extensibility API
- Details in the Timings track
- Copy all listed requests in the Network panel
- Faster heap snapshots with named HTML tags and less clutter
- Open Animations panel to capture animations and edit @keyframes live
- Lighthouse 12.1.0
- Accessibility improvements
- Miscellaneous highlights
- Inspect CSS anchor positioning in the Elements panel
- Sources panel improvements
- Enhanced 'Never Pause Here'
- New scroll snap event listeners
- Network panel improvements
- Updated network throttling presets
- Service worker information in custom fields of the HAR format
- Send and receive WebSocket events in the Performance panel
- Miscellaneous highlights
- Performance panel improvements
- Move and hide tracks with updated track configuration mode
- Ignore scripts in the flame chart
- Throttle down the CPU by 20 times
- Performance insights panel will be deprecated
- Find excessive memory usage with new filters in heap snapshots
- Inspect storage buckets in Application > Storage
- Disable self-XSS warnings with a command-line flag
- Lighthouse 12.0.0
- Miscellaneous highlights
- Understand errors and warnings in the Console better with Gemini
- @position-try rules support in Elements > Styles
- Sources panel improvements
- Configure automatic pretty-printing and bracket closing
- Handled rejected promises are recognized as caught
- Error causes in the Console
- Network panel improvements
- Inspect Early Hints headers
- Hide the Waterfall column
- Performance panel improvements
- Capture CSS selector statistics
- Change order and hide tracks
- Ignore retainers in the Memory panel
- Lighthouse 11.7.1
- Miscellaneous highlights
- New Autofill panel
- Enhanced network throttling for WebRTC
- Scroll-driven animations support in the Animations panel
- Improved CSS nesting support in Elements > Styles
- Enhanced Performance panel
- Hide functions and their children in the flame chart
- Arrows from selected initiators to events they initiated
- Lighthouse 11.6.0
- Tooltips for special categories in Memory > Heap snapshots
- Application > Storage updates
- Bytes used for shared storage
- Web SQL is fully deprecated
- Coverage panel improvements
- The Layers panel might be deprecated
- JavaScript Profiler deprecation: Phase four, final
- Miscellaneous highlights
- Find the Easter egg
- Elements panel updates
- Emulate a focused page in Elements > Styles
- Color Picker, Angle Clock, and Easing Editor in
`var()`
fallbacks - CSS length tool is deprecated
- Popover for the selected search result in the Performance > Main track
- Network panel updates
- Clear button and search filter in the Network > EventStream tab
- Tooltips with exemption reasons for third-party cookies in Network > Cookies
- Enable and disable all breakpoints in Sources
- View loaded scripts in DevTools for Node.js
- Lighthouse 11.5.0
- Accessibility improvements
- Miscellaneous highlights
- The official collection of Recorder extensions is live
- Network improvements
- Failure reason in the Status column
- Improved Copy submenu
- Performance improvements
- Breadcrumbs in the Timeline
- Event initiators in the Main track
- JavaScript VM instance selector menu for Node.js DevTools
- New shortcut and command in Sources
- Elements improvements
- The ::view-transition pseudo-element is now editable in Styles
- The align-content property support for block containers
- Posture support for emulated foldable devices
- Dynamic theming
- Third-party cookies phaseout warnings in the Network and Application panels
- Lighthouse 11.4.0
- Accessibility improvements
- Miscellaneous highlights
- Elements improvements
- Streamlined filter bar in the Network panel
`@font-palette-values`
support- Supported case: Custom property as a fallback of another custom property
- Improved source map support
- Performance panel improvements
- Enhanced Interactions track
- Advanced filtering in Bottom-Up, Call Tree, and Event Log tabs
- Indentation markers in the Sources panel
- Helpful tooltips for overridden headers and content in the Network panel
- New Command Menu options for adding and removing request blocking patterns
- The CSP violations experiment is removed
- Lighthouse 11.3.0
- Accessibility improvements
- Miscellaneous highlights
- Third-party cookie phaseout
- Analyze your website's cookies with the Privacy Sandbox Analysis Tool
- Enhanced ignore listing
- Default exclusion pattern for node_modules
- Caught exceptions now stop execution if caught or passing through non-ignored code
`x_google_ignoreList`
renamed to`ignoreList`
in source maps- New input mode toggle during remote debugging
- The Elements panel now shows URLs for #document nodes
- Effective Content Security Policy in the Application panel
- Improved animation debugging
- 'Do you trust this code?' dialog in Sources and self-XSS warning in Console
- Event listener breakpoints in web workers and worklets
- The new media badge for
`<audio>`
and`<video>`
- Preloading renamed to Speculative loading
- Lighthouse 11.2.0
- Accessibility improvements
- Miscellaneous highlights
- Improved @property section in Elements > Styles
- Editable @property rule
- Issues with invalid @property rules are reported
- Updated list of devices to emulate
- Pretty-print inline JSON in script tags in Sources
- Autocomplete private fields in Console
- Lighthouse 11.1.0
- Accessibility improvements
- Web SQL deprecation
- Screenshot aspect ratio validation in Application > Manifest
- Miscellaneous highlights
- New section for custom properties in Elements > Styles
- More local overrides improvements
- Enhanced search
- Improved Sources panel
- Streamlined workspace in the Sources panel
- Reorder panes in Sources
- Syntax highlighting and pretty-printing for more script types
- Emulate prefers-reduced-transparency media feature
- Lighthouse 11
- Accessibility improvements
- Miscellaneous highlights
- Network panel improvements
- Override web content locally even faster
- Override the content of XHR and fetch requests
- Hide Chrome extension requests
- Human-readable HTTP status codes
Performance: See the changes in fetch priority for network events
- Sources settings enabled by default: Code folding and automatic file reveal
- Improved debugging of third-party cookie issues
- New colors
- Lighthouse 10.4.0
- Debug preloading in the Application panel
- The C/C++ WebAssembly debugging extension for DevTools is now open source
- Miscellaneous highlights
- (Experimental) New rendering emulation: prefers-reduced-transparency
- (Experimental) Enhanced Protocol monitor
- Improved debugging of missing stylesheets
- Linear timing support in Elements > Styles > Easing Editor
- Storage buckets support and metadata view
- Lighthouse 10.3.0
- Accessibility: Keyboard commands and improved screen reading
- Miscellaneous highlights
- Elements improvements
- New CSS subgrid badge
- Selector specificity in tooltips
- Values of custom CSS properties in tooltips
- Sources improvements
- CSS syntax highlighting
- Shortcut to set conditional breakpoints
- Application > Bounce Tracking Mitigations
- Lighthouse 10.2.0
- Ignore content scripts by default
- Network > Response improvements
- Miscellaneous highlights
- WebAssembly debugging support
- Improved stepping behavior in Wasm apps
- Debug Autofill using the Elements panel and Issues tab
- Assertions in Recorder
- Lighthouse 10.1.1
- Performance enhancements
- performance.mark() shows timing on hover in Performance > Timings
- profile() command populates Performance > Main
- Warning for slow user interactions
- Web Vitals updates
- JavaScript Profiler deprecation: Phase three
- Miscellaneous highlights
- Override network response headers
- Nuxt, Vite, and Rollup debugging improvements
- CSS improvements in Elements > Styles
- Invalid CSS properties and values
- Links to key frames in the animation shorthand property
- New Console setting: Autocomplete on Enter
- Command Menu emphasizes authored files
- JavaScript Profiler deprecation: Stage two
- Miscellaneous highlights
- Recorder updates
- Recorder replay extensions
- Record with pierce selectors
- Export recordings as Puppeteer scripts with Lighthouse analysis
- Get extensions for Recorder
- Elements > Styles updates
- CSS documentation in the Styles pane
- CSS nesting support
- Marking logpoints and conditional breakpoints in the Console
- Ignore irrelevant scripts during debugging
- JavaScript Profiler deprecation started
- Emulate reduced contrast
- Lighthouse 10
- Miscellaneous highlights
- Debugging HD color with the Styles pane
- Enhanced breakpoint UX
- Customizable Recorder shortcuts
- Better syntax highlight for Angular
- Reorganize caches in the Application panel
- Miscellaneous highlights
- Clearing Performance Panel on reload
- Recorder updates
- View and highlight the code of your user flow in the Recorder
- Customize selector types of a recording
- Edit user flow while recording
- Automatic in-place pretty print
- Better syntax highlight and inline preview for Vue, SCSS and more
- Ergonomic and consistent Autocomplete in the Console
- Miscellaneous highlights
- Recorder: Copy as options for steps, in-page replay, step's context menu
- Show actual function names in performance's recordings
- New keyboard shortcuts in the Console & Sources panel
- Improved JavaScript debugging
- Miscellaneous highlights
- [Experimental] Enhanced UX in managing breakpoints
- [Experimental] Automatic in-place pretty print
- Hints for inactive CSS properties
- Auto-detect XPath and text selectors in the Recorder panel
- Step through comma-separated expressions
- Improved Ignore list setting
- Miscellaneous highlights
- Customize keyboard shortcuts in DevTools
- Toggle light and dark themes with keyboard shortcut
- Highlight C/C++ objects in the Memory Inspector
- Support full initiator information for HAR import
- Start DOM search after pressing
`Enter`
- Display
`start`
and`end`
icons for`align-content`
CSS flexbox properties - Miscellaneous highlights
- Group files by Authored / Deployed in the Sources panel
- Linked stack traces for asynchronous operations
- Automatically ignore known third-party scripts
- Improved call stack during debugging
- Hiding ignore-listed sources in the Sources panel
- Hiding ignore-listed files in the Command Menu
- New Interactions track in the Performance panel
- LCP timings breakdown in the Performance Insights panel
- Auto-generate default name for recordings in the Recorder panel
- Miscellaneous highlights
- Step-by-step replay in the Recorder
- Support mouse over event in the Recorder panel
- Largest Contentful Paint (LCP) in the Performance insights panel
- Identify flashes of text (FOIT, FOUT) as potential root causes for layout shifts
- Protocol handlers in the Manifest pane
- Top layer badge in the Elements panel
- Attach Wasm debugging information at runtime
- Support live edit during debugging
- View and edit @scope at rules in the Styles pane
- Source map improvements
- Miscellaneous highlights
- Restart frame during debugging
- Slow replay options in the Recorder panel
- Build an extension for the Recorder panel
- Group files by Authored / Deployed in the Sources panel
- New User Timings track in the Performance insights panel
- Reveal assigned slot of an element
- Simulate hardware concurrency for Performance recordings
- Preview non-color value when autocompleting CSS variables
- Identify blocking frames in the Back/forward cache pane
- Improved autocomplete suggestions for JavaScript objects
- Source maps improvements
- Miscellaneous highlights
- Capture double-click and right-click events in the Recorder panel
- New timespan and snapshot mode in the Lighthouse panel
- Improved zoom control in the Performance Insights panel
- Confirm to delete a performance recording
- Reorder panes in the Elements panel
- Picking a color outside of the browser
- Improved inline value preview during debugging
- Support large blobs for virtual authenticators
- New keyboard shortcuts in the Sources panel
- Source maps improvements
- Preview feature: New Performance insights panel
- New shortcuts to emulate light and dark themes
- Improved security on the Network Preview tab
- Improved reloading at breakpoint
- Console updates
- Cancel user flow recording at the start
- Display inherited highlight pseudo-elements in the Styles pane
- Miscellaneous highlights
- [Experimental] Copy CSS changes
- [Experimental] Picking color outside of browser
- Import and export recorded user flows as a JSON file
- View cascade layers in the Styles pane
- Support for the
`hwb()`
color function - Improved the display of private properties
- Miscellaneous highlights
- [Experimental] New timespan and snapshot mode in the Lighthouse panel
- View and edit @supports at rules in the Styles pane
- Support common selectors by default
- Customize the recording's selector
- Rename a recording
- Preview class/function properties on hover
- Partially presented frames in the Performance panel
- Miscellaneous highlights
- Throttling WebSocket requests
- New Reporting API pane in the Application panel
- Support wait until element is visible/clickable in the Recorder panel
- Better console styling, formatting and filtering
- Debug Chrome extension with source map files
- Improved source folder tree in the Sources panel
- Display worker source files in the Sources panel
- Chrome's Auto Dark Theme updates
- Touch-friendly color-picker and split pane
- Miscellaneous highlights
- Preview feature: Full-page accessibility tree
- More precise changes in the Changes tab
- Set longer timeout for user flow recording
- Ensure your pages are cacheable with the Back/forward cache tab
- New Properties pane filter
- Emulate the CSS forced-colors media feature
- Show rulers on hover command
- Support
`row-reverse`
and`column-reverse`
in the Flexbox editor - New keyboard shortcuts to replay XHR and expand all search results
- Lighthouse 9 in the Lighthouse panel
- Improved Sources panel
- Miscellaneous highlights
- [Experimental] Endpoints in the Reporting API pane
- Preview feature: New Recorder panel
- Refresh device list in Device Mode
- Autocomplete with Edit as HTML
- Improved code debugging experience
- Syncing DevTools settings across devices
- Preview feature: New CSS Overview panel
- Restored and improved CSS length edit and copy experince
- Emulate the CSS prefers-contrast media feature
- Emulate the Chrome's Auto Dark Theme feature
- Copy declarations as JavaScript in the Styles pane
- New Payload tab in the Network panel
- Improved the display of properties in the Properties pane
- Option to hide CORS errors in the Console
- Proper
`Intl`
objects preview and evaluation in the Console - Consistent async stack traces
- Retain the Console sidebar
- Deprecated Application cache pane in the Application panel
- [Experimental] New Reporting API pane in the Application panel
- New CSS length authoring tools
- Hide issues in the Issues tab
- Improved the display of properties
- Lighthouse 8.4 in the Lighthouse panel
- Sort snippets in the Sources panel
- New links to translated release notes and report a translation bug
- Improved UI for DevTools command menu
- Use DevTools in your preferred language
- New Nest Hub devices in the Device list
- Origin trials in the Frame details view
- New CSS container queries badge
- New checkbox to invert the network filters
- Upcoming deprecation of the Console sidebar
- Display raw
`Set-Cookies`
headers in the Issues tab and Network panel - Consistent display native accessors as own properties in the Console
- Proper error stack traces for inline scripts with #sourceURL
- Change color format in the Computed pane
- Replace custom tooltips with native HTML tooltips
- [Experimental] Hide issues in the Issues tab
- Editable CSS container queries in the Styles pane
- Web bundle preview in the Network panel
- Attribution Reporting API debugging
- Better string handling in the Console
- Improved CORS debugging
- Lighthouse 8.1
- New note URL in the Manifest pane
- Fixed CSS matching selectors
- Pretty-printing JSON responses in the Network panel
- CSS grid editor
- Support for
`const`
redeclarations in the Console - Source order viewer
- New shortcut to view frame details
- Enhanced CORS debugging support
- Rename XHR label to Fetch/XHR
- Filter Wasm resource type in the Network panel
- User-Agent Client Hints for devices in the Network conditions tab
- Report Quirks mode issues in the Issues tab
- Include Compute Intersections in the Performance panel
- Lighthouse 7.5 in the Lighthouse panel
- Deprecated "Restart frame" context menu in the call stack
- [Experimental] Protocol monitor
- [Experimental] Puppeteer Recorder
- Web Vitals information pop up
- New Memory inspector
- Visualize CSS scroll-snap
- New badge settings pane
- Enhanced image preview with aspect ratio information
- New network conditions button with options to configure
`Content-Encoding`
s - shortcut to view computed value
`accent-color`
keyword- Categorize issue types with colors and icons
- Delete Trust tokens
- Blocked features in the Frame details view
- Filter experiments in the Experiments setting
- New
`Vary Header`
column in the Cache storage pane - Support JavaScript private brand check
- Enhanced support for breakpoints debugging
- Support hover preview with
`[]`
notation - Improved outline of HTML files
- Proper error stack traces for Wasm debugging
- New CSS flexbox debugging tools
- New Core Web Vitals overlay
- Moved issue count to the Console status bar
- Report Trusted Web Activity issues
- Format strings as (valid) JavaScript string literals in the Console
- New Trust Tokens pane in the Application panel
- Emulate the CSS color-gamut media feature
- Improved Progressive Web Apps tooling
- New
`Remote Address Space`
column in the Network panel - Performance improvements
- Display allowed/disallowed features in the Frame details view
- New
`SameParty`
column in the Cookies pane - Deprecated non-standard
`fn.displayName`
support - Deprecation of
`Don't show Chrome Data Saver warning`
in the Settings menu - [Experimental] Automatic low-contrast issue reporting in the Issues tab
- [Experimental] Full accessibility tree view in the Elements panel
- Debugging support for Trusted Types violations
- Capture node screenshot beyond viewport
- New Trust Tokens tab for network requests
- Lighthouse 7 in the Lighthouse panel
- Support forcing the CSS
`:target`
state - New shortcut to duplicate element
- Color pickers for custom CSS properties
- New shortcuts to copy CSS properties
- New option to show URL-decoded cookies
- Clear only visible cookies
- New option to clear third-party cookies in the Storage pane
- Edit User-Agent Client Hints for custom devices
- Persist "record network log" setting
- View WebTransport connections in the Network panel
- "Online" renamed to "No throttling"
- New copy options in the Console, Sources panel, and Styles pane
- New Service Workers information in the Frame details view
- Measure Memory information in the Frame details view
- Provide feedback from the Issues tab
- Dropped frames in the Performance panel
- Emulate foldable and dual-screen in Device Mode
- [Experimental] Automate browser testing with Puppeteer Recorder
- [Experimental] Font editor in the Styles pane
- [Experimental] CSS flexbox debugging tools
- [Experimental] New CSP Violations tab
- [Experimental] New color contrast calculation - Advanced Perceptual Contrast Algorithm (APCA)
- Faster DevTools startup
- New CSS angle visualization tools
- Emulate unsupported image types
- Simulate storage quota size in the Storage pane
- New Web Vitals lane in the Performance panel
- Report CORS errors in the Network panel
- Cross-origin isolation information in the Frame details view
- New Web Workers information in the Frame details view
- Display opener frame details for opened windows
- Open Network panel from the Service Workers pane
- Copy property value
- Copy stacktrace for network initiator
- Preview Wasm variable value on mouseover
- Evaluate Wasm variable in the Console
- Consistent units of measurement for file/memory sizes
- Highlight pseudo elements in the Elements panel
- [Experimental] CSS Flexbox debugging tools
- [Experimental] Customize chords keyboard shortcuts
- New CSS Grid debugging tools
- New WebAuthn tab
- Move tools between top and bottom panel
- New Computed sidebar pane in the Styles pane
- Grouping CSS properties in the Computed pane
- Lighthouse 6.3 in the Lighthouse panel
`performance.mark()`
events in the Timings section- New
`resource-type`
and`url`
filters in the Network panel - Frame details view updates
- Deprecation of
`Settings`
in the More tools menu - [Experimental] View and fix color contrast issues in the CSS Overview panel
- [Experimental] Customize keyboard shortcuts in DevTools
- New Media panel
- Capture node screenshots using Elements panel context menu
- Issues tab updates
- Emulate missing local fonts
- Emulate inactive users
- Emulate
`prefers-reduced-data`
- Support for new JavaScript features
- Lighthouse 6.2 in the Lighthouse panel
- Deprecation of "other origins" listing in the Service Workers pane
- Show coverage summary for filtered items
- New frame details view in Application panel
- Accessible color suggestion in the Styles pane
- Reinstate
**Properties**pane in the Elements panel - Human-readable
`X-Client-Data`
header values in the Network panel - Auto-complete custom fonts in the Styles pane
- Consistently display resource type in Network panel
- Clear buttons in the Elements and Network panels
- Style editing for CSS-in-JS frameworks
- Lighthouse 6 in the Lighthouse panel
- First Meaningful Paint (FMP) deprecation
- Support for new JavaScript features
- New app shortcut warnings in the Manifest pane
- Service worker
`respondWith`
events in the Timing tab - Consistent display of the Computed pane
- Bytecode offsets for WebAssembly files
- Line-wise copy and cut in Sources Panel
- Console settings updates
- Performance panel updates
- New icons for breakpoints, conditional breakpoints, and logpoints
- Fix site issues with the new Issues tab
- View accessibility information in the Inspect Mode tooltip
- Performance panel updates
- More accurate promise terminology in the Console
- Styles pane updates
- Deprecation of the **Properties** pane in the Elements panel
- App shortcuts support in the Manifest pane
- Emulate vision deficiencies
- Emulate locales
- Cross-Origin Embedder Policy (COEP) debugging
- New icons for breakpoints, conditional breakpoints, and logpoints
- View network requests that set a specific cookie
- Dock to left from the Command Menu
- The Settings option in the Main Menu has moved
- The Audits panel is now the Lighthouse panel
- Delete all Local Overrides in a folder
- Updated Long Tasks UI
- Maskable icon support in the Manifest pane
- Moto G4 support in Device Mode
- Cookie-related updates
- More accurate web app manifest icons
- Hover over CSS `content` properties to see unescaped values
- Source map errors in the Console
- Setting for disabling scrolling past the end of a file
- Support for `let` and `class` redeclarations in the Console
- Improved WebAssembly debugging
- Request Initiator Chains in the Initiator tab
- Highlight the selected network request in the Overview
- URL and path columns in the Network panel
- Updated User-Agent strings
- New Audits panel configuration UI
- Per-function or per-block code coverage modes
- Code coverage must now be initiated by a page reload
- Debug why a cookie was blocked
- View cookie values
- Simulate different prefers-color-scheme and prefers-reduced-motion preferences
- Code coverage updates
- Debug why a network resource was requested
- Console and Sources panels respect indentation preferences again
- New shortcuts for cursor navigation
- Multi-client support in the Audits panel
- Payment Handler debugging
- Lighthouse 5.2 in the Audits panel
- Largest Contentful Paint in the Performance panel
- File DevTools issues from the Main Menu
- Copy element styles
- Visualize layout shifts
- Lighthouse 5.1 in the Audits panel
- OS theme syncing
- Keyboard shortcut for opening the Breakpoint Editor
- Prefetch cache in the Network panel
- Private properties when viewing objects
- Notifications and push messages in the Application panel
- Autocomplete with CSS values
- A new UI for network settings
- WebSocket messages in HAR exports
- HAR import and export buttons
- Real-time memory usage
- Service worker registration port numbers
- Inspect Background Fetch and Background Sync events
- Puppeteer for Firefox
- Meaningful presets when autocompleting CSS functions
- Clear site data from the Command Menu
- View all IndexedDB databases
- View a resource's uncompressed size on hover
- Inline breakpoints in the Breakpoints pane
- IndexedDB and Cache resource counts
- Setting for disabling the detailed Inspect tooltip
- Setting for toggling tab indentation in the Editor
- Highlight all nodes affected by CSS property
- Lighthouse v4 in the Audits panel
- WebSocket binary message viewer
- Capture area screenshot in the Command Menu
- Service worker filters in the Network panel
- Performance panel updates
- Long tasks in Performance panel recordings
- First Paint in the Timing section
- Bonus tip: Shortcut for viewing RGB and HSL color codes (video)
- Logpoints
- Detailed tooltips in Inspect Mode
- Export code coverage data
- Navigate the Console with a keyboard
- AAA contrast ratio line in the Color Picker
- Save custom geolocation overrides
- Code folding
- Frames tab renamed to Messages tab
- Bonus tip: Network panel filtering by property (video)
- Visualize performance metrics in the Performance panel
- Highlight text nodes in the DOM Tree
- Copy the JS path to a DOM node
- Audits panel updates, including a new audit that detects JS libraries and new keywords for accessing the Audits panel from the Command Menu
- Bonus tip: Use Device Mode to inspect media queries (video)
- Hover over a Live Expression result to highlight a DOM node
- Store DOM nodes as global variables
- Initiator and priority information now in HAR imports and exports
- Access the Command Menu from the Main Menu
- Picture-in-Picture breakpoints
- Bonus tip: Use `monitorEvents()` to log a node's fired events in the Console (video)
- Live Expressions in the Console
- Highlight DOM nodes during Eager Evaluation
- Performance panel optimizations
- More reliable debugging
- Enable network throttling from the Command Menu
- Autocomplete Conditional Breakpoints
- Break on AudioContext events
- Debug Node.js apps with ndb
- Bonus tip: Measure real world user interactions with the User Timing API
- Eager Evaluation
- Argument hints
- Function autocompletion
- ES2017 keywords
- Lighthouse 3.0 in the Audits panel
- BigInt support
- Adding property paths to the Watch pane
- "Show timestamps" moved to Settings
- Bonus tip: Lesser-known Console methods (video)
- Search across all network headers
- CSS variable value previews
- Copy as fetch
- New audits, desktop configuration options, and viewing traces
- Stop infinite loops
- User Timing in the Performance tabs
- JavaScript VM instances clearly listed in the Memory panel
- Network tab renamed to Page tab
- Dark theme updates
- Certificate transparency information in the Security panel
- Site isolation features in the Performance panel
- Bonus tip: Layers panel + Animations Inspector (video)
- Blackboxing in the Network panel
- Auto-adjust zooming in Device Mode
- Pretty-printing in the Preview and Response tabs
- Previewing HTML content in the Preview tab
- Local Overrides support for styles inside of HTML
- Bonus tip: Blackbox framework scripts to make Event Listener Breakpoints more useful
- Local Overrides
- New accessibility tools
- The Changes tab
- New SEO and performance audits
- Multiple recordings in the Performance panel
- Reliable code stepping with workers in async code
- Bonus tip: Automate DevTools actions with Puppeteer (video)
- Performance Monitor
- Console Sidebar
- Group similar Console messages
- Bonus tip: Toggle hover pseudo-class (video)
- Multi-client remote debugging support
- Workspaces 2.0
- 4 new audits
- Simulate push notifications with custom data
- Trigger background sync events with custom tags
- Bonus tip: Event listener breakpoints (video)
- Top-level await in the Console
- New screenshot workflows
- CSS Grid highlighting
- A new Console API for querying objects
- New Console filters
- HAR imports in the Network panel
- Previewable cache resources
- More predictable cache debugging
- Block-level code coverage
- Mobile device throttling simulation
- View storage usage
- View when a service worker cached responses
- Enable the FPS meter from the Command Menu
- Set mousewheel behavior to zoom or scroll
- Debugging support for ES6 modules
- New Audits panel
- 3rd-Party Badges
- A new gesture for Continue To Here
- Step into async
- More informative object previews in the Console
- More informative context selection in the Console
- Real-time updates in the Coverage tab
- Simpler network throttling options
- Async stacks on by default
- CSS and JS code coverage
- Full-page screenshots
- Block requests
- Step over async await
- Unified Command Menu
| true | true | true |
Eager evaluation, argument hints, function autocompletion, Lighthouse 3.0, and more.
|
2024-10-12 00:00:00
|
2018-05-21 00:00:00
| null |
website
|
chrome.com
|
Chrome for Developers
| null | null |
35,110,344 |
https://devblogs.microsoft.com/typescript/typescripts-migration-to-modules/
|
TypeScript's Migration to Modules - TypeScript
|
Daniel Rosenwasser
|
One of the most impactful things we’ve worked on in TypeScript 5.0 isn’t a feature, a bug fix, or a data structure optimization. Instead, it’s an infrastructure change.
In TypeScript 5.0, we restructured our entire codebase to use ECMAScript modules, and switched to a newer emit target.
## What to Know
Now, before we dive in, we want to set expectations. It’s good to know what this does and doesn’t mean for TypeScript 5.0.
As a general user of TypeScript, you’ll need to be running Node.js 12 at a minimum.
`npm install`
s should go a little faster and take up less space, since the `typescript`
package size should be reduced by about 46%.
Running TypeScript will get a nice bit faster – typically cutting down build times of anywhere between 10%-25%.
As an API consumer of TypeScript, you’ll likely be unaffected.
TypeScript won’t be shipping its API as ES modules yet, and will still provide a CommonJS-authored API.
That means existing build scripts will still work.
If you rely on TypeScript’s `typescriptServices.js`
and `typescriptServices.d.ts`
files, you’ll be able to rely on `typescript.js`
/`typescript.d.ts`
instead.
If you’re importing `protocol.d.ts`
, you can switch to `tsserverlibrary.d.ts`
and leverage `ts.server.protocol`
.
Finally, as a contributor of TypeScript, your life will likely become a lot easier. Build times will be a lot faster, incremental check times should be faster, and you’ll have a more familiar authoring format if you already write TypeScript code outside of our compiler.
## Some Background
Now that might sound surprising – modules?
Like, files with `import`
s and `export`
s?
Isn’t almost all modern JavaScript and TypeScript using modules?
Exactly! But the current TypeScript codebase predates ECMAScript’s modules – our last rewrite started in 2014, and modules were standardized in 2015. We didn’t know how (in)compatible they’d be with other module systems like CommonJS, and to be frank, there wasn’t a huge benefit for us at the time for authoring in modules.
Instead, TypeScript leveraged `namespace`
s – formerly called *internal modules*.
Namespaces had a few useful features. For example, their scopes could merge across files, meaning it was easy to break up a project across files and expose it cleanly as a single variable.
```
// parser.ts
namespace ts {
export function createSourceFile(/*...*/) {
/*...*/
}
}
// program.ts
namespace ts {
export function createProgram(/*...*/) {
/*...*/
}
}
// user.ts
// Can easily access both functions from 'ts'.
const sourceFile = ts.createSourceFile(/*...*/);
const program = ts.createProgram(/*...*/);
```
It was also easy for us to reference exports across files at a time when auto-import didn’t exist.
Code in the same namespace could access each other’s exports without needing to write `import`
statements.
```
// parser.ts
namespace ts {
export function createSourceFile(/*...*/) {
/*...*/
}
}
// program.ts
namespace ts {
export function createProgram(/*...*/) {
// We can reference 'createSourceFile' without writing
// 'ts.createSourceFile' or writing any sort of 'import'.
let file = createSourceFile(/*...*/);
}
}
```
In retrospect, these features from namespaces made it difficult for other tools to support TypeScript; however, they were very useful for our codebase.
Fast-forward several years, and we were starting to feel more of the downsides of namespaces.
## Issues with Namespaces
TypeScript is written in TypeScript. This occasionally surprises people, but it’s a common practice for compilers to be written in the language they compile. Doing this really helps us understand the experience we’re shipping to other JavaScript and TypeScript developers. The jargon-y way to say this is: we bootstrap the TypeScript compiler so that we can dog-food it.
Most modern JavaScript and TypeScript code is authored using modules.
By using namespaces, we weren’t using TypeScript the way most of our users are.
*So many* of our features are focused around using modules, but we weren’t using them ourselves.
So we had two issues here: we weren’t just missing out on these features – we were missing a ton of the experience in using those features.
For example, TypeScript supports an `incremental`
mode for builds.
It’s a great way to speed up consecutive builds, but it’s effectively useless in a codebase structured with namespaces.
The compiler can only effectively do incremental builds across *modules*, but our namespaces just sat within the global scope (which is usually where namespaces will reside).
So we were hurting our ability to iterate on TypeScript itself, along with properly testing out our `incremental`
mode on our own codebase.
This goes deeper than compiler features – experiences like error messages and editor scenarios are built around modules too. Auto-import completions and the "Organize Imports" command are two widely used editor features that TypeScript powers, and we weren’t relying on them at all.
## Runtime Performance Issues with Namespaces
Some of the issues with namespaces are more subtle. Up until now, most of the issues with namespaces might have sounded like pure infrastructure issues – but namespaces also have a runtime performance impact.
First, let’s take a look at our earlier example:
```
// parser.ts
namespace ts {
export function createSourceFile(/*...*/) {
/*...*/
}
}
// program.ts
namespace ts {
export function createProgram(/*...*/) {
createSourceFile(/*...*/);
}
}
```
Those files will be rewritten to something like the following JavaScript code:
```
// parser.js
var ts;
(function (ts) {
function createSourceFile(/*...*/) {
/*...*/
}
ts.createSourceFile = createSourceFile;
})(ts || (ts = {}));
// program.js
(function (ts) {
function createProgram(/*...*/) {
ts.createSourceFile(/*...*/);
}
ts.createProgram = createProgram;
})(ts || (ts = {}));
```
The first thing to notice is that each namespace is wrapped in an IIFE.
Each occurrence of a `ts`
namespace has the same setup/teardown that’s repeated over and over again – which *in theory* could be optimized away when producing a final output file.
The second, more subtle, and more significant issue is that our reference to `createSourceFile`
had to be rewritten to `ts.createSourceFile`
.
Recall that this was actually something we *liked* – it made it easy to reference exports across files.
However, there is a runtime cost.
Unfortunately, there are very few zero-cost abstractions in JavaScript, and invoking a method off of an object is more costly than directly invoking a function that’s in scope.
So running something like `ts.createSourceFile`
is more costly than `createSourceFile`
.
The performance difference between these operations is usually negligible. Or at least, it’s negligible until you’re writing a compiler, where these operations occur millions of times over millions of nodes. We realized this was a huge opportunity for us to improve a few years ago when Evan Wallace pointed out this overhead on our issue tracker.
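To get a feel for the kind of overhead in question, here is a rough micro-benchmark sketch (illustrative only – not our actual measurement methodology, and the numbers vary wildly by engine):

```
// Compare a direct call against a property access + call.
// The absolute numbers are meaningless; only the relative gap is interesting.
function createSourceFile(x) {
    return x + 1;
}
const ts = { createSourceFile };

let sum = 0;

console.time("direct call");
for (let i = 0; i < 1e8; i++) sum += createSourceFile(i);
console.timeEnd("direct call");

console.time("property access + call");
for (let i = 0; i < 1e8; i++) sum += ts.createSourceFile(i);
console.timeEnd("property access + call");

console.log(sum); // keep the result alive so the loops aren't optimized away
```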
But namespaces aren’t the only construct that can run into this issue – the way most bundlers emulate scopes hits the same problem. For example, consider if the TypeScript compiler were structured using modules like the following:
```
// parser.ts
export function createSourceFile(/*...*/) {
/*...*/
}
// program.ts
import { createSourceFile } from "./parser";
export function createProgram(/*...*/) {
createSourceFile(/*...*/);
}
```
A naive bundler might always create a function to establish scope for every module, and place exports on a single object. It might look something like the following:
```
// Runtime helpers for bundle:
function register(moduleName, module) { /*...*/ }
function customRequire(moduleName) { /*...*/ }
// Bundled code:
register("parser", function (exports, require) {
exports.createSourceFile = function createSourceFile(/*...*/) {
/*...*/
};
});
register("program", function (exports, require) {
var parser = require("parser");
exports.createProgram = function createProgram(/*...*/) {
parser.createSourceFile(/*...*/);
};
});
var parser = customRequire("parser");
var program = customRequire("program");
module.exports = {
createSourceFile: parser.createSourceFile,
createProgram: program.createProgram,
};
```
Each reference of `createSourceFile`
now has to go through `parser.createSourceFile`
, which would still have more runtime overhead compared to if `createSourceFile`
was declared locally.
This is partially necessary to emulate the "live binding" behavior of ECMAScript modules – if someone modifies `createSourceFile`
within `parser.ts`
, it will be reflected in `program.ts`
as well.
In fact, the JavaScript output here can get even worse, as re-exports often need to be defined in terms of getters – and the same is true for every intermediate re-export too!
But for our purposes, let’s just pretend bundlers always write properties and not getters.
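For completeness, here is roughly what the getter-based form we just set aside looks like – a sketch, not any specific bundler's real output:

```
// `parser` stands in for the inner module, `exports` for the outer one.
const parser = {
    createSourceFile: function createSourceFile() {
        /* ... */
    },
};
const exports = {};

Object.defineProperty(exports, "createSourceFile", {
    enumerable: true,
    get: function () {
        // Every access re-reads the current binding, which preserves "live"
        // semantics – but every call site now also pays for a getter invocation.
        return parser.createSourceFile;
    },
});

exports.createSourceFile(); // getter, then property, then the actual call
```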
So if bundled modules can also run into these issues, why did we even mention the issues around boilerplate and indirection with namespaces?
Well because the ecosystem around modules is rich, and bundlers have gotten surprisingly good at optimizing some of this indirection away!
A growing number of bundling tools are able to not just aggregate multiple modules into one file, but they’re able to perform something called *scope hoisting*.
Scope hoisting attempts to move as much code as possible into the fewest possible shared scopes.
So a bundler which performs scope-hoisting might be able to rewrite the above as
```
function createSourceFile(/*...*/) {
/*...*/
}
function createProgram(/*...*/) {
createSourceFile(/*...*/);
}
module.exports = {
createSourceFile,
createProgram,
};
```
Putting these declarations in the same scope is typically a win simply because it avoids adding boilerplate code to simulate scopes in a single file – lots of those scope setups and teardowns can be completely eliminated.
But because scope hoisting colocates declarations, it *also* makes it easier for engines to optimize our uses of different functions.
So moving to modules was not just an opportunity to build empathy and iterate more easily – it was a chance for us to make things faster!
## The Migration
Unfortunately there’s not a clear 1:1 translation for every codebase using namespaces to modules.
We had some specific ideas of what we wanted our codebase to look like with modules. We definitely wanted to avoid too much disruption to the codebase stylistically, and didn’t want to run into too many "gotchas" through auto-imports. At the same time, our codebase had implicit cycles and that presented its own set of issues.
To perform the migration, we worked on some tooling specific to our repository which we nicknamed the "typeformer". While early versions used the TypeScript API directly, the most up-to-date version used David Sherret‘s fantastic ts-morph library.
Part of the approach that made this migration tenable was to break each transformation into its own step and its own commit. That made it easier to iterate on specific steps without having to worry about trivial but invasive differences like changes in indentation. Each time we saw something that was "wrong" in the transformation, we could iterate.
A small (see: *very annoying*) snag on this transformation was how exports across modules are implicitly resolved.
This created some implicit cycles that were not always obvious, and which we didn’t really want to reason about immediately.
But we were in luck – TypeScript’s API needed to be preserved through something called a "barrel" module – a single module that re-exports all the stuff from every other module. We took advantage of that and applied an "if it ain’t broke, don’t fix it (for now)" approach when we generated imports. In other words, in cases where we couldn’t create direct imports from each module, the typeformer simply generated imports from that barrel module.
```
// program.ts
import { createSourceFile } from "./_namespaces/ts"; // <- not directly importing from './parser'.
```
We figured eventually, we could (and thanks to a proposed change from Oleksandr Tarasiuk, we will), switch to direct imports across files.
## Picking a Bundler
There are some phenomenal new bundlers out there – so we thought about our requirements. We wanted something that
- supported different module formats (e.g. CommonJS, ESM, hacky IIFEs that conditionally set globals…)
- provided good scope hoisting and tree shaking support
- was easy to configure
- was fast
There are several options here that might have been equally good; but in the end we went with esbuild and have been pretty happy with it! We were struck with how fast it enabled our ability to iterate, and how quickly any issues we ran into were addressed. Kudos to Evan Wallace on not just helping uncover some nice performance wins, but also making such a stellar tool.
## Bundling and Compiling
Adopting esbuild presented a sort of weird question though – should the bundler operate on TypeScript’s output, or directly on our TypeScript source files?
In other words, should TypeScript transform its `.ts`
files and emit a series of `.js`
files that esbuild will subsequently bundle?
Or should esbuild compile *and* bundle our `.ts`
files?
The way *most* people use bundlers these days is the latter.
It avoids coordinating extra build steps, intermediate artifacts on disk for each step, and just tends to be faster.
On top of that, esbuild supports a feature most other bundlers don’t – `const enum`
inlining.
This inlining provides a crucial performance boost when traversing our data structures, and until recently the only major tool that supported it was the TypeScript compiler itself.
So esbuild made building directly from our input files truly possible with no runtime compromises.
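As a quick illustration of why `const enum` inlining matters on hot paths (a hypothetical enum – not TypeScript's real `SyntaxKind` values):

```
// A hypothetical const enum used on a hot path.
const enum CharCode {
    Space = 32,
}

function isSpace(ch: number): boolean {
    // With const enum inlining this compiles to `return ch === 32;`,
    // so the hot path never touches a runtime enum object.
    return ch === CharCode.Space;
}

console.log(isSpace(32)); // true
```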
But TypeScript is also a compiler, and we need to test our own behavior! The TypeScript compiler needs to be able to compile the TypeScript compiler and produce reasonable results, right?
So while adding a bundler was helping us actually experience what we were shipping to our users, we were at risk of *losing* what it’s like to downlevel-compile ourselves and quickly see if everything still works.
We ended up with a compromise.
When running in CI, TypeScript will also be run as unbundled CommonJS emitted by `tsc`
.
This ensures that TypeScript can still be bootstrapped, and can produce a valid working version of the compiler that passes our test suite.
For local development, running tests still requires a full type-check from TypeScript by default, with compilation from esbuild.
This is partially necessary to run certain tests.
For example, we store a "baseline" or "snapshot" of TypeScript’s declaration files.
Whenever our public API changes, we have to check the new `.d.ts`
file against the baseline to see what’s changed; but producing declaration files requires running TypeScript anyway.
But that’s just the default. We can now easily run and debug tests without a full type-check from TypeScript if we really want. So transforming JavaScript and type-checking have been decoupled for us, and can run independently if we need.
## Preserving Our API and Bundling Our Declaration Files
As previously mentioned, one upside of using namespaces was that to create our output files, we could just concatenate our input files together.
But, this also applies to our output * .d.ts files* as well.
Given the earlier example:
```
// src/compiler/parser.ts
namespace ts {
export function createSourceFile(/*...*/) {
/*...*/
}
}
// src/compiler/program.ts
namespace ts {
export function createProgram(/*...*/) {
createSourceFile(); /*...*/
}
}
```
Our original build system would produce a single output `.js`
and `.d.ts`
file.
The file `tsserverlibrary.d.ts`
might look like this:
```
namespace ts {
function createSourceFile(/*...*/): /* ...*/;
}
namespace ts {
function createProgram(/*...*/): /* ...*/;
}
```
When multiple `namespace`
s exist in the same scope, they undergo something called *declaration merging*, where all their exports merge together.
So these `namespace`
s formed a single final `ts`
namespace and everything just worked.
TypeScript’s API did have a few "nested" namespaces which we had to maintain during our migration.
One input file required to create `tsserverlibrary.js`
looked like this:
```
// src/server/protocol.ts
namespace ts.server.protocol {
export type Request = /*...*/;
}
```
Which, as an aside and refresher, is the same as writing this:
```
// src/server/protocol.ts
namespace ts {
export namespace server {
export namespace protocol {
export type Request = /*...*/;
}
}
}
```
and it would be tacked onto the bottom of `tsserverlibrary.d.ts`
:
```
namespace ts {
function createSourceFile(/*...*/): /* ...*/;
}
namespace ts {
function createProgram(/*...*/): /* ...*/;
}
namespace ts.server.protocol {
type Request = /*...*/;
}
```
and declaration merging would still work fine.
In a post-namespaces world, we wanted to preserve the same API while using solely modules – and our declaration files had to be able to model this as well.
To keep things working, each namespace in our public API was modeled by a single file which re-exported everything from individual smaller files. These are often called "barrel modules" because they… uh… re-package everything in… a… barrel?
We’re not sure.
Anyway! The way that we maintained the same public API was by using something like the following:
```
// COMPILER LAYER
// src/compiler/parser.ts
export function createSourceFile(/*...*/) {
/*...*/
}
// src/compiler/program.ts
import { createSourceFile } from "./_namespaces/ts";
export function createProgram(/*...*/) {
createSourceFile(/*...*/);
}
// src/compiler/_namespaces/ts.ts
export * from "./parser";
export * from "./program";
// SERVER LAYER
// src/server/protocol.ts
export type Request = /*...*/;
// src/server/_namespaces/ts.server.protocol.ts
export * from "../protocol";
// src/server/_namespaces/ts.server.ts
export * as protocol from "./protocol";
// src/server/_namespaces/ts.ts
export * from "../../compiler/_namespaces/ts";
export * as server from "./ts.server";
```
Here, distinct namespaces in each of our projects were replaced with a barrel module in a folder called `_namespaces`
.
| Namespace | Module Path within Project |
|---|---|
| `namespace ts` | `./_namespaces/ts.ts` |
| `namespace ts.server` | `./_namespaces/ts.server.ts` |
| `namespace ts.server.protocol` | `./_namespaces/ts.server.protocol.ts` |
There is some "needless" indirection, but it provided a reasonable pattern for the modules transition.
Now our `.d.ts`
emit can of course handle this situation – each `.ts`
file would produce a distinct output `.d.ts`
file.
This is what most people writing TypeScript use;
however, our situation has some unique features which make using it as-is challenging:
- Some consumers already rely on the fact that TypeScript's API is represented in a single `d.ts` file. These consumers include projects which expose the internals of TypeScript's API (e.g. `ts-expose-internals`, `byots`), and projects which bundle/wrap TypeScript (e.g. `ts-morph`). So keeping things in a single file was desirable.
- We export many enums like `SyntaxKind` or `SymbolFlags` in our public API which are actually `const enum`s. Exposing `const enum`s is generally A Bad Idea, as downstream TypeScript projects may accidentally assume these `enum`s' values never change and inline them. To prevent that from happening, we need to post-process our declarations to remove the `const` modifier. This would be challenging to keep track of over every single output file, so again, we probably want to keep things in a single file.
- Some downstream users *augment* TypeScript's API, declaring that some of our internals exist; it'd be best to avoid breaking these cases even if they're not *officially* supported, so whatever we ship needs to be similar enough to our old output to not cause any surprises.
- We track how our APIs change, and diff between the "old" and "new" APIs on every full test run. Keeping this limited to a single file is desirable.
- Given that each of our JavaScript library entry points is just a single file, it really seemed like the most "honest" thing to do would be to ship a single declaration file for each of those entry points.
These all point toward one solution: declaration file bundling.
Just like there are many options for bundling JavaScript, there are many options for bundling `.d.ts`
files:
`api-extractor`
, `rollup-plugin-dts`
, `tsup`
, `dts-bundle-generator`
, and so on.
These all satisfy the end requirement of "make a single file", however, the additional requirement to produce a final output which declared our API in namespaces similar to our old output meant that we couldn’t use any of them without a lot of modification.
In the end, we opted to roll our own mini-`d.ts`
bundler suited specifically for our needs.
This script clocks in at about 400 lines of code, naively walking each entry point’s exports recursively and emitting declarations as-is.
Given the previous example, this bundler outputs something like:
```
namespace ts {
function createSourceFile(/*...*/): /* ...*/;
function createProgram(/*...*/): /* ...*/;
namespace server {
namespace protocol {
type Request = /*...*/;
}
}
}
```
This output is functionally equivalent to the old namespace-concatenation output, along with the same `const enum`
to `enum`
transformation and `@internal`
removal that our previous output had.
Removing the repetition of `namespace ts { }`
also made the declaration files slightly smaller (~200 KB).
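For intuition only, the core of that recursive export walk might look something like the sketch below, written against the public TypeScript API; the real script does far more (namespace nesting, the `const enum` rewrite, `@internal` removal, error reporting):

```
import * as ts from "typescript";

function listExportedDeclarations(entryPoint: string): string[] {
    const program = ts.createProgram([entryPoint], {});
    const checker = program.getTypeChecker();
    const sourceFile = program.getSourceFile(entryPoint);
    if (!sourceFile) return [];

    // Only module files (ones with imports/exports) have a module symbol.
    const moduleSymbol = checker.getSymbolAtLocation(sourceFile);
    if (!moduleSymbol) return [];

    const results: string[] = [];
    // getExportsOfModule follows `export * from "..."` chains, which is what
    // lets a single barrel entry point reach every declaration.
    for (const exported of checker.getExportsOfModule(moduleSymbol)) {
        for (const decl of exported.declarations ?? []) {
            results.push(decl.getText());
        }
    }
    return results;
}

console.log(listExportedDeclarations("src/compiler/_namespaces/ts.ts").length);
```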
It’s important to note that this bundler is *not* intended for general use.
It naively walks imports and emits declarations *as-is*, and cannot
handle:
- **Unexported Types** – if an exported function references an unexported type, TypeScript's `d.ts` emit will still declare the type locally.

  ```
  export function doSomething(obj: Options): void;

  // Not exported, but used by 'doSomething'!
  interface Options {
      // ...
  }
  ```

  This allows an API to talk about specific types, even if API consumers can't actually refer to these types by name.

  Our bundler cannot emit unexported types, but can detect when it needs to be done, and issues an error indicating that the type must be exported. This is a fine trade-off, since a complete API tends to be more usable.

- **Name Conflicts** – two files may separately declare a type named `Info` – one which is exported, and the other which is purely local.

  ```
  // foo.ts
  export interface Info {
      // ...
  }
  export function doFoo(info: Info) {
      // ...
  }

  // bar.ts
  interface Info {
      // ...
  }
  export function doBar(info: Info) {
      // ...
  }
  ```

  This shouldn't be a problem for a robust declaration bundler. The unexported `Info` could be declared with a new name, and uses could be updated.

  But our declaration bundler isn't robust – it doesn't know how to do that. Its first attempt is to just drop the locally declared type, and keep the exported type. This is very wrong, and it's subtle because it usually doesn't trigger any errors!

  We made the bundler a little smarter so that it can at least detect when this happens. It now issues an error to fix the ambiguity, which can be done by renaming and exporting the missing type. Thankfully, there were not many examples of this in the TypeScript API, as namespace merging already meant that declarations with the same name across files were merged.

- **Import Qualifiers** – occasionally, TypeScript will infer a type that's not imported locally. In those cases, TypeScript will write that type as something like `import("./types").SomeType`. These `import(...)` qualifiers can't be left in the output since the paths they refer to don't exist anymore. Our bundler detects these types, and requires that the code be fixed. Typically, this just means explicitly annotating the function with a type. Bundlers like `api-extractor` can actually handle this case by rewriting the type reference to point at the correct type.
So while there were some limitations, for us these were all perfectly okay (and even desirable).
## Flipping the Switch!
Eventually all these decisions and meticulous planning had to go somewhere!
What was years in the making turned into a hefty pull request with over **282,000 lines** changed.
Plus, the pull request had to be refreshed periodically given that we couldn’t freeze the TypeScript codebase for a long amount of time.
In a sense, we were trying to replace a bridge while our team was still driving on it.
Luckily, the automation of our typeformer could re-construct each step of the migration with a commit, which also helped with review. On top of that, our test suite and all of our external test infrastructure really gave us confidence to make the move.
So finally, we asked our team to take a brief pause from making changes.
We hit that merge button, and just like that, ~~Jake convinced git he was the author of every line in the TypeScript codebase~~ TypeScript was using modules!
## Wait, What Was That About Git?
Okay, we’re half joking about that git issue.
We do often use git blame to understand where a change came from, and unfortunately by default, git *does* think that almost every line came from our "Convert the codebase to modules" commit.
Fortunately, git can be configured with `blame.ignoreRevsFile`
to ignore specific commits, and GitHub ignores commits listed in a top-level `.git-blame-ignore-revs`
file by default.
## Spring Cleaning
While we were making some of these changes, we looked for opportunities to simplify everything we were shipping.
We found that TypeScript had a few files that truthfully weren’t needed anymore.
`lib/typescriptServices.js`
was the same as `lib/typescript.js`
, and all of `lib/protocol.d.ts`
was basically copied out of `lib/tsserverlibrary.d.ts`
from the `ts.server.protocol`
namespace.
In TypeScript 5.0, we chose to drop these files and recommend using these backward-compatible alternatives. It was nice to shed a few megabytes while knowing we had good workarounds.
## Spaces and Minifying?
One nice surprise we found from using esbuild was that on-disk size was reduced by more than we expected. It turns out that a big reason for this is that esbuild uses 2 spaces for indentation in output instead of the 4 spaces that TypeScript uses. When gzipping, the difference is very small; but on disk, we saved a considerable amount.
This did prompt a question of whether we should start performing any minification on our outputs. As tempting as it was, this would complicate our build process, make stack trace analysis harder, and force us to ship with source maps (or find a source map host, kind of like what a symbol server does for debug information).
We decided against minifying (for now).
Anyone shipping parts of TypeScript on the web can already minify our outputs (which we do on the TypeScript playground), and gzipping already makes downloads from npm pretty fast.
While minifying *felt* like "low hanging fruit" for an otherwise radical change to our build system, it was just creating more questions than answers.
Plus, we have other better ideas for reducing our package size.
## Performance Slowdowns?
When we dug a bit deeper, we noticed that while end-to-end compile times had reduced on all of our benchmarks, we had actually *slowed down* on parsing.
So what gives?
We didn’t mention it much, but when we switched to modules, we also switched to a more modern emit target.
We switched from ECMAScript 5 to ECMAScript 2018.
Using more native syntax meant that we could shed a few bytes in our output, and would have an easier time debugging our code.
But it also meant that engines had to perform the *exact semantics* as mandated by these native constructs.
You might be surprised to learn that `let`
and `const`
– two of the most commonly used features in modern JavaScript – have a little bit of overhead.
That’s right!
`let`
and `const`
variables can’t be referenced before their declarations have been run.
```
// error! 'x' is referenced in 'f'
// before it's declared!
f();
let x = 10;
function f() {
console.log(x);
}
```
And in order to enforce this, engines usually insert guards whenever `let`
and `const`
variables are captured by a function.
Every time a function references these variables, those guards have to occur at least once.
When TypeScript targeted ECMAScript 5, these `let`
s and `const`
s were just transformed into `var`
s.
That meant that if a `let`
or `const`
-declared variable was accessed before it was initialized, we wouldn’t get an error.
Instead, its value would just be observed as `undefined`
.
There had been instances where this difference meant that TypeScript’s downlevel-emit wasn’t behaving as per the spec.
When we switched to a newer output target, we ended up fixing a few instances of use-before-declaration – but they were rare.
When we finally flipped the switch to a more modern output target, we found that engines spent *a lot* of time performing these checks on `let`
and `const`
.
As an experiment, we tried running Babel on our final bundle to only transform `let`
and `const`
into `var`
.
We found that often *10%-15% of our parse time* could be dropped from switching to `var`
everywhere.
This translated to up to *5%* of our end-to-end compile time being just these `let`
/`const`
checks!
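For reference, an experiment along these lines can be wired up with Babel's block-scoping transform; a hypothetical configuration (not our actual build setup) would be:

```
// babel.config.js – a hypothetical configuration for this kind of experiment;
// @babel/plugin-transform-block-scoping rewrites let/const into var.
module.exports = {
    plugins: ["@babel/plugin-transform-block-scoping"],
};
```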
At the moment, esbuild doesn't provide an option to transform `let` and `const` to `var`.
We could have used Babel here – but we really didn’t want to introduce another step into our build process.
Shu-yu Guo has already been investigating opportunities to eliminate many of these runtime checks with some promising results – but some checks would still need to be run on every function, and we were looking for a win today.
We instead found a compromise. We realized that most major components of our compiler follow a pretty similar pattern where a top-level scope contains a good chunk of state that’s shared by other closures.
```
export function createScanner(/*...*/) {
let text;
let pos;
let end;
let token;
let tokenFlags;
// ...
let scanner = {
getToken: () => token,
// ...
};
return scanner;
}
```
The biggest reason we really wanted to use `let`
and `const`
in the first place was because `var`
s have the potential to leak scope out of blocks;
but at the top level scope of a function, there’s way fewer "downsides" to using `var`
s.
So we asked ourselves how much performance we could win back by switching to `var`
in just these contexts.
It turns out that we were able to get rid of most of these runtime checks by doing just that!
So in a few select places in our compiler, we’ve switched to `var`
s, where we turn off our "no `var`
" ESLint rule just for those regions.
The `createScanner`
function from above now looks like this:
```
export function createScanner(/*...*/) {
// Why var? It avoids TDZ checks in the runtime which can be costly.
// See: https://github.com/microsoft/TypeScript/issues/52924
/* eslint-disable no-var */
var text;
var pos;
var end;
var token;
var tokenFlags;
// ...
let scanner = {
getToken: () => token,
// ...
};
/* eslint-enable no-var */
return scanner;
}
```
This isn’t something we’d recommend most projects do – at least not without profiling first. But we’re happy we found a reasonable workaround here.
## Where’s the ESM?
As we mentioned previously, while TypeScript is now written with modules, the actual JS files we ship have *not* changed format.
Our libraries still act as CommonJS when executed in a CommonJS environment (`module.exports`
is defined), or declare a top-level `var ts`
otherwise (for `<script>`
).
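Conceptually, that dual behavior amounts to something like the following illustration (not the literal code we emit):

```
// `ts` stands in for the whole compiler API object.
var ts = { version: "x.y.z" /* ...the rest of the API... */ };

if (typeof module !== "undefined" && module.exports) {
    // Loaded with require("typescript"): expose the API as CommonJS exports.
    module.exports = ts;
}
// Otherwise (a <script> tag), the top-level `var ts` above simply becomes
// a global.
```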
There’s been a long-standing request for TypeScript to ship as ECMAScript modules (ESM) instead (#32949).
Shipping ECMAScript modules would have many benefits:
- Loading the ESM can be faster than un-bundled CJS if the runtime can load multiple files in parallel (even if they are executed in order).
- Native ESM doesn’t make use of export helpers, so an ESM output can be as fast as a bundled/scope-hoisted output. CJS may need export helpers to simulate live bindings, and in codebases like ours where we have chains of re-exports, this can be slow.
- The package size would be smaller because we’re able to share code between our different entrypoints rather than making individual bundles.
- Those who bundle TypeScript could potentially tree shake parts they aren’t using. This could even help the many users who only need our parser (though our codebase still needs more changes to make that work).
That all sounds great! But we aren’t doing that, so what’s up with that?
The main reason comes down to the current ecosystem.
While a lot of packages are adding ESM (or even going ESM only), an even greater portion are still using CommonJS.
It’s unlikely we can ship *only* ESM in the near future, so we have to keep shipping some CommonJS to not leave users behind.
That being said, there is an interesting middle ground…
## Shipping ESM Executables (And More?)
Previously, we mentioned that our *libraries* still act as CommonJS.
But, TypeScript isn’t just a library, it’s also a set of executables, including `tsc`
, `tsserver`
, as well as a few other smaller bundles for automatic type acquisition (ATA), file watching, and cancellation.
The critical observation is that these executables *don’t need to be imported*;
they’re executables!
Because these don’t need to be imported by anyone (not even https://vscode.dev, which uses `tsserverlibrary.js`
and a custom host implementation), we are free to convert these executables to whatever module format we’d like, so long as the behavior doesn’t change for users invoking these executables.
This means that, so long as we move our minimum Node version to v12.20, we could change `tsc`
, `tsserver`
, and so on, into ESM.
One gotcha is that the paths to our executables within our package are "well known";
a surprising number of tools, `package.json`
scripts, editor launch configurations, etc., use hard-coded paths like `./node_modules/typescript/bin/tsc.js`
or `./node_modules/typescript/lib/tsc.js`
.
As our `package.json`
doesn’t declare `"type": "module"`
, Node assumes those files to be CommonJS, so emitting ESM isn’t enough.
We could try to use `"type": "module"`
, but that would add a whole slew of other challenges.
Instead, we’ve been leaning towards just using a dynamic `import()`
call within a CommonJS file to kick off an ESM file that will do the *actual* work.
In other words, we’d replace `tsc.js`
with a wrapper like this:
```
// https://en.wikipedia.org/wiki/Fundamental_theorem_of_software_engineering
(() => import("./esm/tsc.mjs"))().catch((e) => {
console.error(e);
process.exit(1);
});
```
This would not be observable by anyone invoking the tool, and we’re now free to emit ESM.
Then much of the code shared between `tsc.js`
, `tsserver.js`
, `typingsInstaller.js`
, etc. could all be shared!
This would turn out to save another 7 MB in our package, which is great for a change which nobody can observe.
What that ESM would actually look like and how it’s emitted is a different question. The most compatible near-term option would be to use esbuild’s code splitting feature to emit ESM.
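A hypothetical esbuild invocation for that approach might look like the following sketch; the entry point paths are made up and this is not our actual build script:

```
import * as esbuild from "esbuild";

await esbuild.build({
    entryPoints: ["src/exec/tsc.ts", "src/exec/tsserver.ts"],
    bundle: true,
    splitting: true, // shared code is pulled into common chunks...
    format: "esm",   // ...which esbuild only supports for ESM output
    outdir: "lib/esm",
});
```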
Farther down the line, we could even fully convert the TypeScript codebase to a module format like `Node16`
/`NodeNext`
or `ES2022`
/`ESNext`
, and emit ESM directly!
Or, if we still wanted to ship only a few files, we could expose our APIs as ESM files and turn them into a set of entry points for a bundler.
Either way, there’s potential for making the TypeScript package on npm much leaner, but it would be a *much* more difficult change.
In any case, we’re absolutely thinking about this for the future; converting the codebase from namespaces to modules was the first big step in moving forward.
## API Patching
As we mentioned, one of our goals was to maintain compatibility with TypeScript’s existing API; however, CommonJS modules allowed people to use the TypeScript API in ways we did not anticipate.
In CommonJS, modules are plain objects, providing no default protection from others modifying your library’s internals! So what we found over time was that lots of projects were monkey-patching our APIs! This put us in a tough spot, because even if we wanted to support this patching, it would be (for all intents and purposes) infeasible.
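What that patching tends to look like in practice (an illustrative example, not taken from any particular project):

```
// Because CommonJS exports are plain mutable objects, nothing stops this:
const ts = require("typescript");

const originalCreateProgram = ts.createProgram;
ts.createProgram = (...args) => {
    console.log("createProgram was called");
    return originalCreateProgram(...args);
};
```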
In many cases, we helped some projects move to more appropriate backwards-compatible APIs that we exposed. In other cases, there are still some challenges helping our community move forward – but we’re eager to chat and help project maintainers out!
## Accidentally Exported
On a related note, we aimed to keep some "soft-compatibility" around our existing APIs that were necessary due to our use of namespaces.
With namespaces, internal functions *had* to be exported just so they could be used by different files.
```
// utilities.ts
namespace ts {
/** @internal */
export function doSomething() {
}
}
// parser.ts
namespace ts {
// ...
let val = doSomething();
}
// checker.ts
namespace ts {
// ...
let otherVal = doSomething();
}
```
Here, `doSomething`
had to be exported so that it could be accessed from other files.
As a special step in our build, we’d just erase them away from our `.d.ts`
files if they were marked with a comment like `/** @internal */`
, but they’d still be reachable from the outside at run time.
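For illustration, that erasure (TypeScript exposes it as the `stripInternal` compiler option) works roughly like this simplified sketch:

```
// utilities.ts – the source
/** @internal */
export function doSomething() {
    // ...
}

export function publicHelper() {
    // ...
}

// With internal stripping, the emitted utilities.d.ts only declares the
// public helper:
//
//     export declare function publicHelper(): void;
//
// ...but utilities.js still contains doSomething, so it stays reachable
// at run time.
```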
Bundling modules, in contrast, won't leak every file's exports. If an entry point doesn't re-export a function from another file, it will be copied in as a local.
Technically with TypeScript 5.0, we could have not re-exported every `/** @internal */`
-marked function, and made them "hard-privates".
This seemed unfriendly to projects experimenting with TypeScript’s APIs.
We also would need to start explicitly exporting everything in our public API.
That might be a best-practice, but it was more than we wanted to commit to for 5.0.
We opted to keep our behavior the same in TypeScript 5.0.
## How’s the Dog Food?
We claimed earlier that modules would help us empathize more with our users. How true did that end up being?
Well, first off, just consider all the packaging choices and build tool decisions we had to make! Understanding these issues has put us way closer to what other library authors currently experience, and it’s given us a lot of food for thought.
But there were some obvious user experience problems we hit as soon as we switched to modules.
Things like auto-imports and the "Organize Imports" command in our editors occasionally felt "off" and often conflicted with our linter preferences.
We also felt some pain around project references, where toggling flags between a "development" and a "production" build would have required a totally parallel set of `tsconfig.json`
files.
We were surprised we hadn’t received more feedback about these issues from the outside, but we’re happy we caught them.
And the best part is that many of these issues, like respecting case-insensitive import sorting and passing emit-specific flags under `--build`
are already implemented for TypeScript 5.0!
What about project-level incrementality?
It’s not clear if we got the improvements that we were looking for.
Incremental checking from `tsc`
doesn’t happen in under a second or anything like that.
We think part of this might stem from cycles between files in each project.
We also think that because most of our work tends to be on large root files like our shared types, scanner, parser, and checker, it necessitates checking almost *every* other file in our project.
This is something we’d like to investigate in the future, and hopefully it translates to improvements for everyone.
## The Results!
After all these steps, we achieved some great results!
- A 46% reduction in our uncompressed package size on npm
- A 10%-25% speed-up
- Lots of UX improvements
- A more modern codebase
That performance improvement is a bit co-mingled with other performance work we’ve done in TypeScript 5.0 – but a surprising amount of it came from modules and scope-hoisting.
We’re *ecstatic* about our faster, more modern codebase with its dramatically streamlined build.
We hope it makes TypeScript 5.0, and every future release, a joy for you to use.
Happy Hacking!
– Daniel Rosenwasser, Jake Bailey, and the TypeScript Team
The filename GitHub supports should say “.git-blame-ignore-revs” (it currently says refs instead of revs: “.git-blame-ignore-refs”)
Fixed, thank you!
Under “Flipping the Switch!”, brilliant! Thanks for confirming how development can progress when major changes are needed.
Please review the first line –this decisions–.
Thanks!
Fixed, thank you!
While glad to hear there may be performances benefits from this change- it remains striking to me how over-complicated modules and building is in TS/JS. Imports are hideous, an absolute nightmare to manage, and reminiscent of the worst-spaghetti code since they require file paths to work. Even with tooling that tries to help; imports/exports remain the most painful part of TS development- by far.
Project files (a la .csproj) are superior in nearly every respect....
| true | true | true |
One of the most impactful things we’ve worked on in TypeScript 5.0 isn’t a feature, a bug fix, or a data structure optimization. Instead, it’s an infrastructure change. In TypeScript 5.0, we restructured our entire codebase to use ECMAScript modules, and switched to a newer emit target. What to Know Now, before we dive in, […]
|
2024-10-12 00:00:00
|
2023-03-09 00:00:00
|
article
|
microsoft.com
|
TypeScript
| null | null |
|
9,446,929 |
http://emdrive.com/
|
Emdrive
| null |
**EmDrive** is a remarkable new space propulsion technology which uses microwave technology to
convert electrical energy directly into thrust. No propellant is used in the conversion process. Thrust
is produced by the amplification of the asymmetric radiation force from an electromagnetic wave
propagated through a tapered resonant cavity.
The small UK based company that developed this technology, Satellite Propulsion Research Ltd,
(SPR Ltd) has completed the research phase and has now closed down. All patents have been
allowed to expire, which will enable EmDrive thrusters to be manufactured and sold worldwide
without commercial restrictions.
The theory, design and applications of the technology, together with the development of further
generations, are detailed in the book described in the publisher’s link given in the latest news.
The author’s email address remains [email protected]
**Latest news**
**December 2023**
The book *EmDrive: Advances in Spacecraft Thrusters and Propulsion Systems* has now been published.
The link to the publisher’s webpage is here: EmDrive: Advances in Spacecraft Thrusters and Propulsion Systems
**November 2023**
With the publication of the EmDrive book due next month, a new page has been produced for this site which lists many of the references given in the book, and provides links to those references.
The page can be accessed here: Book references.
**October 2023**
A book titled EmDrive: Advances in Spacecraft Thrusters and Propulsion Systems will be published by CRC press on December 12 2023.
The link to the publisher’s webpage is here: EmDrive: Advances in Spacecraft Thrusters and Propulsion Systems
**December 2021**
The SPR Ltd papers for this year’s IAC-21 conference in Dubai can be downloaded here:
The impact of EmDrive Propulsion on the Launch Costs for Solar Power Satellites
A Superconducting EmDrive Thruster. Design, Performance and Application
Also a note on the geometry of the latest TU Dresden cavity, explaining why it does not work as an EmDrive thruster is given here:
A Note on the TU Dresden IAC-21 paper
**November 2021**
Two ten minute video lectures were presented at the IAC-21 conference in Dubai. The lectures can be seen here:
The impact of EmDrive Propulsion on the Launch Costs for Solar Power Satellites
A Superconducting EmDrive Thruster. Design, Performance and Application
**April 2021**
Notes on the recent Dresden TU paper, explaining why their thrust measurements for the NASA replica thruster are zero, are given here: Dresden TU 2021 notes
**April 2021**
A talk on EmDrive theory, Engineering and Applications will be given to APEC on April 3rd at 12:00 US Pacific Time.
Registration for the conference can be made free at www.altpropulsion.com
A recording of the talk can be viewed here: APEC 4/3,Part#1-Roger Shawyer-EmDrive-YouTube
A recording of the Q&A session can be found here: APEC 4/3,Part#2-Roger Shawyer-Q&A session-YouTube
**January 2021**
An explanation of some fundamental principles of EmDrive operation is given here, to help increase public understanding of EmDrive:
EmDrive Fundamentals
**October 2020**
The paper entitled *An EmDrive Thruster for Cubesats*, presented at the IAC-20 conference is given here, together with the associated Bio:
IAC-20 Paper;
IAC-20 Bio
The IAC-20 presentation may be viewed here:
IAC-20 Presentation
**May 2020**
A recorded version of the postponed UCL lecture, The Technologies of Hope, which was to be given on April 2nd, can be seen here:
UCL Lecture: The Technologies of Hope
**February 2020**
A public lecture on EmDrive and its application in solutions to climate change will be given at University College London on April 2nd. The details for the lecture are given here:
https://www.eventbrite.co.uk/e/technologies-of-hope-tickets-93506655925
**January 2020**
The IAC 2019 conference presentation entitled “EmDrive Thrust/Load Characteristics. Theory, Experimental Results and a Moon Mission” is given here: IAC 2019 Presentation
**October 2019**
The full IAC 2019 paper entitled “EmDrive Thrust/Load Characteristics. Theory, Experimental Results and a Moon Mission” is given here: IAC 2019 Paper
**October 2019**
The abstract for the IAC 2019 conference in Washington this month is given here: IAC 2019 Abstract
**September 2019**
A copy of the original Flight Thruster Technical Report is given here. The report which was first produced in September 2010, was updated in December 2017 to include the original manufacturing drawings.
Also given are the Cullen and Bailey papers, which provided the original source material for the development of the EmDrive theory of operation.
These three files are referenced in the paper entitled, EmDrive Thrust/Load Characteristics. Theory, experimental Results and a Moon Mission. This paper will be given at the IAC 2019 conference in Washington next month.
Flight Thruster Report Issue 2
Cullen Paper 0001
Bailey RRE Paper
**April 2019**
SPR Ltd now has client agreement to release typical Thrust data from the Flight Thruster test programme. The data is given here. Notes on FM2 Test 101
**February 2019**
An edited copy of this year’s presentation at Shrivenham Defence Academy is given here. Note that this is the first time nominal experimental data showing the Thrust/Load response of an EmDrive Thruster has been released. Shrivenham Presentation 2019
**December 2018**
A short Technical Note on Thrust performance versus Load conditions of EmDrive Thrusters is given here. The note explains why EmDrive complies with both the Law of Conservation of Momentum, as well as the Law of Conservation of Energy. Technical Note on Emdrive Thrust v Load
**July 2018**
The following presentation was given at an EmDrive seminar held at Dresden Technical University on 11th July 2018. Dresden Seminar July 2018
**May 2018**
For those who are new to the EmDrive saga, the history and background is given in an interview with the inventor here: https://www.youtube.com/watch?v=KUX8EWxmS3k
The interview was carried out by Mary-Ann Russon of the International Business Times, and was originally released on 14 October 2016.
**September 2017**
Patent GB 2493361 entitled High Q microwave radiation thruster has now been granted by the UK Intellectual Property Office.
A short note on general principles of EmDrive design and manufacture can be downloaded here:
General Principles of EmDrive design
**August 2017 - EmDrive Efficiency**
A short presentation on EmDrive thruster efficiency can be downloaded here.
EmDrive Efficiency
**August 2017 **
A short presentation on Third Generation EmDrive can be downloaded here.
3G EmDrive
**June 2017 **
An edited set of slides from a presentation made to the UK Defence Academy in February this year can be downloaded here. They give the background story to the emergence of EmDrive, and illustrate how important Global Defence applications are to the continuing development of the technology.
Shrivenham Presentation
**September 2016**
A slide presentation with narration, explaining the basic science behind EmDrive can be downloaded here.
https://youtu.be/wBtk6xWDrwY
**August 2016**
Development work is continuing on superconducting EmDrive thruster technology in co-operation with a UK aerospace company. No details of this work can be divulged at present.
However, as it is now 10 years since the completion of the original research work, the documents reporting on this work can be released, and can be accessed here.
Feasibility study technical report. Issue 2
Review of experimental thruster report
Demonstrator technical report. Issue 2
Review of DM tech report
The documents are two final technical reports and two independent reviews, and date from July 2002 to August 2006. The work was carried out for the UK government under their SMART and R&D award programmes. Documentation was shared with US government organisations.
The research was carried out concurrently with the BAE Systems Greenglow project, which was the subject of a BBC Horizon programme broadcast in March this year.
**July 2015**
A peer reviewed version of the IAC14 conference paper is given here: IAC14 Paper
A 5 minute audioslide presentation of the IAC14 paper, updated to include the latest test data from the University of Dresden Germany, is given here: IAC14 Audioslide (.avi 11MB)
**June 2015**
The full test video of one of the dynamic test runs of the Demonstrator engine has been released and is available here: Dynamic Test (.mpg 43MB) or Dynamic Test (.avi 112MB)
Notes giving an explanation of the test rig and this particular test run are given here: Notes on Dynamic Test
**May 2015**
A recent interview with Roger Shawyer, filmed by Nick Breeze, can be found here: 2015 Interview
**January 2015**
A number of research groups have asked questions on the methods of measuring EmDrive forces. A note explaining the principles can be found here: EmDrive Force Measurement
**October 2014**
At the IAC 2014 conference in Toronto, Roger Shawyer stated that 8 sets of test data have now verified EmDrive theory. These data sets resulted from thrust measurements on 7 different thrusters, by 4 independent organisations, in 3 different countries.
The Toronto presentation can be found here: IAC14 Presentation
**August 2014**
A recent interview with Roger Shawyer, recorded by Nick Breeze at the Royal Institution in London can be found here: Interview
It is accompanied by a PowerPoint presentation entitled “EmDrive-Enabling a Better Future”.
**July 2014**
A paper entitled "Second Generation EmDrive Propulsion Applied to SSTO Launcher and Interstellar Probe" will be presented at the 65th International Astronautical Congress 2014 at Toronto in September.
**October 2013**
A paper entitled "The Dynamic Operation of a High Q EmDrive Microwave Thruster" and the associated poster for the recent IAC13 conference in Beijing is given here: IAC13 Paper IAC13 Poster
**November 2012**
**China publishes high power test results**
The prestigious Chinese Academy of Sciences has published a paper by Professor Yang Juan confirming their high power test results. At an input power of 2.5kW, their 2.45GHz EmDrive thruster provides 720mN of thrust. The results have clearly been subject to extensive peer review following the NWPU 2010 paper. The measurements were made on a national standard, thrust measurement device, used for Ion Engine development. Details of the measurement system and calibration data are given in the paper. A professional English translation is given here: Yang Juan 2012 paper
**September 2012**
A solution to the acceleration limitation of superconducting EmDrive engines has been found. The application of this breakthrough has been described at a recent presentation, where a hybrid spaceplane provides a dramatic reduction in launch cost to geostationary orbit. A reduction factor of 130 compared to Atlas V launch costs is predicted. This will lead to Solar Power Satellites becoming a low cost, baseload, energy source. The presentation can be downloaded here: 2G update
**July 2012**
An English translation of the 2010 Chinese paper, together with unpublished test results, has been obtained. The last line of the paper confirms that experimental thrust measurements have been made at 1kW input power. The unpublished test results show a large number of thrust measurements at input powers up to 2.5kW. The mean specific thrust obtained is close to that measured in the SPR flight thruster tests.
Note that the Chinese thruster, if deployed on the ISS, would easily provide the necessary delta V to compensate for orbital decay, thus eliminating the need for the reboost/refueling missions.
The original 2010 paper, the translation and the unpublished test results are given here:
NWPU 2010 paper
NWPU 2010 paper (English translation)
NWPU 2010 unpublished test results
**June 2011**
Two papers have been identified, published by Professor Yang Juan of The North Western Polytechnical University, Xi'an, China.
These papers provide an independent proof of the theory of EmDrive. Abstracts of these papers are given in Chinese Paper Abstracts. The originals are written in Chinese.
**August 2010**
A Technology Transfer contract with a major US aerospace company was successfully completed. This 10 month contract was carried out under a UK Export Licence and a TAA issued by the US State Department. Details are subject to ITAR regulations.
**June 2010**
A paper was presented at the 2nd Conference on Disruptive Technology in Space Activities. See: Toulouse 2010 Paper
Earlier papers presented in a series of international conferences were:
Brighton 2005 paper
IAC 2008 paper
CEAS 2009 paper
**May 2010**
The Flight Thruster test programme was successfully completed. See: Flight Programme
| true | true | true | null |
2024-10-12 00:00:00
|
2023-12-12 00:00:00
| null | null | null | null | null | null |
2,045,745 |
http://warpspire.com/posts/url-design/
|
URL Design
| null |
December 28, 2010
**You should take time to design your URL structure.** If there’s one thing I hope you remember after reading this article it’s to take time to design your URL structure. Don’t leave it up to your framework. Don’t leave it up to chance. Think about it and craft an experience.
URL Design is a complex subject. I can’t say there are any “right” solutions — it’s much like the rest of design. There’s good URL design, there’s bad URL design, and there’s everything in between — it’s subjective.
But that doesn’t mean there aren’t best practices for creating great URLs. I hope to impress upon you some best practices in URL design I’ve learned over the years and explain why I think new HTML5 javascript history APIs are so exciting to work with.
The URL bar has become a main attraction of modern browsers. And it’s not just a simple URL bar anymore — you can type partial URLs and browsers use dark magic to seemingly conjure up exactly the full URL you were looking for. When I type **resque issues** into my URL bar, the first result is `https://github.com/defunkt/resque/issues`.

URLs are *universal*. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.
Any regular semi-technical user of your site should be able to navigate 90% of your app based off memory of the URL structure. In order to achieve this, your URLs will need to be *pragmatic.* Almost like they were a math equation — many simple rules combined in a strategic fashion to get to the page they want.
The most valuable aspect of any URL is what lies at the top level section. In my opinion, it should be the first discussion of any startup directly after the idea is solidified. Long before any technology discussion. Long before any code is written. This top-level section is going to change the fundamentals of how your site functions.
Do I seem dramatic? It may seem that way — but come 1,000,000 users later think about how big of an impact it will be. Think about how big of a deal Facebook’s rollout of usernames was. Available URLs are a lot like real estate and the top level section is the best property out there.
Another quick tip — whenever you’re building a new site, think about blacklisting a set of vanity URLs (and maybe learn a little bit about bad URL design from Quora’s URLs).
Namespaces can be a great way to build up a pragmatic URL structure that’s easy to remember with continued usage. What do I mean by a namespace? I mean a portion of a URL that dictates unique content. An example:
`https://github.com/`**defunkt/resque**`/issues`

In the URL above, **defunkt/resque** is the namespace. Why is this useful? Because anything after that URL suddenly becomes a new top level section. So you can go to any **<user>/<repo>**`/issues` or maybe **<user>/<repo>**`/wiki` and get the same page, but under a different namespace.

Keep that namespace clean. Don’t start throwing some content under `/feature/<user>/<repo>` and some under `<user>/<repo>/feature`. For a namespace to be effective it has to be universal.
The web has had a confused past with regards to querystrings. I’ve seen everything from every page of a site being served from one URL with different querystring parameters to sites who don’t use a single querystring parameter.
I like to think of querystrings as the knobs of URLs — something to tweak your current view and fine tune it to your liking. That’s why they work so great for sorting and filtering actions. Stick to a uniform pattern (`sort=alpha&dir=desc`
for instance) and you’ll make sorting and filtering via the URL bar easy and rememberable.
One last thing regarding querystrings: The page should work without the querystrings attached. It may show a different page, but the URL without querystrings should render.
The world is a complicated place filled with ¿ümlåts?, ¡êñyés! and all sorts of awesome characters ☄. These characters have no place in the URL of any English site. They’re complicated to type with English keyboards and oftentimes expand into confusing characters in browsers (ever see `xn--n3h` in a URL? That’s a ☃).
I grew up in this industry learning how to game search engines (well, Google) to make money off my affiliate sales, so I’m no stranger to the practice of keyword stuffing URLs. It was fairly common to end up with a URL like this:
```
http://guitars.example.com/best-guitars/cheap-guitars/popular-guitar
```
That kind of URL used to be great for SEO purposes. Fortunately Google’s hurricane updates of 2003 eliminated any ranking benefit of these URLs. Unfortunately the professional SEO industry is centered around extortion and still might advise you stuff your URLs with as many keywords as you can think of. They’re wrong — ignore them.
Some additional points to keep in mind:
- Underscores are just bad. Stick to dashes.
- Use short, full, and commonly known words. If a section has a dash or special character in it, the word is probably too long.
- URLs are for humans. **Design them for humans.**
A URL is an agreement to serve something from a predictable location for as long as possible. Once your first visitor hits a URL you’ve implicitly entered into an agreement that if they bookmark the page or hit refresh, they’ll see the same thing.
**Don’t change your URLs after they’ve been publicly launched.** If you absolutely must change your URLs, add redirects — it’s not that scary.
In an ideal world, every single screen on your site should result in a URL that can be copy & pasted to reproduce the same screen in another tab/browser. In fairness, this wasn’t completely possible until very recently with some of the new HTML5 browser history Javascript APIs. Notably, there are two new methods:
- **onReplaceState** — This method replaces the current URL in the browser history, leaving the back button unaffected.
- **onPushState** - This method pushes a new URL onto the browser’s history, replacing the URL in the URL bar.

**When to use `onReplaceState` and when to use `onPushState`**
These new methods allow us to change the *entire* path in the URL bar, not just the anchor element. With this new power comes a new design responsibility — we need to craft the back button experience.
To determine which to use, ask yourself this question: *Does this action produce new content or is it a different display of the same content?*
- **Produces new content** — you should use `onPushState` (ex: pagination links)
- **Produces a different display of the same content** — you should use `onReplaceState` (ex: sorting and filtering)
Use your own judgement, but these two rules should get you 80% there. Think about what you want to see when you click the back button and make it happen.
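As a rough sketch of how that split looks in practice (the URLs, element IDs and `fetchAndRender` helper below are hypothetical; the underlying calls are the standard `history.pushState` and `history.replaceState` methods):

```
// Minimal stand-in for whatever fetches a page fragment and renders it.
function fetchAndRender(url) {
  $.get(url, function (html) { $('#content').html(html); });
}

// Pagination produces new content: push a new history entry so the
// back button walks back through the pages you visited.
function goToPage(n) {
  var url = '/photos?page=' + n;            // hypothetical paginated listing
  fetchAndRender(url);
  history.pushState({ page: n }, '', url);
}

// Sorting is a different display of the same content: replace the current
// entry so the back button skips the intermediate sort states.
function sortBy(column, dir) {
  var url = '/photos?sort=' + column + '&dir=' + dir;
  fetchAndRender(url);
  history.replaceState({ sort: column, dir: dir }, '', url);
}
```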
There’s a lot of awesome functionality built into linking elements like `<a>` and `<button>`. If you middle click or command-click on them they’ll open in new windows. When you hover over an `<a>` your browser tells you the URL in the status bar. Don’t break this behavior when playing with `onReplaceState` and `onPushState`.

- Embed the location of AJAX requests in the `href` attributes of anchor elements.
- `return true` from Javascript click handlers when people middle or command click.
It’s fairly simple to do this with a quick conditional inside your click handlers. Here’s an example jQuery compatible snippet:
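A minimal jQuery-compatible sketch along those lines (the `a[data-push]` selector and the `#content` container are hypothetical stand-ins; the history call is the standard `history.pushState` method):

```
$(document).on('click', 'a[data-push]', function (e) {
  // Let the browser handle middle clicks and command/ctrl/shift clicks so
  // "open in new tab/window" keeps working exactly as users expect.
  if (e.which === 2 || e.metaKey || e.ctrlKey || e.shiftKey) {
    return true;
  }

  var url = $(this).attr('href');       // the real location lives in the href
  $.get(url, function (html) {          // fetch the fragment over AJAX
    $('#content').html(html);           // '#content' is a hypothetical container
  });
  history.pushState({}, '', url);       // keep the URL bar and back button honest
  return false;                         // we handled this click ourselves
});
```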
In the past, the development community loved to create URLs which could never be re-used. I like to call them POST-specific URLs — they’re the URLs you see in your address bar after you submit a form, but when you try to copy & paste the URL into a new tab you get an error.
There’s no excuse for these URLs at all. Post-specific URLs are for redirects and APIs — not end-users.
Take a GitHub pull request URL under **defunkt/resque** as an example of these ideas working together:

- ASCII-only user generated URL parts (defunkt, resque).
- “pull” is a short version of “pull request” — single word, easily associated to the origin word.
- The pull request number is scoped to defunkt/resque (starts at **one** there).
- Anchor points to a scrolling position, not hidden content.

**Bonus points:** This URL has many different formats as well — check out the patch and diff versions.
I hope that as usage of new Javascript APIs increases, designers and developers take time to design URLs. It’s an important part of any site’s usability and too often I see URLs ignored. While it’s easy to redesign the look & feel of a site, it’s *much* more difficult to redesign the URL structure.
But I’m excited. I’ve watched URLs change over the years. At times hard-linking was sacrificed at the altar of AJAX while other times performance was sacrificed to generate real URLs for users. We’re finally at a point in time where we can have the performance and usability benefits of partial page rendering while designing a coherent and refined URL experience at the same time.
If you'd like to keep in touch, I tweet @kneath on Twitter. You're also welcome to send a polite email to [email protected]. I don't always get the chance to respond, but email is always the best way to get in touch.
| true | true | true | null |
2024-10-12 00:00:00
|
2010-12-28 00:00:00
| null | null | null | null | null | null |
5,059,438 |
http://www.theatlantic.com/business/archive/2013/01/the-end-of-labor-how-to-protect-workers-from-the-rise-of-the-robots/267135/
|
The End of Labor: How to Protect Workers From the Rise of Robots
|
Noah Smith
|
# The End of Labor: How to Protect Workers From the Rise of Robots
*Technology used to make us better at our jobs. Now it's making many of us obsolete, as the share of income going to workers is crashing, all over the world. What do we do now?*
Here's a scene that will be familiar to anyone who's ever taken an introductory economics course. The professor has just finished explaining that in economics, "efficiency" means that there are no possible gains from trade. Then some loudmouth kid in the back raises his hand and asks: "Wait, so if one person has everything, and everyone else has nothing and just dies, is that an 'efficient' outcome?" The professor, looking a little chagrined, responds: "Well, yes, it is." And the whole class rolls their eyes and thinks: Economists.
For most of modern history, inequality has been a manageable problem. The reason is that no matter how unequal things get, most people are born with something valuable: the ability to work, to learn, and to earn money. In economist-ese, people are born with an "endowment of human capital." It's just not possible for one person to have everything, as in the nightmare example in Econ 101.
For most of modern history, two-thirds of the income of most rich nations has gone to pay salaries and wages for people who work, while one-third has gone to pay dividends, capital gains, interest, rent, etc. to the people who own capital. This two-thirds/one-third division was so stable that people began to believe it would last forever. But in the past ten years, something has changed. Labor's share of income has steadily declined, falling by several percentage points since 2000. It now sits at around 60% or lower. The fall of labor income, and the rise of capital income, has contributed to America's growing inequality.
**WHERE IS THE MONEY GOING?**
What can explain this shift? One hypothesis is: China. The recent entry of China into the global trading system basically doubled the labor force available to multinational companies. When labor becomes more plentiful, the return to labor goes down. In a world flooded with cheap Chinese labor, capital becomes relatively scarce, and its share of income goes up. As China develops, this effect should go away, as China builds up its own capital stock. This is probably already happening.
But there is another, more sinister explanation for the change. In past times, technological change always augmented the abilities of human beings. A worker with a machine saw was much more productive than a worker with a hand saw. The fears of "Luddites," who tried to prevent the spread of technology out of fear of losing their jobs, proved unfounded. But that was then, and this is now. Recent technological advances in the area of computers and automation have begun to do some higher cognitive tasks - think of robots building cars, stocking groceries, doing your taxes.
Once human cognition is replaced, what else have we got? For the ultimate extreme example, imagine a robot that costs $5 to manufacture and can do everything you do, only better. You would be as obsolete as a horse.
Now, humans will never be completely replaced, like horses were. Horses have no property rights or reproductive rights, nor the intelligence to enter into contracts. There will always be something for humans to do for money. But it is quite possible that workers' share of what society produces will continue to go down and down, as our economy becomes more and more capital-intensive. This possibility is increasingly the subject of discussion among economists. Erik Brynjolfsson has written a book about it, and economists like Paul Krugman and Tyler Cowen are talking about it more and more (for those of you who are interested, here is a huge collection of links, courtesy of blogger Izabella Kaminska). In the academic literature, the theory goes by the name of "capital-biased technological change."
The big question is: What do we do if and when our old mechanisms for coping with inequality break down? If the "endowment of human capital" with which people are born gets less and less valuable, we'll get closer and closer to that Econ 101 example of a world in which the capital owners get everything. A society with cheap robot labor would be an incredibly prosperous one, but we will need to find some way for the vast majority of human beings to share in that prosperity, or we risk the kinds of dystopian outcomes that now exist only in science fiction.
**REDISTRIBUTION AGAINST THE MACHINE**
How do we fairly distribute income and wealth in the age of the robots?
The standard answer is to do more income redistribution through the typical government channels - Earned Income Tax Credit, welfare, etc. That might work as a stopgap, but if things become more severe, we'll run into a lot of political problems if we lean too heavily on those tools. In a world where capital earns most of the income, we will have to get more creative.
First of all, it should be easier for the common people to own their own capital - their own private army of robots. That will mean making "small business owner" a much more common occupation than it is today (some would argue that with the rise of freelancing, this is already happening). Small businesses should be very easy to start, and regulation should continue to favor them. It's a bit odd to think of small businesses as a tool of wealth redistribution, but strange times require strange measures.
Of course, not all businesses can be small businesses. More families would benefit from owning stock in big companies. Right now, America is going in exactly the opposite direction, with companies going private instead of making their stock available for public ownership. All large firms should be given incentives to list publicly. This will definitely mean reforming regulations like Sarbanes-Oxley that make it risky and difficult to go public; it may also mean tax incentives.
And then there are more extreme measures. Everyone is born with an endowment of labor; why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity? Of course, some people would want to sell it immediately, cash out, and party, but this could be prevented with some fairly light paternalism, like temporary "lock-up" provisions. This portfolio of capital ownership would act as an insurance policy for each human worker; if technological improvements reduced the value of that person's labor, he or she would reap compensating benefits through increased dividends and capital gains. This would essentially be like the kind of socialist land reforms proposed in highly unequal Latin American countries, only redistributing stock instead of land.
Now of course this is an extreme measure, for an extreme hypothetical case. It may turn out that the "rise of the robots" ends up augmenting human labor instead of replacing it. It may be that technology never exceeds our mental capacity. It may be that the fall in labor's income share has really been due to the great Chinese Labor Dump, and not to robots after all, and that labor will make a comeback as soon as China catches up to the West.
But if not - if the age of mass human labor is about to permanently end - then we need to think fast. Extreme inequality may be "efficient" in the Econ 101 sense, but in the real world it always leads to disaster.
| true | true | true |
Technology used to make us better at our jobs. Now it's making us obsolete, and the share of income going to workers is crashing, all over the world. What do we do now?
|
2024-10-12 00:00:00
|
2013-01-14 00:00:00
| null |
article
|
theatlantic.com
|
The Atlantic
| null | null |
20,123,440 |
https://www.ft.com/content/3bbb6fec-88c5-11e9-a028-86cea8523dc2
|
Google warns of US national security risks from Huawei ban
| null |
Google warns of US national security risks from Huawei ban
| true | true | true | null |
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null |
website
| null |
Financial Times
| null | null |
11,299,756 |
https://github.com/wsieroci/audiorecognizer
|
GitHub - wsieroci/audio-recognizer: Shazam in Java
|
Wsieroci
|
Roy van Rijn has written wonderful post about Shazam algorithm and how to implement it on our own. To do this he placed many chunks of his project source code, but he did not upload all source code of his application because as he stated:
The Shazam patent holders lawyers are sending me emails to stop me from releasing the code and removing this blogpost.
It turns out that the core of this algorithm is very simple. I analyzed his post and, as a weekend project, wrote a simple proof-of-concept application which outputs its findings to the console. It gives surprisingly correct answers. So far I have tested 10 different MP3 audio files and the application was able to recognize each of them. The application learns from a path to an MP3 file on your local disk or from an HTTP stream of an MP3 file from any source, and recognizes songs by sound from the microphone.
| true | true | true |
Shazam in Java. Contribute to wsieroci/audio-recognizer development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2012-05-21 00:00:00
|
https://opengraph.githubassets.com/ce5890c4003e77a1176f63a6d916c6123bfbbbe8cc37b4269db04b3f853f7c0a/wsieroci/audio-recognizer
|
object
|
github.com
|
GitHub
| null | null |
2,199,566 |
https://addons.mozilla.org/en-US/firefox/addon/export-cookies/reviews/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,448,752 |
https://dl.dropboxusercontent.com/s/doazzo5ygu3idna/WorldIPv6Congress-IPv6_LH%20v2.pdf?token_hash=AAGLTRBTf5qeb4SR5c2n2yxXRsFtJStNeXnlEMdk2QsygQ
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,674,855 |
https://newsletter.bringthedonuts.com/p/building-products-at-airbnb
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,814,949 |
https://github.com/rstacruz/psdinfo
|
GitHub - rstacruz/psdinfo: Inspect PSD files from the command line
|
Rstacruz
|
Inspect PSD files from the command line.
```
npm install -g psdinfo
```
```
$ psdinfo file.psd --fonts
# file.psd
fonts:
- DIN-Bold
- FreightSansLight
- Glosa-Roman
- ...
```
```
$ psdinfo file.psd --text
# file.psd
text:
- "Hello"
- "This is text from the document"
- ...
```
**psdinfo** © 2015+, Rico Sta. Cruz. Released under the MIT License.
Authored and maintained by Rico Sta. Cruz with help from contributors (list).
ricostacruz.com · GitHub @rstacruz · Twitter @rstacruz
| true | true | true |
Inspect PSD files from the command line. Contribute to rstacruz/psdinfo development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2015-06-30 00:00:00
|
https://opengraph.githubassets.com/e7cf9a114d5d4a6ba45be0f9834ea22dc4614f5d022249b012cc009e50c7c100/rstacruz/psdinfo
|
object
|
github.com
|
GitHub
| null | null |
5,360,158 |
http://latimesblogs.latimes.com/lanow/2013/03/la-police-chief-other-notables-hacked.html
|
Hackers target LAPD chief, Jay-Z, Beyonce, many others
|
Joel Rubin
|
# Hackers target LAPD chief, Jay-Z, Beyonce, many others
*This article was originally on a blog post platform and may be missing photos, graphics or links. See About archive blog posts.*
Hackers on Monday targeted Los Angeles Police Chief Charlie Beck and an assorted group of other notables, including Vice President Joe Biden and music mega-stars Jay-Z and Beyonce, posting detailed financial information on the Internet.
The information, which included home addresses, Social Security numbers and credit reports, was published on a website that appeared to originate in Russia.
“We’ll take steps to find out who did this, and if they’re within the boundaries of the United States, we’ll prosecute them,” Beck said.
Beck speculated that he was included with the high-profile performers and politicians because of the recent Christopher Dorner saga. Dorner, a fired LAPD officer, killed two police officers and two others last month during a bloody campaign to seek revenge for his firing. Before he died in a standoff with authorities, Dorner in an on-line manifesto praised the network of hackers known as Anonymous. Many people claiming affiliation with the group have voiced support for Dorner on Twitter and in other Web forums.
Others who were singled out included former U.S. Secretary of State Hillary Rodham Clinton, singer Britney Spears, actors Mel Gibson and Ashton Kutcher, and U.S. Atty. Gen. Eric Holder. The accuracy of information released on people other than Beck could not be independently verified by The Times.
-- Joel Rubin
| true | true | true |
This article was originally on a blog post platform and may be missing photos, graphics or links.
|
2024-10-12 00:00:00
|
2013-03-11 00:00:00
| null |
newsarticle
|
latimes.com
|
Los Angeles Times
| null | null |
21,180,393 |
https://www.theguardian.com/science/2019/oct/07/nobel-prize-in-medicine-awarded-to-hypoxia-researchers
|
Nobel prize in medicine awarded to hypoxia researchers
|
Ian Sample
|
Three scientists have shared this year’s Nobel prize in physiology or medicine for discovering how the body responds to changes in oxygen levels, one of the most essential processes for life.
William Kaelin Jr at the Dana-Farber Cancer Institute and Harvard University in Massachusetts, Sir Peter Ratcliffe at Oxford University and the Francis Crick Institute in London, and Gregg Semenza at Johns Hopkins University in Baltimore, Maryland, worked out how cells sense falling oxygen levels and respond by making new blood cells and vessels.
Beyond describing a fundamental physiological process that enables animals to thrive in some of the highest-altitude regions on Earth, the mechanism has given researchers new routes to treatments for anaemia, cancer, heart disease and other conditions.
Ratcliffe was summoned from a lab meeting in Oxford to take the call from Stockholm. “I tried to make sure it wasn’t some friend down the road having a laugh at my expense,” he told the Guardian. “Then I accepted the news and had a think about how I was going to reorder my day.”
Ratcliffe had spent the weekend working on an EU synergy grant and had not imagined his morning taking such a turn. “When I got up this morning I didn’t have any expectation or make any contingency plans for the announcement at all,” he said.
On finishing the call he returned to his meeting and, at the request of the Nobel committee, carried on without a word. At least one scientist had her suspicions, however, having noticed he had left a coffee in the room and returned with a tea. “She’s a scientist, so trained to draw deductions from the things she observes,” Ratcliffe said. “I’d decided I needed a little less agitation rather than more.”
The three laureates will share the 9m Swedish kronor (£740,000) prize equally, according to the Karolinska Institute in Stockholm. Asked what he intended to do with the windfall, Ratcliffe said: “I’ll be discussing that with my wife in private. But it’ll be something good.” A party was on the cards, he said, but not immediately. “I’m trying to stay sober because it’s going to be a busy day.”
Kaelin said he was half-asleep when his phone went. “I was aware as a scientist that if you get a phone call at 5am with too many digits, it’s sometimes very good news, and my heart started racing,” he said. “It was all a bit surreal.”
The trio won the prestigious Lasker prize in 2016. In work that spanned more than two decades, the researchers teased apart different aspects of how cells in the body first sense and then respond to low oxygen levels. The crucial gas is used by tiny structures called mitochondria found in nearly all animal cells to convert food into useful energy.
The scientists showed that when oxygen is in short supply, a protein complex that Semenza called hypoxia-inducible factor, or HIF, builds up in nearly all the cells in the body. The rise in HIF has a number of effects but most notably ramps up the activity of a gene used to produce erythropoietin (EPO), a hormone that in turn boosts the creation of oxygen-carrying red blood cells.
Randall Johnson, a professor of molecular physiology and pathology at Cambridge University, said this year’s Nobel laureates “have greatly expanded our knowledge of how physiological response makes life possible”.
He said the role of HIF was crucial from the earliest days of life. “If an embryo doesn’t have the HIF gene it won’t survive past very early embryogenesis. Even in the womb our bodies need this gene to do everything they do.”
The work has led to the development of a number of drugs such as roxadustat and daprodustat, which treat anaemia by fooling the body into thinking it is at high altitude, making it churn out more red blood cells. Roxadustat is on the market in China and is being assessed by European regulators.
Similar drugs aim to help heart disease and lung cancer patients who struggle to get enough oxygen into their bloodstream. More experimental drugs based on the finding seek to prevent other cancers growing by blocking their ability to make new blood vessels.
Venki Ramakrishnan, the president of the Royal Society, said the prize was “richly deserved” by all three winners. “Oxygen is the vital ingredient for the survival of every cell in our bodies. Too little or too much can spell disaster. Understanding how evolution has equipped cells to detect and respond to fluctuating oxygen levels helps answer fundamental questions about how animal life emerged.”
Ratcliffe praised the team he worked with in the years it took to decipher how cells adapted to changes in oxygen levels. “At first, none of us knew precisely what we were doing,” he said. “But there was a lot of enthusiasm.”
| true | true | true |
William Kaelin, Sir Peter Ratcliffe and Gregg Semenza worked out how cells adapt to oxygen availability
|
2024-10-12 00:00:00
|
2019-10-07 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
23,167,231 |
https://www.reviews.com/entertainment/streaming/netflix-hours-of-commercials-analysis/
|
ANALYSIS: Netflix Saved Its Average User From 9.1 Days of Commercials in 2019 - Reviews.com
|
Reviews com Staff
|
**Fast facts:**
- The average Netflix subscriber spent over two hours a day streaming Netflix content in 2019.
- Network television averages 18 minutes of commercials per hour (and this number continues to increase).
- With no commercials, two hours of Netflix **saves users from over 36 minutes of ads daily, or 219 hours of ads a year**.
- During COVID-19 quarantine, we estimate the average total daily usage of Netflix has gone up at least thirty minutes per user, meaning the service is saving people from an extra nine minutes of ads a day during the pandemic.
There are a number of reasons television viewers might say they prefer watching Netflix over network TV. Some prefer the original content, or access to an entire back catalog of one of their favorite shows. Others might say they enjoy watching whatever they want on their own schedule as on-demand television becomes the new norm.
One of the most common reasons people say they prefer a streaming service like Netflix over traditional network and cable television is because there are no commercial breaks. There has never been a time in history where people have been inundated by more ads than they are now, so it makes sense why a break from commercials while streaming TV would be popular.
Last year, Netflix VP of Original Content Cindy Holland announced that the *average* Netflix user spends two hours a day using the service. This was reported before the COVID-19 pandemic, and thus we estimate usage numbers are even higher over the last several months. For the sake of accuracy, we will only use Ms. Holland’s metrics of two hours of total daily Netflix usage, although it’s safe to estimate these numbers have grown over the last two months during quarantine.
So how much time do people spend watching commercials every year viewing broadcast and cable television?
According to a report by Gaebler via Wikipedia, the average hour of network or cable television in the United States is roughly 42 minutes of programming and 18 minutes of advertisements. While networks promised to work toward decreasing the number of ads per hour, a report by the Los Angeles Times has actually found that has not yet happened, reporting that commercial time continues to increase per hour. There have also been reports of networks speeding up the playback speed of television episodes 10-15% in order to get in one extra ad slot, something the average viewer likely wouldn’t specifically notice but which adds up over time.
The average American spends just under four hours per day watching television according to a California State University study. Meaning during that time, if they are watching over a traditional cable or network broadcast, they will spend over an hour watching commercials each day.
Comparing these numbers against the two hours a day Netflix users spend watching ad-free content, **the streaming service is saving US households from over 219 hours, or nine days, of commercials a year**.
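For readers who want to check the math: two hours of ad-free streaming at the broadcast rate of 18 minutes of ads per hour works out to 36 minutes of avoided commercials per day. Over a year, 36 minutes multiplied by 365 days is roughly 13,140 minutes, which is about 219 hours, or just over nine full days.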
While network and cable television networks still show strong ratings and total viewership numbers, the number of subscribers to major cable services has been steadily decreasing in recent years as a growing number of people aim to cut the cord. While cost savings tend to be the No. 1 reason people do so, the ability to skip commercials is another major benefit of going streaming-only.
Interestingly, internet-only packages have become more expensive as cable companies work to recoup revenue losses from the growth of cord cutting. With bundles, cable companies can offer lower pricing on each portion of the package, but as more people opt for internet-only service, we have noticed a trend of ISPs raising their prices. Streaming has also increased bandwidth usage, which means home internet service is likely to keep getting more expensive as this trend continues.
| true | true | true |
Network TV averages 18 minutes of commercials per hour, which means Netflix saves users from over 36 minutes of ads daily, or 219 hours of ads a year.
|
2024-10-12 00:00:00
|
2020-05-13 00:00:00
|
article
|
reviews.com
|
Reviews.com
| null | null |
|
15,295,561 |
https://about.sourcegraph.com/blog/announcing-sourcegraph-2/
|
Announcing Sourcegraph 2.0
|
About the author
|
## Announcing Sourcegraph 2.0
**Update:** This blog post has been edited to remove references to outdated features.
We’ve been hard at work on some major improvements to how you search, browse, and review code. Today we’re excited to announce several big new features.
## Introducing Sourcegraph
Already used by many of our customers and now available to all companies, Sourcegraph gives you code search and intelligence across all your company’s private and public code. It integrates with multiple code hosts, editors, and code review tools to increase productivity throughout the developer workflow.
Sourcegraph is what powers Sourcegraph.com, and now you can run it inside your company's network. Install a self-hosted Sourcegraph instance in 1 command.
## Explore open source code with the new Sourcegraph.com and browser extensions
Search and browse open source code on Sourcegraph.com and using the Sourcegraph browser extension (on GitHub), with full code intelligence: go-to-definition, find-references, etc. The full power of Sourcegraph is always free for open source.
What’s new and different:
- A single search box to search code and repositories, with regular expression and other advanced query support
- Streamlined interface for code navigation (with go-to-definition and find-references)
## Use Sourcegraph at work
We hear overwhelmingly from our users: "I love Sourcegraph on open source code, and I want to use it for work." Our new products are built to make this even easier. You can use Sourcegraph on your organization's code, and it all stays secure on your own network. Your code never touches our servers, and both products connect directly to your cloud or enterprise code hosts to work across all of your repositories.
Install a self-hosted Sourcegraph instance in 1 command.
### About the author
*Quinn Slack is the CEO and co-founder of Sourcegraph, the code intelligence platform for dev teams, with a mission of making coding more accessible to more people. Prior to Sourcegraph, Quinn co-founded Blend Labs, an enterprise technology company dedicated to improving home lending, and was an engineer at Palantir, where he created a technology platform to help two of the top five U.S. banks recover from the housing crisis. Quinn has a BS in Computer Science from Stanford; you can chat with him on Twitter @sqs.*
| true | true | true |
Find and fix things across all of your code with Sourcegraph universal code search.
|
2024-10-12 00:00:00
|
2017-09-20 00:00:00
|
article
|
sourcegraph.com
|
Sourcegraph
| null | null |
|
14,093,556 |
https://blog.cloudflare.com/how-we-made-our-dns-stack-3x-faster/
|
How we made our DNS stack 3x faster
|
Tom Arnfeld
|
Cloudflare is now well into its 6th year and providing authoritative DNS has been a core part of infrastructure from the start. We’ve since grown to be the largest and one of the fastest managed DNS services on the Internet, hosting DNS for nearly 100,000 of the Alexa top 1M sites and over 6 million other web properties – or DNS zones.
CC-BY 2.0 image by Steve Jurvetson
Today Cloudflare’s DNS service answers around 1 million queries per second – not including attack traffic – via a global anycast network. Naturally as a growing startup, the technology we used to handle tens or hundreds of thousands of zones a few years ago became outdated over time, and couldn't keep up with the millions we have today. Last year we decided to replace two core elements of our DNS infrastructure: the part of our DNS server that answers authoritative queries and the data pipeline which takes changes made by our customers to DNS records and distributes them to our edge machines across the globe.
The rough architecture of the system can be seen above. We store customer DNS records and other origin server information in a central database, convert the raw data into a format usable by our edge in the middle, and then distribute it to our >100 data centers (we call them PoPs - Points of Presence) using a KV (key/value) store.
The queries are served by a custom DNS server, rrDNS, that we’ve been using and developing for several years. In the early days of Cloudflare, our DNS service was built on top of PowerDNS, but that was phased out and replaced by rrDNS in 2013.
The Cloudflare DNS team owns two elements of the data flow: the data pipeline itself and rrDNS. The first goal was to replace the data pipeline with something entirely new as the current software was starting to show its age; as any >5 year old infrastructure would. The existing data pipeline was originally built for use with PowerDNS, and slowly evolved over time. It contained many warts and obscure features because it was built to translate our DNS records into the PowerDNS format.
### A New Data Model
In the old system, the data model was fairly simple. We’d store the DNS records roughly in the same structure that they are represented in our UI or API: one entry per resource record (RR). This meant that the data pipeline only had to perform fairly rudimentary encoding tasks when generating the zone data to be distributed to the edge.
Zone metadata and RRs were encoded using a mix of JSON and Protocol Buffers, though we weren’t making particularly good use of the schematized nature of the protocols so the schemas were very bloated and the resulting data ended up being larger than necessary. Not to mention that as the number of total RRs in our database headed north of 100 million, these small differences in encoding made a significant difference in aggregate.
It’s worth remembering here that DNS doesn’t really operate on a per-RR basis when responding to queries. You query for a name and a type (e.g. `example.com`
and `AAAA`
) and you’ll be given an RRSet which is a *collection* of RRs. The old data format had RRSets broken out into multiple RR entries (one key per record) which typically meant multiple roundtrips to our KV store to answer a single query. We wanted to change this and group data by RRSet so that a single request could be made to the KV store to retrieve all the data needed to answer a query. Because Cloudflare optimizes heavily for DNS performance, multiple KV lookups were limiting our ability to make rrDNS go as fast as possible.
In a similar vein, for lookups like A/AAAA/CNAME we decided to group the values into a single “address” key instead of one key per RRset. This further avoids having to perform extra lookups in the most common cases. Squishing keys together also helps reduce memory usage of the cache we use in front of the KV store, since we’re storing more information against a single cache key.
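As a purely illustrative sketch of that idea (the real key format used at Cloudflare's edge is not described in the post, so the layout below is an assumption), grouping by RRSet means the KV key is derived from the zone, owner name and type rather than from each individual record:

```
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

// rrsetKey derives one KV key per RRSet (zone + owner name + type), so a
// single lookup returns every record needed to answer a query. The key
// format here is invented for illustration; only the grouping idea comes
// from the post.
func rrsetKey(zone, name string, rrtype uint16) string {
	return fmt.Sprintf("%s/%s/%s", zone, name, dns.TypeToString[rrtype])
}

func main() {
	// One key now holds the whole AAAA RRSet for www.example.com, where the
	// old model stored one key per record and needed several round trips.
	fmt.Println(rrsetKey("example.com.", "www.example.com.", dns.TypeAAAA))
}
```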
After settling on this new data model, we needed to figure out how to serialize the data and pass it to the edge. As mentioned, we were previously using a mix of JSON and Protocol Buffers, and we decided to replace this with a purely MessagePack-based implementation.
#### Why MessagePack?
MessagePack is a binary serialization format that is typed, but does not have a strict schema built into the format. In this regard, it can be considered a little like JSON. For both the reader and the writer, extra fields can be present or absent and it’s up to your application code to compensate.
In contrast, Protocol Buffers (or other formats like Cap’n Proto) require a schema for data structures defined in a language agnostic format, and then generate code for the specific implementation. Since DNS already has a large structured schema, we didn’t want to have to duplicate all of this schema in another language and then maintain it. In the old implementation with Protocol Buffers, we’d not properly defined schemas for all DNS types – to avoid this maintenance overhead – which resulted in a very confusing data model for rrDNS.
When looking for new formats we wanted something that would be fast, easy to use and that could integrate easily into the code base and libraries we were already using. rrDNS makes heavy use of the miekg/dns Go library which uses a large collection of structs to represent each RR type, for example:
```
type SRV struct {
Hdr RR_Header
Priority uint16
Weight uint16
Port uint16
Target string `dns:"domain-name"`
}
```
When decoding the data written by our pipeline in rrDNS we need to convert the RRs into these structs. As it turns out, the tinylib/msgp library we had been investigating has a rather nice set of code generation tools. This would allow us to auto-generate efficient Go code from the struct definitions without having to maintain another schema definition in another format.
This meant we could work with the miekg RR structs (of which we are already familiar with from rrDNS) in the data pipeline, serialize them straight into binary data, and then deserialize them again at the edge straight into a struct we could use. We didn't need to worry about mapping from one set of structures to another using this technique, which simplified things greatly.
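A small sketch of how that workflow fits together with tinylib/msgp (the struct and field set here are illustrative, not Cloudflare's actual pipeline types): a `//go:generate msgp` directive generates `MarshalMsg`/`UnmarshalMsg` methods for the annotated structs, so the pipeline serializes straight from a struct and rrDNS decodes straight back into one.

```
package rdata

//go:generate msgp

// SRVData mirrors the miekg/dns SRV fields that need to be shipped to the
// edge. Running `go generate` (with the msgp tool installed) emits
// MarshalMsg/UnmarshalMsg methods for this type in a generated file.
type SRVData struct {
	Priority uint16 `msg:"priority"`
	Weight   uint16 `msg:"weight"`
	Port     uint16 `msg:"port"`
	Target   string `msg:"target"`
}

// roundTrip shows both halves: the pipeline side encodes to MessagePack
// bytes, and the edge side decodes those bytes back into the same struct.
func roundTrip(in SRVData) (SRVData, error) {
	buf, err := in.MarshalMsg(nil) // generated by msgp
	if err != nil {
		return SRVData{}, err
	}
	var out SRVData
	_, err = out.UnmarshalMsg(buf) // generated by msgp
	return out, err
}
```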
MessagePack also performs incredibly well compared to other formats on the market. Here’s an excerpt from a Go serialization benchmarking test; we can see that on top of the other reasons MessagePack benefits our stack, it outperforms pretty much every other viable cross-platform option.
One unexpected surprise after switching to this new model was that we actually reduced the space required to store the data at the edge by around 9x, which was a significantly higher saving compared to our initial estimates. It just goes to show how much impact a bloated data model can have on a system.
### A New Data Pipeline
Another very important feature of Cloudflare’s DNS is our ability to propagate zone changes around the globe in a matter of seconds, not minutes or hours. Our existing pipeline was struggling to keep up with the growing number of zones, and with changes to at least 5 zones each second, even at the quietest of times we needed something new.
#### Global distribution is hard
For a while now we’ve had this monitoring, and we are able to visualize propagation times across the globe. The graph below is taken from our end-to-end monitoring: it makes changes to DNS via our API and watches for the change from various probes around the world. Each dot on the graph represents a particular probe talking to one of our PoPs, and the delay is tracked as the time it took for a change made via our API to be visible externally.
Due to various layers of caches – both inside and outside of our control – we see some banding on 10s intervals under 1 minute, and it fluctuates all the time. For monitoring and alerting of this nature, the granularity we have here is sufficient but it’s something we’d definitely like to improve. In normal operation, new DNS data is actually available to 99% of our global PoPs in under 5s.
In this time frame we can see there were a couple of incidents where delays of a few minutes were visible for a small number of PoPs due to network connectivity, but generally all probes reported stable propagation times.
In contrast, here’s a graph of the old data pipeline for the same period. We can see how the graph represents the growing delay in visible changes for all PoPs at any given time.
With a new data model designed and ready to go, one that better matched our query patterns, we set out implementing a new service to pick up changes to our zones in the central data store, do any needed processing and send the resulting output to our KV store.
The new service (written in our favourite language Go) has been running in production since July 2016, and we’ve now migrated over **99%** of Cloudflare customer zones over to it. If we exclude incidents where issues with congestion across the internet affect connectivity to or from a particular location, the new pipeline itself has experienced zero delays thus far.
#### Authoritative rrDNS v2
rrDNS is a modular application, which allows us to write different “filters” that can hand off processing of different types of queries to different code. The Authoritative filter is responsible for taking an incoming DNS query, looking up the zone the query name belongs to, and performing all relevant logic to find the RRSet to send back to the client.
Since we’ve completely revised the underlying DNS data model at our edge, we needed to make significant changes to the “Authoritative Filter” in rrDNS. This too is an old area of the code base that hasn’t significantly changed in a number of years. As with any ageing code base, this brings a number of challenges, so we opted to re-write the filter completely. This allowed us to redesign it from the ground up on our new data model, keeping a keen eye on performance, and to better suit the scale and shape of our DNS traffic today. Starting fresh also made it much easier to build in good development practices, such as high test coverage and better documentation.
We’ve been running the v2 version of the authoritative filter in production alongside the existing code since the later months of 2016, and it has already played a key role in the DNS aspects of our new load balancing product.
The results with the new filter have been great: we’re able to respond to DNS queries on average 3x faster than before, which is excellent news for our customers and improves our ability to mitigate large DNS attacks. We can see here that as the percentage of zones migrated increased, we saw a significant improvement in our average response time.
#### Replacing the wings while flying
The most time consuming part of the project was migrating customers from the old system to something entirely new, without impacting customers or anybody noticing what we were doing. Achieving this involved a significant effort from variety of people in our customer facing, support and operations teams. Cloudflare has many offices in different time zones – London, San Francisco, Singapore and Austin – so keeping everyone in sync was key to our success.
Already, as a part of the release process for rrDNS we automatically sample and replay production queries against existing and upcoming code to detect unexpected differences, so naturally we decided to extend this idea for our migration. For any zone to pass the migration test, we compared the possible answers for the entire zone from the old system and the new system. Just one failure would result in the tool skipping the zone.
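A simplified sketch of that comparison step (the addresses are placeholders and the comparison itself is naive; the real tool walks every name and type in a zone and has to tolerate things like record ordering and TTL differences):

```
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

// answersMatch asks the old and new stacks the same question and reports
// whether the answer sections agree. The string comparison here is only
// illustrative.
func answersMatch(name string, qtype uint16, oldAddr, newAddr string) (bool, error) {
	c := new(dns.Client)
	q := new(dns.Msg)
	q.SetQuestion(dns.Fqdn(name), qtype)

	oldResp, _, err := c.Exchange(q, oldAddr)
	if err != nil {
		return false, err
	}
	newResp, _, err := c.Exchange(q, newAddr)
	if err != nil {
		return false, err
	}
	if len(oldResp.Answer) != len(newResp.Answer) {
		return false, nil
	}
	for i := range oldResp.Answer {
		if oldResp.Answer[i].String() != newResp.Answer[i].String() {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := answersMatch("www.example.com", dns.TypeA, "192.0.2.10:53", "192.0.2.20:53")
	fmt.Println(ok, err)
}
```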
This allowed us to iteratively test the migration of zones and fix issues as they arose, keeping releases simple and regular. We chose not to do a single – and very scary – switch away from the old system, but run them both in parallel and slowly move zones over keeping them both in sync. Meaning we quickly could migrate zones back in case something unexpected happened.
Once we got going we were safely migrating zones at several hundred thousand per day, and we kept a close eye on how far we were from our initial goal of 99%. The last mile is still in progress, as there is often an element of customer engagement for some complex configurations that need attention.
#### What did we gain?
Replacing a piece of infrastructure this core to Cloudflare took significant effort from a large variety of teams. So what did we gain?
- Average of 3x performance boost in code handling DNS queries
- Faster and more consistent updates to DNS data around the globe
- A much more robust system for SREs to operate and engineers to maintain
- Consolidated feature-set based on today’s requirements, and better documentation of edge case behaviours
- More test coverage, better metrics and higher confidence in our code, making it safer to make changes and develop our DNS products
Now that we’re now able to process our customers DNS more quickly, we’ll soon be rolling out support for a few new RR types and some other exciting new things in the coming months.
**Does solving these kinds of technical and operational challenges excite you? Cloudflare is always hiring for talented specialists and generalists within our ****Engineering****, ****Technical Operations**** and ****other teams****.**
| true | true | true |
Cloudflare is now well into its 6th year and providing authoritative DNS has been a core part of infrastructure from the start. We’ve since grown to be the largest and one of the fastest managed DNS services on the Internet, hosting DNS for nearly 100,000 of the Alexa top 1M sites.
|
2024-10-12 00:00:00
|
2017-04-11 00:00:00
| null |
article
|
cloudflare.com
|
The Cloudflare Blog
| null | null |
9,809,906 |
https://www.sonderdesign.com/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |