# As Google Targets AI Search Ads, It Could Learn a Lot From Bing
By Paresh Dave, WIRED
https://www.wired.com/story/as-google-ai-overview-targets-advertisers-it-could-learn-a-lot-from-bing/
For years Paula Thompson, vice president of client strategy at the US digital ad agency Optimal, has been helping Plunge sell baths to soak in chilling water by buying ads on Google search and Microsoft Bing. The ads appear atop results for searches such as “ice bath” and invite users to “Buy our cold plunge today” or “Experience the cold plunge difference.”
But as Microsoft put an AI spin on its search engine, the tub maker’s ads on Bing now invite users to “learn about the benefits of cold plunging” or “learn about the exclusive benefits of Plunge,” according to Thompson. They direct to informational material, not the purchase pages that were mainly pushed before.
This new tactic is one of the first examples of how the debut over the past year of search chatbots such as Bing Copilot is affecting advertisers, whose decades of patronage have kept tools free for searchers. Last month, Google confirmed that it would join their ranks and soon test ads in its AI Overviews search feature—prompting anxious ad buyers to study up on what’s been happening with Copilot. There is a lot at stake: Google parent Alphabet generated $74 billion in profits last year, and Microsoft $83 billion, with significant (though unspecified) contributions from selling search ads.
It’s not certain that the companies can continue balancing the demands of advertisers with the desires of users in the newer AI search features. Already, WIRED, one ad buyer, and users on social media have experienced irrelevant and potentially deceptive ads in Bing’s new technology.
In WIRED’s limited testing, ads in Copilot have felt incoherent. A prompt about ice baths brought up ads for backpacks, not for Plunge or its rivals. A request for weight-loss tips returned ads for fat-freezing belts, which weren’t listed among the AI-synthesized suggestions. Others have reached similar conclusions, including a user who last year described on Reddit seeing ads for dietary supplements not mentioned in a Copilot answer about burnout at work.
Kya Sainsbury-Carter, corporate vice president of Microsoft Advertising, told WIRED in an interview that ads “are meant to be highly relevant, and so that's probably something we want to take a look at.”
But James Murray, senior product marketing manager for advertising, later added in an email that seeing ads for products not mentioned in a Copilot response is normal. Traditionally, searchers receive ads based on the keywords they enter into the search bar. Copilot ads are driven not only by search terms, Murray says. They also could be tied to questions a user asked earlier in a conversation with the chatbot, prompts that Microsoft auto-generates behind the scenes to engineer a better answer from Copilot, and the AI-generated response itself.
While the ads are always meant to be relevant, serendipity is part of Microsoft’s aim. Someone may search for “best ultra-HD, 8K QLED 80-inch TV,” but to help them explore a range of options, Microsoft will show ads for a TV that is 85 inches or has an OLED screen. “Even when users ask for something extremely specific, they still often click on ads for something that they didn’t ask for,” Murray says. (Google’s ads are meant to be relevant to the query and the AI response.)
Disclosure of ads has been an issue on Copilot as well. Though Microsoft says it labels all ads, Marcus Pratt, senior vice president for insights and technology at the ad-buying agency Mediasmith, says he’s encountered at least two searches in which links with indications that they are sponsored arguably haven’t been adequately disclosed.
Last week, Pratt looked up the best reels to wind up and store his garden hose. Copilot recommended eight options, all apparently lifted from an article from the reviews publication Spruce, which links to Amazon product listings and gets a commission when readers make a purchase. When clicking on the reels in Copilot, he ended up on giraffetools.com, with code in the URL suggesting it had been a sponsored link. But an “Ad” label is only visible if a user hovers over the link for a moment before clicking. Spruce and Giraffe Tools didn’t respond to requests for comment.
In the other search, Copilot recommended a Nike Pegasus running shoe, but when hovering over the name, Microsoft showed a link to the shoe brand On with a small “Ad” label in the corner. A link to a Women’s Health article with more details about the Nike pair is below the ad. Pratt calls it a potentially dissatisfying experience for brands and a confusing one for consumers. “This blending of organic recommendations and sponsored listings is blurring the lines more than I have seen in the past,” he says. Nike, On, and Women’s Health didn’t respond to requests for comment.
Microsoft’s Sainsbury-Carter says ad experiences may vary as Microsoft continues testing and applying feedback.
Despite optimism among investors in the tech giants’ abilities to smooth out the rough edges and keep sales flowing, mixing AI-generated content into search is the industry’s biggest shift since the advent of smartphones. Google is trying to quickly satisfy people’s curiosity by using AI Overviews’ generative AI to summarize the web, which users have panned for embarrassing gaffes like suggesting they squeeze glue on pizza.
Microsoft is not only publishing similar AI summaries, but also enabling users to explore topics by conversing with Copilot, the AI chatbot from Bing. Though Google has tested ads in a precursor to AI Overviews, Microsoft is so far ahead—displaying more ads and disclosing more about how they are doing.
In a webinar for select ad agencies last week seen by WIRED, Microsoft’s Murray said that users click on ads in Copilot at nearly twice the rate they do for equivalent ads when they’re shown as the first ad above traditional search results, which historically is the most clicked ad. They also prefer a Copilot experience with ads over one without, by a slim margin.
Sainsbury-Carter says that, to her, the data mean users are finding Copilot ads more integral than tacky. She adds that clicks on multimedia ads, specifically, were three times higher in Copilot than elsewhere in Bing between last July and this past January. The company declined to share specific figures but described the measure as statistically significant.
## Opted-In to AI
Advertisers don’t have much choice about investing in AI search. Microsoft and Google are pulling from customers’ existing ad campaigns for other environments to fill the ad slots in Copilot and Overviews until more data is gathered on their effectiveness. That means Copilot can draw on advertisers’ content to show ads as simple text, a row of product images, sponsored links embedded within AI summarization, or multimedia widgets for booking travel or deciding which car to buy.
“We're still in a place where we don't feel like asking advertisers to adopt, launch, manage, and optimize an entirely new campaign type,” Microsoft’s Sainsbury-Carter says. “Certainly that could happen over time if it feels like it's really bifurcating and the differences are great enough.”
Fortunately for Microsoft, she says, advertiser requests to opt out of AI-focused ads and complaints about ads appearing beside inaccurate AI-generated copy have both been minimal. Microsoft is being “super measured” about how many ads are shown in the new features, Sainsbury-Carter says, declining to provide specific figures.
Microsoft and Google also have not told advertisers exactly when their ads have appeared in AI features, limiting their ability to measure the payoff compared to traditional search ads. And the companies haven’t shared many tips on crafting ads for the new search features, according to four ad agencies’ executives, including Thompson. Sainsbury-Carter says the core message to advertisers is that optimizing for ads on Bing in general does the trick.
Thompson, whose agency also represents Microsoft’s Azure Cloud business, has crafted her own theory about how to adapt: Instead of targeting people who search for a specific product, advertisers need to educate people who have never heard of the product in the first place, since it seems people often turn to Copilot with broad questions.
Rather than targeting just short phrases like “cold plunge,” Plunge now tries to run Bing ads on longer queries such as “How to cold plunge,” “Where do I put cold plunge,” and “What is the optimal temperature for a cold plunge.” (As low as 37 degrees Fahrenheit, Thompson says.)
Thompson believes that the strategy is helping because clicks are up on her clients’ Bing ads, though the search engine’s growth to 140 million daily users from 100 million a year ago also could be a factor. She gains additional comfort from Google’s statement at its big annual advertising conference last month that users find ads in the new experiences less gimmicky. But it’s hardly definitive. “I don’t think there’s enough transparency yet,” she says.
## AI Search Will Spread
There’s little doubt, though, that Microsoft views Copilot as essential to the future of search. Heavy promotion and integration certainly contributed to the number of Bing searches that involved at least some use of Copilot growing four times faster over the past year than traditional searches alone, according to the company, which declined to share specific figures. Those using Copilot in some way during the second half of last year seemed to get to their answers more quickly, shaving 12 percent off their searching time and increasing their ad clicks by 30 percent.
That puts greater pressure on advertisers to perfect their messages and Microsoft’s algorithms to deliver them on the appropriate queries. About four out of every five ad clicks in Copilot during testing throughout last year came from chats lasting less than a minute, Sainsbury-Carter says.
Advertisers and consumers had better get used to it, because ads are poised to spread to additional AI-heavy services. Snapchat, Chinese tech giant Baidu, and German newspaper Bild all signed up to use Microsoft technology to serve ads in their chatbots. Snap spokesperson Ahrim Nam says the partnership has graduated past the testing phase, but declines to further comment. Baidu declined to comment, and Bild didn’t respond to a request for comment.
Microsoft works with over 1,500 publishers, so ads could come to more chatbots as the trend of adding AI conversational tools to apps and websites grows. It will have some competition. OpenAds, a New York City startup that’s raised $1 million in funding, expects its ad technology to launch inside multiple AI search chatbots and image generators in the next couple of months, its CEO, Steven Liss, says.
Liss took on the challenge of developing the service in part because Google, the dominant provider of ad technology on the web, currently refuses to serve ads on webpages and apps “where dynamic content (e.g., live chats, instant messaging, auto-refreshing comments, etc.) is the primary focus of the page.” Even if Google updates its policy, Liss says OpenAds can survive bigger players by designing more engaging ads.
For now, Microsoft enjoys the rare advantage of being in the lead. But the contribution from Copilot and the other AI features to Microsoft’s $18 billion in annual ads sales is unclear, and Sainsbury-Carter declines to disclose the prices the new ads are fetching. “We think this can be a really interesting business over time, but we think it's an interesting business if we are surfacing personalized ads that people love and find joy from and that are super, super useful,” she says. It’ll also require them to keep plunging into the depths of trusting AI with their queries.
*Microsoft and Google are bringing ads to their AI search experiences. But users don’t always find it helpful.* (wired.com, published 2024-06-14)

---
# Using Prettier in PHP
By Emma Fabre, madewithlove
https://madewithlove.be/using-prettier-in-php/
## What is Prettier?
Originally from the Javascript ecosystem, Prettier is a code **prettier**. There are code formatters for most languages already.
Prettier, however, is currently one of the most popular code formatters out there, and it has spread to a lot of different languages already for one simple reason: it gives zero fucks about how you think your code should be formatted. You can pass a few basic options to Prettier (indentation, max width, the basics) but other than that it takes your whole code and reformats it **from scratch**, disregarding any formatting decision you may have previously taken.
This may sound counter-intuitive but it tremendously reduces friction when writing code. The more you trust Prettier, the more you can stop worrying about formatting altogether. You can type code in one disgusting line, press save, and the result is nicely formatted code. You stop thinking about indentation, manually adding commas, or placing things for maximum readability and such. These distractions take up much more of your day than people realise because those are micro interruptions that are scattered and as such feel inconsequential. Once you go without them though, you will realise how much time they take up.
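For reference, those few basic options typically live in a `.prettierrc` file at the project root. A minimal sketch using Prettier’s core options (the values here are illustrative, not recommendations):

```json
{
  "printWidth": 80,
  "tabWidth": 4,
  "singleQuote": true
}
```

Beyond a handful of knobs like these, Prettier deliberately offers very little to configure, which is the whole point.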
## Prettier and PHP
So ok, Javascript and a bunch of other front-end languages (CSS, HTML, GraphQL, etc.) have this tool, but this is Serious Enterprise PHP™ we have Serious Enterprise Tools™ to take care of our code style and PSRs and RFCs and all that – we don’t just type code until it’s pretty. What is this? Ruby?
At madewithlove we currently use (mainly but not only) PHP CS Fixer. However, a glance at the list of the things it fixes shines a light on the main reason I still use Prettier in addition to it. PCS conflates code format (how the code is formatted, how pretty it is) and coding style (how the code is written, how good it is). Most of the time, it will fix the latter, in very PHP specific ways, but will do very little for the former besides a few blank lines and indentation rules.
## How do you use Prettier?
Most of the time Prettier would be added to the project’s `package.json`, but since we want to use it in a PHP project we’ll install it globally through NPM (or Yarn if you want).
Next, as the PHP support is not yet stable, we’ll have to add that functionality into Prettier specifically:
`npm install --global prettier @prettier/plugin-php`
Once Prettier is installed, you can quickly try it out on one or more files by invoking it directly (e.g. `prettier somefile.php` or `prettier src/**.php`). An example with some badly formatted code:
```
<?php
namespace Foobar;
class SomeClass {
public function getCallbacks() {
return array( function () {return $this->firstName;},
function () {
return $this->callSomeMethod('foo')->andThenOtherMethod( Bar::withSome("arguments"),
$andShit );
});
}}
```
If you run `prettier bad.php`, Prettier prints the formatted code to stdout; pass the `--write` option to write the result directly to the file. The results:
```
<?php
namespace Foobar;
class SomeClass
{
public function getCallbacks()
{
return array(
function () {
return $this->firstName;
},
function () {
return $this->callSomeMethod('foo')->andThenOtherMethod(
Bar::withSome("arguments"),
$andShit
);
}
);
}
}
```
Where it gets interesting is that because Prettier rewrites the code, it continually adapts. If there were a third method call, or if the configured max width were different, it would format the code differently. This is a **key difference** with the formatting PHPStorm or PCS would give you, because they follow fixed, consistent rules for how to break down arguments, methods, classes, and so on, regardless of their width.
Prettier doesn’t do that when it formats the code; it barely even knows that it is formatting PHP code because it purely translates the input into an abstract tree and formats it based on how readable it expects the output to be. It doesn’t fully know what it is formatting, just where it can add breaks and newlines.
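To make that concrete, here is a toy sketch of the core idea (mine, not Prettier’s actual printer, which uses a far richer Wadler-style document model): a group of parts prints flat if it fits within the max width, and breaks onto separate lines otherwise.

```javascript
// Toy illustration of width-driven printing: a "group" stays on one
// line if it fits within the width budget, and breaks otherwise.
// This is a sketch for explanation only, not Prettier's real algorithm.
function printGroup(parts, indent, maxWidth) {
  const flat = parts.join(" ");
  if (indent.length + flat.length <= maxWidth) {
    return indent + flat; // fits: print flat on one line
  }
  // doesn't fit: one part per line, indented one level deeper
  return parts.map((p) => indent + "  " + p).join("\n");
}

const args = ['Bar::withSome("arguments"),', "$andShit"];
console.log(printGroup(args, "", 80)); // one line at width 80
console.log(printGroup(args, "", 20)); // breaks at width 20
```

The same input formats differently as the width budget changes, which is exactly why adding a third method call or tweaking the configured max width changes Prettier’s output.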
So you might disagree with the final output file, but that would be missing the point which is to stop caring about whether you agree or disagree in the first place.
## How do you use it without having to type shit in the terminal?
Because Prettier is such a widely adopted tool, wherever Prettier is supported you can use it to format PHP code there – as long as the integration uses the Prettier installation where you’ve added `@prettier/plugin-php` (the global one, in our case).
**However**, because we often follow some Symfony rules that would clash with the way Prettier reformats (it follows PSR-2), I instead recommend running Prettier first and PHP CS Fixer after it. You can use various tools for that, such as GrumPHP, but the most straightforward way remains adding the following to your `composer.json` file:
```
"scripts": {
"fix-cs": ["prettier src/**/* --write", "php-cs-fixer fix"]
}
```
And then you can run it with `composer fix-cs`
. Be careful, however, that while Prettier is very fast to run, PCS takes a lot more time, so keep the formatting on save for Prettier.
## That doesn’t look worth it at all
Okay, so it reformats a few lines – PHPStorm can also do that without adding another tool.

But saying “Prettier just reformats code” would be very reductive, and it would be ignoring the pros that it brings on its own.
- Using Prettier speeds up the time spent writing code by a huge factor. I talked about this a bit before but it’s not just the time spent formatting your code that is lost right now; it’s the time spent even *thinking* about it. What is the most readable way to split these lines? How can I best carefully arrange these characters so that they’re easy to read? It’s a whole mental load that suddenly goes away, and you never look back.
- Prettier is insanely fast, almost instantaneously fast. Comparing it to tools like PCS or PCF is night and day because they’re not even booted by the time Prettier is done formatting your code, which is why the whole “format on save” paradigm is so important here compared to other tools.
- It also supports a shitload of languages: once you’re used to letting Prettier reformat your code you can let it loose on a massive chunk of your codebase. The website lists the supported languages, but it’s simple to set up a command such as:
`prettier '{src/**,.}/*.{js,php,md,json,yml}'`
which formats most files in your codebase including READMEs and fixture files. It even knows how to write YAML files correctly, a piece of knowledge long lost in the Symfony Wars of 2009.
## Is it ready yet?
Prettier inherently changes the way you code, and it is coming to many more languages other than PHP (e.g. Python, Ruby). In the span of a few months, Prettier almost single-handedly ended most formatting wars happening in the JS ecosystem; people went from entire tirades about trailing semicolons to #justuseprettier.
Although the PHP plugin is not released to the public yet, I am very much excited for when it will be and to see if it has the same effect on its community, even in a smaller way. I think the PHP community has spent literal decades arguing on how to format code, to the point we needed a whole committee to regulate on it. I think it’s time we as developers get over this and start letting tools do these menial inconsequential tasks that we should have delegated to them entirely by now.
So stop worrying about whether to put this brace on this line or the next, about whether you should put spaces around parentheses or anything like that for that matter. Just write your code, and let Prettier (or another tool) worry about the rest because no one is scouring the market for people who are an expert at neatly arranging braces. That’s not your job.
“But the code would have been more readable had the argument been on a separate li–
I know, it’s ok. Prettier will get better, it continually does. But I repeat: this is not your job.
## Member discussion
*(madewithlove.be, published 2018-11-27)*

---
# GitHub – simpletut/Universal-React-Redux-Registration
By Simpletut
https://github.com/simpletut/Universal-React-Redux-Registration
Open Source Universal User Registration System – NodeJS React Redux JWT MongoDB
**Demo: Visit the demo**
Please note: As the Demo is hosted on a free Heroku account, the servers it’s hosted on enter ‘sleep mode’ when not in use. If you notice a delay, please allow a few seconds for the servers to wake up.
Pages included: Registration, Login, Dashboard, Email Password, Logout, Account Update, and 404 (Not Found).

The User Registration System is Fully Responsive out of the box and you can Restrict Access to any page!
- NodeJS
- React
- Server-side Rendered
- Redux
- Redux-Thunk
- Redux-Form
- MongoDB
- Mongoose
- JSON Web Tokens
- Webpack 4
- Babel 7
- Express
- SASS
- Async Validation (Redux-Form)
- Winston - Better error handling/logging
- Bcrypt password encryption/verification
- Nodemailer – Custom mail server used to send password reset emails
- Custom ‘Password Reset’ Template (Built with MJML Framework)
- Multiple Layouts – Create unlimited layouts for pages/routes
- Unit Tests
- Toastr - Simple javascript toast notifications
- 100% FREE & Open Source
This software was developed by Ashley Bibizadeh.
The User Registration System is open source software licensed as MIT.
*(github.com, published 2018-09-28)*

---
# Generate Hashtags for Instagram with AI
Hashtag Guru
https://hashtagguru.app
Instantly generate optimized hashtags for Instagram, TikTok, and more by uploading an image or taking a photo. Personalize hashtags based on your profile and post caption for maximum engagement.
Easily copy and share hashtags directly to your social media. Save your favorites into collections for easy future use. Simplify your social media strategy and enhance your visibility with our intuitive hashtag tool.
Discover how popular your hashtags are and track the number of posts using them on Instagram. Stay ahead of the trend and maximize your reach with precise hashtag insights!
Upload a photo to create trendy, tailored captions. Generate captions from your profile or hashtags, and customize length, hashtags, and emojis for maximum impact.
Translate captions using AI and post in multiple languages. Expand your reach and connect with a global audience with just a few clicks.
Organize your favorite captions into collections for quick and convenient access whenever you need them.
Save and organize your favorite hashtags and captions for quick retrieval. Automatically generate more content based on your collections, and easily copy and share with a single tap.
Submit your feature requests directly to our development team and play a vital role in shaping the future of the app. Your feedback helps us enhance your user experience, ensuring the app evolves to meet your needs.
Customers love Hashtag Guru because it helps them find the perfect hashtags and quickly generate captions for their images. This AI-powered tool streamlines their social media marketing, making the process faster and more automated.
Hashtag Guru has transformed my online yoga business! It quickly generates perfect hashtags and catchy captions, saving me time and helping me connect with more students.
Online yoga teacher
Hashtag Guru has transformed my art sharing! It generates the most viral hashtags, helping me reach more art lovers effortlessly.
Artist
Hashtag Guru has been a game-changer for my fitness coaching career! It generates targeted hashtags and engaging captions, helping me connect with a wider audience and grow my online community. This tool is a must-have for any fitness influencer looking to boost their reach and impact!
Fitness influencer
*Boost your social media reach with Hashtag Guru, the AI-powered app that generates relevant hashtags to simplify posting and increase engagement.* (hashtagguru.app)

---
# The Pixel 4a Is Actually New and Interesting, Once You Open It
By Taylor Dixon, iFixit
https://www.ifixit.com/News/43537/the-pixel-4a-is-actually-new-and-interesting-once-you-open-it
What’s new and surprising about a phone we all knew was coming for months? Google’s Pixel 4a doesn’t have exciting specs or unexpected features, but it does have some smart ideas inside. There’s a clever new design for battery removal, a new board-holding midframe—even an almost-but-not-quite revolution in common repair access.
But, let’s not get ahead of ourselves. First, X-rays.
A scan of the phone yields one peculiarity, while confirming a couple of things we already expected. In the not-too-surprising-at-this-price-point category, we find no evidence of wireless charging—when present, the copper coils form an unmistakable **Aerobie** shape that dominates the image, like on the Pixel 4 XL. There’s also no sign of Google’s Active Edge squeezy sensors, a hallmark of the Pixel line. And that camera bump, which seems sized for about two and a half cameras, really houses just one optically-stabilized module plus flash.
What leaves us scratching our heads is the vertical orientation on that earpiece speaker driver, with an enclosure that also doesn’t quite line up with the speaker grille—instead it’s snuggled up directly alongside the headphone jack. In the game of high-stakes Tetris that is smartphone hardware design, there are always quirks. As always, thanks to Creative Electron for this excellent preview.
With that, tools come out and the poking and prying commences—starting with the display. Beneath its bezels, the Pixel 4a keeps the friendly, foamy adhesive from the Pixel 3a, seemingly implying a trend from Google wherein cheap phones are easy to open and expensive ones aren’t. It’s likely the difference in adhesive exists for IP certification reasons, but we wish there could be a middle ground. (Well, there kind of is.)
Google is also trending toward full-time use of Samsung displays, after sourcing panels from LG for some of its earlier phones. Every model since the Pixel 3 (non-XL) including our Pixel 4a here boasts a Samsung OLED panel.
Past the display, the trends end. The Pixel 4a has a brand-new construction with some very interesting quirks and features. Every Pixel prior to the 4a (and most modern smartphones, really) are built one of two ways:
- With all the internals crammed into the back of the phone, so the display comes off first (think iPhone)
- With all the internals crammed into the front of the phone, so the back cover comes off first (think Samsung phones).
As shown above, the Pixel 4a’s display comes off first. But after the customary midframe screw removal, the plastic back cover *also* comes off, leaving a slender midframe holding nearly all the guts. In terms of repair, it’s a whole new set of pros and cons. One can now theoretically replace the screen or the back cover (both common replacements) in a few steps without having to deal with a sticky battery, or dislodge a motherboard. That’s … actually kind of amazing.
Unfortunately, that back cover doesn’t come off cleanly. It’s tethered by two short cables stuck under a long metal shield held down by screws, requiring some careful, slightly awkward unscrewing. Our first attempt resulted in a torn fingerprint sensor cable, but if you learn from our mistake, you should be able to navigate safely. Forewarned is forearmed in this case.
Opening the 4a *seems* easy, but screw and cable complications mean it’s easier if someone goes in before you do (hey, that’s us!).
Despite a few challenges, this weird new construction brings the Pixel 4a tantalizingly close to a repair revolution: a smartphone that lets you choose your own opening procedure. Need a screen replacement? Start and end with the screen. Battery? Head in through the back. That hasn’t been achieved yet, and it’s a tough problem to crack, but the 4a comes about as close as we’ve seen.
Now for our favorite part of the Pixel 4a. If you watched our Very Quiet Pixel 4a Disassembly, you probably noticed there were stretch-release adhesive strips under the battery—but without any pull tabs in the usual spots, we didn’t see them until it was too late. After some off-camera investigation, we found the pull tabs hiding in little windows cut through the midframe!
This clever design allows for unobstructed pulling of the tabs at optimal angles. That’s something we’ve never seen before, and it’s the reason why otherwise repair-friendly stretch-release strips can sometimes be such a headache on other phones—trying to extract them at a sufficiently shallow angle, without snagging on any nearby components, can be nail-bitingly tricky. The Pixel 4a does a literal end-run around that whole problem.
The actual adhesive strips used here aren’t the best we’ve seen in terms of material—they’re thin, and finicky, and fragile if not stretched with extreme patience. Nitpicks aside, it’s leagues better than adhesive tar pits seen in many other manufacturers’ phones.
Stickied on top is the battery, weighing in at 12.15 Wh, which beats the Pixel 3a’s 11.55 Wh and absolutely crushes the new iPhone SE’s 6.96 Wh spec. (The iPhone SE is equipped with a significantly more efficient processor though, so the difference in actual battery life won’t necessarily match the difference in battery specs.)
Several other things cling to the midframe. The headphone jack makes a triumphant return. The vibration motor appears to be the same circular linear resonant actuator that buzzed inside the Pixel 3a, though maybe with some tuning adjustments: some reviewers have called out this phone’s improved haptics.
The Pixel 4a is only equipped with two cameras: one 8 megapixel f/2.0 selfie shooter in the hole-punch under the screen, and a rear-facing 12.2 megapixel f/1.7 module—reportedly the same one from the Pixel 4. (That same sensor is also rumored to be in the upcoming Pixel 5.) 12 MP is a far cry from the high-MP shenanigans that Samsung and others are up to, but at this point Google could probably stick a small potato where the camera goes and it would still pump out class-leading images, thanks to their ever-improving AI computational photography skills.
On the single circuit board lives the Qualcomm Snapdragon 730G processor, along with 6 GB of RAM and 128 GB of storage. If you’re curious what other ICs power the Pixel 4a, you can do some sleuthing with these two images. Just let us know in the comments if you find anything interesting!
A few steps forward and a couple backwards earns the Pixel 4a a 6 out of 10 on our repairability scale:
➕ Most components are modular and independently replaceable.
➕ Repair-friendly stretch-release adhesive secures the battery, and is easier than ever to release successfully.
➕ All screws are standard T3 Torx fasteners.
➕/➖ The display comes off first, but is thin and poorly protected. Foam adhesive makes the opening process relatively easy.
➖ The thin ribbon cables connecting the flash and fingerprint sensor to the main board are tedious to work around and easy to accidentally tear while removing the back cover.
## 9 Comments
Has anyone noticed that the 4a has exactly the same dimensions as the original Pixel? I just bought the 4a to replace my Pixel and found that, with a bit of cutting away at the OtterBox Defender case, it fit perfectly; all the buttons line up, as do the speakers, headphone jack, and volume and lock keys.
William Leys - Reply
The fingerprint sensor is a bit off center but is still completely exposed, depending on the size of the cutout. The lower “speaker” holes are also very slightly off center. You might also want to cut a hole for the top mounted, noise cancellation mic.
Mark H -
Any chance that you will identify the chips on the RF board any time?
Ritwick Medikeri - Reply
Do you think it will have the same swollen battery issues found on all the previous models?
Graziano Sorbaioli - Reply
There’s probably no way to know. Time will tell.
As I understand, the best ways to prevent (but not guarantee against) battery swelling are to not let the phone overheat, and to try to keep the charge between 20% and 80% (which should also make the battery last years longer).
Chargie is an accessory that lets you charge to just 80%, but I’d like to see more third party chargers build in this kind of functionality at a better price.
Mike -
*Source: ifixit.com (iFixit), 2024-10-12. "The specs? Safe. The camera? Standard. But Google is making choices inside the Pixel 4a that make us like this mid-range phone more than its premium siblings."*
*Source: https://www.garlandtechnology.com/blog/when-did-phishing-become-a-social-problem, by Chris Bihary.*
# When Did Phishing Become a Social Problem?
In the world of technology, social media is starting to become king. Just think about this for a minute: 15 years ago, there was no Twitter, Facebook, Instagram, or Snapchat.

Now it seems every American is on one or all of these platforms. Social media has become such a part of the fabric of our culture that there is hardly a time when you don't see someone walking with their head down, looking at their phone.
Learn how social engineering is the new gold mine.
**Numbers are Staggering**
Millions of people log into their social media accounts every day. In fact, 1.3 billion users log onto their favorite social networking sites each month. They share their favorite photos and check up on friends on a daily basis.
On someone’s profile, you can find their name, date of birth, location, workplace, interests, hobbies, skills, relationship status, telephone number, email address, and favorite foods. All of this information can be used against you by social engineers.
In spear phishing, social engineering is the use of known social behaviors and patterns to make targets more likely to take a suggested course of action, like clicking on a link. They can send crafted spear phishing emails to your inbox, or they can try and imitate you to trick your contacts.
## Why Should This Concern You?
**Social Media Usage by the Numbers**
- 66% of adult Facebook users do not know how to use its privacy controls.
- 71% of consumers state their purchasing decisions are influenced by social media posts.
- 26% of social media users have made in-app purchases using payment cards.
- 780% increase in reported social media-related crime over a four-year timespan.
- One major social network has more fake profiles than the population of Egypt.
- Social activities account for 91% of all mobile Internet activity.
In January 2010, social media lures, which is when a hacker uses someone’s friend request to launch a successful phishing campaign, were used in 8.3% of all phishing attacks. By December of that year, they were used in 84.5% of attacks - a **staggering increase of 918%**.
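That headline figure can be sanity-checked in a couple of lines; a minimal sketch, using only the two percentages quoted in the text:

```python
def percent_increase(old: float, new: float) -> float:
    """Relative increase of `new` over `old`, in percent."""
    return (new - old) / old * 100.0

# Share of phishing attacks using social media lures (figures from the text).
jan_2010 = 8.3
dec_2010 = 84.5
print(f"Increase: {percent_increase(jan_2010, dec_2010):.0f}%")  # prints "Increase: 918%"
```

(84.5 − 8.3) / 8.3 ≈ 9.18, so the claimed 918% relative increase checks out.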
**Targeting Social Accounts**

In years past, it was companies that were being targeted the most by attackers. But now with social media being so prevalent, attackers are finding it easier to go after the user.
A recent article by Blueprint IT Security hits on that notion. They talk about how Facebook, Twitter and LinkedIn are “goldmines” for phishing. So much so that LinkedIn has fueled an entire industry of bogus connection requests. Their usefulness isn't to launch a phishing attack, but to research it, spotting high-value management targets after being accepted into the network of contacts that might legitimately know them.
Blueprint goes on to say their first defense is to research Open Source Intelligence (OSINT) in order to see a company’s information footprint from the attacker’s point of view.
**Targeting Has Become More Personal**
Targeting or spearing, as it is being referred to now, is often the first stage of a wider attack, which is designed not to simply steal credentials but to find a way into the deeper parts of the target organization, or user, for a variety of reasons - including data theft and extortion.
Attackers now are doing their homework more and more on their potential targets. As Blueprint states in their recent article, attackers are becoming more aware of the people they are going after.
**Reconnaissance** - Normally, a targeted attack is focused on a specific person within an organization, a choice that is often a calculated guess based on what can be gleaned about the company from OSINT. OSINT is a fancy term for information gathered from public sources that companies find almost impossible to control.
**Stealth** - Whatever channel attackers decide to choose, the goal is not to draw attention to themselves. An email or contact request must not stand out as unusual, or it could trigger interactions that could reveal it for what it really is. If that happens, it is no better than an opportunistic phishing attack.
**Subterfuge** - The close ally of stealth is technical subterfuge. In organizations that do not use email authentication, this usually includes using spoofed email addresses that appear to come from an internal address.
**Software and Awareness is Key**
As Blueprint states in their article, attackers will couple top domains with impersonated cloud services or portals used by the target organizations or users.

**Software** - This explains the value of carrying out reconnaissance on the software and services used by a target organization. Again, users rarely check these closely.
**Awareness** - The attack surface can be reduced in a variety of ways but ideally this should be done alongside changing the outlook of employees. A popular solution is to engage some form of anti-phishing awareness training.
The idea behind awareness training is to baseline the degree to which employees can be snared by test phishing scenarios, comparing their behavior when running the same tests weeks or months later. The best approach seems to be to start with a longer training session, then run short tests every month for a year.
#### Written by Chris Bihary
Chris Bihary, CEO and Co-founder of Garland Technology, has been in the network performance industry for over 20 years. Bihary has established collaborative partnerships with technology companies to complement product performance and security through the integration of network TAP visibility.
*garlandtechnology.com (GarlandTech), published 2017-06-06.*
*Source: https://www.sciencedaily.com/releases/2018/02/180214093712.htm*
# Alzheimer's disease reversed in mouse model
- **Date:** February 14, 2018
- **Source:** Rockefeller University Press
- **Summary:** Researchers have found that gradually depleting an enzyme called BACE1 completely reverses the formation of amyloid plaques in the brains of mice with Alzheimer's disease, thereby improving the animals' cognitive function. The study raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer's disease in humans.
A team of researchers from the Cleveland Clinic Lerner Research Institute have found that gradually depleting an enzyme called BACE1 completely reverses the formation of amyloid plaques in the brains of mice with Alzheimer's disease, thereby improving the animals' cognitive function. The study, which will be published February 14 in the *Journal of Experimental Medicine*, raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer's disease in humans.
One of the earliest events in Alzheimer's disease is an abnormal buildup of beta-amyloid peptide, which can form large, amyloid plaques in the brain and disrupt the function of neuronal synapses. Also known as beta-secretase, BACE1 helps produce beta-amyloid peptide by cleaving amyloid precursor protein (APP). Drugs that inhibit BACE1 are therefore being developed as potential Alzheimer's disease treatments but, because BACE1 controls many important processes by cleaving proteins other than APP, these drugs could have serious side effects.
Mice completely lacking BACE1 suffer severe neurodevelopmental defects. To investigate whether inhibiting BACE1 in adults might be less harmful, Riqiang Yan and colleagues generated mice that gradually lose this enzyme as they grow older. These mice developed normally and appeared to remain perfectly healthy over time.
The researchers then bred these rodents with mice that start to develop amyloid plaques and Alzheimer's disease when they are 75 days old. The resulting offspring also formed plaques at this age, even though their BACE1 levels were approximately 50% lower than normal. Remarkably, however, the plaques began to disappear as the mice continued to age and lose BACE1 activity, until, at 10 months old, the mice had no plaques in their brains at all.
"To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer's disease mouse models," says Yan, who will be moving to become chair of the department of neuroscience at the University of Connecticut this spring.
Decreasing BACE1 activity also resulted in lower beta-amyloid peptide levels and reversed other hallmarks of Alzheimer's disease, such as the activation of microglial cells and the formation of abnormal neuronal processes.
Loss of BACE1 also improved the learning and memory of mice with Alzheimer's disease. However, when the researchers made electrophysiological recordings of neurons from these animals, they found that depletion of BACE1 only partially restored synaptic function, suggesting that BACE1 may be required for optimal synaptic activity and cognition.
"Our study provides genetic evidence that preformed amyloid deposition can be completely reversed after sequential and increased deletion of BACE1 in the adult," says Yan. "Our data show that BACE1 inhibitors have the potential to treat Alzheimer's disease patients without unwanted toxicity. Future studies should develop strategies to minimize the synaptic impairments arising from significant inhibition of BACE1 to achieve maximal and optimal benefits for Alzheimer's patients."
**Story Source:**
Materials provided by **Rockefeller University Press**. *Note: Content may be edited for style and length.*
**Journal Reference**:
- Xiangyou Hu, Brati Das, Hailong Hou, Wanxia He, Riqiang Yan.
**BACE1 deletion in the adult mouse reverses preformed amyloid deposition and improves cognitive functions**.*The Journal of Experimental Medicine*, 2018; jem.20171831 DOI: 10.1084/jem.20171831
**Cite This Page**: Rockefeller University Press. "Alzheimer's disease reversed in mouse model." *ScienceDaily*. Retrieved October 12, 2024 from www.sciencedaily.com.
*sciencedaily.com (ScienceDaily), 2024-10-12.*
# Tech Giants Begin Recruiting for the Next Big Platform Wars

By Klint Finley (http://www.wired.com/2014/07/platform-wars/)
The Internet of Things is still young, but it's real. There are already dozens of internet-connected devices available, ranging from home-automation tools to wearable fitness trackers. And it's about to start growing at an even faster pace.
According to a new survey by market research firm Evans Data, 17 percent of the world's software developers are already working on Internet of Things projects. Another 23 percent are planning to start an IoT project within the next six months. The most popular devices? Security and surveillance products, connected cars, environmental sensors, and smart lights and other office automation tools.
The world's largest tech companies are already in fierce competition to attract developers to their respective connected device platforms. After all, the winners of these new platform wars will define the future of computing. The losers will go to the electronics recycling center. The stakes for developers are almost as high as they are for vendors. No product can support every conceivable standard and no app can run on every platform, so developers have to be strategic and write their code for the winners, while dodging the losers.
For example, Google is hoping to expand its strength in smart phones and tablets to other connected devices. Last month it launched Android Wear, a version of its mobile operating system that is already in use by LG, Motorola and Samsung. But other options are emerging as well, such as the Pebble smart watch, and just this week Lenovo and Vuzix announced their own "smart glass" product to rival Google Glass. And though Samsung is using Android for its Gear smart watches, the company is also promoting its open source Tizen operating system for wearables and other devices. And of course Apple is slowly starting to get into the market, through products like iBeacon and HealthKit, and has long been rumored to have a smart watch in production.
>Although competition is generally good for customers, competing platforms can be a headache.
And it's not just app developers who are being faced with these sorts of decisions. Companies building Internet of Things devices have many platform considerations to make as well. Hardware hackers already have multiple circuit boards to choose from, ranging from Arduino to Tessel to Spark, each with different advantages and use cases, as well as different wireless standards, including Bluetooth and Zigbee, and various messaging protocols such as those promoted by AllSeen, the Open Interconnect Consortium, and MQTT.
Although competition is generally good for customers, competing platforms can be a headache. You have to worry about which products will be compatible with the devices you already have, and which ones will have the staying power to be forward-compatible with tomorrow's technologies. You don't want to be stuck with the Betamax of smart watches.
That means that for the next few years, the Internet of Things will be as exciting and vibrant as it is frustrating and tricky. At least, as far as developers are concerned.
*wired.com (WIRED), published 2014-07-25.*
# Seven. Building on Adversity: The Pantheon and Problems with Its Construction

By Mark Wilson Jones (https://erenow.net/ancient/the-pantheon-from-antiquity-to-the-present/7.php)
The fame of the Pantheon derives substantially from its wondrous engineering. The immense clear span went unchallenged for thirteen centuries until Brunelleschi raised the dome of Florence’s cathedral, and still the ancient feat is unrivaled as a work of unreinforced concrete. This prompts many questions for the casual visitor and the specialist alike. How was the building constructed? How long did it take to erect? What was the relationship between the various parts? In conjunction with the research of Janet DeLaine, Giangiacomo Martines, and Gene Waddell in this same volume, my aim is to advance these questions to the point of charting and explaining the sequence of building operations that is summarized graphically in __Plate XIII__.
One way of framing the inquiry is to ponder why the Pantheon has survived intact despite the passage of almost nineteen centuries, bearing in mind that so many other Roman wide-span buildings have not. It is characteristic of this enigmatic monument that the answer is not entirely straightforward. The Pantheon owes its survival to its transformation into a church in the early seventh century, yet doubtless this initiative reflected admiration for the grandeur of the Rotunda in the first place. In any event, the acquired Christian status ensured some remedy for the various injuries suffered down the ages. Most notably, as a replacement for the earlier theft of its original gilded bronze tiles, the dome received a lead covering during the reign of Gregory III in the eighth century. This represented by far the most important single protective measure – who can guess how many other imperial interiors would have survived if they, too, had had their roofs maintained? The front end of the building had a more checkered fortune. Bell towers and the like were added and removed from time to time, while a convent, shops, stores, and hovels latched on to the structure like limpets; each intervention inevitably brought a degree of destruction, contributing to the dilapidation and partial collapse of the east end of the portico.
It is perhaps surprising that more has not collapsed than just a portion of the portico. After all, the interior span comfortably exceeds any other ancient rival; the actual figure of 43.7 meters, measured from wall to wall, was determined by the axial diameter assigned to the ring of columns of 150 Roman feet (44.3 meters). The next largest surviving Roman domes spanned closer to 100 feet, this being the diameter of the misnamed Temple of Diana at Baiae.[1] Other large domes may once have existed of which we have no trace, yet clearly the Pantheon was an exceptionally ambitious undertaking even by the standards of the high imperial period.
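The metric figures quoted above imply a particular length for the Roman foot; a minimal sketch of the conversion, using only the numbers in the text:

```python
# Figures from the text: the axial diameter of the ring of columns.
AXIAL_DIAMETER_M = 44.3    # metres
AXIAL_DIAMETER_RF = 150    # Roman feet

implied_foot = AXIAL_DIAMETER_M / AXIAL_DIAMETER_RF
print(f"Implied Roman foot: {implied_foot:.4f} m")  # ~0.2953 m
```

This is close to the commonly cited modern estimate of about 0.296 m per Roman foot, which is consistent with the text's rounding of 150 Roman feet to 44.3 meters.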
The Pantheon’s survival depends most of all on the technical quality of Roman construction, which reached its apogee in the first half of the second century AD. This derived from a long tradition of intelligent experimentation with materials (primarily brick and concrete) and spatial-constructional units (arches, vaults, and the special kind of vaults we call domes).[2] Illustrative of the attention to technique of the Pantheon’s builders is the grading by density of the aggregate in the concrete, with the heaviest at the bottom and the lightest at the top (see
**7.1.** Armature of relieving arches embedded in the Pantheon. (Drawing Mark Wilson Jones and Robert Grover)
The Pantheon stands today, triumphant, yet the grandness of its ambition tested the Roman building machine almost to the point of disaster. It is impossible to say exactly how close failure came; all we can do is witness signs of structural distress and constructional difficulty. These are particularly notable in three respects: the long cracks that fracture the rotunda at intervals; the building complex that butts up against the rotunda to the south, the very existence of which speaks of emergency; and the curious mismarriage between rotunda and portico. While not in itself a structural issue, this, it seems, was bound up with problems in obtaining the massive column shafts originally intended. This essay looks into these matters and reflects on the drama of the Pantheon worksite. As I see it, the project was balanced on a knife edge between success and failure. It was success that prevailed – but at the cost of perfection compromised.
**Cracking, Concrete, and Centering**
The rotunda displays an array of more or less vertical cracks. Typically, they run from about halfway up the rotunda to halfway up the dome. The mapping of these cracks in the 1930s by Alberto Terenzio during restoration works gives us the best idea of their scale and frequency (__Fig. 7.2__).[4] Many can be seen in old photographs, though today most are obscured by surface finishes or by modern repairs. Nonetheless, parts of some can still be seen on the outside of the rotunda, while one can be traced in the staircase on the east side of the entrance. The largest crack coincides with the main axis on the south side, measuring up to 7 centimeters in width where it can be accessed from behind the rotunda.
**7.2.** Interior elevation of the rotunda, projected flat, showing the principal cracks in the structure. (Wilson Jones __2000__, Fig. 9.21a; drawing by Ippolita D’Ayala Valva, after A. Terenzio)
Cracking is important for the way the dome behaves in terms of statics. It performs less like a modern monolithic shell reinforced with steel and more like an array of tapering sections of masonry comparable with segments of an orange. This is consistent with the effect of outward lateral thrust and hoop tension, both being characteristic of unreinforced domes with profiles based on arcs of circles (as opposed to catenary curves, which offer a reduction in tensile forces).[5] Deformation of the section resulted, with the interior of the dome no longer matching an ideal hemisphere. Recent survey work with laser scanners conducted by the Karman Center reveals that the crown has slumped by around 1½ feet (45 centimeters) with respect to its presumed original hemispherical form. In relative terms, this equates to only about 1 percent of the total height, but in real terms, it still represents a significant shifting of stress and mass. A further cause of structural distress was settlement, for the Pantheon rises not on rock but on clay.
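The "about 1 percent" figure is easy to check; a sketch assuming the total interior height is roughly equal to the 43.7-meter span (an assumption for illustration, since the text does not state the height it uses):

```python
SLUMP_M = 0.45          # crown slump reported by the Karman Center survey
TOTAL_HEIGHT_M = 43.7   # assumption: interior height taken roughly equal to the span

slump_share = SLUMP_M / TOTAL_HEIGHT_M * 100.0
print(f"Slump: {slump_share:.2f}% of total height")  # ~1%
```

0.45 m over roughly 43.7 m is just over 1 percent, matching the text's characterization.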
**XXI.** Section combining information from excavations under portico and rotunda, with sloping floor of Agrippan Pantheon shown in dashed line and that of the present building shown in solid line. (Pier Olinto Armanini in Beltrami __1898__, Fig. XV)
It would be a mistake to make more than a casual analogy between Roman lime-based concrete and its modern cement-based equivalent, which is typically poured in one go in a relatively liquid state (with formwork initially supporting its entire weight).[9] Modern concrete includes steel to provide tensile strength and combat cracking. By contrast, the Romans addressed performance by varying the density of aggregate, by incorporating relieving arches, and by manipulating the cross section. In fact, in understanding the dome of the Pantheon, it is crucial to distinguish between the upper half, which is a relatively thin shell, and the lower half, which is much thicker and has a quite different profile (see
Construction of the lower part of the dome proceeded by stages, as for the rotunda wall. Ring followed ring, each diminishing in diameter, typically in lifts of 5 feet or so. The concrete was laid relatively dry, in more or less horizontal strata of mortar and aggregate in predominantly fist-sized pieces. Each stage would have been allowed to cure substantially before the next was added. By virtue of closing in on itself, each ring, once complete, could support not only itself but also the next ring slightly smaller in diameter, and so on, creating in effect a kind of corbeling. Higher up, after the top of the step-rings where the section is thinner, the vault was flatter, therefore demanding some kind of support up until the time when the concrete set (or “went off”). Finally, at the very top, the device of an oculus represented a wonderful solution: avoiding construction, lightening the dome while lighting the finished space, besides contributing to its symbolic mystique.
It is theoretically possible to build a lime-based concrete dome without any temporary support.[10] But as regards Roman practice, there is plenty of evidence for the use of formwork; witness the imprints of wooden boarding on the vaulting of many a ruin, including Nero’s Golden House (the Domus Aurea), Trajan’s Markets, and Hadrian’s Villa. It is, unfortunately, impossible to obtain this kind of information for the Pantheon, for the rendered internal surfaces of the dome that we see today are the result of only the latest of a number of restorations, some of which date from a time before it had become customary to document the existing state prior to the commencement of work. The possibility of self-support can be reconciled with the use of formwork if we suppose that a temporary wooden assembly provided structural support for the upper parts alone. For the lower parts, the prime function of the formwork, just as its name suggests, was only to mold the form of the concrete. This was necessary for the geometrical precision of the Pantheon coffering, which creates such a magical dance of chiseled planes of light and shadow.
How was the wooden formwork itself erected? Some authorities opt for a system supported from the ground for the full width of the interior. William MacDonald visualizes “an immense hemispherical wooden form, supported by a forest of timbers and struts.”[11] Recoiling at the consumption of trees on such a scale, others have imagined centering “flying” across the entire space without vertical supports. Proposals in this vein include those put forward by Eugène Emmanuel Viollet-le-Duc and Rabun Taylor (__Fig. 7.3__).
**7.3.** Proposals for the centering used to construct the dome. (Left, Viollet-le-Duc __1875__, p. 475; right, Taylor __2003__, Fig. 120)
Any uniform system of centering, however, seems to be contradicted by the marked difference between the lower and upper halves of the Pantheon dome that has already been highlighted. Accordingly, I visualize a very substantial wooden tower rising from a doughnut-shaped plan, with a ring about 11.5 meters wide (__Fig. 7.4__).[13] The upper portion of the dome, being relatively thin, was light enough to have been carried on such a timber structure.
**7.4.** Schematic cross section showing extent (in gray tone) of a hypothetical doughnut-shaped centering tower for constructing the dome, 1:600. (Drawing Mark Wilson Jones)
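For a rough sense of scale, the footprint of such a doughnut-shaped tower can be estimated; a sketch assuming the ring's outer edge reaches the full 43.7-meter interior span (the text does not specify this, so the result is only indicative):

```python
import math

INTERIOR_SPAN_M = 43.7  # wall-to-wall span, from the text
RING_WIDTH_M = 11.5     # width of the doughnut-shaped plan, from the text

outer_r = INTERIOR_SPAN_M / 2
inner_r = outer_r - RING_WIDTH_M
footprint = math.pi * (outer_r**2 - inner_r**2)
print(f"Ring footprint: about {footprint:.0f} square metres")
```

Under these assumptions the ring covers on the order of a thousand square meters, which gives some idea of the timber quantities the next paragraph addresses.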
The quantity of timber consumed must have been considerable, but not beyond the Romans’ capabilities. They had at their disposal extensive forests of oak and sweet chestnut not far from the capital, and are known to have employed very large timbers, for example, for the trusses spanning the 25-meter-wide nave of Trajan’s majestic Basilica Ulpia. In a treatise on the construction of siege structures, the *Poliorcetica*, this emperor’s architect, Apollodorus, expounded on the assembly of giant towers using small timber members. The surviving copy, which dates to the Middle Ages, has illustrations that convey an almost naive impression, but the original versions may well have been more precise and technical.[16]
By virtue of experience, imperial architects must have been aware that domes pushed outward at the haunches. Three main counterstrategies were adopted in the Pantheon. The first is a very thick supporting wall (thicker than simple vertical loading would require), which, to save materials and weight, was hollowed out by voids in the form of exedras and chambers (see __Chapter Four__). The second is the carrying up of the drum to a higher level than the springing of the dome, thus creating a mass of weight resistant to lateral movement. The third is the most obscure in its functioning: the vaults and arches embedded within the concrete known as relieving arches, which were made using tile-shaped bricks, mostly 2-foot-square *bipedales* with some 1.5-foot-square *sesquipedales*. As already noted, the Pantheon boasts the most elaborate known arrangement of relieving arches (__Fig. 7.1__). In part, they served to direct loadpaths to points of greatest strength, the eight “piers” of the rotunda plan. They also facilitated constructional processes, an important consideration for Roman builders. Since brick and mortar cured faster than concrete, the use of relieving arches enabled work to proceed upward faster than would otherwise have been the case.[17] Although it is hard to know the full range of the ways in which they work – or were thought to work – we can still judge them, almost two thousand years later, wonderfully efficacious.
**The Grottoni**
In spite of these strategies, the stability of the Pantheon was not a foregone conclusion – indeed, it was evidently a matter of great concern for the builders. This is demonstrated by the annex of structures sandwiched between the rotunda and the adjacent basilica to the south (__Fig. 7.5__, and see __Fig. 6.5__). Parallel walls and associated floors and vaulting delimit a series of spaces on two levels that are collectively known in Italian, rather suggestively, as the *grottoni*. Above them, on the main axis, a solid brick arch supported a kind of bridge connecting the basilica with the rotunda. This whole complex constituted, in effect, a gigantic buttress, as may be deduced from the lack of any obvious ceremonial or utilitarian purpose, along with the crude fashion in which it butts up against the rotunda.18 Indeed, it is plain to see that the lower parts of the grottoni are not bonded with the rotunda.
**7.5.** Rotunda viewed from the south, above the *grottoni*. Note the scarring (particularly evident at and above the level of the three openings visible in the middle of the photograph), which testifies to the presence of a lost connection or “bridge” with the basilica to the rear (south) of the grottoni. (Photo Gene Waddell)
It is generally assumed that the grottoni were created after the completion of the rotunda, as an improvised post facto countermeasure to resist its outward pressure. However, my own observations suggest that work on the grottoni began relatively early. The key here is the connection between the rotunda and the structure overhead. Instead of casually butting up to the rotunda as occurs at low level, the “bridge” has a cornice that meets the middle cornice of the rotunda at a bonded miter, or, in other words, in a premeditated relationship.19 The springing of the arches of bipedales is integral with the rotunda, as shown by photographs taken at the time when parts of the grottoni were rebuilt, and as is still observable at high level.
This rather extraordinary state of affairs suggests, firstly, that the grottoni were initiated after the drum had risen to around a third of its height, and, secondly, *that they were built speedily so as to catch up with the drum*. This occurred before the dome was begun (or, at any rate, before it curved inward to a significant extent). All this suggests that the grottoni were built very fast. A rapid pace of work is attested at Trajan’s Baths by dates inscribed in red pigment on brick-faced concrete walls of broadly comparable width with those of the grottoni; the dates indicate that over a period of around two and a half months, one wall rose by an astonishing 15 meters. The vast substructures of the baths, comprising many other walls equally tall, were probably executed in a single season.21 We can only speculate how fast the grottoni were built, yet bearing this comparison in mind, a couple of years or less is not out of the question.
It seems, therefore, that the grottoni respond to a problem that occurred early, *before* the addition of the dome. The nature of the problem is suggested by the huge crack, already mentioned, that fractures the rotunda approximately on the main axis, where the wall defining the apse is at its thinnest. Unlike other cracks, which tend to peter out earlier, this one reached floor level.22 The cause could be settlement of the foundations, although this cannot be proved without a geotechnical investigation. In short, the grottoni were built so as to minimize the further movement expected when the thrust of the dome came into play. The intervention can be judged a success; despite the alarm it registers, the Pantheon stands.
**The Connection between Rotunda and Transitional Block**
Different problems affected the north end of the Pantheon where its three main parts meet: rotunda, portico, and the structure in between. This is known in Italian as the *avancorpo*, and, rather less elegantly, as the “transitional block” or “intermediate block.” As noted in the Introduction (__Chapter One__), over the centuries the relationship among these three parts has provoked markedly contrasting interpretations. Traditionally, the explanation was thought to lie in (various different) phasing sequences, with the rotunda usually being presumed to have been built before the rest.23 Bound up as it is with perceived compositional shortcomings, the debate has an inevitable subjective component, and so it makes sense to address objective constructional realities first.
The junction between rotunda and transitional block can best be observed in the two staircases on either side of the entrance. Unlike other parts of the building, there is no marble revetment here to hinder inspection, while the stairs facilitate access for the entire height – an enormous practical advantage for the purposes of study. Achille Leclère, one of the long line of prize-winning architects awarded a period of residence at the French Academy in Rome, included a small-scale survey as part of his *envoi* of 1813 on the Pantheon.24 Otherwise this part of the building has been neglected, leading me to make a new survey in 2005 and 2006 of the east stair, the better preserved of the two, yielding the drawings illustrated in __Figs. 7.6__, __7.7__, and __7.8__.
**7.6.** Pantheon, east stair, section. (Drawing Mark Wilson Jones and Robert Grover)
**7.7.** Pantheon, east stair. (Drawing Mark Wilson Jones and Robert Grover)
**7.8.** Pantheon, east stair, plan. (Drawing Mark Wilson Jones and Roberta Zaccara)
Today, the east stair is entered from one of the two great apses of the portico, the ancient doorway on the flank having been blocked up. The stairs have suffered reconfiguration at the top and bottom, but otherwise remain essentially unchanged. The trapezoidal plan makes six full turns plus an extra seventh flight against the curved wall of the rotunda. They afford access to several different parts of the building: to the semicircular chambers in the drum on three levels, to the suite of rooms fronting the transitional block occupied by the Virtuosi of the Pantheon, to the entablature of the portico, to the middle cornice of the rotunda, and finally to the roof (__Figs. 7.6__, __7.7__, __7.8__, __7.9__).
**7.9.** Junction of the rotunda, transitional block, and portico on the east side, at high level. (Photo Mark Wilson Jones)
Inspection of the staircases shows that the rotunda and the transitional block are *united* at low level, but *disunited* at high level. It seems that both rose as one until somewhere in the region of 12 to 14 meters from the floor of the portico. From then on, work evidently proceeded on the rotunda alone, pending the completion of the transitional block.
At high level, the disjunction is obvious to the untrained eye. Wherever the rotunda is exposed to view it presents finished surfaces that can only have existed if it were built first (__Plate XXII__). Since the Pantheon stands intact and not exposed for study like a ruin, the unity of the lower parts is less glaringly evident, yet nonetheless inescapable. A key piece of evidence is a sounding, or *saggio*, located on the second short landing of the west stair, at the junction between the rotunda and the transitional block (see __Plate XXIII__). The ample view it offers into the “guts” of the fabric (the sounding reaches 63 centimeters deep) reveals no gap, crack, or joint, and the mortar traverses uninterrupted. In addition, there is a course of bipedales that passes unbroken from one part to the other, including a whole *bipedalis* right where they meet. It would have been quite impossible to insert so large and brittle an element after the original construction.26 So both the rotunda and the transitional block rose together at low level, although about halfway up, construction advanced on the rotunda while that of the transitional block was held back.
**XXII.** Pantheon, east stair, sounding “S7” near the top of the rotunda. (Photo Mark Wilson Jones)
**XXIII.** West stair, detail of sounding on level 2. This shows the “gut” of the construction at the junction between the rotunda and transitional block. Note the continuity of mortar and aggregate, as well as a whole bipedalis (indicated by arrow) that traverses the junction. (Photo Mark Wilson Jones)
**XXIV.** Manfredo Manfredi, permanent tomb of Vittorio Emanuele II, lateral niche of the Pantheon, begun 1884. (Photo Robin B. Williams)
These observations effectively eliminate all previous proposals that would claim that the main parts of the Pantheon were built completely separately. That the rotunda was never planned to stand on its own is further confirmed by the connections between the staircase and the chambers encased within the drum (__Figs. 7.7__, __7.8__). These connections, being perfectly intact, were part of the original construction. This confirms what has become evident given the other considerations mentioned: the stairs were anticipated from the outset. And if the stairs were envisaged, so too must have been the transitional block as a whole.
Inspection of the brickwork surfaces and of the courses of bipedales running around the stair offers further clarification of the relationship between parts of the fabric. The courses of bipedales are particularly instructive, for they traverse at intervals the entire thickness of construction, like layers in a layer cake (__Fig. 7.7__ and __Plate XXIII__). In the lower half of the staircase, the bipedales, save for a few exceptions, run at the same level around all four walls, which suggests that these were coeval. This coordination is less pronounced higher up, but one bipedalis course on the sixth turn of the stairs runs right around the staircase and *all the way to the dome*. That this occurs in spite of the separation between the rotunda and the transitional block points to the temporal proximity of the entire complex, suggesting that work on the latter only suffered a short-lived hiatus. Operations must have resumed on the upper half of the transitional block quite quickly, probably within a year or two.27
**The Connection between Transitional Block and Portico**
The excavations of the 1890s supervised by Luca Beltrami indicated that the foundations of the existing portico and those of the transitional block were made at the same time. This was later confirmed by A. M. Colini and Italo Gismondi in their study.28 They also reinforced Leclère’s observation of the continuity displayed by the entablature running longitudinally, noting that the blocks incorporating the capitals of the pilasters in the portico are embedded into the fabric of the transitional block too deeply to have been inserted in a separate epoch.
**7.10.** Pantheon, vestibule, and transitional block at the junction with the portico. Note the unusual grouping of pilasters, and in particular the conjunction of one that forms part of the transitional block with the three-pilaster-faced anta. All four of the antae in the portico have sides toward the great niches that are wider than the other faces. This creates a “leftover” rough portion on each of the capital blocks, since the capitals proper are maintained the same width throughout. The result may be seen on the far right. This and other capital blocks are embedded into the fabric of the transitional block. (Photo Mark Wilson Jones)
The portico and transitional block were, then, planned together and their joint foundations implemented together. The portico cannot have been added in a completely separate campaign. This does not mean, however, that both marched exactly in step. In fact, the raising of columnar structures was normally carried out after completion of any associated masonry structure (see __Chapter Six__), while there were also reasons that led to a greater delay than normal in this particular instance, as we shall see.
To summarize our examination so far, the fabric of the Pantheon reveals the following:
· All three main parts of the Pantheon, rotunda, transitional block, and portico, belong to a unitary initial conception.
· At the south end, the grottoni were not part of the original project; they were added after the commencement of the rotunda, but were built so quickly as to catch up and become united with it before the dome was far advanced.
· At the north end, the rotunda and transitional block are bonded at the bottom, but about halfway up the elevation the procedure changed. Work was next carried forward on the rotunda alone, with the rest of the transitional block following on soon afterwards.
· A portico was planned as part of the project from the outset, but it was the last major part of the edifice to be implemented.
The curious phasing of the grottoni can be explained as a response to concerns about the stability of the rotunda, while building the portico last made practical sense. But how can we explain the interruption of work on the transitional block?
Could there be an explanation of a structural kind, for example differential settlement between the rotunda and the transitional block? There are no signs of such. The lesions present in the staircases follow the general pattern of cracks affecting the rotunda as a whole, and they tend to peter out before reaching ground level. No cracking is visible in the sounding in the west stair where the rotunda and the transitional block intersect (__Plate XXIII__), nor are there any significant lesions in the side walls of the staircase (those that run north–south). What, then, explains the hiatus in building the transitional block? This is an issue, I contend, that cannot be resolved by focusing on construction alone. It is now time to turn to issues of design that might bear on the same puzzle.
**The Front of the Pantheon and the “Compromise Hypothesis”**
While the structure of the Pantheon solicits both wonder and alarm, its design has historically provoked equally varied responses. Alternating between praise and criticism, the paradoxical *fortuna* of the monument has been charted lucidly by Tilmann Buddensieg,31 and it surfaces in the Introduction to this book and in some chapters of the second half. Criticism of the interior was mainly directed at the attic level, and especially at the pilasters, for being too small and for not aligning with either the main order below or the coffering above. Such criticism can be understood as a misplaced faith in academicism, which tended to dominate from the time of the Renaissance, and in particular in the “law” of vertically aligning like with like. Instead, we may delight in a coherent scheme that spurns a predictable radial solution for the sake of a genuinely dynamic experience. The “push and pull” effect of the openings and exedras on the eight main axes is accentuated by compositional alignments avoided elsewhere.
Criticism of the exterior has concerned the difficult marriage of the rotunda, transitional block, and portico, as exemplified by the abrupt termination of the entablature where the circular and orthogonal geometries meet (__Figs. 7.9__ and __7.11__, and see __Fig. 1.9__). Along with various “solecisms” – offenses to the classical “grammar” of the orders – this lack of unity used to be seen as the legacy of separate phases. Giorgio Vasari related how many artists of his time, “Michelangelo among them, are of the opinion that the Rotunda was built by three architects, the first carrying it up to the cornice above the columns, the second doing from the cornice upwards.... [T]he third is believed to have done the beautiful portico.”34 As late as the 1930s, Giuseppe Cozzo, a specialist of Roman construction who should have known better, continued to maintain that the rotunda was built first (in the time of Agrippa), and the rest later (in the reign of the Severan emperors).
**7.11.** Junction of the rotunda, transitional block, and portico on the west side. (Photo Mark Wilson Jones)
But the enigma of the Pantheon is not to be solved in this way. Following the work of Georges Chedanne, Luca Beltrami, and Heinrich Dressel in the 1890s, scholars had to accept the implications of brickstamp studies (see __Chapter Three__). Leaving aside for a moment the precise dates implied, these showed that save for later repairs and alterations, the whole edifice was erected more or less in one go. What explains, then, the character of the design? The inept collision of rotunda, transitional block, and portico continued to elude a positive interpretation, representing something of an embarrassment to be sidestepped as deftly as possible by anyone writing about the Pantheon in the course of the twentieth century.37
Paul Davies, David Hemsoll, and I attempted an explanation of a quite different kind in 1987, arguing that the front of the Pantheon is not what was originally intended, but rather the outcome of compromises induced by circumstances beyond the architect’s control. The “compromise hypothesis” proposes that the portico was originally planned to have a roof at the level of the existing upper pediment, a roof supported on columns incorporating monolithic shafts of Egyptian granite 50 feet in length and 100 tons in weight (__Fig. 7.12__, __Plate XVII__).38 For some reason unknown – perhaps because a consignment of the intended shafts had sunk at sea en route between Alexandria and Rome – only after work had started on site was the decision made to employ 40-foot shafts instead.
**7.12.** Pantheon plans and elevations, intended and as executed. (Wilson Jones __2000__, Fig. 10.12)
Although it should perhaps not be admitted in the politely serious domain of scholarly discourse, our article of 1987 was conceived by chance in a London pub, sketching from memory after a day spent studying other things in the Warburg Institute. But what started as a bit of speculative amusement came to take on substance upon further research. Calculation showed that 50-foot shafts were perfectly commensurate with an order rising to the cornice running around the rotunda and the start of the upper pediment. Meanwhile, scrutiny brought into focus the solecisms that had worried so many past commentators, while revealing some previously unnoticed ones. In effect, there is quite a tally of features that are sufficiently unusual or perverse as to raise the question of whether they were really intended in the first instance. Here follows the list of points as they stood in the year 2000:41
i. The transitional block is faced with an accessory pediment that is partially cut off by the main roof (__Fig. 7.9__). No known earlier building has a comparable arrangement save for the Propylaea of the Athenian Acropolis.
ii. The entablature of the portico terminates abruptly at the rotunda, failing to align with the moldings of the latter (__Figs. 7.9__, __7.11__).
iii. The portico pediment is exceptionally tall in relation to the height of the order (__Fig. 7.12__, and see __Plate I__), to judge by the proportions of other Roman buildings of similar size, such as Augustus’s temple of Mars Ultor.
iv. The cornice brackets or modillions of the portico pediment are smaller and are spaced at more frequent intervals than those of the upper pediment, despite the fact that both pediments are the same size (__Fig. 7.9__).
v. The gaps between the columns, or intercolumnations, are unusually large relative to the column diameter when compared with most other monumental imperial colonnades (although widely spaced rhythms did also exist).
vi. The antae in the portico are oddly unbalanced. The sides facing the great niches are wider than the rest, an arrangement that gave rise to an unsatisfactory resolution of the capitals overhead (__Figs. 7.10__, right; __7.13__, partial plan).
vii. The central aisle of the portico becomes narrower where it enters the transitional block; here is a peculiar grouping of pilasters, as if the ones nearer to the entrance door were added after the others were already in place (__Figs. 7.10__, __7.14__).
viii. Where the portico meets the transitional block the entablature steps out by a small amount, one neither so small as to be insignificant nor so big as to constitute a positive feature (__Fig. 7.11__).
ix. The transitional block is only bonded with the rotunda in the lower levels of the building. In the upper parts, it merely runs up against the rotunda, as has just been confirmed in the preceding discussion (see __Fig. 1.9__).
All such solecisms and curiosities would simply not have existed in the hypothetical original project (__Plate XVII__). Scholarly responses to the compromise hypothesis have been favorable, though of course not everyone is convinced.42 Lothar Haselberger, the author of important publications on the building, has taken issue with such an approach, highlighting the danger of presuming that we can know what ancient architects intended, along with specific objections to some of the points just outlined.
**7.13.** The portico as built (top) and as intended (bottom). Transverse section through the portico, with the transitional block and rotunda seen in elevation, 1: 400, with part-section top right and part-plans in the middle. (Drawing Mark Wilson Jones)
**7.14.** The vestibule and door, seen on axis with view through to the rotunda beyond. Originally a bronze, suspended, vaulted ceiling would have abutted the reveal of the masonry barrel vault over the vestibule. (Photo Maxim Atayants)
Other responses to the compromise hypothesis take the form of qualified support, such as that published by Rabun Taylor in his book *Roman Builders*; he runs with the idea of a compromised Pantheon portico, but adapts it in favor of hypothetical columns that were taller still.45 Yet Taylor’s 55-foot shaft size cannot appeal to evidence of the same kind that favors 50 footers. One such piece of evidence is a letter on papyrus from an Egyptian contractor dating to Hadrian’s reign. It calls urgently for fodder for animals involved in transporting overland a single 50-foot granite shaft from the quarries at Mons Claudianus to the Nile, and thence to Alexandria, from where it would in all certainty have been bound for Rome.
It is possible to marshal further fresh evidence in favor of the compromise hypothesis. My inspection of the staircases has established once and for all that the rotunda and transitional block are united at low level, and so part of a unified project. The compromise hypothesis offers an explanation for the interruption of work on the transitional block; furthermore, it fits neatly with a hiatus of relatively short duration during which the design was argued over and revised.
It is also worth scrutinizing once more the relationship between the transitional block and the portico. While Colini and Gismondi’s observations, discussed earlier, concerned the *actual portico*, certain constructional details fit a *hypothetical taller one*. The original sequence of nine numbered points embraced by the compromise hypothesis can now be extended with reference to a sectional elevation of the transitional block in both its actual and intended form (__Fig. 7.13__).51
x. The 10-foot-wide concrete strip foundations under the portico are unusually wide for the columns they carry, and would have been adequate for larger columns (__Fig. 7.13__, __A__).52
xi. At high level, the front face of the transitional block presents some unsightly projecting blocks (__Fig. 7.13__, __B__ and __C__). These facilitated construction in some way or other, though exactly how rather baffled Colini and Gismondi. With a hypothetical taller portico, all such blocks would have been hidden from view between the suspended ceiling and the roof.
xii. The profile of the transitional block sets back where the cornice demarcates the high-level register, just as occurs on the rotunda (__Fig. 7.13__, __D__). This set-back follows the classical principle of recession, in tune with structural logic (walls high up in a building need not be as thick as those below). On the front of the transitional block, moreover, the set-back tracks the *upper* pediment, which was therefore an integral feature of the composition. This arrangement makes most sense if a roof had been planned to arrive here – that is, that of the hypothetical taller portico.53 (Contrariwise, there is no such set-back at the level of the existing portico roof.)
xiii. The ancient bronze trusses that once spanned the portico displayed oddities of configuration, as is clear from surveys made before 1625, including one by Borromini at the time this singular assembly was taken down (see __Fig. 10.1__).54 In particular, the tie beams over the central aisle did not reach far enough to be seated over the colonnades, and were instead supported by raking struts.
At the same time, the original design would have brought the following advantages: the total height, measured to the peak of the pediment/roof, would have been 100 feet (more or less), an eminently satisfying round dimension that echoed other key dimensions (e.g., the 150-foot diameter of the ring of interior columns, the 75-foot datum for the entablature and middle cornice of the rotunda, the 60-foot height of the columns, and the 50-foot height of their shafts). The relatively steep pitch of the pediment is now explained; this particular rake was a necessary ingredient for sweetly resolving these various conditions and intentions.
These last points, especially xi and xii, suggest that when work resumed on the transitional block, there was possibly still the intention to achieve the taller portico. But other features suit the actual portico, including the inclined line of bipedales just above the roof that Colini and Gismondi observed, and the embedded capital blocks already mentioned. As regards the latter, it is noticeable that these are not neatly encased in the masonry as would befit work made all of a piece; there is a slight gap to the sides that would be consistent with their having been lowered and levered into a seating fashioned at a stage later than the initial building of the masonry in this area.58 Following the nonappearance of the desired 50-foot shafts, it seems that there was an uncertain phase when both options – to use 50 or 40 footers – were kept open pending a definitive decision.
The compromise hypothesis, then, can potentially account for most, if not all, of the design puzzles that the Pantheon presents on its entrance side. It also concurs with the relative phasing of construction. But can we be more precise and pin down the specific dates involved?
**Brickstamps**
The practice of imprinting bricks and other Roman building products of fired clay with the identification marks of individual production units (*officinae*) and their parent brickyards (*figlinae*) happens to have been particularly prevalent in the years spanning Trajan’s and Hadrian’s rule. Usefully, for study purposes these stamps can be dated either roughly or in some cases to a particular year.60 This assigns any building in which they are found a *terminus post quem*.
On the basis of the prewar studies of Herbert Bloch and Julien Guey, no less than 115 of the 120 stamps observed in situ in the Pantheon belong to the late Trajanic or early Hadrianic period.62 It is significant that similar stamps are dispersed in different parts of the building.
Establishing exactly when works on site began is controversial. As Lise Hetland shows in __Chapter Three__, the brickstamps that can be dated precisely, or relatively precisely, are mainly late Trajanic. Bloch argued that the Trajanic shipments were stockpiled, not to be taken up until Hadrian instigated the project after coming to power in the middle of 117. Exposing a certain circularity in Bloch’s position, Hetland argues more straightforwardly that the project was Trajan’s, in line with Wolf-Dieter Heilmeyer’s ideas of the 1970s based on stylistic comparisons.64 And is it not more logical, asks Hetland, that Trajan commissioned a replacement Pantheon sooner rather than later after the fire of 110 that ruined its predecessor? In short, a start date between 112 and 115 is more likely than one around 118.
The key consideration for the end date is that in AD 123, a higher than usual proportion of bipedales were produced bearing brickstamps, often with the names of the then-reigning consuls Apronianus and Paetinus. The absence of such stamps in the superstructure of the Pantheon shows that it must have been completed by this time or soon after, in other words by around 124.
It is revelatory to focus on a single stamp that does not fit the general pattern. This is the sole example from the whole building that is unambiguously Hadrianic, one recorded by Rodolfo Lanciani and datable to AD 123. Bloch was struck by the anomalous character of this find, in effect adding another enigma to the building that Lanciani called the “Sphinx of the Campus Martius.” Bloch knew that the rigors of his discipline were unassailable; no structure can be earlier than the latest stamp present (provided it is not connected with out-of-sequence working or repairs). Having been found close to ground level, did not this one stamp postpone the start of construction to later than 123? Bloch resolved this dilemma by supposing that Lanciani had simply been mistaken.65
Lanciani’s record, however, sounds as if it were accurate: “read by myself on the 25th of April on a piece [*scaglia*] of brick extracted from a sounding made by the north east corner of the brick front, behind the marble pilaster.”66 Rather than doubt his word, there is a way of reconciling it with the evidence of all the other brickstamps. The key is the find-spot, just behind one of the marble pilasters, that is to say exactly where the columnar system of the portico meets the transitional block. In all likelihood, Lanciani’s stamp belongs to the late operations in this very area, when the portico was finally erected against the transitional block, and so it poses no obstacle to an earlier start for the building as a whole.
**The Progress of Works on Site**
There are thus two main possibilities for the duration of the project from conception to completion: either a period of seven or so years (ca. 118/119 to ca. 125/126), if we give credence to Bloch, or one roughly five years longer (ca. 113/114 to ca. 125/126), if we give credence to Heilmeyer and Hetland, which on balance I think we must. It may also be noted that the papyrus cited earlier that concerns a 50-foot shaft in transit across the eastern Egyptian desert dates to the third year of Hadrian’s reign, specifically the winter months of 119/120.69 If the shaft were indeed intended for the Pantheon, the timing seems too early for a Hadrianic commission; on the other hand, it fits neatly with a start under Trajan.
In the normal course of events, as DeLaine demonstrates in __Chapter Six__, a total construction period of six or seven years would be feasible for the Pantheon. But from what we have seen, events at the site were far from normal. Delays were generated by the improvised erection of the grottoni. Delays are also implicit in the interruption of the transitional block caused by the nonappearance of the intended shafts for the portico. (It remains difficult to say whether these delays ran separately or concurrently.)
The combined evidence of the sources, brickstamps, worksite logistics, and the present examination of the fabric thus allows the sequence of operations and chronology of the project to be reconstructed as shown in __Plate XIII__, which is to say along the following lines:
| Date | Works on the Pantheon | Imperial events |
| --- | --- | --- |
| 110 | Previous Pantheon burns | Trajan reigns |
| 112–114 | Conception of the new Pantheon; scheme design | |
| 114–116 | Site preparation and foundations | |
| 116–119 | Progress on brick and concrete superstructure | (117) Hadrian’s accession |
| 118–121 | Rotunda suffers cracking; progress interrupted; improvisation of the grottoni; nonappearance of 50-foot shafts for the portico | (118) Hadrian returns to Rome |
| 120–123 | Grottoni completed; work begins on the dome; work on the transitional block interrupted | (121) Hadrian leaves Rome |
| 122–124 | Completion of the dome; completion of the transitional block; decision to use 40-foot shafts for the portico | |
| 124–125 | Completion of the portico; installation of statuary and fittings; finishing and inspections | (125) Hadrian returns |
| 125–127 | Dedication of the Pantheon | |
| 128 | | Hadrian leaves Rome |
**Apollodorus and Hadrian**
Inception under Trajan as opposed to Hadrian makes it more likely that the Pantheon was designed by the architect-engineer Apollodorus of Damascus, who was Trajan’s preferred designer but apparently at odds with Hadrian. Certainly Apollodorus is the more credible author of the Pantheon than Hadrian himself, who has also been proposed.__ 71__ As we have seen in the Introduction, ancient sources credit Apollodorus with Trajan’s Forum and Baths, both quite exceptional projects.
It is well, furthermore, to recall discussion about the centering used to build the dome. This would have been a considerable work of engineering in its own right, and Apollodorus was evidently a master architect-engineer with extensive expertise in the erection of giant timber structures, as attested by his authorship of the *Poliorcetica*. Ancient sources also credit him with a pertinent technological feat, a huge wooden bridge over the Danube, which apparently approached 170 Roman feet or 55 meters in span (though probably less in reality). This sensational structure, which is represented in compact form on Trajan’s Column, was destroyed on Hadrian’s orders out of fear that it would provide a conduit for barbarian invasion. (Some of its stone and concrete piers still survive.)__ 75__ The bridge was the subject of another treatise by Apollodorus, a work which, though since lost, was referred to by the sixth-century historian Procopius in such a way as to suggest that it was still well known in his own day.
It is curious, too, that the persons of Apollodorus and Hadrian come into conflict, according to the testimony of the third-century senator and historian Dio Cassius.__ 77__ Apparently, the emperor first banished and later put to death the architect on account of bad feeling that began long before, when Trajan was consulting Apollodorus, who tactlessly put down one of Hadrian’s interruptions with the remark: “be off and draw your pumpkins, you don’t understand any of these matters.” Later, after becoming emperor, Hadrian sent his own design of the Temple of Venus and Rome to Apollodorus, only to receive intolerable criticisms. The divine statues had been made too tall for the height of the cella, so much so that “if the goddesses wish to get up and go out, they will be unable to do so.”
The disparaging reference to pumpkins, or gourds, was most likely an allusion to the scalloped vaults that Hadrian and his architects used to such effect at his villa at Tivoli.__ 78__ It is tempting to wonder if the story about the Temple of Venus and Rome was a corruption of a text in which the Pantheon was the real focus of dispute.
Presumably, Apollodorus held out for the taller portico and its majestic 50-foot shafts, while the emperor sought to prevent further embarrassing delays by resorting to compromise. From his knowledge of Athens, Hadrian may have been aware that the Propylaea of the Acropolis had two separate pedimented roofs, and that when seen from a distance, one might look as if it were superimposed on the other. Was it he who imposed the double pediment solution, while commandeering a batch of 40-foot columns from another project under way in the capital?
Leaving aside such conjecture, the building site of the Pantheon was eventful, to say the least. Improvisation at the south end suggests that the dome was thought to be in jeopardy. Then there was the dilemma caused by the nonappearance of the intended column shafts at the north end. The architect, whoever he was, no doubt had to shoulder the consequences and perhaps the blame for them, too, even if unfairly so. Remembering all the while that design represents a team effort, the architect(s) of the Pantheon can stake a claim to one of the most sublime architectural experiences of all time. As the product of a rare genius and extraordinary technical audacity, it must have given its author immense satisfaction, yet by this interpretation, the building of it was harrowing in its uncertainty and immensely frustrating. The awesome magnificence of the interior should have been matched on the exterior, but instead the designer saw his vision spoiled. Much of his efforts must have been directed at artfully minimizing the negative impact of circumstances that could not be avoided. But compromise is part and parcel of an architect’s business. Building the Pantheon was a dream that turned nightmarish, though in the end it sends all who enter into reveries.
** 1** The diameter of the Temple of Diana, part of a thermal complex, is fractionally greater than 29.5 meters, or 100 Roman feet. The so-called Temple of Apollo, also at Baiae, apparently measures about 35 meters (ca. 120 ft) in diameter, but too little is known about this structure to be sure that it once supported a dome. In Rome, the caldarium of the Baths of Caracalla, originally domed, spanned about 35 meters too.
** 2** Selected studies of imperial construction include G. Lugli,
** 3** Lynne Lancaster, “The Lightweight Volcanic Scoria in the Concrete Vaults of Imperial Rome: Some Evidence for the Trade and Economy of Building Materials,”
** 4** Alberto Terenzio, “La Restauration du Panthéon de Rome,”
** 5** Rowland J. Mainstone,
** 6** Giorgio Croci,
** 7** Differential settlement affected Agrippa’s Pantheon to a greater extent, to judge by sloping levels observable in the foundations that survive under the portico of the existing building; see
** 8** That the cracking occurred during or soon after construction is suggested by the use of bricks of similar date to those in the rest of the Pantheon for repairing and filling the cracks; see Licht
** 9** On Roman concrete, see Adam 1984 (
** 10** S. Huerta,
** 11** William L. MacDonald,
** 12** Eugène-Emmanuel Viollet-le-Duc, s.v. “Voute,”
** 13** I thank Dina D’Ayala for generously lending her engineering expertise to vet initial proposals.
** 14** The structural behavior is composite in nature, meaning that part of the load was resisted by the lower part of the dome (once the concrete had hardened sufficiently).
** 15** For opinion favoring some kind of central timber tower, see Jürgen Rasch, “Zur Konstruktion spätantiker Kuppeln vom 3 bis 6 Jahrhundert,”
** 16** Adriano La Regina, ed.,
** 17** Relieving arches could provide support for higher levels to be initiated without the fabric of the walls enclosed by the arches, this following on later, as convenient. For further discussion, see Heene
** 18** Terenzio
** 19** Licht
** 20** Cozzo
** 21** Rita Volpe, “Un antico giornale di cantiere delle terme di Traiano,”
** 22** Today the crack may be inspected on the second level of the grottoni. Its presence at floor level, though now covered over, is attested by photographs, including one in the Archivio Fotografico, Soprintendenza per i Beni Architettonici e per il Paesaggio di Roma, neg. 2967.
** 23** For a reasoned summary of preceding opinion, see Licht
** 24** For Leclère’s survey, see
** 25** I am grateful to many for their kind help with this project: to Giovanni Belardi, the director responsible for the Pantheon of the Soprintendenza per i Beni Architettonici e per il Paesaggio di Roma, for permission; to Cinzia Conti and her students Roberta Zaccara, Tomaso De Pasquale, and Mariangela Perrota for surveying; to Robert Grover for drawing up the results; to Cinzia Conti and Giangiacomo Martines for precious observations in loco.
** 26** Here, I find myself conscious of a debt to Giovanni Belardi for authorization to study the stairs, yet we disagree over interpretation. He believes the rotunda to precede the transitional block, but to me, the
** 27** Such is the similarity in technique between the upper and lower halves of the staircase in general that their construction may have been supervised by the same people, as remarked to me by Cinzia Conti.
** 28** Antonio Maria Colini and Italo Gismondi, “Contributo allo studio del Pantheon: La parte frontale dell’avancorpo e la data del portico,”
** 29** Colini and Gismondi
** 30** Colini and Gismondi
** 31** Tilmann Buddensieg, “Criticism and Praise of the Pantheon in the Middle Ages and the Renaissance,”
** 32** Wilson Jones
** 33** Heinz Kähler, “Das Pantheon in Rom,”
** 34** Adapted from Giorgio Vasari,
** 35** Cozzo
** 36** On these inscriptions and their interpretation, see Adam Ziolkowski, “Prolegomena to Any Future Methaphysics [
** 37** It has been pointed out, for example, that even if the junction of rotunda and portico might be judged unsatisfactory, this could not be seen in antiquity from the forum-like “forecourt,” not forgetting that the ground level was at least two meters lower than at present. See MacDonald
** 38** The key proportional rule for the Corinthian order set the height of the shaft as 5/6 that of the complete column (including base and capital), and so 50-foot shafts imply columns 60 feet tall; both dimensions harmonize well with 75- and 150-foot measures elsewhere in the whole project. For the design of the Corinthian column, see Wilson Jones, “Designing the Roman Corinthian Order,”
** 39** Davies, Hemsoll, and Wilson Jones
** 40** Wilson Jones
** 41** Wilson Jones
** 42** Theodore Peña, “P. Giss. 69: Evidence for the Supplying of Stone Transport Operations in Roman Egypt and the Production of Fifty-Foot Monolithic Column Shafts,”
** 43** In November 2006, Haselberger presented objections at the conference at the Karman Center in Bern that may be summarized as follows:
· The Propylaea of the Athenian Acropolis offer a precedent for the upper pediment (as Tiberi observed), which thus could have been intended from the outset (cf. my point i);
· Other buildings exist with similarly tall/heavy pediments (iii);
· The spacing of the modillions varies considerably, and so on this basis, it is hard to sustain arguments about intentions (iv);
· Other buildings exist with similarly wide intercolumnations (v);
· A capital inside the rotunda is not axially aligned with its pilaster, and so similar misalignments in the portico need not reflect a change of project (vi).
I concede that points iii, iv, and v are relatively subjective, and that they cannot furnish conclusive arguments either way. Point i calls into question a major plank of the compromise hypothesis, yet it does not necessarily negate it, since the idea of a second pediment, perhaps inspired by the Athenian Propylaea, may only have arisen *after* the Pantheon project ran into problems. As for the misalignment of the capitals (vi), there is a difference between an isolated case in the interior and the systematic occurrence of a more severe misalignment on all four antae in the portico. In short, none of these criticisms is fatal, while the other points (ii, vii, viii, ix) remain unchallenged.
** 44** Lothar Haselberger, “The Pantheon: Nagging Questions to No End,” in Grasshoff, Heinzelmann, and Wäfler
** 45** Taylor
** 46** Peña
** 47** Wilson Jones
** 48** Stefania Fogagnolo, “Scoperta di frammenti di colonne colossali dal foro della pace,” in
** 49** For a list of 50 footers, see Peña
** 50** On standardization in the service of the Roman “building machine,” see Wilson Jones
** 51** This drawing is based on those of Leclère and Colini, supplemented by my measurements of the plan, and aspects of the main order that I was able to check from openings in the staircase. The trusses were reconstructed on the basis of Borromini’s survey and sixteenth-century drawings. Further features were observed and photographed from nearby scaffolding in November 2010.
** 52** The 10-foot width of the foundations relates to the 5-foot column diameter as 2:1. By contrast, Vitruvius recommends a ratio of around 3:2 (1.5:1), a value more or less consistent with monumental imperial practice. The substructures under monumental colonnades typically project approximately in line with the plinths of the columns, implying a thickness about 1.4 times the column diameter. A ratio in the range of 1.4–1.5 recurs at the temples of Castor, of Vespasian, and of Antoninus and Faustina, as well as on the foundation blocks of travertine and peperino supporting colonnades in the Forum of Trajan. For a generic illustration of a concrete foundation only slightly wider than the plinths of the columns it supports, see Giuliani
** 53** The set-back could also have been intended to seat elements of the roof construction, as Gene Waddell has drawn to my attention.
** 54** For this drawing of Borromini, see Heinrich Thelen,
** 55** Rice
** 56** Rice
** 57** The stone blocks projecting from the upper part of the transitional block may have facilitated constructional operations, but there is also the possibility that they were intended to provide some kind of connection with the trusses of the abandoned project (
** 58** I have no particular opinion on the three rough blocks immediately above the architrave that runs on top of the capitals, though they may have participated in anchoring the bronze assembly associated with the ceiling of the side aisles.
** 59** As regards the original project, it is also impossible to know how the transitional block should have looked. It could have terminated more or less as it does today, or it could have been capped by a continuation of the (higher) portico roof; see Davies, Hemsoll, and Wilson Jones
** 60** On brickstamps and their interpretation, see Heinrich Dressel,
** 61** A typical lag of a few months twixt production and use would be understandable, in part because stamps were imprinted in wet clay, which had to dry before firing, in part for any flaws that might develop to make themselves evident. At times, bricks may have been rushed to market, or they may have been set aside for later use. Note divergent views on this and the implications for Trajan’s Markets, where Domitianic brickstamps may indicate a Domitianic inception (E. Bianchi, “I bolli laterizi dei Mercati Traiani,”
** 62** Guey
** 63** Guey
** 64** Wolf-Dieter Heilmeyer, “Apollodorus von Damaskus – der Architekt des Pantheon,”
** 65** Bloch
** 66** “... da me letto il giorno 25 aprile su d’una scaglia di mattone, cavata dal tasto fatto presso lo spigolo N-E. della fronte laterizia, dietro il pilastro marmoreo del portico” (Rodolfo Lanciani,
** 67** Bloch (
** 68** Bloch
** 69** Peña
** 70** A date of 119/120 also seems too early for the Temple of Trajan, presuming its design not to have begun before his death in the summer of 117.
** 71** For recent affirmation of Hadrian acting in effect as an architect, see E. Salza Prina Ricotti,
** 72** Scriptores Historiae Augustae, S.H.A.
** 73** See Wolf-Dieter Heilmeyer, “Korinthische Normalkapitelle: Studien zur Geschichte der römischen Architekturdekoration,”
** 74** Wilson Jones
** 75** Piers from the bridge are to be found at Turnu-Severin in Romania. See A. Barcacila, “Les piliers du pont Trajan sur la rive gauche du Danube et la scène CI de Colonne Trajan,”
** 76** By referring his readers to Apollodorus’s treatise, Procopius kept brief his own mention (
** 77** Dio Cassius, 69.4. For the passage in full, see MacDonald
** 78** F. E. Brown, “Hadrianic Architecture,”
** 79** Wilson Jones
| true | true | true |
Building on Adversity: The Pantheon and Problems with Its Construction - The Pantheon: From Antiquity to the Present - by Tod A. Marder
|
2024-10-12 00:00:00
|
2012-06-04 00:00:00
|
/share.png
| null |
erenow.org
|
erenow.org
| null | null |
2,498,981 |
http://tomasztunguz.com/2011/04/29/speech-is-power-when-answering-emails/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,028,122 |
https://www.troyhunt.com/inside-the-massive-naz-api-credential-stuffing-list/
|
Inside the Massive Naz.API Credential Stuffing List
|
Troy Hunt
|
It feels like not a week goes by without someone sending me yet another credential stuffing list. It's usually something to the effect of "hey, have you seen the Spotify breach", to which I politely reply with a link to my old No, Spotify Wasn't Hacked blog post (it's just the output of a small set of credentials successfully tested against their service), and we all move on. Occasionally though, the corpus of data is of much greater significance, most notably the Collection #1 incident of early 2019. But even then, the rapid appearance of Collections #2 through #5 (and more) quickly became, as I phrased it in that blog post, "a race to the bottom" I did not want to take further part in.
Until the Naz.API list appeared. Here's the back story: this week I was contacted by a well-known tech company that had received a bug bounty submission based on a credential stuffing list posted to a popular hacking forum:
Whilst this post dates back almost 4 months, it hadn't come across my radar until now and inevitably, also hadn't been sent to the aforementioned tech company. They took it seriously enough to take appropriate action against their (very sizeable) user base which gave me enough cause to investigate it further than your average cred stuffing list. Here's what I found:
- 319 files totalling 104GB
- 70,840,771 unique email addresses
- 427,308 individual HIBP subscribers impacted
- 65.03% of addresses already in HIBP (based on a 1k random sample set)
That last number was the real kicker; when a third of the email addresses have never been seen before, that's statistically significant. This isn't just the usual collection of repurposed lists wrapped up with a brand-new bow on it and passed off as the next big thing; it's a significant volume of new data. When you look at the above forum post the data accompanied, the reason why becomes clear: it's from "stealer logs" or in other words, malware that has grabbed credentials from compromised machines. Apparently, this was sourced from the now defunct illicit.services website which (in)famously provided search results for other people's data along these lines:
I was aware of this service because, well, just look at the first example query 🤦♂️
So, what does a stealer log look like? Website, username and password:
That's just the first 20 rows out of 5 million in that particular file, but it gives you a good sense of the data. Is it legit? Whilst I won't test a username and password pair on a service (that's way too far into the grey for my comfort), I regularly use enumeration vectors on websites to validate whether an account actually exists or not. For example, take that last entry for racedepartment.com, head to the password reset feature and mash the keyboard to generate a (quasi) random alias @hotmail.com:
And now, with the actual Hotmail address from that last line:
The email address exists.
The VideoScribe service on line 9:
Exists.
And even the service on the very first line:
From a verification perspective, this gives me a high degree of confidence in the legitimacy of the data. The question of how valid the accompanying passwords remain aside, time and time again the email addresses in the stealer logs checked out on the services they appeared alongside.
Another technique I regularly use for validation is to reach out to impacted HIBP subscribers and simply ask them: "are you willing to help verify the legitimacy of a breach and if so, can you confirm if your data looks accurate?" I usually get pretty prompt responses:
Yes, it does. This is one of the old passwords I used for some online services.
When I asked them to date when they might have last used that password, they believed it was either 2020 or 2021.
And another whose details appears alongside a Webex URL:
Yes, it does. but that was very old password and i used it for webex cuz i didnt care and didnt use good pass because of the fear of leaking
And another:
Yes these are passwords I have used in the past.
Which got me wondering: is my own data in there? Yep, turns out it is and with a *very* old password I'd genuinely used pre-2011 when I rolled over to 1Password for all my things. So that sucks, but it does help me put the incident in more context and draw an important conclusion: this corpus of data isn't *just* stealer logs, it also contains your classic credential stuffing username and password pairs. In fact, the largest file in the collection is just that: 312 million rows of email addresses and passwords.
Speaking of passwords, given the significance of this data set we've made sure to roll every single one of them into Pwned Passwords. Stefán has been working tirelessly the last couple of days to trawl through this massive corpus and get all the data in so that anyone hitting the k-anonymity API is already benefiting from those new passwords. And there's *a lot* of them: it's a rounding error off 100 million *unique* passwords that appeared 1.3 *billion* times across the corpus of data 😲 Now, what does that tell you about the general public's password practices? To be fair, there are instances of duplicated rows, but there's also a massive prevalence of people using the same password across multiple different services and of completely different people using the same password (there is a finite set of dog names and years of birth out there...) And now more than ever, the impact of this service is absolutely *huge!*
When we weren't looking, @haveibeenpwned's Pwned Passwords rocketed past 7 *billion* requests in a month 😲 pic.twitter.com/hVDxWp3oQG
— Troy Hunt (@troyhunt) January 16, 2024
Pwned Passwords remains totally free and completely open source for both code and data so do please make use of it to the fullest extent possible. This is such an easy thing to implement, and it has a *profound* impact on credential stuffing attacks, so if you're running any sort of online auth service and you're worried about the impact of Naz.API, this now completely kills any attack using that data. Password reuse remains rampant, so attacks of this type prosper (23andMe's recent incident comes immediately to mind); definitely get out in front of this one as early as you can.
So that's the story with the Naz.API data. All the email addresses are now in HIBP and searchable either individually or via domain and all those passwords are in Pwned Passwords. There are inevitably going to be queries along the lines of "can you show me the actual password" or "which website did my record appear against" and as always, this just isn't information we store or return in queries. That said, if you're following the age-old guidance of using a password manager, creating strong and unique ones and turning 2FA on for all your things, this incident should be a non-event. If you're not and you find yourself in this data, maybe this is the prompt you finally needed to go ahead and do those things right now 🙂
**Edit:** A few clarifications based on comments:
- The blog post refers to both stealer logs and classic credential stuffing lists. Some of this data does not come from malware and has been around for a significant period of time. My own email address, for example, accompanied a password not used for well over a decade and did not accompany a website indicating it was sourced from malware.
- If you're in this corpus of data and are not sure which password was compromised, 1Password can automatically (and anonymously) scan all your passwords against Pwned Passwords which includes all passwords from this corpus of data.
- It's already in the last para of the blog post but given how many comments have asked the question: no, we don't store any data beyond the email addresses in the breach. This means we don't store any additional data from the breach such as if a specific website was listed next to a given address.
| true | true | true |
It feels like not a week goes by without someone sending me yet another credential stuffing list. It's usually something to the effect of "hey, have you seen the Spotify breach", to which I politely reply with a link to my old No, Spotify Wasn't Hacked blog post (it's just
|
2024-10-12 00:00:00
|
2024-01-17 00:00:00
|
article
|
troyhunt.com
|
Troy Hunt
| null | null |
|
19,767,188 |
https://www.youtube.com/watch?v=ObkdErzqXy4
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,398,257 |
https://www.oculus.com/en-us/blog/welcome-to-the-virtual-age/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
38,649,226 |
https://arstechnica.com/security/2023/12/unifi-devices-broadcasted-private-video-to-other-users-accounts/
|
UniFi devices broadcasted private video to other users’ accounts
|
Dan Goodin
|
Users of UniFi, the popular line of wireless devices from manufacturer Ubiquiti, are reporting receiving private camera feeds from, and control over, devices belonging to other users, posts published to social media site Reddit over the past 24 hours show.
“Recently, my wife received a notification from UniFi Protect, which included an image from a security camera,” one Reddit user reported. “However, here's the twist—this camera doesn't belong to us.”
## Stoking concern and anxiety
The post included two images. The first showed a notification pushed to the person’s phone reporting that their UDM Pro, a network controller and network gateway used by tech-enthusiast consumers, had detected someone moving in the backyard. A still shot of video recorded by a connected surveillance camera showed a three-story house surrounded by trees. The second image showed the dashboard belonging to the Reddit user. The user’s connected device was a UDM SE, and the video it captured showed a completely different house.
Less than an hour later, a different Reddit user posting to the same thread replied: “So it's VERY interesting you posted this, I was just about to post that when I navigated to unifi.ui.com this morning, I was logged into someone else's account completely! It had my email on the top right, but someone else's UDM Pro! I could navigate the device, view, and change settings! Terrifying!!”
Two other people took to the same thread to report similar behavior happening to them.
Other Reddit threads posted in the past day reporting UniFi users connecting to private devices or feeds belonging to others are here and here. The first one reported that the Reddit poster gained full access to someone else’s system. The post included two screenshots showing what the poster said was the captured video of an unrecognized business. The other poster reported logging into their Ubiquiti dashboard to find system controls for someone else. “I ended up logging out, clearing cookies, etc seems fine now for me…” the poster wrote.
Yet another person reported the same problem in a post published to Ubiquiti’s community support forum on Thursday, as this Ars story was being reported. The person reported logging into the UniFi console as is their routine each day.
| true | true | true |
“I was presented with 88 consoles from another account,” one user reports.
|
2024-10-12 00:00:00
|
2023-12-14 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
7,982,351 |
http://online.wsj.com/articles/forget-dinner-its-always-snack-time-1404240759
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
38,826,465 |
https://mateusfreira.github.io/@mateusfreira-secrets-for-becoming-a-better-developer-in-2024/
|
Secrets for becoming a better developer in 2024
|
Mateus Freira
|
# Secrets for becoming a better developer in 2024
Welcome to my yearly post about how to become a better developer. Here I share what I have learned in my last 16 years working as a Software Engineer and what I do to make myself more productive. Between my full-time job as a Principal Architect in a fast-growing American startup, running my own SaaS business serving over 6k users monthly, teaching in-person Data Structures at the FasF Faculty (in Brazil), and working on my open source project Nun-db in my free time, I have had to learn how to do things efficiently, and that is what I share in this post every year.
This year’s post is divided into three major categories: Automate, Coding Like a Pro, and Productivity Tricks.
# Automate
## Learn and Use Makefiles: they are awesome
Makefiles are a great way to automate your workflow. Every day, you may do repetitive tasks, and you do not even notice that they are eating your time. It can vary from pushing updates about your tasks to running all tasks of your project or as simple as creating a file to write your next blog post. For a while, I used `.sh`
files to do this automation since they are nearly everywhere, and for most simple tasks, `sh`
is good enough. But you know what is even better? Make it “GNU make utility to maintain groups of programs,” initially made to help C programmers automate compliers tasks that are still very handy nowadays.
### Why are Makefiles awesome?
Because they integrate seamlessly with shell commands, and it is simple to declare that one target depends on another. They are extensible and easy to compose. Allow me to show you a few concrete examples.
### Simple example
In this very blog, every time I have a new idea I want to start writing immediately, without having to worry about the file name, the folder, or even what boilerplate sections I have to add to each post, so I automated that using a simple Makefile as follows.
```
new-post:
	@echo "Creating new post..."
	@read -p "Enter post title: " title; \
	file_name=`echo $$title | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd '[[:alnum:]-]'`; \
	sed "s/#title/$$title/" _posts/draft_base.md > _posts/draft-$${file_name}.md; \
	echo "Opening the file $$file_name.md in nvim..."; \
	nvim _posts/draft-$${file_name}.md

.PHONY: new-post
```
That will take care of creating the file and opening a new vim editor already editing the file, ready for me to start typing my new ideas.
### More complicated example
In Nun-db, I have 20+ common commands I use very frequently, and for those, Makefiles are just amazing; here is an example.

Most of the time, I want to test Nun-db in a cluster with at least three processes, so I can make sure it will perform as I expect in production-like environments. For that, I have to run the following commands.
- Build the latest Nun-db code: `cargo build`
- Start a primary server process in the background:
  `NUN_DBS_DIR=/tmp/dbs RUST_BACKTRACE=1 target/debug/nun-db --user $user -p $user start --http-address "$primaryHttpAddress" --tcp-address "$primaryTcpAddress" --ws-address "127.0.0.1:3058" --replicate-address "$replicaSetAddrs" >> primary.log &`
- Start two other server processes to act as secondary replicas:
  `NUN_DBS_DIR=/tmp/dbs1 RUST_BACKTRACE=1 target/debug/nun-db --user $user -p $user start --http-address "$secoundary1HttpAddress" --tcp-address "127.0.0.1:3016" --ws-address "127.0.0.1:3057" --replicate-address "$replicaSetAddrs" >> secoundary.log &`
- After I am done, kill all processes using the pid files created at startup:
  `cat .primary.pid | xargs -I '{}' kill -9 {} && cat .secoundary.pid | xargs -I '{}' kill -9 {}`
Imagine having to remember all these commands or having to go to my notes every time I needed to do some of those operations. Totally not adequate.
Putting all that in a Makefile, all I need is to call two commands: `make start-all`, and `make kill` once I am done. You can see how simple these targets are in my GitHub repo here.
I often combine the command line `make clean kill-all start-all create-sample create-test,`
which will clean up the data files, kill all instances running locally, start all two replicas, and create the sample and test databases for me, all with a single command.
Minor optimizations may not sound like a good idea at first, but they add up over time, and you will get faster and faster at the same tasks. This will set you at a new level of productivity and make you stand out among others.
# Coding like a Pro
## Latency is not 0! Learn to deal with it.
Latency is not 0 ms! When working locally, you may think and act like it is, and when your system reaches production, you will notice that your code does not behave the way you expect. In many cases, that is caused by the latency difference between local and production environments.
When working locally with everything (database, application, and cache) running on your machine, the time for information to travel from your app code to the database is virtually zero, so adding 100 individual calls to the database may not sound like a bad idea. In an environment with real latency, this can kill your system altogether. Check out the fake example in the next code block.
```
const data = [/* ... */]; // 100 items
const productData = [];
for (const item of data) {
  // One database round trip per item: 100 sequential queries
  productData.push(await db.query(item.product_id));
}
// Do stuff
```
Now, let's suppose there is `30ms` of latency between the app server and the database server. When this code runs locally, it finishes in about 2 ms and may not seem like a problem. In production, however, it takes roughly `100 * 30ms = 3s`, and depending on the interaction, 3 seconds is a deal-breaker for most users.
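To see the difference batching makes, here is a small self-contained sketch. The `queryOne`/`queryMany` functions are fake stand-ins for a DB driver, and the batched API is hypothetical; your real driver may offer `IN` queries or a dataloader instead.

```javascript
// Simulate a database call with a fixed network round-trip latency.
const LATENCY_MS = 30;
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function queryOne(id) {
  await sleep(LATENCY_MS); // one round trip per item
  return { id };
}

async function queryMany(ids) {
  await sleep(LATENCY_MS); // one round trip for the whole batch
  return ids.map((id) => ({ id }));
}

async function main() {
  const ids = Array.from({ length: 20 }, (_, i) => i); // 20 items for brevity

  let start = Date.now();
  const oneByOne = [];
  for (const id of ids) oneByOne.push(await queryOne(id)); // 20 round trips
  const sequentialMs = Date.now() - start;

  start = Date.now();
  const batched = await queryMany(ids); // 1 round trip
  const batchedMs = Date.now() - start;

  // Batching should be dramatically faster once latency is non-zero.
  console.log(`sequential ~${sequentialMs}ms, batched ~${batchedMs}ms`);
}

main();
```

The point is not the exact numbers but the shape: sequential cost grows with the item count, while the batched call pays the latency once.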
That is why you have to learn how to deal with latency from the ground up, from devs working locally to production environments. I suggest working locally with latency at least equal to prod, ideally greater (maybe twice as much), and if possible setting up test deployments in CI that run with injected latency so you can find problems earlier.
There are great tools that can help you emulate these kinds of environments easily. I used Toxiproxy, which intercepts all connections to the server, and the connections between servers, to introduce latency.
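As a rough sketch, setting up a latency proxy with `toxiproxy-cli` looks something like the commands below. The proxy name and addresses are made up for illustration; check the Toxiproxy documentation for the exact syntax of your version.

```sh
# Create a proxy that listens locally and forwards to the real server
toxiproxy-cli create -l 127.0.0.1:4017 -u 127.0.0.1:3017 nun-db-primary

# Add 30ms of latency to every connection going through the proxy
toxiproxy-cli toxic add -t latency -a latency=30 nun-db-primary
```

Pointing your client at the proxy port instead of the real one is then enough to make local runs feel like production.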
This topic became too big to fit into this post, so I published a dedicated one. If you want more detail, take a look at the post Nun-db Chaos Testing with Docker and Toxiproxy, where I shared the details of the tests I ran on Nun-db's election process.
## Learn how to write compilers
It may sound extreme at first, but knowing how to write compilers will become a superpower of yours; it is incredible how quickly some problems can be solved with a simple compiler. To be clear, by writing a compiler I don't mean creating a new programming language. In fact, most of the time I had the opportunity to write a compiler, it was to compile code that already existed in a well-known language, like JavaScript or some query language, which means one of the most challenging parts of the process was already done.
In the last two years, I wrote two compilers in different circumstances.
**2022 Compiler: POC migration from Mongo to Elasticsearch in 2 days**:
Imagine you are working on a system where there is a component where the user can search for any of the over 100 available fields, with many different kinds of fields and groups and orders and operators, and the screen and backend are modeled to support this need. The team has done a great job of abstracting and creating strategies to support such a complex use case.
Mongo is having performance issues, and you, the lead, need to decide where to go. Proposing Elasticsearch seems like a good idea, but that would mean dedicating months of work only to then find out whether it is viable at all. You have to prove to management that the investment is worth it. The code that generates queries for Mongo is spread across more than 50 different classes with very complicated rules and extensions.
Easy: write a compiler that compiles Mongo queries to Elasticsearch queries. The hardest part is already done, since mongodb-query-parser exists; all you need to do is walk the generated tree and rewrite the same query in the Elasticsearch dialect. In fact, that is what I did in less than 4 hours, and it could handle 95% of the existing Mongo queries. Of course, I did not use the full capacity of Elasticsearch or choose each datatype carefully; I picked the most obvious mappings, like making all string fields keywords, and did the most basic translation of Mongo operators to Elasticsearch operators.
This is a rather trivial implementation since all you have to do is walk the tree and create the equivalent command in Elasticsearch. It is easy to automate the tests, too.
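To illustrate how small such a translator can be, here is a toy sketch. It is not the real code: it covers only equality and `$gt`/`$lt`, and it assumes string fields were mapped to `keyword` fields so exact matches use `term`.

```javascript
// Toy Mongo-filter -> Elasticsearch bool-query translator.
// Walks the parsed filter object and emits an equivalent ES clause
// for each field; real queries have many more operators than this.
function mongoToEs(filter) {
  const must = [];
  for (const [field, cond] of Object.entries(filter)) {
    if (cond !== null && typeof cond === "object") {
      // Comparison operators become a `range` clause.
      const range = {};
      if ("$gt" in cond) range.gt = cond.$gt;
      if ("$lt" in cond) range.lt = cond.$lt;
      must.push({ range: { [field]: range } });
    } else {
      // Plain values become exact `term` matches (keyword mapping).
      must.push({ term: { [field]: cond } });
    }
  }
  return { query: { bool: { must } } };
}

console.log(JSON.stringify(mongoToEs({ status: "active", age: { $gt: 18 } })));
```

Because the translation is a pure function from tree to tree, it is also easy to cover with automated tests: feed in recorded Mongo queries and assert on the generated Elasticsearch bodies.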
### 2023 Compiler: Compile Cypress.js tests to Cucumber
Imagine you inherited 100k lines of code from a legacy project, with more than 400 end-to-end tests covering a big part of it, and because of regulation, you need to record in your test-record system not only the results of each test cycle but also the steps, checks, and specs each test performs, in plain English. One option is to put your entire team on it for several weeks, while they suffer through describing in text what each test does.
Another option is to create a compiler that compiles the function calls, operators, and checks into plain English. Cypress tests are written in JS, so you can use the Babel parser to parse them and then walk the tree looking for the branches you want to render as text. In this case, I did it in two phases: to make it easier to implement, I compiled to Gherkin (the Cucumber language) first, because it is better structured than plain English and easier to map one-to-one. It took less than a week to have more than 98% of the tests compiling to an acceptable form of text that we could submit, and that non-programmer humans can read and parse easily.
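A stripped-down illustration of the idea: the real version walks a Babel AST, but this toy regex pass over single-line commands is enough to show the shape of the translation. The Cypress calls and Gherkin phrasings below are examples, not the actual project's mapping.

```javascript
// Translate a few common Cypress commands into Gherkin-ish steps.
function cypressLineToGherkin(line) {
  let m;
  if ((m = line.match(/cy\.visit\('([^']+)'\)/)))
    return `Given I open "${m[1]}"`;
  if ((m = line.match(/cy\.get\('([^']+)'\)\.click\(\)/)))
    return `When I click on "${m[1]}"`;
  if ((m = line.match(/cy\.contains\('([^']+)'\)/)))
    return `Then I should see "${m[1]}"`;
  // Leave anything unrecognized visible so a human can fill the gap.
  return `# untranslated: ${line}`;
}

const lines = [
  "cy.visit('/login')",
  "cy.get('#submit').click()",
  "cy.contains('Welcome back')",
];
console.log(lines.map(cypressLineToGherkin).join("\n"));
```

Keeping an explicit `# untranslated` fallback is what lets you measure coverage (the "98% of tests" number) and focus manual effort only on the leftovers.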
### Closing the subject
Once you learn how to write compilers, they will become a new tool in your tool belt, and the opportunities will arrive. I am sure they can save you from spending time on boring activities or from investing too many developer hours in a POC that may not be fruitful. The two opportunities I mentioned from the last two years saved me and the company a lot of time and money on challenges that would otherwise have taken ages. If you are not a CS major and don't know where to start, I recommend the book Crafting Interpreters. But more importantly, do not consider it an impossible task; compilers are just code. It takes years to build mainstream, polished ones, but that is rarely what you need; simple, direct, hacked ones are quick and fun to build and can accelerate you a lot.
## Use AI Copilots
This is probably on many other lists, so I won't spend too much time on it. I use GitHub Copilot, and it helps when I am prototyping, building POCs, and experimenting quickly. It also helps when coding something you are not super familiar with, since it may speed up the learning process. It is worth mentioning that the proper way to use it is not to write a comment and expect it to code for you, but rather to start coding and let it guess/complete what you have already started. It does hallucinate sometimes, so check everything it produces.
ChatGPT can help in the same way; I use it sometimes. For example, if I have something written in bash or Node.js and want something similar in a Makefile or another language, it is pretty good at that kind of conversion. Again, check the code before using it to make sure it is not doing something absolutely stupid.
# Productivity tricks
## Try Demo-Driven Development
Demo your work frequently; demoing your work to others will help you share whatever you are working on and even gain a better understanding of it yourself. Many times, the gaps in a feature only become obvious when you try to show it to someone else.
I have been following this mantra for a couple of years, and it has really paid dividends. Doing demos is stressful and gets most developers out of their comfort zone.
It also helps you to sell whatever you are making for the correct price. There are core tips to succeed in doing demos, and they are:
- Showing the real thing running is much better than showing slides; talking is cheap.
- "Code is a liability" [1]; very few people care about the code. Show the results of the code, not the code itself. A feature running, or a metrics improvement, sells much better than showing blocks of code.
- Keep your camera on while doing a presentation and show people your expression and what you are proud of.
- Prepare for the demo and do a dry run beforehand, if possible. Mastering the demo matters: listening to the podcast "Even the Best Rides Come to an End" made me even more sure about this point, and watching Kelsey Hightower's demos gave me a new understanding of what doing a demo means.
Get used to presenting your work, and do it more often, not only in video form but also in text form. It will do great for your career.
## Teach
Teaching is the best way to learn; whether it be mentoring senior developers or teaching someone how to code from scratch, it will make you rethink a lot of how you see things and help to consolidate what you have already studied.
This year, I started teaching algorithms and data structures to first-year students of analysis and systems development at a local faculty in my city. It was a refreshing activity, making me go back and re-read the books I read during my graduation and master’s, and put me in a very uncomfortable situation of having to explain the basics to a group of young and energetic aspiring coders who are trying to find their way into our field.
I spend 4 to 5 hours a week implementing, demonstrating, and discussing data structures with my students, which keeps me up to date with the subject and forces me to stay sharp so I can explain the details and whys behind each data structure and algorithm.
If you want to learn and get better at something, find a way to teach it to someone, whether it be in a class (very dependent on the situation), a blog post, a YouTube video, or a live call; just do it, and you will see how much you learn.
## Read
Just reading is a refreshing activity, and it gets you out of the digital world. This year, I did not read as much as I had wished; too much was changing in my working life, and teaching at the university took away a lot of the time I used to put into reading. Nevertheless, reading is one of the activities that gives the most pleasure, and you should learn how to enjoy it too. Every year, this is part of my recommendations, and I still feel like adding it this year again.
## Take care of your sleep quality
I am convinced that sleeping well improves the quality of your work overall, but it took me several years to realize I should take care of my sleep quality. This year, I went to a doctor who helps others improve their sleep.
I used the SnoreLab app to track how much I snored while sleeping, and I realized I snored much more than I would have expected. The "normal" snore score in the app is 15, and on my first night I got 32, meaning I snored about twice as much as the average user. On one extreme day, when I was drunk, my score was 62, more than 4x the average, and I noticed the days I snored the most were the days I woke up with the least energy.
I started using an anti-snoring bruxism mouth guard, and my sleep improved a lot. I wake up feeling much more energetic and happy. This is probably a problem that only impacts a small fraction of the population. The lesson to learn here is to track and try to improve your sleep quality. Sleeping at least 7 hours a day and making sure the hours you are sleeping are well used will pay off a lot.
## Conclusion
Being a better developer is a daily task; you have to look for opportunities to improve your day-to-day work and automate the tasks you do repetitively. You will see your life getting better, and you will be able to handle the same load with much less friction and stress. Being better and more productive is not only about making more money but also about finding smart ways to handle the same tasks in less time and with less stress. That means a more peaceful mind and more time to spend with your loved ones or having fun outside of work.
This was a great year. I started the year working on one of my former employer’s customers and drove the largest migration of my career there, bringing over 700 terabytes of medical imaging data and 300 customers from Azure to GCP with no downtime. I started the company’s first SaaS product from scratch (It used to be a consultancy) and delivered it to general availability with the initial customers and integrators working back in September. Finally, at the end of the year, I joined a new company called Vida (AI-powered lung intelligence) as a Principal Architect to help them scale their business, team, and stack to the next level. It was too much change for one year, and I am happy to have some time to rest at the end of the year as I polish this text that I planned to publish in September. It’s still fine to publish it now, and I look forward to 2024 and what I will learn to share with you next year in this blog; stay tuned.
- What Makes a Great Software Engineering Team?
- Lessons for Software Developers from Ramon Dino's Result and the Arnold Ohio 2024 Event
- Consulting to Fix AI-Created Software May Be Our Next Big Opportunity
- The making of a software engineer (My personal take 16 years into it)
- Secrets for becoming a better developer in 2024
- Nun-db Chaos Testing with Docker and Toxiproxy
- My Thoughts about On-Call Compensation and Scheduling for Small Engineering Teams and Companies
- Turning Failures into Success: How Overcoming Challenges Fuels Software Engineer Growth
- Secrets for becoming a better developer in 2022
- Argo workflow as performance test tool
- How not to burnout when working on hard problems as a developer
- Are you working remotely? You should be ready to hit the road at any time in 2022
- Secrets to becoming a better remote developer 2021 edition
- Secrets I use to becoming a better remote developer
- Are you working remotely? You should be ready to hit the road at any time
- Productivity Trackers I use (as a developer working remote)
| true | true | true |
Welcome to my yearly post about how to become a better developer; here, I share what I have learned in my last 16 years working as a Software Engineer and what I do to make myself more productive in my full-time job as a Principal Architect in a fast-growing American Startup, running my own SaaS business serving over 6k users monthly, teaching in-person Data Structures at the FasF Faculty (in Brazil) and working on my Open Source project Nun-db in my free time, I have to learn how to do things efficiently and that is what I share in this post yearly.
|
2024-10-12 00:00:00
|
2023-12-31 00:00:00
| null | null | null | null | null | null |
18,992,615 |
http://find.xyz/map/effectiveness-of-open-floor-plans
|
.find
| null |
You need to enable JavaScript to run this app.
| true | true | true |
.find on Flow - Your Gateway to People and NFTs onFlow
|
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
26,684,838 |
https://labix.org/lunatic-python
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,040,588 |
https://alpaca.markets/blog/trading-competition/
|
Join Us For the 1st Alpaca Trading Competition
|
Yoshi Yokokawa
|
We are excited to announce that Alpaca is hosting its first virtual Electronic Trading Contest, a two week-long contest where participants compete against each other in a simulated market using the Alpaca Paper Trading feature. There are cash prizes (aka the Amazon gift card) for the winning participants.
*****1st contest entry has been filled and closed as of Aug 7th*****
We have been looking at trading competitions by Quantopian, Quantiacs, and QuantConnect closely, and we decided to host our own version of the contest. The objective of Alpaca's contest is a little different from the others. Our focus is to expand the community of enthusiastic developers and tech-savvy people around trading, so we welcome any type of trading style, which doesn't need to be strictly quant algorithmic trading (the TradingView platform has been popular among Alpaca users to support algorithmic trading). Hence, we are not trying to recruit your algorithms into a fund. We want to create an occasion that gets you interested in trying the crossroads of tech x trading.
With that said, this competition is open to everyone globally, US and non-US residents alike.
### ***The first competition starts on August 10th (MON), 2020***
## How It Works
### 1. Enter the Contest
Firstly, create an Alpaca account with a new email address so that you or your team can access the Alpaca Paper Trading feature.
Then, enter the contest by filling out this google form (make sure to use your new Alpaca account!)
### 2. Track Your Performance
We will update this leaderboard of the top 10 performers every day during the two-week contest. Participants will be ranked by his/her Equity values.
### 3. Win Your Prize
For this very first Alpaca Trading Competition, we are going to award the top 3 participants with Amazon gift cards (yes, our google form asks for an email for us to send the gift card).
The amount is planned to be $150 for #1, $30 for #2, and $20 for #3.
## How You Can Participate
### High Level Rules
Please check out Trading Contest Fine Print - Rules for the detail, but unlike other quant trading competitions, we do not set any strict algorithm criteria. We will simply measure who can generate the best returns during the set period. Participants are free to use your preferred algorithms and strategies.
We are going to select winners who have the highest Equity value at the end of the contest day. There are multiple requirements, and we ask you not to reset the original Equity balance of $100,000 and require winners to result in positive returns.
### FAQ
**(Q) How do I sign up?** (A) Create an Alpaca Paper Trading account from here. Sign up for the contest with this entry form.
**(Q) What kind of algorithms are you looking for?** (A) This contest does not restrict you from using any specific strategies or algorithms.
**(Q) Will you see my algorithm?** (A) No, we will not look at your algorithm. Your code is private, visible only to you.
**(Q) Do I need to pay an entry fee?** (A) No.
**(Q) Is there a submission deadline?** (A) Yes, you need to sign up for the contest before the contest starts on [August 10th] market open.
**(Q) Does Alpaca offer a backtesting tool?** (A) No, we do not offer backtest software. However, Alpaca integrates with several highly-regarded backtest applications such as QuantRocket and Blueshift. Please see the list here.
*Technology and services are offered by AlpacaDB, Inc. Brokerage services are provided by Alpaca Securities LLC, member FINRA/SIPC. Alpaca Securities LLC is a wholly-owned subsidiary of AlpacaDB, Inc.*
*You can find us @AlpacaHQ, if you use twitter.*
| true | true | true |
Alpaca is launching the Alpaca Trading Competition where all the quants, developers, tech-savvy traders can participate globally
|
2024-10-12 00:00:00
|
2020-08-03 00:00:00
|
article
|
alpaca.markets
|
Alpaca Blog | Developer-First API for Stocks, Options, and Crypto
| null | null |
|
9,388,808 |
http://blog.in-sight.io/3-reasons-why-story-points-are-better-than-hours/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,903,535 |
https://www.reddit.com/r/EntrepreneurRideAlong/comments/aftfas/what_would_you_do_with_50acres_of_land/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,104,022 |
http://www.harriswblog.com/2018/05/phishing-state-of-art.html?m=1
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
38,951,004 |
https://www.thecollector.com/plato-arguments-against-democracy/
|
What Are Plato’s Arguments Against Democracy?
|
Miljan Vasic
|
Plato is renowned for his writings on various subjects, including ethics, knowledge, and politics. In his central work, *The Republic*, Plato delves into the ideal state and its governance. A part of his argument is a critique of democratic government, a form of rule that he viewed as inherently flawed and unsustainable. To understand why Plato had such reservations about democracy, we must explore his classification of government types, his critique of democracy as a regime, and the analogy he employed to argue that ruling is a skill best left to experts.
**Plato’s Classification of Five Regimes**
In Books VIII and IX, Plato presents a classification of government types, with aristocracy ruled by philosophers being the most ideal and resembling the perfect city-state. Alongside aristocracy, Plato identifies four other forms of government: timocracy, oligarchy, democracy, and tyranny. Timocracy refers to the rule of a few individuals who prioritize honor and glory as the highest virtues. Oligarchy involves the rule of a few where wealth serves as the primary criterion for attaining power. Democracy represents majority rule, where freedom and equality hold paramount importance in political positions. Lastly, tyranny represents an entirely unjust form of rule where the whims of a single ruler become law for the subjects.
Plato’s classification suggests a causal sequence where the regimes appear to arise from one another, with a descending order from a value standpoint. It appears as if the ideal regime succumbs to timarchy, which then leads to the emergence of oligarchy and so forth. Timarchy and oligarchy are considered less just than aristocracy, while democracy and tyranny are generally regarded as unjust regimes, with tyranny being the worst form.
Plato’s classification of government types is based on the notion that there is only one good regime and that all others are deviations from that absolute ideal. Aristotle would later criticize Plato’s classification, deeming it insufficiently comprehensive and overly abstract. Aristotle advocated for value realism, asserting the existence of objectively superior regimes while recognizing that practical social realities dictate the feasible forms of government. Nevertheless, Plato’s typology is particularly interesting due to its reflection of his views on democracy.
### Get the latest articles delivered to your inbox
Sign up to our Free Weekly Newsletter
**Is Democracy Unsustainable?**
According to Plato, the emergence of democracy from oligarchy occurs when the poorer class revolts against the wealthy minority. This revolt is typically led by someone who betrays the oligarchic class but possesses the talent to rule and manipulate people, often through persuasive speeches. This individual is known as a demagogue. With a demagogue at the helm, the masses seize power, often through violence, killing some, expelling others, and forcing the remainder to coexist. In this regime, everyone is granted equal rights to everything — it is a regime in which the government is chosen by lot. Naturally, Plato’s description is primarily inspired by the Athenian democracy of his time, and he highlights everything that he considered to be problematic with it.
Democracy, as Plato describes it, is characterized by equality and freedom, but also the right to publicly say whatever comes to one’s mind, as well as the right to lead a life as one wants. Democracy fosters a wide array of lifestyles, and because of that, every other form of government can be found in democracy to a certain extent. This occurs because individuals in a democratic society are not guided by an understanding of what is truly good. Instead, they succumb to the notion that all pleasures hold equal value. Consequently, they lack the ability to discipline their lives and mindlessly pursue the satisfaction of every desire and passion that arises within them or is propagated by demagogues as the common good. Rather than leading to knowledge, this pursuit of freedom distances individuals from wisdom.
Plato argues that democracy lacks restrictions, making it inferior to oligarchy, where certain limitations exist. In a democracy, no one is compelled to rule or be politically engaged if they choose not to be. Freedom is paramount in this regime: even during times of war, a democratic citizen can peacefully abstain from participating in the defense of the city. Additionally, the relationships between ruler and subjects, parents and children, and teachers and students are undefined and often interchangeable in a democratic society. Plato asserts that democracy is always susceptible to the danger of a demagogue who rises to power by pleasing the crowd and, in doing so, commits terrible acts of immorality and depravity. This ultimately leads to the complete collapse of the democratic order, which results in tyranny. Tyrannies arise when powerful groups or individuals separate themselves from the democratic regime and become uncontrollable forces.
**The Overview of Plato’s Argument Against Democracy**
Plato’s critique of democracy finds its foundation at an earlier point in the *Republic*, specifically in Book VI. The principle of specialization, which Plato introduces when constructing the ideal city in Book II, contributes to his thesis that philosophers are best suited to rule. In this ideal city, each citizen is assigned a specific role, one that aligns with their abilities and for which they have received training. Whether they are farmers, artisans, doctors, cooks, or soldiers, they are expected to contribute to the community’s well-being solely in their designated capacity. From this foundational principle, an implicit conclusion arises: ordinary workers, constituting the electorate in any democracy, should refrain from involvement in political decision-making. Instead, political rule should be reserved for those who possess the necessary abilities and education that enable them to excel in governance.
Plato’s argument can be summarized as follows: Ruling is a skill, and it is rational to entrust the exercise of skills to experts. In a democracy, power lies with the people, who, by definition, are not experts in ruling. Consequently, Plato concludes that democracy is inherently irrational.
Plato’s *Republic* delves into the question of how one should lead their life, which is essentially an ethical inquiry concerning individual behavior and existence. However, from the very beginning of the dialogue, it becomes evident that this extends beyond personal conduct and touches upon fairness and justice in the state’s organization. According to Plato, ethical and political issues are interconnected, with the study of governance being an extension of understanding virtuous living.
Throughout the dialogue, Plato defends the analogy between the state and the human soul. He suggests that by envisioning a just and well-structured state, one can gain insight into the nature of justice in an individual’s life. The state is like a magnified version of the soul, allowing us to apply the understanding of justice on a grander scale to an individual level. A properly functioning state, just like a healthy soul, is one where the different parts are perfectly balanced and work in harmony with each other.
Plato emphasizes the internal unity of both the political state and an individual’s personality. Just as the state comprises various parts, so does the human soul. A well-ordered state and a morally upright individual share the trait of harmonious components. Such harmony leads to a healthy and just society, which should be the ultimate aspiration of both individual and collective actions.
**Plato’s Analogy: Ruling as a Skill**
Plato’s analysis is deeply rooted in the notion of division of labor and the principle of specialization. He concludes that fairness in the state can be achieved when each person fulfills their role according to their natural talents, education, and training. This principle of specialization dictates that members of each social class should focus solely on their designated work and refrain from interfering with the tasks of other classes. The ruling, he claims, should be left to those who possess the knowledge of good — the philosophers.
Thus, Plato’s argument against democracy is ultimately built upon an analogy. He draws attention to the various social roles that contribute to the common good, such as farming, cooking, and house-building. All jobs that serve the common good require specific training and preparation. Similarly, political tasks like selecting officers, participating in the assembly, and presiding over courtroom cases also contribute to the common good. People in these positions require specialized training and expertise to excel at their respective tasks. Therefore, those who acquire the necessary political qualifications are the most likely to perform these tasks effectively, or at least better than others. Consequently, Plato asserts that individuals should refrain from participating in politics unless they have undergone the required training and acquired the relevant political skills.
**The Relevance of Plato’s Argument**
Despite the fact that Plato wrote with ancient Athenian democracy in mind, the core of his argument can be applied to modern-day democracies as well. Today, there are still those who believe that crowds of people lack political skills and that politics should be left to a select few. In response to Plato’s anti-democratic critique of rule by the many, a defender of democracy might raise an argument put forth by Aristotle in *Politics*, which has also been revisited in modern times. The essence of this response lies in the belief that a large group can collectively possess greater wisdom than a small one. This notion is analogous to how a group of less wealthy individuals, when united, can collectively become richer than a single wealthy person. By pooling together their limited knowledge, the group forms a vast body of information from smaller bits, yielding a potentially wiser and more informed decision-making process.
A more radical response to Plato’s critique of democracy can be found among democrats who argue in favor of granting political power to individuals, even when they may not be highly qualified to wield it effectively. They emphasize that there are more profound considerations in politics beyond mere decision-making effectiveness. According to them, the process of how decisions are made holds greater moral significance. Thus, they assert that democratic decision-making possesses a decisive advantage solely because of its inherent fairness. Consequently, Plato’s anti-democratic argument remains relevant in contemporary times, and the majority of modern democratic theory revolves around providing diverse responses to counter his viewpoint.
| true | true | true |
The great philosopher was famously skeptical of the rule of the people. What are Plato’s arguments against democracy?
|
2024-10-12 00:00:00
|
2024-01-11 00:00:00
|
website
|
thecollector.com
|
TheCollector
| null | null |
|
15,349,160 |
http://johan.kanflo.com/commercial-pilots-control-my-moodlight/
|
Commercial pilots control my moodlight
|
Johan
|
Having spent some time building the Wifi Ghost I wanted it to be something that was actually used. Few people in the house found the interest to change color on a daily basis (myself included). Then it occured to me, why not let the pilots of the aircrafts buzzing around the airspace of southern Sweden control it? They will probably never know that by passing within a few kilometers of my ADS-B receiver they will light up my study.
This will be a small project as most parts are already in place. The ADS-B tracker from my Skygrazer project will feed a script that sets the ghost color via its MQTT topic. What color though? Well, the most prominent color in the airline's logo of course! Run a Bing image search for the name of the airline with the word “logo” appended, pick an image, download and analyze. The color will be dimmed according to the distance to the aircraft. I use a maximum distance of 2 kilometers, making the light fade up and down whenever an aircraft passes near my house.
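The logic sketched above boils down to two small functions — pick the most frequent color, then fade it with distance. This is my own simplified reconstruction (the function names and the linear fade are assumptions; the real implementation is in the linked GitHub repo):

```python
from collections import Counter

def dominant_color(pixels):
    """Most frequent (r, g, b) tuple in a list of pixels -- a crude
    stand-in for analyzing the downloaded airline logo image."""
    return Counter(pixels).most_common(1)[0][0]

def dimmed_color(rgb, distance_km, max_distance_km=2.0):
    """Scale brightness linearly with proximity: full brightness at
    0 km, off at or beyond max_distance_km."""
    factor = max(0.0, 1.0 - distance_km / max_distance_km)
    return tuple(int(round(c * factor)) for c in rgb)
```

The dimmed tuple would then be published to the ghost's MQTT topic as the aircraft's reported position updates.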
The result? A wifi ghost light put to good use. And art 🙂
Code available on Github.
| true | true | true |
Having spent some time building the Wifi Ghost I wanted it to be something that was actually used. Few people in the house found the interest to change color on a daily basis (myself included). The…
|
2024-10-12 00:00:00
|
2016-01-14 00:00:00
|
article
|
kanflo.com
|
Johan Kanflo
| null | null |
|
8,566,965 |
http://www.scienceofcooking.com/meat/slow_cooking1.htm
|
Science of Slow Cooking
| null |
Here are tips to keep in mind when slow-low roasting:
--Of all the attributes of eating quality, tenderness is rated the most important factor affecting beef palatability--
Slow cooked meals are generally easier to make and very cost effective using cuts of meat that improve in texture and flavor when cooked for long periods of time at low temperatures. These tough cuts of meat contain large amounts of collagen which require long cooking times to break down into a rich gelatin.
**HOW DOES SLOW COOKING WORK?**
When you cook, collagen begins to melt at about 160F and turns into a rich liquid, *gelatin*. This gives meat a lot of flavor and a wonderful silky texture. When cooking tough cuts it is important to liquefy the collagen.
*Denaturation of the collagen molecule is a kinetic process, and hence a function of both temperature and duration of heating. Cooking at low temperatures requires long periods of time to liquefy collagen.*
**COOKING MEAT TEMPERATURES**
**105F/40C - 122F/50C** -- Calpains begin to denature and lose activity at around 105F, cathepsins at 122F. Since enzyme activity increases up to those temperatures, slow cooking can provide a significant aging effect during cooking. Meat should however be quickly seared or blanched first to kill surface microbes.
**120°F/50°C -- **Meat develops a white opacity as heat sensitive myosin denatures. Coagulation produces large enough clumps to scatter light. Red meat turns pink.
__Rare Meats:__ 120°F/50°C
**140°F/60°C -- **Red myoglobin begins to denature into tan colored hemichrome. Meat turns from pink to brown-grey color.
**140°F/60°C --** Meat suddenly releases lots of juice, shrinks noticeably, and becomes chewy as a result of collagen denaturing, which squeezes out liquids.
__Medium -- Well Meats:__ Collagen shrinks as the meat temperature rises to 140F/60C; more of the protein coagulates and cells become more segregated into a solid core and surrounding liquid as the meat gets progressively firmer and moister. At 140-150F the meat suddenly releases lots of juices, shrinks noticeably and becomes chewier as a result of collagen shrinkage. Meat served at this temperature is considered medium and begins to change from juicy to dry.
**160°F/70°C** -- Connective tissue collagen begins to dissolve to gelatin. Melting of collagen starts to accelerate at 160F and continues rapidly up to 180F.
__Well Done Slow Cooked Meats:__ Falling-apart tenderness: collagen turns to gelatin at 160F/70C. The meat gets drier, but at 160F the connective tissues containing collagen begin to dissolve into gelatin. With time, muscle fibers that had been held tightly together begin to easily spread apart. Although the fibers are still very stiff and dry, the meat appears more tender since the gelatin provides succulence.
**NOTES**: At 140°F changes are caused by the denaturing of collagen in the cells. Meat served at this temperature (medium-rare) is changing from juicy to dry. At 160°F/70°C connective tissue collagen begins to dissolve to gelatin. This however is a very lengthy process. The fibers are still stiff and dry but the meat seems more tender. __Source:__ Harold McGee -- On Food and Cooking
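The temperature thresholds above can be collapsed into a simple lookup. The following is a rough sketch, not a precise food-science model -- the stage labels and cut-offs are simplified from the text:

```python
def doneness_stage(temp_f):
    """Approximate stage of meat at a given internal temperature (deg F),
    following the thresholds described in the text above."""
    if temp_f < 120:
        return "aging: calpains/cathepsins still active, tenderizing the meat"
    elif temp_f < 140:
        return "rare: myosin denatures, meat turns opaque and pink"
    elif temp_f < 160:
        return "medium: myoglobin browns, collagen shrinks, juices squeezed out"
    else:
        return "well done: collagen slowly dissolves into gelatin (needs time)"
```

Note that the last stage is where slow cooking pays off: the gelatin conversion is a function of both temperature and time.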
A muscle is completely enclosed by a thick sheath of connective tissue (the epimysium) and is divided into bundles of fibres by a connective tissue network (perimysium). Individual muscle fibres are bounded by a plasma membrane surrounded by connective tissue (endomysium), which consists of a basement membrane surrounded by a reticular layer in which a meshwork of fine collagen fibrils is embedded in a matrix. Tendons are elastic collagenous tissues.
**THE CHALLENGE IN COOKING MEAT**
**We like our meat tender and juicy at the same time****...**
We therefore want our meat cooked tender, with tough collagen converted to gelatin but with a minimum loss of moisture. The reality is that these requirements are contradictory, and hence the challenge, or dilemma, of cooking meat. Minimizing moisture loss requires temperatures below 130F, but turning collagen into gelatin requires temperatures above 160F held for extended periods. As moisture evaporates, the meat begins to shrink; a slab can lose 20% or more of its weight in cooking due to shrinkage. Even meat cooked in liquid will dry out, although not as quickly. So we are faced with a dilemma: to liquefy the collagen we need to cook the meat to 180F and hold it there for long periods of time, but by then it is well past well-done and the muscle fibers can be dried out. As a result, we need to add moisture.
**How to slow loss of moisture**
**Brining.** Brining adds a significant amount of moisture, helps retain moisture during cooking, and contributes noticeable flavor enhancements.
**Steaming. **Another method of adding moisture is to cook the meat in very high humidity by wrapping it in foil with a little water or juice. This keeps moisture from escaping and some vapors penetrate the meat.
**Braising **or** poaching (--low temperatures--). **Braising is a method of cooking by submerging the meat in hot liquid, but not hot enough to boil. Braising can give you juicy, tender, and flavorful meat, especially if you use a flavorful braising liquid. But it tends to pull all the collagen out and rob the meat of its natural flavor. Flavor the liquid (water with pickling spices is a nice simple start), completely submerge the slab, keep the lid off, keep the temp down to about 160-180F for about 30 minutes, and let the meat cool in the liquid for 20-30 minutes so it will absorb some of the water before putting it on the grill.
**Breakage of collagen covalent links using Acids -- (Tenderizing meats with acid) -- **It is well known that adding a little vinegar to a stock will help tenderize meat while cooking. It is also useful to marinate meat for a few hours using vinegar to tenderize meat. Offer and Knight (1988) suggested that one of the mechanisms of pH induced tenderisation of meat could be a breakage of covalent collagen cross-links and of some specific peptide bonds.
Here are tips to keep in mind when slow-low roasting:
- __Develop a caramelized crust before slow cooking __-- by searing the meat either in a dry pan or with a small amount of oil or fat.
- __Place the meat or roast fat side up__ in the pan so it self-bastes.
- __Tenderize your cuts of meat__ -- e.g., by pounding meat, buying aged meats (note: meats cooked longer at 120F will age and be more tender), or marinating meats in acids to tenderize them.
- __Tent the resting meat with foil __and allow 10 to 15 minutes before cutting it so the meat's juices will return to the center; slice the meat against the grain.
**KITCHEN APPLIANCES TO AID IN SLOW COOKING**
New appliances such as __Sous Vide Cookers, CVap Ovens and Combi Ovens__ are now being used in restaurants and homes. Read more about this: What is Sous Vide Cooking? -- Comparing Sous Vide to CVap and Combi Ovens, How is heat transferred in Cooking.
**References **
Review: Collagen contribution to meat toughness: Theoretical aspects Jacques Lepetit ..Meat Science 80 (2008) 960–967
Offer, G., & Knight, P. (1988). The structural basis of water-holding in meat. Part 1: General principles and water uptake in meat processing. Developments in Meat Science, 4, 63–171.
| true | true | true |
What is slow cooking?
|
2024-10-12 00:00:00
|
2008-01-01 00:00:00
| null | null | null | null | null | null |
23,250,890 |
https://hbr.org/2020/05/dont-let-a-single-metric-drive-your-business
|
Don’t Let a Single Metric Drive Your Business
|
Jonathan Golden
|
## Summary.
It’s impossible to capture the complexities of your business with a single metric. Prioritize a single headline number — to the exclusion of all others — and you’ll invariably leave a lot of people and priorities out. Moreover, you’re likely to constrain your growth. Instead, you need to think about a constellation of metrics that focus on the measurement of three things: quantity, quality, and efficiency. These three metrics in relationship to each other tell the story of your business and allow for prioritization and alignment. They become a shorthand language internally when committing resources and making investments — or trade-offs. No metric is perfect. But understanding, and regularly reassessing, the relationship between quantity, quality, and efficiency is critical to more deeply understanding your business — and to staying nimble. It will enable you to drive what matters most — the customer experience — and empower all of your teams in the process. Done right, metrics are among the best ways to make people truly understand how their work impacts the business in a positive way.
Metrics are essential to running a business. We all know that. What may not be as obvious, though, is how metrics intersect with your company mission and even employee happiness. Prioritize a single number — to the exclusion of all others — and you’ll invariably leave a lot of people and priorities out. Moreover, you’re likely to constrain your growth. During my time at Airbnb, I led teams that included product managers, designers, engineers, and data scientists. It would have been impossible to capture the complexities and interactions of their activities with a single metric. Instead, it took a constellation of metrics to capture what the business needed to scale and how each team could facilitate that success.
| true | true | true |
It’s impossible to capture the complexities of your business with a single metric. Prioritize a single headline number — to the exclusion of all others — and you’ll invariably leave a lot of people and priorities out. Moreover, you’re likely to constrain your growth. Instead, you need to think about a constellation of metrics that focus on the measurement of three things: quantity, quality, and efficiency. These three metrics in relationship to each other tell the story of your business and allow for prioritization and alignment. They become a shorthand language internally when committing resources and making investments — or trade-offs. No metric is perfect. But understanding, and regularly reassessing, the relationship between quantity, quality, and efficiency is critical to more deeply understanding your business — and to staying nimble. It will enable you to drive what matters most — the customer experience — and empower all of your teams in the process. Done right, metrics are among the best ways to make people truly understand how their work impacts the business in a positive way.
|
2024-10-12 00:00:00
|
2020-05-11 00:00:00
|
/resources/images/article_assets/2020/05/May20_08_DataConstellation2-1024x576.jpg
|
article
| null |
Harvard Business Review
| null | null |
20,283,819 |
https://wasi.dev/
|
Introduction · WASI.dev
| null |
# Introduction
The **WebAssembly System Interface (WASI)** is a group of standard API specifications for software compiled to the **W3C WebAssembly (Wasm) standard**. WASI is designed to provide a secure standard interface for applications that can be compiled to Wasm from any language, and that may run anywhere—from browsers to clouds to embedded devices.
By standardizing APIs for WebAssembly, WASI provides a way to compose software written in different languages—without costly and clunky interface systems like HTTP-based microservices. We believe that every project with a plugin model should be using WASI, and that WASI is ideally suited for projects with SDKs for multiple languages, e.g. client libraries.
To date, WASI has seen two milestone releases known as **0.1** and **0.2**. (Sometimes you will see these referred to as Preview 1 and Preview 2, or P1 and P2). The concepts and vocabulary of Wasm and WASI can sometimes be opaque to newcomers, so WASI.dev serves as an introduction to WASI for users of all backgrounds. It's very much a work-in-progress, and we welcome contributions on the GitHub repo.
## Who are we?
WASI is an open standard under active development by the **WASI Subgroup** in the **W3C WebAssembly Community Group**. Discussions happen in GitHub issues, pull requests, and bi-weekly Zoom meetings.
## Who are you?
WASI and Wasm are tools for any type of software developer: whether you're writing web apps, plugins, serverless functions, User-Defined Functions (UDFs) in a database, embedded controller components, sidecar networking filters, or something completely different. This site is intended to make WASI understandable regardless of your background, use-case, or familiarity with the WebAssembly ecosystem.
## How to get started
There are many different runtimes that support WASI including Wasmtime, WAMR, WasmEdge, wazero, Wasmer, wasmi, and wasm3. Many of these runtimes have different areas of focus (i.e., IoT, embedded devices, and edge for WAMR, or server-side and non-web embeddings with components for Wasmtime). The introductory documentation for each is a great place to start.
WASI can be implemented by both core Wasm modules and applications built according to the **Component Model**, a specification for Wasm applications that are interoperable and composable. You can learn more about components in the Bytecode Alliance's **WebAssembly Component Model** documentation.
Continue reading to learn more about WASI interfaces, including available APIs and how they are defined.
| true | true | true |
The WebAssembly System Interface (WASI) is a group of standard API specifications for software compiled to the W3C WebAssembly (Wasm) standard. WASI is designed to provide a secure standard interface for applications that can be compiled to Wasm from any language, and that may run anywhere—from browsers to clouds to embedded devices.
|
2024-10-12 00:00:00
| null | null | null |
wasi.dev
|
WASI.dev
| null | null |
23,299,399 |
https://www.lifesfabric.com/
| null | null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
21,972,082 |
https://massivesci.com/notes/dna-barcoding-fish-spawning-habitat/
|
DNA barcodes help identify fish eggs and inform conservation
|
Makenzie Burrows
|
## DNA barcodes help identify fish eggs and inform conservation
Determining where fish spawn could help us protect these crucial habitats and bolster declining fish populations
Photo by Vlad Tchompalov on Unsplash
Fish are economically and ecologically important in the Gulf of Mexico, yet their stocks are decreasing due to overfishing. One major way that we can help protect fish is to protect the habitats where they reproduce. But in order to do that, we first have to find out *where* they reproduce. One way to find these spawning habitats is by using floating fish eggs.
Before setting up projects focused on reef fishes, like grouper and snapper, we needed to know if eggs from shallow water fishes stay in the shallows or if the eggs move into deeper waters as they float.
Fish eggs can be found in most surface waters, making them easy to collect with a plankton net. However, these eggs are usually clear balls the size of the tip of a pencil, making them difficult to visually identify down to species level. To solve this problem, we use a laboratory method called DNA barcoding. DNA barcoding allows us to look at the genetic material of each fish egg to figure out which species it belongs to. Each species has a unique DNA signature, just like how each product at a grocery store has its own unique barcode.
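Conceptually, barcode matching works like the sketch below: compare the egg's sequence against a reference library and assign the closest species. This is a naive per-position identity comparison for illustration only -- real pipelines use alignment tools such as BLAST, and the species names and sequences here are made up:

```python
def identify_species(egg_sequence, reference_barcodes):
    """Assign an egg's barcode sequence to the reference species with the
    highest fraction of matching positions (a toy stand-in for alignment)."""
    def identity(a, b):
        n = min(len(a), len(b))
        return sum(a[i] == b[i] for i in range(n)) / n
    return max(reference_barcodes,
               key=lambda species: identity(egg_sequence, reference_barcodes[species]))
```

A query sequence that differs from the "red snapper" reference by one base would still be assigned to red snapper, since it matches that reference at more positions than any other.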
Using DNA barcoding, we found that most shallow water fish eggs stay in shallow waters. This information will help us plan future fish egg collections to help inform fisheries managers where and how much these shallow water species are spawning.
| true | true | true |
Determining where fish spawn could help us protect these crucial habitats and bolster declining fish populations
|
2024-10-12 00:00:00
|
2020-01-06 00:00:00
|
article
|
massivesci.com
|
Massive Science
| null | null |
|
39,313,361 |
https://arxiv.org/abs/2402.03067
|
Multilingual transformer and BERTopic for short text topic modeling: The case of Serbian
|
Medvecki; Darija; Bašaragin; Bojana; Ljajić; Adela; Milošević; Nikola
|
# Computer Science > Computation and Language
[Submitted on 5 Feb 2024]
# Title:Multilingual transformer and BERTopic for short text topic modeling: The case of Serbian
Abstract: This paper presents the results of the first application of BERTopic, a state-of-the-art topic modeling technique, to short text written in a morphologically rich language. We applied BERTopic with three multilingual embedding models on two levels of text preprocessing (partial and full) to evaluate its performance on partially preprocessed short text in Serbian. We also compared it to LDA and NMF on fully preprocessed text. The experiments were conducted on a dataset of tweets expressing hesitancy toward COVID-19 vaccination. Our results show that with adequate parameter setting, BERTopic can yield informative topics even when applied to partially preprocessed short text. When the same parameters are applied in both preprocessing scenarios, the performance drop on partially preprocessed text is minimal. Compared to LDA and NMF, judging by the keywords, BERTopic offers more informative topics and gives novel insights when the number of topics is not limited. The findings of this paper can be significant for researchers working with other morphologically rich low-resource languages and short text.
## Submission history
From: Nikola Milošević [view email]

**[v1]** Mon, 5 Feb 2024 14:59:29 UTC (374 KB)
| true | true | true |
This paper presents the results of the first application of BERTopic, a state-of-the-art topic modeling technique, to short text written in a morphologically rich language. We applied BERTopic with three multilingual embedding models on two levels of text preprocessing (partial and full) to evaluate its performance on partially preprocessed short text in Serbian. We also compared it to LDA and NMF on fully preprocessed text. The experiments were conducted on a dataset of tweets expressing hesitancy toward COVID-19 vaccination. Our results show that with adequate parameter setting, BERTopic can yield informative topics even when applied to partially preprocessed short text. When the same parameters are applied in both preprocessing scenarios, the performance drop on partially preprocessed text is minimal. Compared to LDA and NMF, judging by the keywords, BERTopic offers more informative topics and gives novel insights when the number of topics is not limited. The findings of this paper can be significant for researchers working with other morphologically rich low-resource languages and short text.
|
2024-10-12 00:00:00
|
2024-02-05 00:00:00
|
/static/browse/0.3.4/images/arxiv-logo-fb.png
|
website
|
arxiv.org
|
arXiv.org
| null | null |
8,049,912 |
https://blog.mozilla.org/blog/2014/07/17/firefox-os-ecosystem-shows-strong-momentum-and-expands-across-new-devices-markets-and-categories/
|
Firefox OS Ecosystem Shows Strong Momentum and Expands Across New Devices, Markets and Categories | The Mozilla Blog
|
Mozilla
|
# Firefox OS Ecosystem Shows Strong Momentum and Expands Across New Devices, Markets and Categories
Firefox OS has unlocked the mobile ecosystem and is quickly expanding across a broad range of devices and product categories in Europe, Latin America and Asia Pacific. Just one year after the first devices were launched, Firefox OS is now available on seven smartphones offered by five major operators in 15 countries, showing strong signs of ecosystem momentum and widespread industry adoption.
## New Partners, Markets and Opportunities
Firefox OS leverages the Web as the platform to enable flexibility, scalability and powerful customization, and is free of limits and restrictions associated with proprietary mobile operating systems. The success and growth of the platform has encouraged global and regional operators to continue advancing Firefox OS in their markets.
“Firefox OS has emerged out of the ‘other’ category as one of the top platforms for the global smartphone industry,” said Neil Mawston, Executive Director, Global Wireless Practice (GWP) at Strategy Analytics. “Strong interest from major operators and hardware makers has enabled Firefox OS to make headway in key regions and challenge established software players.”
## Expansion in Europe
- Deutsche Telekom will be the first operator to sell the new ALCATEL ONETOUCH Fire E phones, available in Germany this week through congstar. In the coming months, Deutsche Telekom will also launch Firefox OS devices in four new markets – Croatia, Czech Republic, Macedonia and Montenegro.
- Telefónica just announced that Germany has become the ninth country where they offer Firefox OS phones, as O2 started pre-sales of ALCATEL ONETOUCH Fire E phones.
- ZTE will launch the Open C as the first Firefox OS device available in France later this month.
## Firefox OS Launches Throughout Latin America
- Telefónica will offer Firefox OS phones across all of their Latin American markets by the end of the year, expanding to Central America in the next few months and followed by Argentina and Ecuador. Telefónica is also expanding its Firefox OS portfolio, having recently launched ZTE Open C and Open II devices in the region and soon offering the Alcatel ONETOUCH Fire C phones, all running the latest version of Firefox OS.
- América Móvil, which launched Firefox OS phones in Mexico earlier this summer, is committed to expanding its offering in Latin America by the end of the year.
## Firefox OS Coming Soon to Asia Pacific
- Spice and Intex will soon launch the first Firefox OS devices in the ultra-low-cost category in India.
- Telenor confirmed they will offer Firefox OS phones in Asia by the end of the year.
- Chunghwa Telecom, the largest operator in Taiwan, recently announced that they have joined the more than 20 operators committed to delivering Firefox OS in their markets.
## Looking Forward
As the Firefox OS ecosystem expands, we are working with partners to bring the experience to higher-end devices and more form factors, proving the flexibility of the Web as the best development platform.
To accelerate the design, development and testing of the Firefox OS ecosystem, Mozilla has partnered with Thundersoft to manufacture and distribute the Firefox OS Flame reference phone, now on sale. The Flame is representative of the mid-tier phone hardware that Mozilla and its partners will release over the coming year.
Mozilla continues to evolve Firefox OS to include features like NFC and enhanced Bluetooth connections that support easy sharing of content like contacts, videos and images; LTE support to make the mobile experience even faster; and Firefox Accounts as a safe and easy way for users to take Firefox everywhere through services that include the Firefox Marketplace, Firefox Sync, device backup, cloud storage, and a service to help locate, message or wipe a phone if it is lost or stolen.
Mozilla is working with Panasonic to develop next generation SmartTVs running Firefox OS, and Abitcool will launch an HDMI streaming device later this year that allows the user to fling content from compatible mobile or Web apps to an HDTV.
“It’s been a year of both tremendous growth and exciting opportunity for Firefox OS,” said Andreas Gal, Chief Technology Officer at Mozilla. “As the only truly open-source platform, Firefox OS has unlocked users, developers and industry participants from the mobile market’s closed systems and content gatekeepers. Now, we are expanding that pursuit across new regions, handheld devices, and soon into other areas of our users’ lives as new form factors begin to take shape.”
## Supporting Partner Quotes
**Dan Dery, Chief Marketing Officer of ALCATEL ONETOUCH**, said: “Our FIRE series is a perfect example of how we innovate to enable mobile Internet access for everyone. With Mozilla’s Firefox OS, we’ve designed affordable, rich-featured devices that create unmatched value for our customers. ALCATEL ONETOUCH will continue to partner with Mozilla as we grow our range of devices to meet the different needs of customers in every market segment across the globe.”
**Marco Quatorze, Director of Value Added Services at América Móvil**, said: “América Móvil has always been at the forefront in mobile technology, that’s why we are glad to announce that América Móvil will continue selling Firefox OS phones and is committed to expanding the offering in Latin America by the end of the year. With this, we can offer all our users a combination of excellent mobile devices with an innovative operating system.”
**Thomas Kiessling, Chief Product & Innovation Officer at Deutsche Telekom**, said: “The continuing rollout of Firefox OS across our European markets is tangible proof of our partnership with Mozilla to bring an open operating system to all of our customers. Deutsche Telekom is the biggest operator in Europe for Firefox OS. In the upcoming months we intend to introduce new smartphones with Firefox OS and extend our footprint to four more countries: Croatia, the Czech Republic, Macedonia and Montenegro.”
**Yuki Kusumi, Director of the Home Entertainment Business Division of the Appliances Company of Panasonic**, said: “With our joint announcement in CES 2014, we started to work with Mozilla on our next generation smart TV running Firefox OS. We are glad to see that Firefox OS gained fruitful results in the first year, both in ecosystem expansion and development on different form factor devices. In the second year of Firefox OS, with Open Web technologies and collaboration with the talents from Mozilla, we will realize further innovation in smart TV technologies and products, and bring customers a whole new level of experience.”
**Francisco Montalvo, Group Director of Devices at Telefónica**, said: “We are delighted that our customers are supporting Firefox OS in their countries, with significant penetration in many of them. Telefónica is determined to give back control of the content, privacy and freedom to its customers and Firefox OS is the ecosystem that best supports that goal.”
**Adam Zeng, CEO of Mobile Devices at ZTE**, said: “Mozilla is a key partner for ZTE, as we both share the commitment to bring cutting-edge mobile Internet innovations to consumers across the world. ZTE has long recognized the potential of Firefox OS as a key platform that offers unique value to mobile users globally, and we will continue to invest in the success of our partnership.”
| true | true | true |
Firefox OS has unlocked the mobile ecosystem and is quickly expanding across a broad range of devices and product categories in Europe, Latin America and A
|
2024-10-12 00:00:00
|
2014-07-17 00:00:00
|
webpage
|
mozilla.org
|
Firefox OS Ecosystem Shows Strong Momentum and Expands Across New Devices, Markets and Categories
| null | null |
|
10,230,287 |
http://www.rockpapershotgun.com/2015/09/16/how-gog-com-save-and-restore-classic-videogames/
|
How GOG.com Save And Restore Classic Videogames
|
Tom Bennet
|
# How GOG.com Save And Restore Classic Videogames
Meet gaming’s restoration experts
“Hunting for distribution rights is essentially detective work,” says Marcin Paczyński, Head of Product at GOG. “Rights can repeatedly change hands or be split up between different parties, and it’s our job to get to the bottom of what happened.”
Preservation of old games involves more than just an extra patch. The journey from dusty unplayable relic to polished, cross-platform installer is a minefield of technical and legal obstacles. The team at Good Old Games remain the industry leaders in the restoration of classic PC games, tasked with reverse engineering code written more than 20 years ago, unraveling knotty licensing issues left behind by defunct development studios, and battling lethargy on the part of skeptical publishers. It’s a thrilling and, at times, gruelling process, but - as the GOG team will testify - it never fails to surprise.
Games generally take one of three paths to the GOG storefront. Newer titles are procured by the company’s Business Development team, while small indie releases are often submitted directly by their development studios (Lords of Xulima and Sunless Sea being two such examples). The vast majority of older titles, however, take the third path; whether they’ve climbed the Community Wishlist or are simply a favourite of GOG’s developers, their distribution rights must be hunted down manually. To that end, the legal team scrutinise the storied history of the game’s original development studio, connecting the dots between mergers, buyouts, and bankruptcies, searching for clues as to which publisher or conglomerate to contact.
“On more than one occasion, our community was also extremely helpful in tracking down classic games," says Paczyński. "A GOGer might know somebody involved with a release, or try a few of their own leads and share anything that they come up with. There’s actually a community thread on our forums dedicated specifically to this sort of thing, and in the past we’ve been able to follow up on these leads to release the games they requested. It’s always awesome to add a game to GOG.com that’s the product of a combined effort between our team and our community.”
Once a deal has been struck with the new rights holders, the team are - in theory - free to update the game’s ancient source code to run on modern systems. There’s one problem, however: in almost all cases, the original code has been lost or deleted.
“Source and game code is an extremely rare commodity for us,” explains Paczyński. “Older titles have often gone through so many different hands that no one knows who has the original code anymore, or it no longer exists in any usable form.” With source files lost forever, the team’s only recourse is to retrofit retail code taken from a boxed copy of the game.
While publishers are occasionally able to supply an archived build, most classics necessitate a scavenger hunt for the best available edition of the title. These are often found among the GOG crew - “it’s probably not a surprise that many of us are collectors!” adds Paczyński - but it’s not uncommon for the team to trawl the web in search of second-hand copies.
Retail code is *far less malleable* than source code, and restoring a game to a playable state using only a decades-old installation disk is quite a feat of software engineering. In terms of sheer difficulty, the process might be likened to film restoration using only a VHS recording of a television broadcast, the original negatives having been destroyed.
GOG’s engineers must therefore take a creative approach by using customised emulators and wrappers. “We have a great relationship with the team behind DOSBox,” explains Paczyński. “In the past they’ve helped us create custom setups specifically for a particular release - notable examples include Theme Park and Harvester.” It’s not uncommon for these setups to become extraordinarily complex; the team describe *wrapping wrappers around wrappers* in a kind of ‘Russian Doll’ approach to emulation.
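For readers curious what such a setup involves: a DOSBox release is typically driven by a configuration file. The snippet below is an invented illustration of the kind of settings these setups tune (GOG's actual configurations are far more elaborate), using standard dosbox.conf keys:

```ini
[cpu]
core=normal
cycles=fixed 12000   ; pin the emulated CPU speed for a timing-sensitive game

[render]
aspect=true          ; correct the aspect ratio on modern widescreen displays
scaler=normal2x

[autoexec]
mount c .
c:
game.exe
```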
Sometimes the process of digging through old code in old games leads to surprising discoveries, but easter eggs, such as the hidden message discovered inside Dungeon Keeper, are rare. According to the team’s specialists, these discoveries are not the most appealing thing about the job. What makes GOG’s brand of digital archeology *fun*, they say, is actually the need for creative experimentation. “You can’t just go through the motions,” explains a GOG engineer. “System compatibility tools are a brilliant go-to, but we’ve had to learn to expect the unexpected.”
When prompted to provide examples, dozens of anecdotes are forthcoming. The restoration of Airline Tycoon, for example, required the team to extract the raw language assets from a multilingual Mac version and port them into the English-only PC build they’d received, thereby providing the same experience for all users. Games which once required switching physical CDs call for novel workarounds to eliminate pointless dialogue prompts. One title completely crashed the team’s Cyrillic systems, and required painstaking patch development.
“There are a few games out there that are only playable with community-made fixes and patches nowadays. In several cases, we’ve been able to get in touch with mod creators to implement their select technical fixes into our releases. Whenever we do this, it’s a must for us to get in touch, get their permission first, and offer a token of appreciation as a thank you - and nearly everyone is just happy to help.”
The complexity of the team’s solutions has a knock-on effect on the testing phase. “We sometimes reverse-engineer parts of a game,” says Paczyński, “but messing with the binaries can produce unexpected results and put a lot of strain on our QA, so we do our best to keep things simple.”
This is not always easy. As a gaming platform, the PC is extremely fragmented - developers must account for a diverse range of chipsets, graphics cards, and operating systems. The effects of this variation are exacerbated by the age and inflexibility of the software being restored. The number of unknown variables spirals, making the testing process gruelling, and the bugs wildly unpredictable.
Incompatible with the power of modern systems, older titles can exhibit bizarre behaviour which only becomes apparent during testing. S.T.A.L.K.E.R. and Saints Row 2, for example, would go haywire as the frame-rate skyrocketed, resulting in hilarious physics engine malfunctions and even overheated PCs. Carmageddon was plagued by inexplicable crashes, and its tortuous three-month stint in QA still evokes painful memories among GOG’s testers.
“There’s also a lot of work involved in keeping things working post-release," says Paczyński. "It’s important for us to never simply fire and forget, and with many of our titles being decades old, we constantly have to pay attention to community feedback and monitor new software and hardware changes to eliminate any problems that can, will, and do, come up.”
The final step is to prepare GOG’s famous ‘game goodies’ - digitised copies of supplementary material including manuals, soundtracks, and original artwork.
This practice originates from the service’s early years as *Good Old Games*. “We wanted gamers to get all the cool stuff they used to receive with boxed editions,” says Paczyński. “Something to look at, browse through, feel the heft - we wanted to recreate that old feeling of ownership.” Many older games, particularly adventures and RPGs, actually relied on these physical add-ins to complete the experience. Manuals frequently included beautifully detailed maps or reams of lore.
This dedication to authenticity can sometimes pose unexpected challenges. The recently released Forgotten Realms titles of the 1980s and early ‘90s, for example, relied on cardboard ‘code wheels’, a primitive form of copy protection which was integrated into the games’ stories. Rather than treat these wheels like DRM and simply eliminate them completely, the team decided to retain the mechanic for the sake of preserving the original experience. Customers therefore receive a printable copy of the code wheel as well as an electronic on-screen version.
Sourcing these materials entails another round of detective work. In addition to their own personal collections, the team frequently turn to eBay and other online marketplaces. Localised auction sites are often used to track down foreign language editions. The best source, however, is the community; collectors, enthusiasts, and fan site webmasters frequently prove to be invaluable allies during a treasure hunt.
“The community will often send us scans of the manuals, maps, or other bonus content. Sometimes we’ll get whole packages mailed to us in the post as well!” laughs Paczyński. “We’ve actually received bonus items and even physical game versions from places like Canada and Germany.” While the process of buying and digitising all these collectibles can be extremely time-consuming, many members of the team share a sense of responsibility when it comes to this aspect of their work.
“There is a strong element of actually preserving game history - digitising and archiving materials that could disappear at any time. Just talking to our Product team, you can tell that it’s really something of a passion project; they collect tons of materials for games that we haven’t released yet (or materials we can’t get the rights to release). “Just in case”, they say, but years from now when all the online links go down, and the printed pages are faded, a digital copy will at least be kept safe somewhere.”
With the legalities settled, available code patched, game goodies digitised and the whole package rigorously tested, the title is ready for its long-awaited rerelease. In many cases, the game will have been out-of-print for literally *decades*. This perilous journey out of licensing hell and back to legal sale raises questions about the monetary value of older games, the role of unsanctioned emulation, and the threats posed by DRM.
As we’ve seen, the games medium faces a unique set of challenges when it comes to preserving its past. The most obvious example is DRM, even the most archaic forms of which can still prove insurmountable for game restorationists. Paczyński recalls the German release of KKND, which went beyond a simple CD check by encrypting the executable itself; the team believe this form of copy protection will be impossible to work around.
Worryingly, however, it seems likely that many of the industry’s recent technological advances - even those regarded as ‘pro-consumer’ - will serve only to make the preservation of *today’s* games harder in the decades to come. Given the inexorable transition from physical media to cloud-based storage, and the now-ubiquitous requirement for a persistent internet connection, it seems likely that modern games will face a whole new set of challenges. Kyle Orland recently explored some of these issues in a great article for Ars Technica.
Another equally pressing issue is our collective attitude toward older games. When the Internet Archive made over 2,500 MS-DOS games freely available to play in web browsers earlier this year, the news was covered with enthusiasm by dozens of major publications. Undoubtedly, it was a remarkable accomplishment - the IA curators used EmDOSBox, a JavaScript build of the same emulator that powers GOG's classics, to preserve these games in a playable state using open web standards.
But, as Dan Whitehead observed on Eurogamer, our collective indifference to the legality of this move demonstrates some of the issues which confront gaming as it grows up as an art form. “The biggest problem that games face as a commercial medium is that there are no ancillary markets and no reliable revenue streams beyond the initial launch," writes Whitehead. "Compare it to film. There, the cinema release is just the start of a film’s commercial life [...] It has what smartly dressed business people call a ‘long tail’, making money for years to come and helping to fund more movies."
Services like GOG play a vital role in challenging preconceptions about old games and in making classics commercially viable once more. There’s a long way still to go, but the more the industry is aware of the challenges faced in the fight to preserve gaming’s heritage, the better.
“The work we do on classic games here at GOG.com is, first and foremost, born from a passion and love for these experiences," says Paczyński. "Gaming was alive and kicking well before the digital distribution era, and it’s important for us to preserve these decades of an evolving art form in as authentic and accessible a way as possible.”
| true | true | true |
We speak to the legal detectives and expert programmers bringing old games back from the dead.
|
2024-10-12 00:00:00
|
2015-09-16 00:00:00
|
article
|
rockpapershotgun.com
|
Rock Paper Shotgun
| null | null |
|
30,347,870 |
https://hackernoon.com/the-internet-computer-provides-a-solution-to-platform-risk
|
The Internet Computer Provides a Solution to Platform Risk | HackerNoon
|
Cryptonomicon
|
The Internet Computer is a new computing platform that uniquely enables developers to reap the benefits of blockchain technology without sacrificing performance. This is the second article in a series of six articles that outline why developers should build their applications on the Internet Computer. The first article, which briefly explains what the Internet Computer is, can be found here.
One of the core dynamics in technology is that there are platforms and there are applications that are built on top of those platforms. Under this dynamic, application developers shoulder a unique risk—commonly referred to as “platform risk”—that the platform upon which they’ve built will revoke their access at any time, for any reason. While this risk has always existed, it has become particularly acute in recent years as major technology platforms have slowly revoked or limited access to their APIs. Today, it represents a major problem for platform and application developers alike as few people are willing to build on top of new platforms, and those who do know that they face an existential threat to their business.
The Internet Computer provides a unique solution to this problem: it has a feature that allows (but does not require) developers to make their software’s APIs *irrevocable*. Platform developers who utilize this feature can attract people to build applications on top of their platform because it allows them to plausibly promise that their platform will remain open forever. And application developers can have peace of mind knowing that the foundation upon which they are building their business is not made of sand.
At the beginning of a platform’s life, its relationship with the applications built on top of it is mutually beneficial: the applications attract users to join the platform, and the platform provides applications with access to users or data. This dynamic can create a positive feedback loop commonly referred to as the “flywheel effect.” The flywheel effect is a phenomenon where the growth of a new platform compounds because the applications built on top of it attract new users to the platform, which in turn attracts developers to build new applications.
Facebook, LinkedIn, and Twitter are all prime examples of platforms that grew quickly because of the applications built on top of them. Games like Farmville helped draw in tens of millions of users to Facebook, and kept them engaged on the platform. And dozens of third-party websites drove traffic to the platforms through their APIs.
Indeed, in a 2007 interview, one of Twitter’s founders (Biz Stone) explained just how important applications were to the platform’s early success: “The API has been arguably the most important, or maybe even inarguably, the most important thing we’ve done with Twitter. It has allowed us, first of all, to keep the service very simple and create a simple API so that developers can build on top of our infrastructure and come up with ideas that are way better than our ideas, and build things like Twitterrific, which is just a beautiful elegant way to use Twitter that we wouldn’t have been able to get to, being a very small team. So, the API, which has easily 10 times more traffic than the website, has been really very important to us.”
When a platform reaches maturity, its relationship with the applications built on top of it begins to change. Historically, when a platform reaches maturity, the company behind it changes gears from trying to grow the platform to trying to maximize its profits. This typically involves extracting rent from the applications built on top of the platform—or removing them from the platform entirely.
Twitter, for example, changed its API policy in 2012 to throttle the way certain application developers could use the platform’s APIs. They are not alone: Facebook and LinkedIn have each also famously revoked API access from countless applications that were built on top of their platforms once they decided that they no longer needed their help to grow. These platforms thus turned their backs on the same applications that helped them grow so quickly in their youth.
The decisions made by Twitter, Facebook, and LinkedIn have eroded trust that new platforms will remain open once they reach a certain threshold of success. This lack of trust discourages entrepreneurs from building applications and investors from funding the ones that are built: they know that building on top of platforms is akin to building on sand. With few applications being built on new platforms, those platforms struggle to grow the way the tech behemoths did in the mid-2000s.
The Internet Computer has a feature that allows platform developers to plausibly promise they will not revoke access to their platform’s APIs. If a developer designates their platform’s APIs as “permanent,” the Internet Computer prevents the developer from later revoking access to those APIs, or even constructively revoking API access by degrading the functionality they provide.
This feature is uniquely enabled by the Internet Computer’s architecture. The Internet Computer is a distributed network of independent data centers that uses something called the Internet Computer Protocol to create what is effectively a single world computer. That protocol can be thought of as a list of rules that must be followed by the data centers that comprise the network. One such rule is that when a canister’s APIs have been designated permanent, the data centers that comprise the network *automatically reject* any updates to the canister that would revoke access to its APIs.
Separately, the Internet Computer has a robust governance system that allows a diverse group of stakeholders to make changes to its composition or protocol. Anyone is allowed to participate in governance: the only requirement is that you must “stake”—i.e., lock up—the Internet Computer’s native token (ICP) to receive a vote. This governance system can be used to revert changes to canisters that constructively revoke APIs that have been designated permanent.
*Sonic*. Sonic is a decentralized exchange built entirely on the Internet Computer. Sonic’s functionality is similar to Uniswap’s on Ethereum. However, unlike Uniswap—where a simple transaction can cost over $40 in fees alone—transactions on Sonic are virtually free thanks to the Internet Computer’s unique performance characteristics.
Decentralized exchanges like Sonic are useful because they allow users to exchange different types of tokens for one another. For example, a user could exchange a token that is pegged to the US Dollar (such as USDC) for one that is pegged to the Euro (such as EURT). Or a user could exchange one application’s governance token for the governance token of another application.
If Sonic’s APIs are designated permanent, developers could confidently incorporate Sonic into their application. This would enable application developers to offer a feature that would allow payments to be made in one token and received in a totally different token. For example, a merchant who sells digital goods for an online game could allow users to make purchases in any token and have Sonic automatically convert that token to one that is pegged to the merchant’s native currency (such as USDC).
*OpenChat*. OpenChat is a chat application built entirely on the Internet Computer. In its current form, OpenChat is akin to WhatsApp or Telegram. But the developers behind OpenChat intend to incorporate features into the platform that are uniquely enabled by blockchain technology—such as allowing users to send money to each other via messages.
If OpenChat’s APIs are designated permanent, developers could confidently incorporate OpenChat into their applications. This means that developers who build on the Internet Computer could add a chat feature to their application by simply calling OpenChat’s APIs, instead of building one from scratch.
**Disclosure**: *The author of this article owns ICP, which is the native token for the Internet Computer*.
| true | true | true |
The Internet Computer is a new computing platform that provides a unique solution to platform risk.
|
2024-10-12 00:00:00
|
2022-02-15 00:00:00
|
article
|
hackernoon.com
|
Hackernoon
| null | null |
|
92,664 |
http://discovermagazine.com/2007/nov/the-man-who-imagined-wormholes-and-schooled-hawking
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,760,191 |
https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx
|
The Aletheia Framework®
| null |
The Aletheia Framework is our toolkit for ethics and trustworthiness in artificial intelligence that we believe is too useful to keep to ourselves. So, we’ve made it freely available to everyone.
We’re using it in our business and believe it can help any organisation navigate the day-to-day intricacies of applying AI in a way that can build public trust in the technology.
Artificial intelligence ethics is a complex area and, while The Aletheia Framework hasn’t solved all its challenges, it can help reassure organisations, people and communities that the ethical implications of an AI have been fully considered; it is as fair as possible; and makes trustworthy decisions. Not only could greater use of ethical artificial intelligence help our world recover from Covid-19, but it will help us prepare for the opportunities of the digital future. However, the potential of artificial intelligence to support the health, wealth and growth of society can only be realised with public trust.
| true | true | true |
Title
|
2024-10-12 00:00:00
|
2020-12-14 00:00:00
|
website
|
rolls-royce.com
|
rolls-royce.com
| null | null |
|
6,607,300 |
http://instagram-business.tumblr.com/post/64973486231/a-look-at-ads-on-instagram
|
A Look at Ads on Instagram
|
Instagram-Business
|
# A Look at Ads on Instagram
A few weeks ago, we shared our plans to introduce advertising on Instagram. Today, we want to provide a few more details about exactly what ads on Instagram will look like.
If you’re in the United States, you’ll see the sample ad above sometime in the coming week. This is a one-time ad from the Instagram team that’s meant to give you a sense for the look and feel of the ads you will see.
You’ll know a photo or video is an advertisement when you see the “Sponsored” label where the time stamp normally would be. Tap the label to learn more about how advertising works on Instagram. If you have other questions about how advertising on Instagram works, you can learn more here.
We want ads to be creative and engaging, so we’re starting with just a handful of brands that are already great members of the Instagram community. We’ll proceed slowly and let you know when we’re ready to expand, continuing to partner with brands whose content shines.
In the meantime, all businesses can use Instagram by creating an account on the platform. All you need is a mobile device, username and profile image.
Instagram is the place for brands to share beautiful and captivating photos and videos that people can’t see anywhere else.
Stay tuned to this blog to see how a few brands are already crafting eye-catching original content and inspiring their customers and followers to do the same.
We recommend exploring the stories and best practices featured here, such as these profiles of some of our ad launch partners:
How Lexus built excitement for the 2014 Lexus IS with a video made from Instagram photos
How PayPal partners with guest Instagrammers to bring its services to life
We look forward to more businesses joining the Instagram community and sharing moments that capture the essence of their brands.
| true | true | true |
A few weeks ago, we shared our plans to introduce advertising on Instagram. Today, we want to provide a few more details about exactly what ads on Instagram will look like. If you're in the United...
|
2024-10-12 00:00:00
|
2013-10-24 00:00:00
|
article
|
tumblr.com
|
Tumblr
| null | null |
|
19,660,535 |
https://www.ruby-lang.org/en/news/2019/03/31/support-of-ruby-2-3-has-ended/
|
Ruby
| null |
Posted by antonpaisov on 31 Mar 2019
We announce that all support of the Ruby 2.3 series has ended.
After the release of Ruby 2.3.7 on March 28, 2018, the support of the Ruby 2.3 series was in the security maintenance phase. Now, after one year has passed, this phase has ended. Therefore, on March 31, 2019, all support of the Ruby 2.3 series ends. Security and bug fixes from more recent Ruby versions will no longer be backported to 2.3. There won’t be any patches of 2.3 either. We highly recommend that you upgrade to Ruby 2.6 or 2.5 as soon as possible.
## About currently supported Ruby versions
### Ruby 2.6 series
Currently in normal maintenance phase. We will backport bug fixes and release with the fixes whenever necessary. And, if a critical security issue is found, we will release an urgent fix for it.
### Ruby 2.5 series
Currently in normal maintenance phase. We will backport bug fixes and release with the fixes whenever necessary. And, if a critical security issue is found, we will release an urgent fix for it.
### Ruby 2.4 series
Currently in security maintenance phase. We will never backport any bug fixes to 2.4 except security fixes. If a critical security issue is found, we will release an urgent fix for it. We are planning to end the support of the Ruby 2.4 series on March 31, 2020.
| true | true | true | null |
2024-10-12 00:00:00
|
2019-03-31 00:00:00
| null | null |
ruby-lang.org
|
ruby-lang.org
| null | null |
9,028,635 |
http://www.theguardian.com/info/developer-blog/2015/feb/10/what-to-listen-to-next-jq-to-the-rescue
|
What to listen to next? jq to the rescue!
|
Rupert Bates
|
Recently I was looking for some new music to listen to and thought I’d check the Guardian to see what had received good reviews. When I went on to the site though I found it a bit hard to narrow down the reviews to ones I was interested in — albums rather than live reviews, which had received at least 4 stars.
Naturally I turned to the Content API for help and this also seemed like a good opportunity to learn a tool I’d been interested in for a while: jq.
Jq, according to their GitHub site, is “a lightweight and flexible command-line JSON processor ... like sed for JSON”.
I think of it as performing the functions that XPath and XSLT do for XML but for JSON (fortunately the syntax is far less verbose than XSLT). There is a fairly decent tutorial on the site, but best of all there is an online tool which allows you to try out jq interactively in a browser.
First though we need some JSON to transform, so over to the Content API.
## CAPI talk
It is a simple enough task to fetch reviews by adding the tone/reviews tag, and by adding the tone/albumreview tag as well we can exclude live reviews and just return albums (I found that using both tags rather than just tone/albumreview alone gave better results because it filtered out some other non-review type articles).
By also adding fields=starRating into our query, we can return the star rating given in the review. Then we will limit our query to the last month by adding date parameters. This is about as far as we can go with the Content API since it doesn’t support querying by star rating.
Our final query string now looks like this:
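Pieced together from the parameters described above (the dates and API key here are placeholders I've invented), the query would have read something like:

```
http://content.guardianapis.com/search?tag=tone/reviews,tone/albumreview&fields=starRating&from-date=2015-01-10&to-date=2015-02-10&api-key=YOUR-API-KEY
```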
...and the response looks like this (trimmed to two results for brevity):
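A representative response has the following shape (titles, IDs, and values below are invented; only the structure matters):

```json
{
  "response": {
    "status": "ok",
    "total": 2,
    "results": [
      {
        "id": "music/2015/feb/05/example-album-a-review",
        "webTitle": "Example Album A review",
        "webUrl": "http://www.theguardian.com/music/2015/feb/05/example-album-a-review",
        "fields": { "starRating": "5" }
      },
      {
        "id": "music/2015/feb/03/example-album-b-review",
        "webTitle": "Example Album B review",
        "webUrl": "http://www.theguardian.com/music/2015/feb/03/example-album-b-review",
        "fields": { "starRating": "3" }
      }
    ]
  }
}
```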
## Jq for the music
Next we need to take this JSON and do something interesting with it. If you recall, we were trying to find albums which received good reviews — ie. those with a star rating of 4 or above. This is where jq comes in as it allows us to take the JSON returned by the Content API and very quickly and easily filter it based on JSON values.
The general idea will be to issue a curl command to retrieve the JSON and then pipe it to jq to query, transform and analyse it.
As a first step, you can see from the JSON above that there is quite a bit of metadata at the root of the response which we are not really interested in for the purposes of this exercise. Using jq we can quite easily throw all this away and just focus on what we are interested in — the results array.
The output from jq looks like this (again trimmed for brevity):
Let’s have a look in more detail at what is happening here. Along with the JSON from the Content API which is piped into jq, we are also passing a filter ‘.response.results[]’ — this describes a path into our JSON structure. The initial . is the root of the object, then we navigate into the response element and access the elements of the results array using ‘results[]’.
This is great because we now have a really flat, clean JSON structure to work with and it is much easier to see what is going on.
The next thing to do is to filter the reviews we’ve returned based on their star rating. Unfortunately the Content API returns the star rating as a string — it would be much nicer if it was a number as we could then search for reviews where the rating is greater than 3 rather than where it is “4” or “5”. Fortunately, an addition to our jq filter will allow us to do just this (from now on I will just show the actual filter we pass to jq rather than the full command):
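Judging by the description in the paragraphs that follow, the filter was presumably equivalent to this (shown here against a small invented sample so the command is self-contained):

```shell
# Invented sample mimicking the shape of the Content API response.
sample='{"response":{"results":[{"webTitle":"Album A review","fields":{"starRating":"5"}},{"webTitle":"Album B review","fields":{"starRating":"3"}}]}}'

# Flatten each result into a new object, converting the rating string to a number.
echo "$sample" | jq '.response.results[] | {webTitle, starRating: (.fields.starRating | tonumber)}'
```

Against the real API, the same filter would simply be fed from curl rather than echo.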
## Easy pieces
There are a few things to notice here: firstly we are combining our original path filter which pulls out the results array with a second filter using the pipe operator ‘|’ in the same way we might combine commands in a bash shell. This allows us to easily build up arbitrarily complicated pipelines out of simple component filters.
Also of interest is the second filter we have added, which tells jq to transform our input JSON into a new JSON object. The curly brackets denote the root of the new object, and we are copying the webTitle field over as-is. Then we are creating a starRating field at the top level of the new object (rather than nested inside a fields object) and converting this to a number using the ‘tonumber’ function.
The output then looks like this (note that starRating is now a number):
All that is now left to do is to select those reviews with a star rating of more than three. This is easily achieved by adding a select filter onto our jq input:
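Chained onto the pipeline, that select filter presumably looked like this (again run against an invented sample so it works standalone):

```shell
# Invented sample; only the first review should survive the select.
sample='{"response":{"results":[{"webTitle":"Album A review","fields":{"starRating":"5"}},{"webTitle":"Album B review","fields":{"starRating":"3"}}]}}'

echo "$sample" | jq '.response.results[]
  | {webTitle, starRating: (.fields.starRating | tonumber)}
  | select(.starRating > 3)'
```

Run on this sample, only the Album A object is printed.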
## It’s all too much
This returned quite a lot of results. To find out exactly how many, we can use the length function. Note that to use length we have to wrap the preceding filters in square brackets to turn their output into an array.
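Sketched against an invented three-result sample, the counting version of the pipeline would be:

```shell
sample='{"response":{"results":[{"webTitle":"Album A review","fields":{"starRating":"5"}},{"webTitle":"Album B review","fields":{"starRating":"3"}},{"webTitle":"Album C review","fields":{"starRating":"4"}}]}}'

# The [...] wrapper collects the stream into an array so length can count it.
echo "$sample" | jq '[.response.results[]
  | {webTitle, starRating: (.fields.starRating | tonumber)}
  | select(.starRating > 3)] | length'
# → 2
```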
57 results apparently — rather too many to get through in an afternoon! So let’s just concentrate on the five star reviews — there are only six of them, and here they are:
So that’s my listening for the afternoon sorted out: a heady mix of classical and Napalm Death, with some Dylan and Natalie Prass thrown in for good measure.
## It’s all over now baby blue
I found working with jq a real pleasure once I grasped the basics of chaining filters to select, transform and analyse JSON. JSON is such a central part of so many applications these days that it is great to have such a powerful tool at our disposal. The syntax is beautifully terse — pretty much anything can be accomplished in a one-liner and the concept of mapping and filtering to transform inputs will feel totally natural to anyone with a functional programming background.
Here is my final script for retrieving five star reviews from the Content API. In the final version I’ve added an additional filter to remove any results where the star rating is missing and also included the web url with the output.
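Going by that description, the jq stage of the final script presumably resembled the following (demonstrated on an invented sample in place of the curl call, so it runs standalone):

```shell
# Invented sample; note the entry with no star rating, which the extra select drops.
sample='{"response":{"results":[
  {"webTitle":"Album A review","webUrl":"http://example.com/a","fields":{"starRating":"5"}},
  {"webTitle":"Album B review","webUrl":"http://example.com/b","fields":{}},
  {"webTitle":"Album C review","webUrl":"http://example.com/c","fields":{"starRating":"4"}}]}}'

# Drop unrated results, flatten with the web url included, keep five-star reviews only.
echo "$sample" | jq '.response.results[]
  | select(.fields.starRating != null)
  | {webTitle, webUrl, starRating: (.fields.starRating | tonumber)}
  | select(.starRating == 5)'
```

In the real script, the sample would be replaced by a curl call to the Content API search endpoint.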
| true | true | true |
Search the Guardian for five star music reviews directly from your command-line
|
2024-10-12 00:00:00
|
2015-02-10 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
3,036,916 |
http://www.ornl.gov/info/press_releases/get_press_release.cfm?ReleaseNumber=mr20110914-00
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,688,858 |
http://blog.codinghorror.com/building-a-pc-part-i/
|
Building a PC, Part I
|
Jeff Atwood
|
Over the next few days, I'll be building Scott Hanselman's computer. My goal today is more modest: **build a minimal system that boots**.
I'd like to dispel the myth that building computers is risky, or in any way difficult or complicated. **If you can put together a LEGO kit, you can put together a PC from parts.** It's dead easy, like snapping together so many LEGO bricks. Well, mostly. Have you seen how complicated some of those LEGO kits are?
Granted, building computers isn't for everybody. There are plenty of other things you might want to do with your time, like, say, spending time with your children, or finding a cure for cancer. That's why people buy pre-assembled computers from Dell. But if you need fine-grained control over *exactly* what's inside your PC, if you desire a deeper understanding of how the hardware fits together and works, then building a PC is a fun project to take on. You can easily match or beat Dell's prices in most cases, while building a superior rig -- and you can learn something along the way, too.
Here's the complete set of parts we ordered, per the component list. The CPU and memory boxes aren't shown, unfortunately, because I had already opened those by the time I took this photo. Whoops!
All you need is a few basic tools to build this PC. I typically use needle-nose pliers, wire cutters, and a small Phillips screwdriver.
Before we get started, let me share a few key things I've learned while building PCs:
- **Computer parts are surprisingly durable.** They aren't fragile. You don't have to baby them. So often I see people handle computer parts as if they're sacred, priceless relics. While I don't think you should play "catch" with your new Core 2 Quad processor, it's also not going to explode into flames if you look at it the wrong way. You don't have to tiptoe around the build. Just be responsible and use common sense. I've done some appalling things to computer hardware in my day, truly boneheaded stuff, and I think I've broken all of two or three items in the last 10 years.
- **The risk of static discharge is overblown.** I *never* wear anti-static wristbands, and I've yet to electrocute any components with static electricity. Never. Not once. However, I always touch a metal surface before handling computer components -- and that's a good habit for you to cultivate as well.
- **Be patient, and don't force it.** Those rare times I've damaged components, it's because I rushed myself and forced something that I thought should fit -- despite all the warning signs. I've learned through hard experience that "maybe I need to use lots of additional force" is *never* the right answer when it comes to building PCs. Take a deep breath. Count to ten. Refer to the manual, and double-check your work.
I always build up the motherboard first. Place the motherboard on top of the anti-static bag it came in so it's easier to work on. Slot in the **CPU** and snap in the **memory sticks**. We're using four sticks here, so every slot is populated. However, if you're only using two sticks of memory, be sure they are in the correct paired slots for dual-channel operation. If you need advice, the motherboard manual is a good reference for basic installation steps.
Continue building up the motherboard by installing the **CPU cooler**. I strongly recommend buying an aftermarket CPU cooler based on a heatpipe tower design, as they *wildly* outperform the stock Intel coolers. This particular model we chose for Scott's build is the Scythe Mine, but I'm also a fan of the Scythe Infinity and Scythe Ninja Plus. (You can see the Ninja Plus on my work rig.)
It's important to install the CPU cooler correctly, otherwise you risk frying your CPU. Refer closely to the heatsink instructions. Don't forget to place a bit of the heatsink paste (included with the cooler) on the surface of the CPU before installing. These larger heatsinks can be quite heavy, so be sure you've followed the installation instructions to the letter and secured it firmly to the motherboard. Check the orientation of the heatsink so the fan blows "out" if possible, e.g., towards the back of the motherboard, where the case exhaust fans usually are.
Now let's **build up the case** to accept the motherboard. We chose the Antec P182 case for Scott's build. This case is unique; it's a collaborative venture between the well-known case vendor Antec and Silent PC Review, one of my favorite PC enthusiast websites.
This is the second version of the case, which reflects a number of design tweaks over the original P180. It's a little expensive, but the P182 oozes quality and attention to detail. It's probably the single best designed case I've ever worked on. But don't take my word for it; see reviews at AnandTech and SilentPCReview.
Some cases are sold with power supplies, but the higher end cases, such as the P182, typically are not. For Scott's build, we chose the Corsair HX series power supply, which is a rebranded and tweaked Seasonic. It's considered one of the best quiet and efficient power supplies on the market, which is why it tops the list of recommended PSUs at SilentPCReview.
I opened the opposite side of the case to gain access to the PSU cage from both sides, installed the PSU in the cage, and threaded the power cables up through the opening in the middle.
If you have cats, like we do, you have curious cat helpers. Unfortunately, cat helpers aren't all that... *helpful*.
Now **install the backplate** included with the motherboard. Every backplate is different because every motherboard is different. It's held in by pressure; just snap it in firmly around the edges.
It's finally time to **place the motherboard in the case**. Clear room in the case compartment by moving any errant cables out of the way and stowing them. Make sure the screw holes on the motherboard line up with the pre-installed screw mount standoffs in the case. In our P182, everything matched up perfectly out of the box.
Angle the motherboard down slowly and line up the ports to the backplate, then gently let the motherboard down to rest against the standoffs. Loosely line up the motherboard screw holes to the motherboard standoffs.
Find the packet of screws included with the case, and use the appropriate screws to **secure the motherboard to the case standoffs**.
Now let's **connect the power supply to the motherboard**. There are *two* power connectors on modern motherboards, so be sure you've connected them both. Don't worry, the connectors are keyed; you can't install them incorrectly and blow up your PC. As you can see here, I threaded the power connectors along the back side of the motherboard platform. That's one of the many nifty little design features of the P182 case.
Before we can boot up, we need to **connect the power and reset switches** so they work. This part is a little fiddly. Find the cable with the labelled power, reset, and LED connectors from the case, then refer to the motherboard manual to see where the appropriate motherboard front panel connector pins are.
Connect each front panel wire to the specific motherboard front panel pins individually. Make sure you connect them to the right location, but orientation of these connectors doesn't matter. This is where the needlenose pliers come in handy unless you have nimble (and tiny) fingers. Why this isn't a universally standard keyed block connector by now is beyond me.
We need some kind of video output to see if our computer can boot, so let's **install a video card**. Scott's not a hardcore gamer, so I went for something midrange, a set of two NVIDIA 8600GTS cards. They're an excellent blend of performance and the latest DX10 and high-definition features, while using relatively little power.
Don't forget to connect the 6-pin video card power connector if your video card requires it! This is a common mistake that I've made more than once. Our power supply has modular connectors, so I snapped in one of the two 6-pin power connectors and threaded it up to the video card.
We're ready for the moment of truth: **does it boot?** I attached a power cord to the power supply, hooked up a utility 15" LCD I keep around for testing, and then pressed the power button.
Success! I know "reboot and select proper boot device" doesn't look like much, but it means everything is working. We've just built a minimal PC that boots up. It's a small step that we'll build on tomorrow.
Getting this system from a pile of parts to bootable state took **about two hours**. Like I promised -- easy! Writing it up is taking almost as long as actually doing it. This was a slow build for me because I was extra cautious with Scott's parts, and I was stopping to take frequent pictures. With some practice, it's possible to build a PC much more quickly-- even in under ten minutes.
| true | true | true |
Over the next few days, I'll be building Scott Hanselman's computer. My goal today is more modest: build a minimal system that boots. I'd like to dispel the myth that building computers is risky, or in any way difficult or complicated. If you can put together a LEGO kit, you
|
2024-10-12 00:00:00
|
2007-07-09 00:00:00
| null |
article
|
codinghorror.com
|
Coding Horror
| null | null |
15,120,251 |
https://statements.pss.gov.au/statements/verifyIdentity?token=xxx
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,200,874 |
https://twitter.com/jonas/status/1784960245376196807
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |
16,080,350 |
https://arstechnica.com/science/2017/08/when-it-comes-to-controversial-science-a-little-knowledge-is-a-problem/
|
When it comes to controversial science, a little knowledge is a problem
|
John Timmer
|
For a lot of scientific topics, there's a big gap between what scientists understand and what the public thinks it knows. For a number of these topics—climate change and evolution are prominent examples—this divide develops along cultural lines, typically religious or political identity.
It would be reassuring to think that the gap is simply a matter of a lack of information. Get the people with doubts about science up to speed, and they'd see things the way that scientists do. Reassuring, but wrong. A variety of studies have indicated that the public's doubts about most scientific topics have nothing to do with how much they understand that topic. And a new study out this week joins a number of earlier ones in indicating that scientific knowledge makes it easier for those who are culturally inclined to reject a scientific consensus.
## What’s the consensus?
The new work was done by two social scientists at Carnegie Mellon University, Caitlin Drummond and Baruch Fischhoff. They relied on a large, regular survey called the General Social Survey, which attempts to capture the public's perspective on a large variety of issues (they used data from the 2006 and 2010 iterations of the survey). The survey included a number of questions on general education and scientific education, as well as a number of questions that determined basic scientific literacy. In addition, it asked for opinions on a number of scientific issues: acceptance of the evidence for the Big Bang, human evolution, and climate change; thoughts on the safety of GMOs and nanotechnology; and the degree to which the government should fund stem cell research.
| true | true | true |
For those on the wrong side of an ideological divide, scientific knowledge hurts.
|
2024-10-12 00:00:00
|
2017-08-22 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
5,840,000 |
http://www.aeinstein.org/organizations/org/FDTD.pdf
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,086,135 |
https://getstream.io/blog/how-a-go-program-compiles-down-to-machine-code/
|
How a Go Program Compiles down to Machine Code
|
Koen V
|
Here at Stream, we use Go extensively, and it has drastically improved our productivity. We have also found the speed to be outstanding, and since we started using Go we have implemented mission-critical portions of our stack, such as our in-house storage engine powered by gRPC, Raft, and RocksDB. Today we are going to look at the Go 1.11 compiler and how it compiles your Go source code down to an executable, to gain an understanding of how the tools we use every day work. We will also see why Go code is so fast and how the compiler helps. We will take a look at three phases of the compiler:
- The scanner, which converts the source code into a list of tokens, for use by the parser.
- The parser, which converts the tokens into an Abstract Syntax Tree to be used by code generation.
- The code generation, which converts the Abstract Syntax Tree to machine code.
*Note: The packages we are going to be using (**go/scanner**, **go/parser**, **go/token**, **go/ast**, etc.) are not used by the Go compiler itself, but are mainly provided for use by tools that operate on Go source code. However, the actual Go compiler has very similar semantics. It does not use these packages because the compiler was once written in C and converted to Go code, so the actual Go compiler is still reminiscent of that structure.*
## Scanner
The first step of every compiler is to break up the raw source code text into tokens, which is done by the scanner (also known as the lexer). Tokens can be keywords, strings, variable names, function names, etc. Every valid program "word" is represented by a token. In concrete terms for Go, this might mean we have a token "package", "main", "func" and so forth.

Each token is represented by its position, type, and raw text in Go. Go even allows us to execute the scanner ourselves in a Go program by using the **go/scanner** and **go/token** packages. That means we can inspect what our program looks like to the Go compiler after it has been scanned. To do so, we are going to create a simple program that prints all tokens of a Hello World program. The program will look like this:

https://gist.github.com/astrotars/fb5d7350f2f052d8f50794c010285019

We will create our source code string and initialize the **scanner.Scanner** struct which will scan our source code. We call **Scan()** as many times as we can and print the token's position, type, and literal string until we reach the End of File (**EOF**) marker. When we run the program, it will print the following:

https://gist.github.com/koesie10/e312024b5f52795756e81a95906bd8e1

Here we can see what the Go parser uses when it compiles a program. We can also see that the scanner adds semicolons where those would usually be placed in other programming languages such as C. This explains why Go does not need semicolons: they are placed intelligently by the scanner.
## Parser
After the source code has been scanned, it will be passed to the parser. The parser is the phase of the compiler that converts the tokens into an Abstract Syntax Tree (AST). The AST is a structured representation of the source code. In the AST we will be able to see the program structure, such as functions and constant declarations.

Go has again provided us with packages to parse the program and view the AST: **go/parser** and **go/ast**. We can use them like this to print the full AST:

https://gist.github.com/astrotars/234cc8ff0aa75067c22607d633d2e1f0

Output:

https://gist.github.com/astrotars/85f429cd024544f3b73dfa6c6d81c15d

In this output, you can see that there is quite some information about the program. In the **Decls** field, there is a list of all declarations in the file, such as imports, constants, variables, and functions. In this case, we only have two: our import of the **fmt** package and the main function. To digest it further, we can look at this diagram, which is a representation of the above data, but only includes types, with the code that corresponds to the nodes in red.

The main function is composed of three parts: the name, the declaration, and the body. The name is represented as an identifier with the value main. The declaration, specified by the Type field, would contain a list of parameters and the return type if we had specified any. The body consists of a list of statements with all lines of our program, in this case only one.

Our single **fmt.Println** statement consists of quite a few parts in the AST. The statement is an **ExprStmt**, which represents an expression. An expression can, for example, be a function call, as it is here, or a literal, a binary operation (for example addition or subtraction), a unary operation (for instance negating a number), and many more. Anything that can be used in a function call's arguments is an expression. Our **ExprStmt** contains a **CallExpr**, which is our actual function call.
This again includes several parts, most important of which are **Fun** and **Args**. **Fun** contains a reference to the function call; in this case, it is a **SelectorExpr**, because we select the **Println** identifier from the **fmt** package. However, in the AST it is not yet known to the compiler that **fmt** is a package; it could also be a variable in the AST. **Args** contains a list of expressions which are the arguments to the function. In this case, we have passed a literal string to the function, so it is represented by a **BasicLit** with type **STRING**.

It is clear that we are able to deduce a lot from the AST. That means that we can also inspect the AST further and find, for example, all function calls in the file. To do so, we are going to use the **Inspect** function from the **ast** package. This function will recursively walk the tree and allow us to inspect the information from all nodes. To extract all function calls, we are going to use the following code:

https://gist.github.com/koesie10/ba6af59e0dd8213260e5944c1464b0b1

What we are doing here is looking for all nodes and checking whether they are of type ***ast.CallExpr**, which we just saw represents our function call. If they are, we are going to print the name of the function, which was present in the **Fun** member, using the **printer** package. The output for this code will be:

**fmt.Println**

This is indeed the only function call in our simple program, so we have found all function calls. After the AST has been constructed, all imports will be resolved using the GOPATH, or for Go 1.11 and up possibly modules. Then, types will be checked, and some preliminary optimizations are applied which make the execution of the program faster.
## Code generation
After the imports have been resolved and the types have been checked, we are certain the program is valid Go code and we can start the process of converting the AST to (pseudo) machine code.

The first step in this process is to convert the AST to a lower-level representation of the program, specifically into Static Single Assignment (SSA) form. This intermediate representation is not the final machine code, but it is much closer to it. SSA has a set of properties that make it easier to apply optimizations, most important of which is that a variable is always defined before it is used and each variable is assigned exactly once.

After the initial version of the SSA has been generated, a number of optimization passes will be applied. These optimizations are applied to certain pieces of code that can be made simpler or faster for the processor to execute. For example, dead code, such as **if (false) { fmt.Println("test") }**, can be eliminated because it will never execute. Another example is that certain nil checks can be removed because the compiler can prove that they will never be false.

Let's now look at the SSA and a few optimization passes of this simple program:
package main

import "fmt"

func main() {
	fmt.Println(2)
}
As you can see, this program has only one function and one import. It will print 2 when run. However, this sample will suffice for looking at the SSA.

*Note: Only the SSA for the main function will be shown, as that is the interesting part.*

To show the generated SSA, we will need to set the **GOSSAFUNC** environment variable to the function we would like to view the SSA of, in this case main. We will also need to pass the -S flag to the compiler, so it will print the code and create an HTML file. We will also compile the file for Linux 64-bit, to make sure the machine code will be equal to what you will be seeing here. So, to compile the file we will run:

**$ GOSSAFUNC=main GOOS=linux GOARCH=amd64 go build -gcflags "-S" simple.go**

It will print all the SSA, but it will also generate an ssa.html file which is interactive, so we will use that. When you open ssa.html, a number of passes will be shown, most of which are collapsed. The start pass is the SSA that is generated from the AST; the lower pass converts the non-machine-specific SSA to machine-specific SSA; and genssa is the final generated machine code. The start phase's code will look like this:
b1:
  v1 = InitMem <mem>
  v2 = SP <uintptr>
  v3 = SB <uintptr>
  v4 = ConstInterface <interface {}>
  v5 = ArrayMake1 <[1]interface {}> v4
  v6 = VarDef <mem> {.autotmp_0} v1
  v7 = LocalAddr <*[1]interface {}> {.autotmp_0} v2 v6
  v8 = Store <mem> {[1]interface {}} v7 v5 v6
  v9 = LocalAddr <*[1]interface {}> {.autotmp_0} v2 v8
  v10 = Addr <*uint8> {type.int} v3
  v11 = Addr <*int> {"".statictmp_0} v3
  v12 = IMake <interface {}> v10 v11
  v13 = NilCheck <void> v9 v8
  v14 = Const64 <int> [0]
  v15 = Const64 <int> [1]
  v16 = PtrIndex <*interface {}> v9 v14
  v17 = Store <mem> {interface {}} v16 v12 v8
  v18 = NilCheck <void> v9 v17
  v19 = IsSliceInBounds <bool> v14 v15
  v24 = OffPtr <*[]interface {}> [0] v2
  v28 = OffPtr <*int> [24] v2
  If v19 → b2 b3 (likely) (line 6)
b2: ← b1
  v22 = Sub64 <int> v15 v14
  v23 = SliceMake <[]interface {}> v9 v22 v22
  v25 = Copy <mem> v17
  v26 = Store <mem> {[]interface {}} v24 v23 v25
  v27 = StaticCall <mem> {fmt.Println} [48] v26
  v29 = VarKill <mem> {.autotmp_0} v27
  Ret v29 (line 7)
b3: ← b1
  v20 = Copy <mem> v17
  v21 = StaticCall <mem> {runtime.panicslice} v20
  Exit v21 (line 6)
This simple program already generates quite a lot of SSA (35 lines in total). However, a lot of it is boilerplate and quite a lot of it can be eliminated (the final SSA version has 28 lines and the final machine code version has 18 lines).

Each v is a new variable and can be clicked to view where it is used. The **b**'s are blocks; in this case, we have three blocks: **b1**, **b2**, and **b3**. **b1** will always be executed. **b2** and **b3** are conditional blocks, which can be seen from the **If v19 → b2 b3 (likely)** at the end of **b1**.

We can click the **v19** in that line to view where **v19** is defined. We see it is defined as **IsSliceInBounds <bool> v14 v15**, and by viewing the Go compiler source code we can see that **IsSliceInBounds** checks that **0 <= arg0 <= arg1**. We can also click **v14** and **v15** to view how they are defined, and we will see that **v14 = Const64 <int> [0]**; **Const64** is a constant 64-bit integer. **v15** is defined as the same but as **1**. So, we essentially have **0 <= 0 <= 1**, which is obviously **true**.

The compiler is also able to prove this, and when we look at the **opt** phase ("machine-independent optimization"), we can see that it has rewritten **v19** as **ConstBool <bool> [true]**. This will be used in the **opt deadcode** phase, where **b3** is removed because **v19** from the conditional shown before is always true.

We are now going to take a look at another, simpler, optimization made by the Go compiler after the SSA has been converted into machine-specific SSA, so this will be machine code for the amd64 architecture. To do so, we are going to compare lower to lowered deadcode. This is the content of the lower phase:
b1: BlockInvalid (6)
b2:
  v2 (?) = SP <uintptr>
  v3 (?) = SB <uintptr>
  v10 (?) = LEAQ <*uint8> {type.int} v3
  v11 (?) = LEAQ <*int> {"".statictmp_0} v3
  v15 (?) = MOVQconst <int> [1]
  v20 (?) = MOVQconst <uintptr> [0]
  v25 (?) = MOVQconst <*uint8> [0]
  v1 (?) = InitMem <mem>
  v6 (6) = VarDef <mem> {.autotmp_0} v1
  v7 (6) = LEAQ <*[1]interface {}> {.autotmp_0} v2
  v9 (6) = LEAQ <*[1]interface {}> {.autotmp_0} v2
  v16 (+6) = LEAQ <*interface {}> {.autotmp_0} v2
  v18 (6) = LEAQ <**uint8> {.autotmp_0} [8] v2
  v21 (6) = LEAQ <**uint8> {.autotmp_0} [8] v2
  v30 (6) = LEAQ <*int> [16] v2
  v19 (6) = LEAQ <*int> [8] v2
  v23 (6) = MOVOconst <int128> [0]
  v8 (6) = MOVOstore <mem> {.autotmp_0} v2 v23 v6
  v22 (6) = MOVQstore <mem> {.autotmp_0} v2 v10 v8
  v17 (6) = MOVQstore <mem> {.autotmp_0} [8] v2 v11 v22
  v14 (6) = MOVQstore <mem> v2 v9 v17
  v28 (6) = MOVQstoreconst <mem> [val=1,off=8] v2 v14
  v26 (6) = MOVQstoreconst <mem> [val=1,off=16] v2 v28
  v27 (6) = CALLstatic <mem> {fmt.Println} [48] v26
  v29 (5) = VarKill <mem> {.autotmp_0} v27
  Ret v29 (+7)
In the HTML file, some lines are greyed out, which means they will be removed or changed in one of the next phases. For example, **v15** (**MOVQconst <int> [1]**) is greyed out. By further examining **v15** by clicking on it, we see it is used nowhere else, and **MOVQconst** is essentially the same instruction as we saw before, **Const64**, only machine-specific for **amd64**. So, we are setting **v15** to **1**. However, **v15** is used nowhere else, so it is useless (dead) code and can be eliminated.

The Go compiler applies a lot of these kinds of optimizations. So, while the first generation of SSA from the AST might not be the fastest implementation, the compiler optimizes the SSA into a much faster version. Every phase in the HTML file is a phase where speed-ups can potentially happen.

If you are interested in learning more about SSA in the Go compiler, please check out the Go compiler's SSA source. Here, all operations, as well as optimizations, are defined.
## Conclusion
Go is a very productive and performant language, supported by its compiler and the optimizations it applies. To learn more about the Go compiler, the source code has a great README. If you would like to learn more about why Stream uses Go, and in particular why we moved from Python to Go, please check out our blog post on switching to Go.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-09-25 00:00:00
|
website
|
getstream.io
|
Stream
| null | null |
|
22,231,759 |
https://godotengine.org/article/godot-engine-was-awarded-epic-megagrant
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
10,740,282 |
https://medium.com/@bbrennan/why-we-built-datafire-6adc250210d8#.oyvuiez72
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,833,350 |
https://blog.helpshift.com/blog/in-app-messaging-customer-prosperity
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,451,823 |
https://gist.github.com/borkdude/591e9f2a7453fd0872823c50b3e60130
|
Google cloud function running sci
|
Borkdude
|
Last active March 1, 2020 13:21
const { evalString } = require("@borkdude/sci");

let printlnArgs = [];

function println(...args) {
  printlnArgs.push(args.map(arg => arg.toString()).join(" "));
}

exports.evalClojureExpr = (req, res) => {
  const { text } = req.body;
  try {
    const result = evalString(text, { namespaces: { "clojure.core": { println: println } } });
    let value = [];
    if (printlnArgs.length !== 0) {
      value.push(...printlnArgs);
    }
    if (result !== undefined) {
      value.push(result.toString());
    }
    res.json({
      response_type: "in_channel",
      text: `\`\`\`${value.join("\n")}\`\`\``,
      type: "mrkdwn"
    });
  } catch (error) {
    res.json({
      response_type: "in_channel",
      text: `\`${error.message}\``,
      type: "mrkdwn"
    });
  }
  printlnArgs = [];
};
| true | true | true |
Google cloud function running sci. GitHub Gist: instantly share code, notes, and snippets.
|
2024-10-12 00:00:00
|
2020-02-21 00:00:00
|
article
|
github.com
|
Gist
| null | null |
|
33,646,794 |
https://github.com/arxanas/git-branchless
|
GitHub - arxanas/git-branchless: High-velocity, monorepo-scale workflow for Git
|
Arxanas
|
(This suite of tools is 100% compatible with branches. If you think this is confusing, you can discuss a new name here.)
▼ Jump to installation ▼
▼ Jump to table of contents ▼
`git-branchless` is a suite of tools which enhances Git in several ways:
It **makes Git easier to use**, both for novices and for power users. Examples:
- `git undo`: a general-purpose undo command. See the blog post *git undo: We can do better*.
- The smartlog: a convenient visualization tool.
- `git restack`: to repair broken commit graphs.
- Speculative merges: to avoid being caught off-guard by merge conflicts.
It **adds more flexibility** for power users. Examples:
- Patch-stack workflows: strong support for "patch-stack" workflows as used by the Linux and Git projects, as well as at many large tech companies. (This is how Git was "meant" to be used.)
- Prototyping and experimenting workflows: strong support for prototyping and experimental work via "divergent" development.
- `git sync`: to rebase all local commit stacks and branches without having to check them out first.
- `git move`: the ability to move subtrees rather than "sticks" while cleaning up old branches, not touching the working copy, etc.
- Anonymous branching: reduces the overhead of branching for experimental work.
- In-memory operations: to modify the commit graph without having to check out the commits in question.
- `git next/prev`: to quickly jump between commits and branches in a commit stack.
- `git sw -i/--interactive`: to interactively select a commit to switch to.
It **provides faster operations** for large repositories and monorepos, particularly at large tech companies. Examples:
- See the blog post *Lightning-fast rebases with git-move*.
- Performance tested: benchmarked on torvalds/linux (1M+ commits) and mozilla/gecko-dev (700k+ commits).
- Operates in-memory: avoids touching the working copy by default (which can slow down `git status` or invalidate build artifacts).
- Sparse indexes: uses a custom implementation of sparse indexes for fast commit and merge operations.
- Segmented changelog DAG: for efficient queries on the commit graph, such as merge-base calculation in O(log n) instead of O(n).
- Ahead-of-time compiled: written in an ahead-of-time compiled language with good runtime performance (Rust).
- Multithreading: distributes work across multiple CPU cores where appropriate.
- To my knowledge, `git-branchless` provides the *fastest* implementation of rebase among Git tools and UIs, for the above reasons.
See also the User guide and Design goals.
Undo almost anything:
- Commits.
- Amended commits.
- Merges and rebases (e.g. if you resolved a conflict wrongly).
- Checkouts.
- Branch creations, updates, and deletions.
## Why not `git reflog`?
`git reflog` is a tool to view the previous position of a single reference (like `HEAD`), which can be used to undo operations. But since it only tracks the position of a single reference, complicated operations like rebases can be tedious to reverse-engineer. `git undo` operates at a higher level of abstraction: the entire state of your repository.

`git reflog` also fundamentally can't be used to undo some rare operations, such as certain branch creations, updates, and deletions. See the architecture document for more details.
## What doesn't `git undo` handle?

`git undo` relies on features in recent versions of Git to work properly. See the compatibility chart.

Currently, `git undo` can't undo the following. You can find the design document to handle some of these cases in issue #10.
- "Uncommitting" a commit by undoing the commit and restoring its changes to the working copy.
  - In stock Git, this can be accomplished with `git reset HEAD^`.
  - This scenario would be better implemented with a custom `git uncommit` command instead. See issue #3.
- Undoing the staging or unstaging of files. This is tracked by issue #10 above.
- Undoing back into the *middle* of a conflict, such that `git status` shows a message like `path/to/file (both modified)`, so that you can resolve that specific conflict differently. This is tracked by issue #10 above.
Fundamentally, `git undo` is not intended to handle changes to untracked files.
## Comparison to other Git undo tools
- `gitjk`: Requires a shell alias. Only undoes the most recent command. Only handles some Git operations (e.g. doesn't handle rebases).
- `git-extras/git-undo`: Only undoes commits at the current `HEAD`.
- `git-annex undo`: Only undoes the most recent change to a given file or directory.
- `thefuck`: Only undoes historical shell commands. Only handles some Git operations (e.g. doesn't handle rebases).
Visualize your commit history with the smartlog (`git sl`):

## Why not `git log --graph`?

`git log --graph` only shows commits which have branches attached to them. If you prefer to work without branches, then `git log --graph` won't work for you.

To support users who rewrite their commit graph extensively, `git sl` also points out commits which have been abandoned and need to be repaired (descendants of commits marked with `rewritten as abcd1234`). They can be automatically fixed up with `git restack`, or handled manually.
Edit your commit graph without fear:

## Why not `git rebase --interactive`?

Interactive rebasing with `git rebase --interactive` is fully supported, but it has a couple of shortcomings:

- `git rebase --interactive` can only repair linear series of commits, not trees. If you modify a commit with multiple children, then you have to be sure to rebase all of the other children commits appropriately.
- You have to commit to a plan of action before starting the rebase. For some use-cases, it can be easier to operate on individual commits at a time, rather than an entire series of commits all at once.

When you use `git rebase --interactive` with `git-branchless`, you will be prompted to repair your commit graph if you abandon any commits.
See https://github.com/arxanas/git-branchless/wiki/Installation.

Short version: check for packages in the repositories appropriate for your system or run `cargo install --locked git-branchless`. Once installed, run `git branchless init` in your repository.

`git-branchless` is currently in **alpha**. Be prepared for breaking changes, as some of the workflows and architecture may change in the future. It's believed that there are no major bugs, but it has not yet been comprehensively battle-tested. You can see the known issues in the issue tracker.

`git-branchless` follows semantic versioning. New 0.x.y versions, and new major versions after reaching 1.0.0, may change the on-disk format in a backward-incompatible way.
To be notified about new versions, select Watch » Custom » Releases in GitHub's notifications menu at the top of the page. Or use GitPunch to deliver notifications by email.
There's a lot of promising tooling developing in this space. See Related tools for more information.
Thanks for your interest in contributing! If you'd like, I'm happy to set up a call to help you onboard.
For code contributions, check out the Runbook to understand how to set up a development workflow, and the Coding guidelines. You may also want to read the Architecture documentation.
For contributing documentation, see the Wiki style guide.
Contributors should abide by the Code of Conduct.
| true | true | true |
High-velocity, monorepo-scale workflow for Git. Contribute to arxanas/git-branchless development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2020-12-10 00:00:00
|
https://opengraph.githubassets.com/67f49541370496eb4c8aa58dfe3c702f3d67320b7adb1a1d4e065aaf9e4fdd0f/arxanas/git-branchless
|
object
|
github.com
|
GitHub
| null | null |
32,232,630 |
https://gizmodo.com/china-rocket-uncontrolled-reentry-july-2022-1849327164
|
An Out-of-Control Rocket Launched by China Will Crash to Earth Soon
|
George Dvorsky
|
China’s space agency performed a successful launch of a Long March 5B rocket on Sunday, delivering a new module to its fledgling space station. Similar to previous launches, however, the rocket’s core stage remained in orbit and is now set to perform an uncontrolled reentry.
The Long March 5B blasted off from Wenchang Space Launch Center in Hainan on Sunday, July 24, at 2:22 p.m. Beijing time. Packed atop the rocket was the 22-ton Wentian laboratory, which arrived at China's Tiangong space station 13 hours later, according to state-run China Daily. Waiting for the 59-foot-long (18-meter) module were Chen Dong, Liu Yang, and Cai Xuzhe, making them the first astronauts in China's space history to attend an orbital docking. Wentian docked to the front port of the Tianhe core module, creating a T-shaped space station.
Instead of celebrating this accomplishment, however, we’re forced to wonder when the 21-metric-ton core stage will slip back into the atmosphere and where it will crash. Such is the pattern with Long March 5B launches, as two previous missions resulted in chaotic reentries (during controlled reentries, rocket stages are brought down with reignited engines, allowing launch providers to steer the rocket body away from populated areas, typically into the ocean). In May 2020, debris from an out-of-control core stage fell onto an inhabited area along the west coast of Africa, while a rocket launched in April 2021 crashed in the Indian Ocean near the Maldives.
The odds of rocket debris landing on your house are exceptionally low, but the risk to human life and property does exist. According to research published earlier this month, the chance of someone getting killed or hurt from falling rocket parts will rise to 10% in the coming decade. China has been admonished for not taking better care of its incoming rockets, but the stage appears to be set—yet again—for a recurrence of the previous two episodes.
Two objects cataloged from the CZ-5B launch: 53239 / 2022-085A in a 166 x 318 km x 41.4 deg orbit, 53240 / 2022-085B in a 182 x 299 km x 41.4 deg orbit. Orbital epoch of ~1200 UTC confirms that the inert 21t rocket core stage remains in orbit and was not actively deorbited.
— Jonathan McDowell (@planet4589) July 24, 2022
And indeed, U.S. Space Command cataloged two objects from Sunday's launch, one being Wentian and the other the discarded core stage. Astronomer Jonathan McDowell from the Harvard-Smithsonian Center for Astrophysics expects the stage to reenter Earth's atmosphere within a week or so.
“Unfortunately we can’t predict when or where,” he explained to me in an email. “Such a large rocket stage should not be left in orbit to make an uncontrolled reentry; the risk to the public is not huge, but it is larger than I am comfortable with.”
During a livestream of the launch on China Global Television Network, Xu Yangson, director general of the Asia-Pacific Space Cooperation Organization, said China took measures this time to make sure that the core stage will come back down in a controlled manner, but did not elaborate. When I asked about Xu’s comment, McDowell said: “I think he is misinformed.” McDowell is likely correct, as the Long March 5B core stage would require a significant upgrade or revision to suddenly have the capacity for controlled reentry.
As for the Wentian module, it will now be used to support a host of scientific experiments, ranging from microgravity studies and the effects of space radiation to the growth of plants, insects, small mammals, and microbes. A third module, named Mengtian, is scheduled to launch in October. China intends to use its Tiangong space station for 10 years, during which astronauts will work for stints lasting six months.
More: Damaged SpaceX Rocket Delays NASA’s Next Astronaut Mission.
| true | true | true |
It’s too early to know when or where the rocket core stage might crash, but it could happen within a week.
|
2024-10-12 00:00:00
|
2022-07-25 00:00:00
|
article
|
gizmodo.com
|
Gizmodo
| null | null |
|
39,925,647 |
https://kottke.org/16/06/the-industrial-revolution-climate-change-and-brexit
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
16,167,002 |
https://www.indiehackers.com/forum/show-ih-anyshortcut-customize-shortcut-boost-productivity-a2f41921e5
| null | null | null | true | true | false | null |
2024-10-12 00:00:00
|
2000-01-01 00:00:00
| null | null | null | null | null | null |
12,214,863 |
https://opensource.com/business/16/8/selling-open-source-smart-way
|
Selling open source the smart way
|
Job van der Voort
|
Open source software is experiencing huge growth, with a staggering 64% of companies currently participating in open source projects. But you probably know that already. What's more interesting to look at is how to sell it, and with a little luck, make some money to help support the people who develop your software and sustain your project.
## Why we love open source software
Having a deep understanding of why so many organizations use and trust open source is key when it comes to selling it. As one of our own GitLab colleagues explains, it "allows for a level of transparency which closed sourced products do not have; it provides a greater level of innovation when there is a larger community of contributors; and it allows those who use open source to have a say in the product's direction." What's more, it dramatically adds value, thanks to the passion and expertise of a huge team of developers: in our case, more than a thousand.
On a more prosaic—but nevertheless extremely important—level, it also allows open source software (OSS) organizations to ship faster. While this might not be as exciting as sharing methodologies to crack a problem that's been stumping you for weeks, it's vital when it comes to understanding and emulating its success.
## Why should I pay for something that I can get for free?
It's not unusual to hear someone ask why a company would pay for software when they can get it for free. And it's a good question. The companies that do make money from selling open source have acknowledged and utilized this reality. The OSS companies that have experienced huge success have built not just a fantastic OSS project for sales, marketing, and engineering, but also have a business strategy that takes into account proprietary enhancements.
Successful open source organizations have made their money based on the premise that a certain type of user will be happy to pay. They can and will pay for an enterprise-grade version of the complete product. This often includes security, a range of proprietary enhancements, and support. They have also understood another type of user won't be able to pay, but it is important not to alienate these individuals (or organizations)—community support is essential for open source.
At GitLab, our competitive advantage arises partly from finding the balance between service and profit. As one of our developers sets out: "Our approach to selling open source relies on us being stewards to the community and always thinking of features and improvements that will benefit the greater good, and finding that balance between taking care of the overall community, while also making a profit.
"I would say overall we try to sell companies on purchasing an all-in-one solution that will change the way your team works together, and how fast you ship code, while we can always be adding improvements and features at rapid speed because of our open source model."
## Support the community, and the community will support you
The importance of the community cannot be over-emphasized when it comes to selling open source. By leveraging the expertise of over 1,000 contributors, we can move faster, quickly and regularly provide the features developers really want, and maintain complete transparency.
This approach enables an environment to be created where everyone—customers, companies, individuals—benefits by contributing and taking ownership. It enables GitLab to be more nimble and generate the features users genuinely want.
## It isn't always easy selling open source
Selling a product that's based on an open source project is challenging. In October 2014, we adopted an "open core" licensing model to help us generate income in a sustainable way. Striking the right balance between the open source project and the proprietary version takes skill and experience. We want to take care of the community, but let's be honest—we want to make some money, too! This is where it's useful to have the support of colleagues from a sales and marketing background. By conveying the value proposition of the enterprise edition (EE) over the community edition, these team members make a real impact.
When discussing open source sales, there are two main areas that give cause for concern. "It's hard to make money" is a phrase that crops up. Choosing which features should be EE only is another decision which can be difficult. If this sounds familiar, then take a look at the tips our GitLab colleagues have come up with for sales teams selling OSS:
"Take care of the community, have them in mind, mention the size of your community to prospective clients. Have a clear value proposition of what your enterprise edition offers over free versions."
"Understand the benefits of open source in general; leverage your community; have a clear value proposition and ROI for your enterprise product."
"Highlight the value proposition of why open source works well. Crowd-sourcing ideas to add/improve features, allow people to contribute on any level, lastly, this will speed up the release timeline. Also have a clear pitch on why the company needs a paid enterprise solution."
The results from Elastic, Red Hat, and others show that it is possible to make money from selling open source. Scott Farquhar, co-founder and CEO of Atlassian, spoke at the Business of Software conference, sharing his thoughts on the freemium business model, the importance of measuring data, and effective marketing tips (who wants to make themselves popular by sponsoring the beer at the next tech conference?). His talk is essential viewing for anyone looking for ideas on achieving financial success with open source software.
## Looking forward
Increasingly, companies are discussing the security of open source components. Having a process in place that is free of vulnerabilities and license compliant is essential. Although selling open source software will always have particular challenges, the good news is that companies continue to value the range of features, control over product direction, competitive cost, and transparency that it offers.
| true | true | true |
How can you make a business out of open source to help support the people who develop your software and sustain your project?
|
2024-10-12 00:00:00
|
2016-08-01 00:00:00
| null |
opensource.com
|
Opensource.com
| null | null |
|
34,793,461 |
https://betterprogramming.pub/the-alternative-to-performance-reviews-for-software-engineers-7b6d1c9537dd
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
826,360 |
http://theincidentaleconomist.com/war-of-attrition-interpretations-and-applications/
|
“War of Attrition”: Interpretations and Applications | The Incidental Economist
|
Austin Frakt
|
Last week I posed the war of attrition game, and earlier this week I analyzed it. Building on that analysis, in this post I provide some interpretations and applications for the mixed strategy Nash equilibrium solution we found. As a reminder, here’s a short summary of the game in more general notation than originally posed:
> You and a competitor will battle in rounds for a prize worth *V* dollars. In each round each of you may choose to either fight or fold. The first one to fold wins $0. If the other player doesn't also fold, he wins the *V*-dollar prize. In the case you both fold in the same round, you each win $0. If you both choose to fight, you both go on to the next round to face a fight-or-fold choice again. Moving on to each round after round 1 costs *C* dollars per round per player. Assume *V > C*.
Recall that what we found in the analysis was that there was a mixed strategy Nash equilibrium to fight with probability *p* = *V*/(*V* + *C*). In the case *V* = $5 and *C* = $0.75, *p* = 0.87. What does this mean?
There are multiple ways to interpret mixed strategy Nash equilibria. One way is to interpret the probability as a statement about a population. Applied to the game of attrition, this interpretation would say that proportion *p* of the population are fighters and the rest are folders. That's certainly plausible. I bet that upon reading the statement of this problem last week some folks immediately thought "I will not fight even one round," while other folks immediately thought, "I would fight forever." Even if nobody actually thought the latter, experiments show that people will really fight a very long time, even to the point that the cumulative fight fees exceed the prize. There really are "fighter" and "folder" personality types in the population.

A second interpretation is that each individual will play a mixed strategy. That is, you yourself will "roll the dice" in your head and fight with probability *p* and fold otherwise. Notice that each round is an independent "roll of the dice." Past fight fees have no bearing on your probability of fighting in the current round. They are sunk costs. With probability *p* you will fight on, and on, and on…
What is the probability that this fight will go to round 2? It is the probability that both you and your opponent fight in round 1, or *p*². What is the probability the fight will enter round 3? It is the probability that you and your opponent both fight in round 1 and both fight in round 2. Those decisions are independent, so the probability of entering round 3 is *p*⁴. In general, the probability of fighting to round *n*+1 is *p*²ⁿ. When *p* is large (i.e., *V* is large relative to *C*) some very long fights can occur. With each round there is hope of earning some money (if you win), so it is rational for you to continue precisely when your expected winnings of doing so are equivalent to those if you don't. That's exactly what *p* promises.
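These quantities are easy to check numerically. Below is a minimal Python sketch of the equilibrium fight probability and the chance a fight survives into a given round; the function names are mine, not the article's:

```python
def fight_probability(v: float, c: float) -> float:
    """Equilibrium probability of fighting in any given round: p = V / (V + C)."""
    assert v > c > 0, "the game assumes V > C > 0"
    return v / (v + c)

def prob_reach_round(v: float, c: float, n: int) -> float:
    """Probability the fight reaches round n+1: both players must choose
    to fight in each of the first n rounds, giving p ** (2 * n)."""
    p = fight_probability(v, c)
    return p ** (2 * n)

p = fight_probability(5.0, 0.75)
print(round(p, 2))                                # 0.87, as in the article
print(round(prob_reach_round(5.0, 0.75, 10), 3))  # a 10-round war: rare but far from impossible
```

With *V* = $5 and *C* = $0.75, about 6% of matches would still be running in round 11, which illustrates how long wars of attrition can drag on when *V* is large relative to *C*.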
In fact, long wars of attrition have occurred in history, in warfare, in competition between firms, and in politics. Wars of attrition also occur in auctions. Each side is rational to continue the war but not because they wish to recoup past fight fees. Those are sunk costs, they cannot be recovered, and therefore they are irrelevant to current play. Each stage is independent of the next so players fight on because the expected benefit is equivalent to not fighting (in mixed strategy Nash equilibrium play). Eventually one side's resources are exhausted and the war of attrition comes to an end.
My set of things to say about this game has also been exhausted so this series on the war of attrition game also ends here, at least for now.
| true | true | true |
Last week I posed the war of attrition game, and earlier this week I analyzed it. Building on that analysis, in this post I provide some interpretations and applications for the mixed strategy Nash equilibrium solution we found. As a reminder, here's a short summary of the game in more general notation than originally posed: You
|
2024-10-12 00:00:00
|
2009-09-16 00:00:00
|
article
|
theincidentaleconomist.com
|
The Incidental Economist
| null | null |
|
12,414,856 |
http://www.rollingstone.com/culture/news/pamela-anderson-porn-is-for-losers-w437764
|
Pamela Anderson: 'Porn Is for Losers'
|
Daniel Kreps
|
# Pamela Anderson: ‘Porn Is for Losers’
Pamela Anderson, who appeared on more *Playboy* covers than any other model in the magazine’s history, has co-written a *Wall Street Journal* op-ed slamming pornography as “a boring, wasteful and dead-end outlet for people too lazy to reap the ample rewards of healthy sexuality.” In essence, “porn is for losers,” Anderson and rabbi and author Shmuley Boteach wrote.
“From our respective positions of rabbi-counselor and former *Playboy* model and actress, we have often warned about pornography’s corrosive effects on a man’s soul and on his ability to function as husband and, by extension, as father,” the unlikely duo continued. “This is a public hazard of unprecedented seriousness given how freely available, anonymously accessible and easily disseminated pornography is nowadays.”
Inspired by the continued sexting troubles of Anthony Weiner, Anderson and Boteach compared the addiction to pornography to that of narcotics. “Nine percent of porn users said they had tried unsuccessfully to stop — an indication of addiction that is all the more startling when you consider that the dependency rate among people who try marijuana is the same — 9 percent — and not much higher among those who try cocaine (15 percent),” they wrote.
“But it is a fair guess that whereas drug-dependency data are mostly stable, the incidence of porn addiction will only spiral as the children now being raised in an environment of wall-to-wall, digitized sexual images become adults inured to intimacy and in need of even greater graphic stimulation. They are the crack babies of porn.”
To replace antiquated pornography, Anderson and Boteach instead suggest a “sensual revolution” that would instead focus on eroticism. “The ubiquity of porn is an outgrowth of the sexual revolution that began a half-century ago and which, with gender rights and freedoms now having been established, has arguably run its course,” they wrote. “Now is the time for an epochal shift in our private and public lives. Call it a ‘sensual revolution.'”
| true | true | true |
Pamela Anderson has co-written an op-ed where she slams pornography as a "boring, wasteful and dead-end outlet" for "losers."
|
2024-10-12 00:00:00
|
2016-09-02 00:00:00
|
article
|
rollingstone.com
|
Rolling Stone
| null | null |
|
25,323,248 |
https://www.focalityapp.com/en/resources/time-management-strategies/
|
Time Management Strategies
| null |
Managing your own time is an essential skill in today’s fast-paced world. It's also more challenging than ever. Fortunately, there are many **time management strategies that can help you become organized and efficient**. Use this overview of 58 strategies and techniques to find the ones that work best for you.
Most plans are strictly sequential. When you invariably have to adjust your plan to reality, you have to update the whole lengthy sequence. Time-consuming at best, motivation consuming at worst. With deep planning, you create a simple hierarchy instead.
Think about what you want to achieve this year. Based on that, what do you want to achieve this month? This week? Today? This way you create a long-term strategy and still adjust your plans with little effort.
Curious? Then take a look at Focality, our time management app. Focality combines deep planning, self-reflection and data-driven insights to let you constantly improve your time management skills.
Without further ado, here comes the complete list of time management techniques in alphabetic order:
For every day, create a list of one big thing, three medium things and five small things that you want to accomplish this day. This brings clarity into your workday and helps not to get buried by an endless flood of to-do items.
See Two-Hour Solution.
There are two 2-Minute Rules:
If you want to establish a new habit, make sure that it requires no more than 2 minutes in the beginning. Work yourself up from there.
In the Getting Things Done methodology (see below), if a task takes only 2 minutes or less, then do it right away instead of organizing it.
If you are putting something off, force yourself to do it for just 10 minutes. Chances are that you will continue past the 10 minutes.
Start your day by planning what you will do that day for 5 minutes. Every hour, take one minute, review your progress and refocus. At the end of the day, take 5 minutes to reflect and evaluate your day.
This method helps you to quickly decide what to do with a task. Either Do it, Defer it, Delegate it or Drop it.
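The 4 Ds amount to a simple decision procedure, which can be sketched in a few lines of Python. The task fields and the 2-minute threshold here are illustrative assumptions, not part of the method's definition:

```python
def triage(task: dict) -> str:
    """Route a task to one of the 4 Ds: do, defer, delegate, or drop.
    Fields (minutes, valuable, delegable) and thresholds are illustrative."""
    if not task.get("valuable", True):
        return "drop"                    # not worth anyone's time
    if task.get("minutes", 0) <= 2:
        return "do"                      # quick wins: handle immediately
    if task.get("delegable", False):
        return "delegate"                # someone else can own it
    return "defer"                       # schedule it for later

print(triage({"minutes": 1, "valuable": True}))    # do
print(triage({"minutes": 30, "delegable": True}))  # delegate
print(triage({"minutes": 30}))                     # defer
print(triage({"valuable": False}))                 # drop
```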
Spend 7 minutes in the morning to plan your day. Then another 7 minutes in the evening to reflect.
The 80/20 rule, also known as the Pareto Principle, is a rule of thumb that states that 80% of the effects stem from 20% of the causes. Applied to time management, it means that you can get 80% of the results with just 20% of the work. Getting to 100% will require disproportionately more work. Perfect is the enemy of done.
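You can check how Pareto-like your own task list is by measuring what share of the total payoff comes from the top 20% of tasks. A small sketch, with deliberately skewed example payoffs of my own invention:

```python
def pareto_share(values, top_fraction=0.2):
    """Fraction of total value contributed by the top `top_fraction` of items."""
    ranked = sorted(values, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))  # at least one item
    return sum(ranked[:k]) / sum(ranked)

# Ten tasks whose payoffs are heavily concentrated in the first two:
payoffs = [50, 30, 5, 4, 3, 2, 2, 2, 1, 1]
print(round(pareto_share(payoffs), 2))  # 0.8: the top 2 tasks carry 80% of the value
```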
Align your work with your basic rest–activity cycle by doing focused work for approximately 90 minutes, followed by a 20-minute break. The following article sums it up nicely: Avoid Burnout and Increase Awareness Using Ultradian Rhythms
Created by Brian Tracy, ABCDE is a method for setting priorities. Basically assign each task a priority from A to E. A: Very important, must do. B: Important, should do. C: Nice to do, without consequence if skipped. D: Delegate. E: Eliminate.
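Since ABCDE is just a total ordering on priority letters, working through a labeled task list reduces to a sort. A minimal sketch (task names are made up):

```python
PRIORITY_ORDER = "ABCDE"  # A: must do ... E: eliminate

def abcde_sort(tasks):
    """Order (priority, name) pairs so that A-tasks come first."""
    return sorted(tasks, key=lambda t: PRIORITY_ORDER.index(t[0]))

tasks = [("C", "tidy desk"), ("A", "file tax return"), ("B", "review PR"),
         ("E", "reorganize bookmarks"), ("D", "book travel")]
print([name for prio, name in abcde_sort(tasks)])
# ['file tax return', 'review PR', 'tidy desk', 'book travel', 'reorganize bookmarks']
```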
Leave every meeting, workshop or other event with a set of concrete tasks that need to be performed, called “action steps”. These should be kept separately from accompanying information (“reference items”). Everything that can’t / shouldn’t be approached right now becomes a “backburner item” to be possibly revived later. The method was developed by Behance. There used to be an online tool that implemented the action method, but it was discontinued in 2015.
The Agile Results technique is heavily inspired by software development frameworks like Scrum. At the beginning of the week, identify three wins that you want to achieve. Each day, identify three wins for that day. Use Fridays to recognize three things that are going well and three things to improve. Accompanied by a set of further practices to improve your time management like 30 day improvement sprints, reference collections and more.
Autofocus is a to-do list methodology by Mark Forster which tries to use as little structure as possible. Dump everything in a ruled notebook. Scan your list and work on what stands out for as long as you feel like it. Then cross the item off the list. If you haven’t finished it, add it to the bottom again. Repeat. If you pass through a whole page (except the latest page) without anything standing out, dismiss all items on that page.
Work on batches of similar tasks instead of mixing unrelated ones. It reduces friction by minimizing context switching.
Track your energy levels and identify your most energetic times of the day. Schedule your most important work for those times. First described by Sam Carpenter in his book Work the System.
A bullet journal combines to-do list, planning and journaling. The name bullet journal comes from the extensive use of bullet(ish) points to structure and mark information.
The Clear-Organized-Productive-Efficient technique helps you to eliminate low-value activities and prioritize everything so that you can focus on the things with the most impact.
See Seinfeld Strategy.
Tackle the biggest and/or most difficult and/or most disliked task first thing in the morning. Then you won’t waste energy or distract yourself the rest of the day because you are secretly dreading that task. Eat That Frog was created by Brian Tracy who named it after a quote by Mark Twain: “Eat a live frog first thing in the morning and nothing worse will happen to you the rest of the day.”
See Time Management Matrix.
Start with a list of tasks. Select the first one on the list. Think about which other task from the list you would rather do. Select that one but remember/mark the first. Repeat until you no longer want to do anything before the currently selected task. Then work on the tasks in this chain - but in reverse order! Once you are done with the chain start the process again with the next open task on the list. Reference: The Final Version newsletter
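The chain-building step can be expressed as a short algorithm: walk the list, and whenever you would rather do the candidate than the task currently at the end of the chain, append it; then work the chain in reverse. This sketch assumes a user-supplied preference function (here a toy "dread" score), which is not part of Forster's description:

```python
def build_fv_chain(tasks, prefer):
    """Build a Final Version chain: start with the first task, scan the
    rest, and append any task you'd `prefer` over the current selection.
    The returned list is the chain in reverse, i.e. the order to work in."""
    if not tasks:
        return []
    chain = [tasks[0]]
    for candidate in tasks[1:]:
        if prefer(candidate, chain[-1]):
            chain.append(candidate)
    return list(reversed(chain))  # do the most-preferred task first

# Toy preference: a task with a lower "dread" score is preferred.
dread = {"taxes": 9, "email": 3, "walk": 1, "report": 7}
order = build_fv_chain(["taxes", "email", "walk", "report"],
                       prefer=lambda a, b: dread[a] < dread[b])
print(order)  # ['walk', 'email', 'taxes']
```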
Work on only one task at a time. When you start, write down the time. Continue working on the task until you feel that you need a break. If you are exhausted or can’t focus anymore, take a break. Write down the stop time. Decide how long this break should be and set a timer for it. Repeat.
At the end of the day, when your brain is fried, prioritize your tasks for the next day. Schedule important tasks to the beginning of the day, when your brain is still fresh. Schedule less important or easier tasks towards the later parts of the day when your brain fries again.
Reference: Dominate Your Day With the “Fresh or Fried” Prioritization System
Getting Things Done is a famous method created by David Allen. It makes extensive use of to-do lists and techniques to manage them. At its core it employs five basic activities to bring structure to your task management: capture, clarify, organize, reflect, and engage.
Goal setting is the process of defining what you want to achieve in the long-term (or mid-term). Instead of focusing just on tasks, it helps you getting direction in life. If you don’t have clear goals, you are basically adrift. And you hardly get to where you want to be by accident.
We have compiled an extensive guide to personal goal setting which will help you to understand the topic in depth. Learn why it is important, what types of goals to set, attributes of well-defined goals and much, much more.
This strategy focuses on managing your email inbox with maximum efficiency. According to its creator, Merlin Mann, the name Inbox Zero does not refer to the number of emails in your inbox, but “how long it takes to use the inbox”. Basically, you run every incoming email through a short process in which you do one of the following things: Delete, delegate, respond, defer (to do later), do (immediately).
This time management strategy was created in 1918 by productivity consultant Ivy Lee. He recommends writing down six tasks at the end of each day that you want to accomplish the next day. Order them by importance. Then, on the next day, work your way down the list. At the end of the day, move any uncompleted task to the list for the next day. Repeat.
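The daily carry-over step can be sketched as a tiny function: keep yesterday's unfinished tasks at the top, then fill the remaining slots from a backlog sorted by importance. The task names and the deduplication behavior here are my own illustrative choices:

```python
MAX_TASKS = 6  # Ivy Lee's fixed daily limit

def plan_next_day(unfinished, backlog):
    """Carry over unfinished tasks, then top up to six from the backlog
    (assumed to be ordered most-important-first)."""
    plan = list(unfinished)
    for task in backlog:
        if len(plan) >= MAX_TASKS:
            break
        if task not in plan:  # skip duplicates already carried over
            plan.append(task)
    return plan

unfinished = ["write report", "call supplier"]
backlog = ["prepare slides", "write report", "invoice client",
           "fix login bug", "plan offsite", "clean inbox"]
print(plan_next_day(unfinished, backlog))
```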
Combine the strengths of digital and paper tools. Chad Hall’s Medium Method employs paper notebooks, post-it notes, a task-management app, an online calendar and a note app.
Write down the most important tasks for the next day. Up to three. At the beginning of the day, focus on those and do nothing else until your MITs are completed.
Borrowed from requirements management this technique helps you prioritize your tasks. Give each task a priority of either “Must have”, “Should have”, “Could have” or “Won’t have”. Or, reformulated to time management: Must do, should do, could do and won’t do.
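Applying MoSCoW is essentially bucketing tasks under the four labels. A minimal sketch (the example tasks are invented):

```python
from collections import defaultdict

CATEGORIES = ("must", "should", "could", "wont")

def moscow(tasks):
    """Group (category, task) pairs into MoSCoW buckets."""
    buckets = defaultdict(list)
    for category, task in tasks:
        assert category in CATEGORIES, f"unknown category: {category}"
        buckets[category].append(task)
    return buckets

b = moscow([("must", "ship hotfix"), ("could", "refactor tests"),
            ("should", "update docs"), ("wont", "rewrite in Rust")])
print(b["must"])  # ['ship hotfix']
```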
Make sure that every day the amount of work you put towards your goal is non-zero. No matter how little you do, you do need to do something.
Write everything that you should never do on a list. This might be bad habits or categories of tasks that are not worth your time. This helps you to avoid unproductive activities by making them more conscious.
OKRs got to fame through their adoption at Google, although their history goes back way further. The principle is easy: You set your objective, a goal, that you want to achieve. For example “Delight my clients”. Then you define a set of key results by which your success will be measured. For example “Improve feedback scores by 20%” or “Increase client retention by 30%”. You should set your key results challenging enough so that you usually only achieve them to approximately 70%.
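One common way to grade an objective, sketched below, is to average the progress on its key results, capped at 100% each; landing near 0.7 then means the OKR was ambitious enough. The capping and averaging are a widespread convention, not the only possible scoring rule:

```python
def okr_score(key_results):
    """Average per-key-result progress, each capped at 1.0.
    `key_results` is a list of (achieved, target) pairs."""
    scores = [min(achieved / target, 1.0) for achieved, target in key_results]
    return sum(scores) / len(scores)

# Hypothetical progress on the two key results from the text:
score = okr_score([(14, 20),    # feedback scores: improved 14 of a 20-point goal
                   (30, 30)])   # client retention: hit the full 30% goal
print(round(score, 2))  # 0.85
```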
See 80/20 Rule.
Parkinson’s Law states that “work expands so as to fill the time available for its completion.” Employ other time management strategies (e.g. time boxing) to combat its effect.
Think of your day as a pickle jar and your tasks as rocks, pebbles and sand that you want to fill in. The rocks are your big, important tasks. Pebbles and sand are less important tasks. If you want to get the most into the glass, your day, you need to fill it in order of importance. Rocks first. If you start with the unimportant stuff, you will never be able to fill in your important rocks.
At the beginning of the day (or the evening before) create a plan of what you are going to accomplish that day. Don’t just “wing it”. Focality is a great tool for that.
Focus on one activity for 25 minutes. Then take a 5-minute break, even if you are in the middle of something. Repeat. After four cycles, take a 15-30 minute break.
The Pomodoro Technique helps to limit the impact of interruptions. It aims to keep you in flow.
If you want to stay true to the name of the method, use a tomato-shaped kitchen timer to measure the intervals. “Pomodoro” is the Italian word for tomato.
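As a rough illustration, the cycle above can be expressed as a small schedule generator. The function name is made up, and the long break is set to 30 minutes here (the technique allows 15-30):

```python
# Build the sequence of Pomodoro phases: work, short breaks, and a final
# long break after the last work interval. Durations are in minutes.
def pomodoro_schedule(cycles=4, work=25, short_break=5, long_break=30):
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append(("work", work))
        if i == cycles:
            schedule.append(("long break", long_break))
        else:
            schedule.append(("break", short_break))
    return schedule

for phase, minutes in pomodoro_schedule():
    print(f"{phase}: {minutes} min")
```

Hooking each phase up to an actual timer (or a tomato-shaped one) is left to the reader.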
POSEC stands for Prioritize, Organize, Streamline, Economize, Contribute. The method draws connections to Maslow’s Hierarchy of Needs.
Document what you have done.
This technique, created by Tony Robbins, boils down to answering three questions whenever you want to achieve something: What do I want? Why do I want it? What do I have to do to get it?
Tony offers a free workbook if you want to dive into detail.
Work for 52 minutes, then rest for 17 minutes. The 52 / 17 rule is basically a variation of the Pomodoro technique. Why 52 minutes? Julia Gifford analyzed the working patterns of the people at her workplace. Turns out the most productive 10% work for approximately 52 minutes focused on one task.
Slice a big task into smaller ones that you can easily accomplish in 20-30 minutes.
Work on your goal every day. Mark every day in your calendar on which you worked for your goal. Make sure that there is an unbroken chain of marks in your calendar.
A variation of the AutoFocus system described earlier. Write everything that you want to do on a page. Leave space for a second column. Work on what you feel like working for as long as you feel like. If something urgent comes up, add it to the second column. Work on the tasks of the page until no tasks feel ready to be done anymore. Then move to the next page - but not until you have completed all items from the second column.
Avoid work that has no strategic value for you.
Do you need to tackle a big, complex task that you can’t bring yourself to face head-on? Punch some holes into it. Find something small and manageable that you can do right away. Copy over your template presentation, brainstorm some headline ideas - anything. Keep repeating until your cheese or project either has so many holes that it is gone or becomes easy to complete as a whole.
Amir Salihefendić, the founder of the popular to-do list app Todoist, created the time management system Systemist. It consists of six elements.
A time audit analyzes where you actually spend your time. There are several ways to do this (using time tracking software, manually logging time in a spreadsheet, etc.) but the goal is always to understand where your time goes. You will frequently find activities that take up way more of your time than expected. Based on this understanding you can make adjustments so that you allocate your time better.
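A minimal manual audit can be sketched in a few lines: log (activity, minutes) pairs, then total them per activity. The activities and numbers below are invented examples:

```python
from collections import Counter

# A day's worth of manually logged time entries: (activity, minutes).
log = [
    ("email", 45), ("meetings", 120), ("deep work", 90),
    ("email", 30), ("social media", 50), ("meetings", 60),
]

# Sum the minutes spent per activity.
totals = Counter()
for activity, minutes in log:
    totals[activity] += minutes

# Print activities from most to least time-consuming.
for activity, minutes in totals.most_common():
    print(f"{activity:>12}: {minutes} min")
```

Seeing the totals ranked like this is often where the surprises show up.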
Split your day/week into distinct blocks. Each block is dedicated to one task or activity. Do nothing else during this time.
Elon Musk famously takes this to the extreme by dividing his days into 5-minute intervals to manage his insane workloads.
Allocate a fixed time period, the timebox, to complete an activity. It’s similar to time blocking but is independent of planning your whole day/calendar.
Former U.S. president Dwight D. Eisenhower used to rate his problems by two criteria: urgency and importance. These criteria are commonly used as the axes of a 2x2 matrix with four quadrants.
The Time Management Matrix (also known as Eisenhower Matrix / Box) helps you to better prioritize your tasks and avoid the urgency trap where you fill your day with everything that’s urgent and neglecting the important.
More Details: Time Management Matrix
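As an illustrative sketch, each task can be placed in a quadrant by its two flags. The example tasks are invented, and "Do first / Schedule / Delegate / Eliminate" is one common reading of the quadrants, not the only one:

```python
# Map (urgent, important) to the conventional quadrant advice.
QUADRANTS = {
    (True, True): "Q1: Do first",
    (False, True): "Q2: Schedule",
    (True, False): "Q3: Delegate",
    (False, False): "Q4: Eliminate",
}

tasks = [
    {"name": "Production outage", "urgent": True, "important": True},
    {"name": "Plan next quarter", "urgent": False, "important": True},
    {"name": "Ringing phone", "urgent": True, "important": False},
    {"name": "Social media scrolling", "urgent": False, "important": False},
]

for task in tasks:
    quadrant = QUADRANTS[(task["urgent"], task["important"])]
    print(f"{quadrant} -> {task['name']}")
```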
See Not-to-do list.
Take two hours each week to plan your next week and have at least a vague idea of the following week.
Schedule fixed commitments, self-care and time off. Don’t schedule your work activities. You then clearly see the remaining time available for work. Now get to work. *After* you worked at least 30 minutes on something, add it to the calendar to make your progress visible. Don’t let yourself get interrupted before the 30-minute mark or you won’t be allowed to record the session. Allow yourself a break after each work period.
The Unschedule is described in Neil A. Fiore’s book The Now Habit, which also contains plenty of other tricks, like making sure never to end a session blocked: always have at least some idea of how to continue before you stop your work.
Remember the action method that said every meeting must be left with an action item? That item is the monkey. Make sure that you don’t take care of other people’s monkeys.
This analogy was created in a 1999 article in Harvard Business Review. It is aimed at managers and describes how to delegate effectively, but can also be applied to other areas of time management.
Divide your tasks for the day into three groups that all take roughly equally long to complete. Get to work on the first one. When it’s done, pack your things and move to a different location. For example, move from the office to a coffee shop. Then work on the second group. When it’s done, move on to yet another location. When the third one is done, go home. (Reference: Workstation Popcorn: How To Become Uber Productive While Working For Yourself)
Zen to Done tries to improve on Getting Things Done. It focuses on simplicity and effectiveness. It doesn’t just try to get done as many tasks as possible but also to choose those tasks wisely to bring you closer to your goals. It also offers a strategy to implement the central habits of the methodology step by step - instead of establishing a complicated system all at once.
If you managed to read through all 58 strategies, you are probably facing some analysis paralysis. Where should you go from here?
Obviously, you should first give Focality a try. ;) It combines an effective time management strategy, self-reflection and data-driven insights in an easy-to-use app. Thanks to Deep Planning it also requires very little time investment.
If you are looking for something else, think back right now which strategies you still remember. This little mental shortcut, known as the availability heuristic, tells you which strategies you judged as most important. Read about those in more detail and - if they still seem promising - give them a try. Not every technique is right for every person, so you might have to try more than one. But the **time invested in testing time management strategies is well spent**.
*Source: focalityapp.com (2022)*

---

*Source: polygon.com, by Michael McWhertor (July 6, 2021)*
Nintendo will release a new version of the Nintendo Switch this October, the company announced Tuesday. The new Nintendo Switch, officially named “Nintendo Switch (OLED model),” will cost $349.99, $50 more than the price of the original Switch that launched in 2017.
# Nintendo announces new Nintendo Switch model with 7-inch OLED screen
New Switch will cost $349.99, coming in October
If you buy something from a Polygon link, Vox Media may earn a commission. See our ethics statement.
The new Nintendo Switch will boast a larger, 7-inch screen and an OLED display when it launches on Oct. 8 (the same day *Metroid Dread* launches). The standard Switch has a 6.2-inch LCD screen, and outputs a 1080p feed when docked. The Nintendo Switch (OLED model) will feature a maximum resolution of 1920x1080 and maximum frame rate of 60 fps, according to hardware specifications listed on Nintendo’s website. As with the original Switch, that applies in docked mode; the OLED model’s built-in screen still features a resolution of 1280x720. Contrary to previous reports, the system does not appear to support 4K resolution when docked.
The new Switch will also offer an adjustable stand that is as long as the unit itself, a dock that includes a wired Ethernet port, 64 GB of internal storage, and “enhanced audio” from the system’s internal speakers. Nintendo says that existing Joy-Cons will be compatible with the OLED model, and that the new Switch will be compatible with the full library of already-released Nintendo Switch games.
The new system will come in two color variations: one with white Joy-Cons and a white dock, and the other with traditional red and blue Joy-Cons and a black dock.
The Nintendo Switch (OLED model) will feature a slimmer bezel around the larger screen, and slightly larger overall dimensions: The new system is 0.1 inch longer and weighs 0.05 lbs more than the standard Nintendo Switch. Battery life for the OLED model appears to be identical to the standard Nintendo Switch at 4.5-9 hours.
Here are the full technical specifications for the Nintendo Switch (OLED model) from Nintendo:
## Nintendo Switch (OLED model) tech specs
| Spec | Details |
| --- | --- |
| Size | 4 inches high, 9.5 inches long, and 0.55 inches deep (with Joy-Con attached); the depth from the tip of the analog sticks to the tip of the ZL/ZR buttons is 1.12 inches |
| Weight | Approximately 0.71 lbs (approximately 0.93 lbs with Joy-Con controllers attached) |
| Screen | Multi-touch capacitive touch screen / 7.0-inch OLED screen / 1280x720 |
| CPU/GPU | NVIDIA Custom Tegra processor |
| Storage | 64 GB; users can easily expand storage space using microSDHC or microSDXC cards up to 2 TB (sold separately) |
| Wireless | Wi-Fi (IEEE 802.11 a/b/g/n/ac compliant) / Bluetooth 4.1 |
| Video output | Up to 1080p via HDMI in TV mode; up to 720p via built-in screen in Tabletop and Handheld modes |
| Audio output | Compatible with 5.1ch Linear PCM output; output via HDMI connector in TV mode |
| Speakers | Stereo |
| Buttons | Power button / volume button |
| USB connector | USB Type-C; used for charging or for connecting to the Nintendo Switch dock |
| Headphone/mic jack | 3.5mm 4-pole stereo (CTIA standard) |
| Game card slot | Nintendo Switch game cards |
| microSD card slot | Compatible with microSD, microSDHC, and microSDXC memory cards; once a microSDXC card is inserted, a system update will be necessary (an internet connection is required to perform this update) |
| Sensors | Accelerometer, gyroscope, and brightness sensor |
| Operating environment | 41-95 degrees F / 20-80% humidity |
| Internal battery | Lithium-ion battery / 4310mAh |
| Battery life | Approximately 4.5-9 hours, depending on the games you play; for instance, approximately 5.5 hours for The Legend of Zelda: Breath of the Wild |
| Charging time | Approximately 3 hours (when charging while the hardware is in sleep mode) |
Nintendo’s confirmation of the new Nintendo Switch comes after months of rumors and reports about an upgraded version of the hit console-handheld hybrid set to arrive this fall. Word of a new, more powerful Switch dates back to 2019.
Nintendo also sells the $199 Nintendo Switch Lite, which launched in September 2019 with a 5.5-inch screen, and includes nondetachable controllers and cannot be docked to a TV.
---

*Source: bbc.com, by Paul Myers, Olga Robinson, Shayan Sardarizadeh, and Mike Wendling (July 3, 2024)*
# A Bugatti car, a first lady and the fake stories aimed at Americans
**A network of Russia-based websites masquerading as local American newspapers is pumping out fake stories as part of an AI-powered operation that is increasingly targeting the US election, a BBC investigation can reveal.**
**A former Florida police officer who relocated to Moscow is one of the key figures behind it.**
The following would have been a bombshell report - if it were true.
Olena Zelenska, the first lady of Ukraine, allegedly bought a rare Bugatti Tourbillon sports car for 4.5m euros ($4.8m; £3.8m) while visiting Paris for D-Day commemorations in June. The source of the funds was supposedly American military aid money.
The story appeared on an obscure French website just days ago - and was swiftly debunked.
Experts pointed out strange anomalies on the invoice posted online. A whistleblower cited in the story appeared only in an oddly edited video that may have been artificially created. Bugatti issued a sharp denial, calling it "fake news", and its Paris dealership threatened legal action against the people behind the false story.
But before the truth could even get its shoes on, the lie had gone viral. Influencers had already picked up the false story and spread it widely.
One X user, the pro-Russia, pro-Donald Trump activist Jackson Hinkle, posted a link seen by more than 6.5m people. Several other accounts spread the story to millions more X users – at least 12m in total, according to the site’s metrics.
It was a fake story, on a fake news website, designed to spread widely online, with its origins in a Russia-based disinformation operation BBC Verify first revealed last year - at which point the operation appeared to be trying to undermine Ukraine’s government.
Our latest investigation, carried out over more than six months and involving the examination of hundreds of articles across dozens of websites, found that the operation has a new target - American voters.
Dozens of bogus stories tracked by the BBC appear aimed at influencing US voters and sowing distrust ahead of November’s election. Some have been roundly ignored but others have been shared by influencers and members of the US Congress.
The story of the Bugatti hit many of the top themes of the operation – Ukrainian corruption, US aid spending, and the inner workings of French high society.
Another fake which went viral earlier this year was more directly aimed at American politics.
It was published on a website called The Houston Post – one of dozens of sites with American-sounding names which are in reality run from Moscow - and alleged that the FBI illegally wiretapped Donald Trump’s Florida resort.
It played neatly into Trump’s allegations that the legal system is unfairly stacked against him, that there is a conspiracy to thwart his campaign, and that his opponents are using dirty tricks to undermine him. Mr Trump himself has accused the FBI of snooping on his conversations.
Experts say that the operation is just one part of a much larger ongoing effort, led from Moscow, to spread disinformation during the US election campaign.
While no hard evidence has emerged that these particular fake news websites are run by the Russian state, researchers say the scale and sophistication of the operation is broadly similar to previous Kremlin-backed efforts to spread disinformation in the West.
“Russia will be involved in the US 2024 election, as will others,” said Chris Krebs, who as the director of the US Cybersecurity and Infrastructure Security Agency was responsible for ensuring the integrity of the 2020 presidential election.
“We're already seeing them - from a broader information operations perspective on social media and elsewhere - enter the fray, pushing against already contentious points in US politics,” he said.
The BBC contacted the Russian Foreign Ministry and Russia’s US and UK embassies, but received no response. We also attempted to contact Mr Hinkle for comment.
## How the fakes spread
Since state-backed disinformation campaigns and money-making “fake news” operations attracted attention during the 2016 US election campaign, disinformation merchants have had to get more creative both in spreading their content and making it seem credible.
The operation investigated by BBC Verify uses artificial intelligence to generate thousands of news articles, posted to dozens of sites with names meant to sound quintessentially American – Houston Post, Chicago Crier, Boston Times, DC Weekly and others. Some use the names of real newspapers that went out of business years or decades ago.
Most of the stories on these sites are not outright fakes. Instead, they are based on real news stories from other sites apparently rewritten by artificial intelligence software.
In some instances, instructions to the AI engines were visible on the finished stories, such as: “Please rewrite this article taking a conservative stance”.
The stories are attributed to hundreds of fake journalists with made-up names and in some cases, profile pictures taken from elsewhere on the internet.
For instance, a photo of best-selling writer Judy Batalion was used on multiple stories on a website called DC Weekly, “written” by an online persona called “Jessica Devlin”.
“I was totally confused,” Ms Batalion told the BBC. “I still don't really understand what my photo was doing on this website.”
Ms Batalion said she assumed the photo had been copied and pasted from her LinkedIn profile.
“I had no contact with this website,” she said. “It's made me more self-conscious about the fact that any photo of yourself online can be used by someone else.”
The sheer number of stories - thousands each week - along with their repetition across different websites, indicates that the process of posting AI-generated content is automated. Casual browsers could easily come away with the impression that the sites are thriving sources of legitimate news about politics and hot-button social issues.
However, interspersed within this tsunami of content is the real meat of the operation - fake stories aimed increasingly at American audiences.
The stories often blend American and Ukrainian political issues - for instance one claimed that a worker for a Ukrainian propaganda outfit was dismayed to find that she was assigned tasks designed to knock down Donald Trump and bolster President Biden.
Another report invented a New York shopping trip made by Ukraine’s first lady, and alleged she was racist towards staff at a jewellery store.
The BBC has found that forged documents and fake YouTube videos were used to bolster both false stories.
Some of the fakes break out and get high rates of engagement on social media, said Clement Briens, senior threat intelligence analyst at cybersecurity company Recorded Future.
His company says that 120 websites were registered by the operation - which it calls CopyCop - over just three days in May. And the network is just one of a number of Russia-based disinformation operations.
Other experts - at Microsoft, Clemson University, and at Newsguard, a company that tracks misinformation sites - have also been tracking the network. Newsguard says it has counted at least 170 sites connected to the operation.
“Initially, the operation seemed small,” said McKenzie Sadeghi, Newsguard’s AI and foreign influence editor. “As each week passed it seemed to be growing significantly in terms of size and reach. People in Russia would regularly cite and boost these narratives, via Russian state TV, Kremlin officials and Kremlin influencers.”
“There's about a new narrative originating from this network almost every week or two,” she said.
## Making the fake appear real
To further bolster the credibility of the fake stories, operatives create YouTube videos, often featuring people who claim to be “whistleblowers” or “independent journalists”.
In some cases the videos are narrated by actors – in others it appears they are AI-generated voices.
Several of the videos appear to be shot against a similar-looking background, further suggesting a co-ordinated effort to spread fake news stories.
The videos aren’t themselves meant to go viral, and have very few views on YouTube. Instead, the videos are quoted as “sources” and cited in text stories on the fake newspaper websites.
For instance, the story about the Ukrainian information operation allegedly targeting the Trump campaign cited a YouTube video which purported to include shots from an office in Kyiv, where fake campaign posters were visible on the walls.
Links to the stories are then posted on Telegram channels and other social media accounts.
Eventually, the sensational “scoops” - which, like the Trump wiretap story and a slew of earlier stories about Ukrainian corruption, often repeat themes already popular among patriotic Russians and some supporters of Donald Trump - can reach both Russian influencers and audiences in the West.
Although only a few rise to the highest levels of prominence, some have spread to millions – and to powerful people.
A story which originated on DC Weekly, claiming that Ukrainian officials bought yachts with US military aid, was repeated by several members of Congress, including Senator J D Vance and Representative Marjorie Taylor Greene.
Mr Vance is one of a handful of politicians mentioned as a potential vice-presidential running mate for Donald Trump.
## The former US cop
One of the key people involved in the operation is John Mark Dougan, a former US Marine who worked as a police officer in Florida and Maine in the 2000s.
Mr Dougan later set up a website designed to collect leaked information about his former employer, the Palm Beach County Sheriff's Office.
In a harbinger of his activities in Russia, Mr Dougan’s site published authentic information including the home addresses of police officers, alongside fake stories and rumours. The FBI raided his apartment in 2016, at which point he fled to Moscow.
He has since written books, reported from occupied parts of Ukraine and has made appearances on Russian think tank panels, at military events and on a TV station owned by Russia’s ministry of defence.
In text message conversations with the BBC, Mr Dougan has flatly denied being involved with the websites. On Tuesday, he denied any knowledge of the story about the Bugatti sports car.
But at other times he has bragged about his prowess in spreading fake news.
At one point he also implied that his activities are a form of revenge against American authorities.
“For me it’s a game," he said. “And a little payback.”
At another point he said: “My YouTube channel received many strikes for misinformation” for his reporting from Ukraine, raising the prospect of his channel being taken offline.
“So if they want to say misinformation, well, let’s do it right,” he texted.
A large body of digital evidence also shows connections between the former police officer and the Russia-based websites.
The BBC and experts we consulted traced IP addresses and other digital information back to websites run by Dougan.
At one point a story on the DC Weekly site, written in response to a New York Times piece which mentioned Dougan, was attributed to “An American Citizen, the owner of these sites,” and stated: “I am the owner, an American citizen, a US military veteran, born and raised in the United States.”
The article signed off with Dougan’s email address.
Shortly after we reported on Mr Dougan’s activities in a previous story, a fake version of the BBC website briefly appeared online. It was linked through digital markers to his network.
Mr Dougan is most likely not the only person working on the influence operation and who funds it remains unclear.
“I think it's important not to overplay his role in this campaign," said Darren Linvill, co-director of Clemson University’s Media Forensic Hub, which has been tracking the network. “He may be just a bit of a bit player and a useful dupe, because he's an American.”
Despite his appearances on state-run media and at government-linked think tanks, Mr Dougan denies he is being paid by the Kremlin.
“I have never been paid a single dime by the Russian government,” he said via text message.
## Targeting the US election
The operation that Dougan is involved in has increasingly shifted its focus from stories about the war in Ukraine to stories about American and British politics.
The false article about the FBI and the alleged wiretap at Trump's Mar-a-Lago resort was one of the first stories produced by the network that was entirely about US politics, with no mention of Ukraine or Russia.
Clint Watts, who leads Microsoft’s Digital Threat Analysis Center, said that the operation often blends together issues with salience both in Ukraine and the West.
Mr Watts said that the volume of content being posted and the increasing sophistication of Russia-based efforts could potentially pose a significant problem in the run-up to November’s election.
“They're not getting mass distribution every single time,” he said, but noted that several attempts made each week could lead to false narratives taking hold in the “information ocean” of a major election campaign.
“It can have an outsized impact", and stories from the network can take off very quickly, he said.
“Gone are the days of Russia purchasing ads in roubles, or having pretty obvious trolls that are sitting in a factory in St. Petersburg,” said Nina Jankowicz, head of the American Sunlight Project, a non-profit organisation attempting to combat the spread of disinformation.
Ms Jankowicz was briefly director of the short-lived US Disinformation Governance Board, a branch of the Department of Homeland Security designed to tackle false information.
“Now we're seeing a lot more information laundering,” she said - using a term referring to the recycling of fake or misleading stories into the mainstream in order to obscure their ultimate source.
## Where it goes next
Microsoft researchers also say the operation is attempting to spread stories about UK politics – with an eye on Thursday’s general election – and the Paris Olympics.
One fake story – which appeared on the website called the London Crier – claimed that Mr Zelensky bought a mansion owned by King Charles III at a bargain price.
It was seen by hundreds of thousands of users on X, and shared by an official Russian embassy account. YouTube removed an AI-narrated video posted by an obscure channel that was used as the source of the false story after it was flagged by BBC Verify.
And Mr Dougan hinted at even bigger plans when asked whether increased attention on his activities would slow the spread of his false stories.
“Don’t worry,” he said, “the game is being upped.”
*Correction 4 July 2024:*
*An earlier version of this story used the incorrect logo to represent the Chicago Chronicle website linked to this network. It has now been updated.*
---

# Apple’s CarPlay is going beyond the infotainment screen

*Source: theverge.com, by Umar Shakir (June 6, 2022)*
Apple announced a complete refresh of CarPlay to better connect with a car’s instrument panel and a deep integration with the vehicle itself. CarPlay users will be able to swap what they see on the instrument panel with a very Apple-looking widget design.
Users can add trip info, control climate in the car, see the weather, view updated navigation information, fuel and battery levels, and more. It can adapt to different screen sizes and has an all-new interface that is reminiscent of having an iPad on the center screen.
Apple says more announcements will come late next year. Apple has been working internally on an electric autonomous vehicle, but the company keeps getting hit with setbacks and executive departures. This new CarPlay integration is the closest we’ve seen to what an Apple Car could look like.
Manufacturers are already looking into integrating the next generation of CarPlay, including Ford, Audi, Jaguar-Land Rover, Nissan, Volvo, Polestar, and more. Apple says 98 percent of all new cars already have CarPlay, and 79 percent of users consider the feature before buying a car.
---

# Technical Debt – when do you have to pay it off?

*Source: swreflections.blogspot.co.uk, by Jim Bird (September 2012)*
There are 2 times to think about technical debt:
- When you are building a system and making trade-off decisions between what can be done now and what will need to be done “sometime in the future”.
- “Sometime in the future”, when you have to deal with those decisions and need to pay off that debt.

What happens when “sometime in the future” is now? How much debt is too much to carry? When do you have to pay it off?
## How much debt is too much?
Every system carries some debt. There is always code that isn’t as clean or clear as it should be. Methods and classes that are too big. Third party libraries that have fallen out of date. Changes that you started in order to solve problems that went away. Design and technology choices that you regret making and would do differently if you had the chance.
But how much is this really slowing the team? How much is this really costing you? You can try to measure if technical debt is increasing over time by looking at your code base. Code complexity is one factor. There is a simple relationship between complexity and how hard it is to maintain code, looking at the chance of introducing a regression:
| Complexity | % Chance of bad fix |
| --- | --- |
| 1-10 | 5% |
| 20-30 | 20% |
| >50 | 40% |
| 100 | 60% |
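The table can be read as a rough step function. Note that the bands between the published rows (11-19, 31-50) are not given, so the cut-offs in this hypothetical helper are interpolated assumptions:

```python
# Approximate chance that a fix to code of a given cyclomatic complexity
# introduces a regression ("bad fix"), based on the table above.
# Bands not listed in the table (11-19, 31-50) are interpolated guesses.
def bad_fix_chance(complexity):
    if complexity <= 10:
        return 0.05
    if complexity <= 30:
        return 0.20
    if complexity < 100:
        return 0.40
    return 0.60

for c in (8, 25, 60, 120):
    print(f"complexity {c:>3}: ~{bad_fix_chance(c):.0%} chance of a bad fix")
```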
Complexity by itself isn’t enough. Some code is essentially complex, or accidentally complex, but if it doesn’t need to be changed it doesn’t add to the real cost of development. Tools like Sonar look at complexity as well as other variables to assess the technical risk of a code base:
Cost to fix duplications + cost to fix style violations + cost to comment public APIs + cost to fix uncovered complexity (complex code that has less than 80% automated code coverage) + cost to bring complexity below threshold (splitting methods and classes)
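A minimal sketch of that cost model follows; the component figures are invented, and real tools like Sonar estimate and weight each remediation cost differently:

```python
# Sum the per-category remediation costs from the Sonar-style model above.
# All inputs are hypothetical, e.g. in developer-hours.
def technical_debt_cost(duplications, style_violations, undocumented_apis,
                        uncovered_complexity, over_threshold_complexity):
    return (duplications
            + style_violations
            + undocumented_apis
            + uncovered_complexity
            + over_threshold_complexity)

cost = technical_debt_cost(
    duplications=40,                # cost to fix duplicated code
    style_violations=12,            # cost to fix style violations
    undocumented_apis=8,            # cost to comment public APIs
    uncovered_complexity=65,        # complex code under 80% coverage
    over_threshold_complexity=30,   # splitting oversized methods/classes
)
print(f"estimated debt: {cost} developer-hours")
```

Tracked over time, a number like this is mainly useful as a trend line rather than an absolute figure.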
This gives you some idea of technical debt costs that you can track over time or compare between systems. But when do you have to fix technical debt? When do you cross the line?
Deciding on whether you need to pay off debt depends on two factors:
- Safety / risk. Is the code too difficult or too dangerous to change? Does it have too many bugs? Capers Jones says that every system, especially big systems, has a small number of routines where bugs concentrate (the 20% of code that has 80% of problems), and that cleaning up or rewriting this code is the most important thing that you can do to improve reliability as well as to reduce the long-term costs of running a system.
- Cost – real evidence that it is getting more expensive to make changes over time, because you’ve taken on too much debt. Is it taking longer to make changes or to fix bugs because the code is too hard to understand, or because it is too hard to change, or too hard to test?
While apparently for some teams it’s obvious that if you are slowing down it must be because of technical debt, I don’t believe it is that simple.
There are lots of reasons for a team to slow down over time, as systems get bigger and older, reasons that don’t have anything to do with technical debt. As systems get bigger and are used by more customers in more ways, with more features and customization, the code will take longer to understand, changes will take longer to test, you will have more operational dependencies, more things to worry about and more things that could break, more constraints on what you can do and what risks you can take on. All of this has to slow you down.
## How do you know that it is technical risk that is slowing you down?
A team will slow down when people have to spend too much time debugging and fixing things – especially fixing things in the same part of the system, or fixing the same things in different parts of the system. When you see the same bugs or the same kind of bugs happening over and over, you know that you have a debt problem. When you start to see more problems in production, especially problems caused by regressions or manual mistakes, you know that you are over your head in debt. When you see maintenance and support costs going up – when everyone is spending more time on upgrades and bug fixing and tuning than they are on adding new features, you're running in circles.
## The 80:20 rule for paying off Technical Debt
Without careful attention, all code will get worse over time, but whatever problems you do have are going to be worse in some places than others. When it comes to paying back debt, what you care about most are the hot spots:
- Code that is complex and
- Code that changes a lot and
- Code that is hard to test and
- Code that has a history of bugs and problems.
You can identify these problem areas by reviewing check-in history, mining your version control system (the work that Michael Feathers is doing on this is really cool) and your bug database, through static analysis checks, and by talking with developers and testers.
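The version-control mining step can be sketched by counting churn per file. Here is a minimal pass over the output of `git log --name-only --pretty=format:` (one changed path per line, blank lines between commits); the log excerpt and file names are hypothetical:

```python
from collections import Counter

def churn_from_git_log(log_text):
    """Count how many commits touched each file, given the output of
    `git log --name-only --pretty=format:`. Files that change most
    often are candidate hot spots to cross-check against complexity
    and bug history."""
    counts = Counter()
    for line in log_text.splitlines():
        path = line.strip()
        if path:  # blank lines separate commits
            counts[path] += 1
    return counts.most_common()

# Hypothetical log excerpt: three commits touching two files.
sample = """src/billing.py
src/util.py

src/billing.py

src/billing.py
src/util.py
"""
```

Cross-referencing the top of this list with complexity scores and your bug database gives you the hot spots described above.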
This is the code that you have to focus on. This is where you get your best return on investment from paying down technical debt. Everything else is good hygiene – it doesn't hurt, but it won’t win the game either. If you’re going to pay down technical debt, pay it down smart.
# Taking the Mueller Protection Bills Seriously: A Response
By Steve Vladeck
Published by The Lawfare Institute
Last Monday, I wrote a lengthy post about why Congress should pass the pending, bipartisan bills to protect Special Counsel Robert Mueller from being fired without good cause—and why the proffered constitutional objections to that legislation are based upon a combination of unsubstantiated (and contestable) assumptions about the current Supreme Court’s willingness to overturn *Morrison v. Olson* and far more basic misunderstandings about what these bills would actually do. With that post in mind, I’d like to respond to my friend Adam White, who argues that, rather than pass these bills, Congress should create a special court procedure to provide for timely, expeditious and appropriate consideration of a lawsuit by Mueller contesting an allegedly wrongful removal. On closer inspection, Adam’s post only underscores why the pending bills can and should be passed forthwith.
In a nutshell, Adam argues that Congress should require any lawsuit arising from the allegedly wrongful termination of a special counsel to be brought on an expedited basis before a three-judge district court in the District of Columbia under 28 U.S.C. §2284, with the concomitant right of mandatory appeal to the Supreme Court under 28 U.S.C. §1253. He argues that “Congress should require the district court to afford ‘expedited’ review of such cases.” And he argues for an automatic stay, “requiring the parties to preserve all materials produced or held by the special counsel’s investigation for the duration of judicial review.” Finally, these moves are necessary, Adam writes, because even without them, “there would still be litigation to determine what actually constitutes ‘misconduct, dereliction of duty, incapacity, conflict of interest, or ... other good cause’ within the meaning of the regulations, and whether that standard has been met in the case of the particular special counsel.”
Procedurally, what Adam is proposing is, with one technical addition, what the pending bills would already provide for. For example, the Tillis-Coons bill (the “Special Counsel Integrity Act”) already provides that “An action filed under this subsection shall be heard and determined by a court of 3 judges not later than 14 days after the date on which the action is filed in accordance with the provisions of section 2284 of title 28, United States Code, and any appeal shall lie to the Supreme Court.” And although the Graham-Booker bill doesn’t include the same time limit, it *does* also require any action to be filed before a three-judge district court. So all Adam’s proposal really adds here is that the three-judge district court would have to sit in the District of Columbia. Fair enough, but I take that as a friendly amendment, not a hostile one. (Adam also suggests that Congress legislate an automatic stay and a material-preservation requirement, but, of course, both a stay and a preservation order would be available under current law.)
More problematically, Adam’s post proceeds on the assumption that litigation would already be possible without these new procedures. But it’s not at all clear that he’s right that current law would indeed allow for litigation over whether the termination of a special counsel comported with the good-cause standard provided by 28 C.F.R. §600.7(d). After all, the same regulation provides, in 28 C.F.R. §600.10, that “[t]he regulations in this part are not intended to, do not, and may not be relied upon to create any rights, substantive or procedural, enforceable at law or equity, by any person or entity, in any matter, civil, criminal, or administrative.” It’s not hard to imagine the government arguing that, by dint of §600.10, there’s no individual right on the part of the special counsel to contest the validity of his termination under §600.7(d).
Given that constraint on existing law, I had always thought, as I testified last September, that the true salutary effect of both of the pending bills is their guarantee of judicial review of the existing removal standard. Perhaps I’m misreading it, but Adam’s post seems to support that goal, and object solely to Congress codifying that removal standard by statute, rather than simply providing for enforcement of the existing administrative standard for removal in §600.7(d).
At the end of the day, the difference between the existing proposals that I believe Congress can and should enact, and the legislation Adam supports, appears to be simply whether the removal standard is codified or simply incorporated from the regulation, and whether the suit before a three-judge district court must be brought in D.C. In that case, it seems to me that Adam’s proposal is not actually that different from the pending bills at all. The procedural guidance he envisions is compatible with the existing legislation: to satisfy his criticism, the bills would only need to be tweaked slightly before passage.
# The Un-Brie-Lievable History of Tyromancy
By Jennifer Billock (Saveur)
This fortune-telling practice uses cheese to predict everything from your future spouse to your next career move.
A few months ago, I told a chef in Vancouver that he would soon experience major growth in his career and take on much more responsibility, and that the letter B would somehow be involved. How did I know? Some cheese told me. He’d been standing next to a particularly veiny piece of blue cheese, and asked me to read his fortune from it. Predicting the future using cheese is something I do as a side business, and from what I can tell, there aren’t very many of us doing this anymore.
This isn’t some cheesy divination method I just made up. Tyromancy, or the practice of telling fortunes with cheese, was first officially mentioned in the second century in the writings of Greek historian and professional diviner Artemidorus of Daldis (also known as Artimedorus of Ephesus) on dream interpretation. He apparently didn’t think cheese was a great invention: he noted at the time that the food signifies “trickery and ambushes” and that tyromancers sullied the work of true diviners like sacrificers and liver examiners. Tyromancers, he argued, were more in league with those who practiced evil types of divination, including dice diviners and necromancers. It feels like a bit of a leap to go from cheese to death, but Artimedorus had some opinions, I guess.
Tyromancy reached peak popularity in England during the Middle Ages and early modern period (1500–1800). The country was primarily an agrarian society at the time, with most families having some sort of livestock that produced milk for cheese—and people loved to dabble in the paranormal. Christianity was ingrained in most people, so looking for insight into one’s predestined future, or trying to find a way to gain control over it, led to widespread interest in divination. One used whatever tools were on hand to achieve that, and at that particular point in history, that meant cheese. It was a much more convenient choice than previous divination methods, which included dumping a ladle of molten lead into a bucket of water to see what shapes it made.
People used cheese to divine all sorts of things: who committed a crime, whether the year would bring a fruitful harvest, and how a child’s life would turn out. Those who practiced it generally used farmer’s cheese, though some tried it with runnier options, like fondue.
Back then, a typical use for tyromancy was to determine who you would marry. You’d simply carve the names of all potential suitors into some pieces of cheese, then wait to see which one molded first. And there it was—your life partner! People also analyzed the number and size of holes in a block of cheese, the patterns of the mold and veins, and the shapes that curds made as they coagulated. The process is similar to that of reading tea leaves or coffee grounds—you tell a story through the shapes you see. A heart shape, for example, signifies love and happiness, while an odd number of holes predicts that something negative might happen.
During the early modern years, tyromancy’s popularity began to wane. After the 1920s, cheese fortune-telling essentially disappeared—perhaps in part due to the swift rise of tarot after the invention of the well-known Rider-Waite deck in 1909. Save for occasional pop-culture references, like in the video game series “The Witcher” and “Baldur’s Gate”, tyromancy became largely unknown.
Today, tyromancy still lingers in the realm of obscurity, though I’m trying to change that by leading workshops and one-on-one sessions to teach people how to read their own cheese. Since launching these classes during the pandemic, they’ve been selling out, which I attribute in part to the popularity of WitchTok and the general feelings of uncertainty created by COVID-19.
In the years I’ve been tyromancing, I’ve figured out ways to read just about any type of cheese. Those with patterns on the surface are best, but you can always break a chunk in half and analyze the variations along the break. You can also use crumbles by dumping them out on a plate. I’ve even read a fortune from a Kraft Singles slice—but the person for whom I was reading had to tear it up and drop it onto a plate first. Vegan cheese? No problem.
That’s probably the biggest way tyromancy has changed between the second century and the present: our world of cheese is so vast now, and every piece of it can tell a story.
After I finished my divination session with that chef in Vancouver, he came around the table to hug me and take a photo. Apparently he had been planning a career move: opening his own eponymous restaurant. Unbeknownst to me, his last name starts with a B, just as the cheese foretold.
# Why Americans and Britons work such long hours
*The Economist*
## Society as a whole must judge whether or not there is more to life than work
THE YEAR ahead will, like every year, consist of just under 8,800 hours. Most people will spend about a third of that time sleeping, and another third or so arguing on social media. Much of the remainder will be spent at work. There is increased interest in corners of the political world in trying to reduce the amount of time people must spend on the job. The Labour Party in Britain has said it will consider introducing a four-day work week when it is next in power. Figures on the American left are similarly intrigued by the idea. To assess whether such moves to reduce working time have any merit first requires an understanding of why hours in those countries have not fallen more already.
This article appeared in the Finance & economics section of the print edition under the headline “The time off your lives”
# To Improve Health, Cut Costs, Walmart Pushes For Better Medical Imaging For Workers
By Phil Galewitz (KFF Health News)
Walmart Inc., the nation's largest private employer, is worried that too many of its workers are having health conditions misdiagnosed, leading to unnecessary surgery and wasted health spending.
The issue crystallized for Walmart officials when they discovered about half of the company's workers who went to the Mayo Clinic and other specialized hospitals for back surgery in the past few years turned out not to need those operations. They were either misdiagnosed by their doctor or needed only non-surgical treatment.
A key issue: Their diagnostic imaging, such as CT scans and MRIs, had high error rates, says Lisa Woods, senior director of benefits design for Walmart.
So the company, whose health plans cover 1.1 million U.S. employees and dependents, has recommended since March that workers use one of 800 imaging centers identified as providing high-quality care. That list was developed for Walmart by Covera Health, a New York City-based health analytics company that uses data to help spot facilities likely to provide accurate imaging for a wide variety of conditions, from cancer to torn knee ligaments.
Although Walmart and other large employers in recent years have been steering workers to medical centers with proven track records for specific procedures such as transplants, the retail giant is believed to be the first to prod workers to use specific imaging providers based on diagnostic accuracy — not price, say employer health experts.
"A quality MRI or CT scan can improve the accuracy of diagnoses early in the care journey, helping create the correct treatment plan with the best opportunity for recovery," says Woods. "The goal is to give associates the best chance to get better, and that starts with the right diagnosis."
Walmart employees are not required to use those 800 centers, but if they don't use one that is available near them, they will have to pay additional cost sharing. Company officials advise workers that they could have more accurate results if they opt for the specified centers.
Studies show a 3% to 5% error rate each workday in a typical radiology practice, but some academic research has found mistakes on advanced images such as CT scans and MRIs can reach up to 30% of diagnoses. Although not every mistake affects patient care, with millions of CT scans and MRIs done each year in the United States, such mistakes can have a significant impact.
"There's no question that there are a lot of errors that occur," says Dr. Vijay Rao, chairwoman of radiology at the Thomas Jefferson University Hospital in Philadelphia.
Errors at imaging centers can happen for many reasons, Rao says, including the radiologist not devoting enough time to reading each image, the technician not positioning the patient correctly in the imaging machine or a radiologist not having sufficient expertise.
Employers and insurers typically do little to help patients identify which radiology practices provide the most accurate results. Instead, employers have been focused on the cost of imaging tests. Some employers or insurers require plan members to use free-standing outpatient centers rather than those based in hospitals, which tend to be more expensive.
Woods says Walmart found that deficiencies and variation in imaging services affected employees nationwide. "Unfortunately, it is all over the country. It's everywhere," she says.
Walmart's new imaging strategy is aligned with its efforts over the past decade to direct employees to select hospitals for high-cost health procedures. Since 2013, Walmart has been sending workers and their dependents to select hospitals across the country where it believes they can get better results for spine surgery, heart surgery, joint replacement, weight loss surgery, transplants and certain cancers.
As part of its "Centers of Excellence" program, the Bentonville, Ark.-based retail giant picks up the tab for the surgeries and all related travel expenses for patients on the company's health insurance plan, including a caregiver.
**Tracking imaging centers' quality**
Most consumers give little thought to where to get an MRI or CT scan, and usually go where their doctors send them, the closest facility or, increasingly, the one that offers the lowest price, notes Covera CEO Ron Vianu. "Most people think of diagnostic imaging as a commodity, and that's a mistake," he says.
Vianu says studies have shown that radiologists frequently offer different diagnoses based on the same image taken during an MRI or CT scan. Among explanations are that some radiologists are better at analyzing certain types of images — like those of the brain or bones — and sometimes radiologists read images from exams they have less experience with, he says.
Covera has collected information on thousands of hospital-based and outpatient imaging facilities.
"Our primary interest is understanding which radiologist or radiology practices are achieving the highest level of diagnostic accuracy for their patients," says Dan Elgort, Covera's chief data science officer.
Covera has independent radiologists evaluate a sampling of patient care data on imaging centers to determine facilities' error rates. It uses statistical modeling along with information on each center's equipment, physicians and use of industry-accepted patient protocols to determine the facilities' rates of accuracy.
Covera expects to have about 1,500 imaging centers in the program it runs for Walmart by year's end, says CEO Ron Vianu.
There are about 4,000 outpatient imaging centers in the United States, not counting thousands of hospital-based facilities, he estimated.
As a condition for participating in the program, each of the imaging centers has agreed to routinely send a sampling of their patients' images and reports to Covera.
Rao applauded the effort by Walmart and Covera to identify imaging facilities likely to provide the most accurate reports. "I am sure centers that are worried about their quality will not be happy, but most quality operations would welcome something like this," she says.
**Few guides for consumers**
Consumers have little way to distinguish the quality of care from one imaging center to the next. The American College of Radiology has an accreditation program but does not evaluate diagnostic quality.
"We would love to have more robust ... measurements" about the outcomes of patient care than what is currently available, says Dr. Geraldine McGinty, chair of the college's board of chancellors.
Facilities typically conduct peer reviews of their radiologists' patient reports, but there is no public reporting of such results, she says.
Covera officials say they have worked with Walmart for nearly two years to demonstrate they could improve the quality of diagnostic care its employees receive. Part of the process has included reviewing a sample of Walmart employees' health records to see where changes in imaging services could have caught potential problems.
Covera says the centers in its network were chosen based on quality; price was not a factor.
In an effort to curtail unnecessary tests, Walmart, like many large employers and insurers, requires its insured members to get authorization before getting CT scans and MRIs.
"Walmart is on the leading edge of focusing on quality of diagnostic imaging," says Suzanne Delbanco, executive director of the Catalyst for Payment Reform, an employer-led health care think tank and advocacy group.
But Mark Stolper, executive vice president of Los Angeles-based RadNet, which owns 335 imaging centers nationally, questions how Covera has enough data to compare facilities. "This would be the first time," he says, "I have seen or heard of a company trying to narrow a network of imaging centers that is based on quality instead of price."
Woods says that even though the new imaging strategy is not based on financial concerns, it could pay dividends down the road.
"It's been demonstrated time and time again that high quality ends up being more economical in the long run because inappropriate care is avoided, and patients do better," she says.
# A Guide to Using Emojis In Customer Support
By David Oragui
Want to start sounding more human in your customer support responses? Start **using emojis**!
John, is that you? (ok, maybe your name isn’t John)
But surely you want to make an impression like John did here. You’ve dealt with an angry customer or two? Then you’ll know this fact …
…Every customer is a small puppy and a fierce lion at the same time. It all depends on the way YOU approach him/her.
You’ve seen this “fierce lion” side of a customer at least once?
Then read on … we might help you …
There’s been a lot of research done on this topic, so we’ve dug deeper and found some easy hacks around making customer support more interesting & effective.
You can start increasing your customer loyalty today with a simple change that’ll go a long way.
As well as, decrease the number of dissatisfied customers greatly.
A Genesis global survey came to the conclusion that 40% of customers claim that the biggest improvement in customer support can be made through “**better human service**”.
Furthermore, top two reasons for customer loss are:
- **Customer FEELS poorly treated**
- **Failure to solve a problem in a timely manner**
As you can see, the biggest **CHALLENGE** in customer support today is to **humanize** the relationship with customers.
Here is the twist: **Using EMOJIS/EMOTICONS** in customer support will make your job so much easier and humanize your relationship with customers online!
If you’re still not up to date with this trend, fasten your seat belts, you’re about to go on a customer support fast track!
The problem with de-humanized customer support is that it’s become “normal”. All those lines of plain text are killing both you and the customer.
People are visual creatures. If you doubt that, just go to the nearest bar with your friends and tell the waiter to bring you three beers and show him **four fingers**.
He will most likely bring you four beers, which isn’t that bad after all; that’s just the way we communicate.
The same goes for emoticons/emojis, they are highly visual and impactful for humans.
Emojis are a great way to get your customers involved on a **subconscious **level, and that’s always a great way to go!
Actually, Penn State did an interesting study, and I couldn’t help but notice a few crucial facts/points:
- The emoticon is even more powerful than the picture
- Emoticon makes customers feel like the customer agent has an emotional presence
- Emoticons can be effective vehicles for expression of empathy in customer relations, especially in the mobile e-commerce context
- Agents who responded more quickly to customers during the chat were rated more positively than those who did not
Here is what customer service and experience expert and New York Times bestselling author, Shep Hyken, had to say about this topic and research:
“The emoji is just another way to rate a customer’s satisfaction (or dissatisfaction) about service or product. The difference between a one-to-ten type of rating is that the emoji also expresses the emotion behind the rating. Happy, sad, glad and more are ways to gain insight into the customer’s emotional connection to the brand.”
We’ve gone in depth on this subject and want to speed up the process along for you.
I bet that you’ll find this article useful as hell and this is why…
Here is what you’ll get in this blog post so that you can quickly scan it for what you need:
- How emoticons/emojis can greatly improve your customer support
- If you don’t follow these simple RULES to using emojis/emoticons in customer support, you’ll do more damage than good
- Tools that are most suited for using emoticons/emojis in customer support
## How Emojis Can Greatly Improve Your Customer Support
Only 1 in 10 customers who could complain about something will actually do it. So make sure that you do **EVERYTHING** to resolve that issue.
When you have superior customer support, people are more likely to go and share the positive experience with their friends.
The fact of the matter is that 90% of positive reactions like this begin with an overly frustrated customer who sought your help.
Make sure that you have **SUPERIOR** customer support and focus on **every little detail** to ensure customer satisfaction.
Emoticons/emojis are just that: a detail that only superior customer support teams focus on and insist on.
We’ve asked Matthew Larner, the director of clicksend.com (Business SMS gateway, voice, fax, email), to give us his take on emojis in customer support. Here’s what he said:
“I think emojis are a great way to make customer support more personal. They should be used sparingly though, as some customers still feel it’s unprofessional.”
The proper use of emoticons will improve your customer support in these 5 aspects:
- Using emojis will make it easy for your interaction to be perceived by the customer as genuine. The best way to make the interaction genuine is with video chatting options, but most of the time that’s not an option and all you are left with is text chat tools.
Well, winners use whatever situation they have and make it as good as they can. Be genuine – USE emoticons.
By using emojis you’ll emphasize that you understand that your customer is a human and make it easier for them to understand that, you are also a human.
- Emoticons convey social POWER! There is a huge correlation between using emoticons (especially the positive ones) and the social influence that one has on social media.
Customer support on social media is huge today! You can prevent so many unsatisfied customers with having a strong influence in social media.
Kind people from the University of Cambridge did this study and proved that emoticons are a really powerful way of communicating in today’s society.
- Emoticons make your customer feel like they are in a real conversation and that you are on the same page in dealing with an issue.
You know what I’m talking about for sure. It’s like chatting with a friend and you feel like he really gets you. THAT is the exact same effect you should strive towards in your customer relationships
- Emoticons will soften up the relationship and YOU will feel so much more at ease dealing with customers!
That’s the most important thing of all. If people that work with customers are not satisfied and happy, then they’ll rub that off on customers.
Emoticons ——> Superior customer relations -
Emoticons will make your customers feel special and important.
Let’s be honest, it’s not an everyday thing to use emoticons/emojis with your customers. They will notice that and nobody will be bothered by them, ok maybe a few will, but that’s why you should read our rules that we’ve put together for you.
## Simple RULES to Using Emojis/Emoticons in Customer Support
OK, I can’t give you all the goodies (RULES) without first introducing the most powerful and simple weapon from our “arsenal” – EMOTICONS.
With these, you can show every kind of emotion to your customer more easily. Feel free to comment if you feel that there is more to add to this list.
| What we want to convey | Emoticon | Example |
|---|---|---|
| Disappointment | :/ | Testing your issue right now, sorry for the late reply. The “chat service” has been weird on my end :/ |
| Happiness | | It’s done |
| Excitement | | You’re welcome And if you need anything else, please do tell |
| Apologetic and/or sympathy | | I’m really sorry to hear that but … |
### Six simple rules to follow with emoticons in CS
#### 1. Don’t overuse emoticons
Overusing anything is bad for you; you can do more harm than good. BALANCE is the key here, so use your emoticons wisely.
#### 2. Signal basic emotions with them
Emoticons are great for adding some context between you and your customer, but keep it simple. You’ll rarely need to convey complex emotions to a customer, so stick to the basics and you’re good.
#### 3. Don’t use them on the first contact
When first addressing the customer, the ball is in your court. Don’t blow it on the first hit. Return a safe ball first and then ping off the reaction of your customer.
#### 4. Think about who you’re talking to
This one is huge! Know your customer persona in depth! Are they likely to use emoticons in their day-to-day life? If yes, then you’re set to get some extra points with them.
#### 5. Don’t use them with a customer who’s really pissed off
You need to use your EQ wisely when it comes to emoticons. If you can tell by the tone of the message that the customer is super agitated, don’t experiment on them! Mirror them instead and, again, ping off of their reactions.
#### 6. And the BIGGEST of them all, when in doubt – DON’T use emoticons at all!
Let’s be honest, most of you have high IQ and EQ, or you wouldn’t be dealing with customers.
Put that high IQ and EQ to use with emoticons! And do it wisely; you know what uncle Ben told us: “**With great power comes great responsibility**”.
## Tools That Are Most Suited for Using Emojis/ Emoticons in Customer Support
There is a lot of subjectivity when it comes to using tools, right? Everybody has their own unique situation in terms of tools and that’s fine.
Here are three tools that have helped us achieve a lot of great things with our customers and we’ve used emoticons in all of them.
Wonder how?
Read on …
### 1. Intercom
Intercom aims to change the way you communicate with customers, and they are doing a hell of a good job at it.
This is an all-in-one tool that’s simple for teams to use! Intercom Acquire helps with easy-to-use direct communication with the customer, mixing formal and casual interactions.
Intercom is a great way to provide superior customer service to your customers.
This is, in a nutshell, what Intercom does.
### 2. Olark
Those customers, ha? They are some pretty sensitive people. Did you know that 53% of customers are irritated if they don’t speak with a real person right away?
Well, there is this awesome tool which will help you prevent those irritations and let you get on a winning streak with your customers.
Olark is one of the best tools for a quick response. It’s incredibly fast, and you can improve your rates by 40% if you respond within a two-minute time frame.
One other advantage is that you can be a little more informal when you’re using this text chat app and utilize those emoticons.
### 3. Social media (Twitter is on the rise folks, AGAIN)
Even though social media is talked about a lot as a means of achieving customer satisfaction, only 26% of companies surveyed claim that their employees take social media seriously!
And while you may not get a lot of customer support requests from your social media channels, the consequences can be brutal if you ignore inquiries or take too long to respond.
According to Gartner.com, failure to respond to customer support requests on social media can lead to a 15% increase in churn.
That's why it's important to take your social media channels seriously and use the right tools as well as rules and strategies we outlined above on emojis.
## Wrapping Up
Using emojis is truly acceptable when providing customer support. What a great world we live in, eh?
I’d love to hear your thoughts and experiences on emojis – do you use them in your organization? If so, how often?
# Babashka book

By Michiel Borkent (https://book.babashka.org/#tasks)
## Introduction
Welcome reader! This is a book about scripting with Clojure and babashka.
Clojure is a functional, dynamic programming language
from the Lisp family which runs on the JVM. Babashka is a scripting environment
made with Clojure, compiled to native with GraalVM. The
primary benefits of using babashka for scripting compared to the JVM are fast
startup time and low memory consumption. Babashka comes with batteries included
and packs libraries like `clojure.tools.cli`
for parsing command line arguments
and `cheshire`
for working with JSON. Moreover, it can be installed just by
downloading a self-contained binary.
### Target audience
Babashka is written for developers who are familiar with Clojure on the JVM. This book assumes familiarity with Clojure and is not a Clojure tutorial. If you aren’t that familiar with Clojure but you’re curious to learn, check out this list of beginner resources.
### Setting expectations
Babashka uses SCI for interpreting Clojure. SCI implements a substantial subset of Clojure. Interpreting code is in general not as performant as executing compiled code. If your script takes more than a few seconds to run or has lots of loops, Clojure on the JVM may be a better fit, as the performance on JVM is going to outweigh its startup time penalty. Read more about the differences with Clojure here.
## Getting started
### Installation
Installing babashka is as simple as downloading the binary for your platform and
placing it on your path. Pre-built binaries are provided on the
releases page of babashka’s
Github repo. Babashka is also available in
various package managers like `brew`
for macOS and linux and `scoop`
for
Windows. See here for
details.
### Building from source
If you would rather build babashka from source, download a copy of GraalVM and
set the `GRAALVM_HOME`
environment variable. Also make sure you have
lein installed. Then run:
```
$ git clone https://github.com/borkdude/babashka --recursive
$ script/uberjar && script/compile
```
See the babashka build.md page for details.
### Running babashka
The babashka executable is called `bb`
. You can either provide it with a Clojure
expression directly:
```
$ bb -e '(+ 1 2 3)'
6
```
or run a script:
`(println (+ 1 2 3))`
```
$ bb -f script.clj
6
```
The `-e`
flag is optional when the argument starts with a paren. In that case babashka will treat it automatically as an expression:
```
$ bb '(+ 1 2 3)'
6
```
Similarly, the `-f`
flag is optional when the argument is a filename:
```
$ bb script.clj
6
```
Commonly, scripts have shebangs so you can invoke them with their filename only:
```
$ ./script.clj
6
```
```
#!/usr/bin/env bb
(println (+ 1 2 3))
```
## Usage
Typing `bb help`
from the command line will print all the available command
line options which should give you a sense of the available features in
babashka.
```
Babashka v1.3.191

Usage: bb [svm-opts] [global-opts] [eval opts] [cmdline args]
or:    bb [svm-opts] [global-opts] file [cmdline args]
or:    bb [svm-opts] [global-opts] task [cmdline args]
or:    bb [svm-opts] [global-opts] subcommand [subcommand opts] [cmdline args]

Substrate VM opts:

  -Xmx<size>[g|G|m|M|k|K]  Set a maximum heap size (e.g. -Xmx256M to limit the heap to 256MB).
  -XX:PrintFlags=          Print all Substrate VM options.

Global opts:

  -cp, --classpath   Classpath to use. Overrides bb.edn classpath.
  --debug            Print debug information and internal stacktrace in case of exception.
  --init <file>      Load file after any preloads and prior to evaluation/subcommands.
  --config <file>    Replace bb.edn with file. Defaults to bb.edn adjacent to invoked file
                     or bb.edn in current dir. Relative paths are resolved relative to bb.edn.
  --deps-root <dir>  Treat dir as root of relative paths in config.
  --prn              Print result via clojure.core/prn
  -Sforce            Force recalculation of the classpath (don't use the cache)
  -Sdeps             Deps data to use as the last deps file to be merged
  -f, --file <path>  Run file
  --jar <path>       Run uberjar

Help:

  help, -h or -?     Print this help text.
  version            Print the current version of babashka.
  describe           Print an EDN map with information about this version of babashka.
  doc <var|ns>       Print docstring of var or namespace. Requires namespace if necessary.

Evaluation:

  -e, --eval <expr>    Evaluate an expression.
  -m, --main <ns|var>  Call the -main function from a namespace or call a fully qualified var.
  -x, --exec <var>     Call the fully qualified var. Args are parsed by babashka CLI.

REPL:

  repl                 Start REPL. Use rlwrap for history.
  socket-repl  [addr]  Start a socket REPL. Address defaults to localhost:1666.
  nrepl-server [addr]  Start nREPL server. Address defaults to localhost:1667.

Tasks:

  tasks       Print list of available tasks.
  run <task>  Run task. See run --help for more details.

Clojure:

  clojure [args...]  Invokes clojure. Takes same args as the official clojure CLI.

Packaging:

  uberscript <file> [eval-opt]  Collect all required namespaces from the classpath into a
                                single file. Accepts additional eval opts, like `-m`.
  uberjar <jar> [eval-opt]      Similar to uberscript but creates jar file.
  prepare                       Download deps & pods defined in bb.edn and cache their metadata.
                                Only an optimization, this will happen on demand when needed.

In- and output flags (only to be used with -e one-liners):

  -i        Bind *input* to a lazy seq of lines from stdin.
  -I        Bind *input* to a lazy seq of EDN values from stdin.
  -o        Write lines to stdout.
  -O        Write EDN values to stdout.
  --stream  Stream over lines or EDN values from stdin. Combined with -i or -I
            *input* becomes a single value per iteration.

Tooling:

  print-deps [--format <deps | classpath>]: prints a deps.edn map or classpath with
  built-in deps and deps from bb.edn.

File names take precedence over subcommand names.
Remaining arguments are bound to *command-line-args*.
Use -- to separate script command line args from bb command line args.
When no eval opts or subcommand is provided, the implicit subcommand is repl.
```
### Running a script
Scripts may be executed from a file using `-f`
or `--file`
:
`bb -f download_html.clj`
The file may also be passed directly, without `-f`
:
`bb download_html.clj`
Using `bb`
with a shebang also works:
```
#!/usr/bin/env bb
(require '[babashka.http-client :as http])
(defn get-url [url]
(println "Downloading url:" url)
(http/get url))
(defn write-html [file html]
(println "Writing file:" file)
(spit file html))
(let [[url file] *command-line-args*]
(when (or (empty? url) (empty? file))
(println "Usage: <url> <file>")
(System/exit 1))
(write-html file (:body (get-url url))))
```
```
$ ./download_html.clj
Usage: <url> <file>
$ ./download_html.clj https://www.clojure.org /tmp/clojure.org.html
Downloading url: https://www.clojure.org
Writing file: /tmp/clojure.org.html
```
If `/usr/bin/env`
doesn’t work for you, you can use the following
workaround:
```
$ cat script.clj
#!/bin/sh
#_(
"exec" "bb" "$0" hello "$@"
)
(prn *command-line-args*)
./script.clj 1 2 3
("hello" "1" "2" "3")
```
### Current file path
The var `*file*`
contains the full path of the file that is currently
being executed:
```
$ cat example.clj
(prn *file*)
$ bb example.clj
"/Users/borkdude/example.clj"
```
### Parsing command line arguments
Command-line arguments can be retrieved using `*command-line-args*`
. If you
want to parse command line arguments, you can use the built-in
`babashka.cli`
namespace:
```
(require '[babashka.cli :as cli])
(def cli-options {:port {:default 80 :coerce :long}
:help {:coerce :boolean}})
(prn (cli/parse-opts *command-line-args* {:spec cli-options}))
```
```
$ bb script.clj
{:port 80}
$ bb script.clj --port 1223
{:port 1223}
$ bb script.clj --help
{:port 80, :help true}
```
Note that `clojure.tools.cli` is also built into babashka.
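As a comparison, the same options can be parsed with the built-in `clojure.tools.cli`; a minimal sketch (the option spec is illustrative, not from the original):

```clojure
;; script.clj -- parse --port and --help with clojure.tools.cli
(require '[clojure.tools.cli :refer [parse-opts]])

(def cli-options
  [["-p" "--port PORT" "Port number"
    :default 80
    :parse-fn #(Long/parseLong %)]
   ["-h" "--help"]])

;; parse-opts returns a map with :options, :arguments, :summary and :errors
(prn (:options (parse-opts *command-line-args* cli-options)))
```

Invoked as `bb script.clj --port 1223` this prints a map containing `:port 1223`; with no arguments the `:default` of 80 applies.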
### Classpath
It is recommended to use `bb.edn`
to control what directories and libraries are
included on babashka’s classpath. See Project setup.
If you want a lower level to control
babashka’s classpath, without the usage of `bb.edn`
you can use the
`--classpath`
option that will override the classpath. Say we have a file
`script/my/namespace.clj`
:
```
(ns my.namespace)
(defn -main [& args]
(apply println "Hello from my namespace!" args))
```
Now we can execute this main function with:
```
$ bb --classpath script --main my.namespace 1 2 3
Hello from my namespace! 1 2 3
```
If you have a larger script with a classic Clojure project layout like
```
$ tree -L 3
├── deps.edn
├── README
├── src
│ └── project_namespace
│ ├── main.clj
│ └── utilities.clj
└── test
└── project_namespace
├── test_main.clj
└── test_utilities.clj
```
then you can tell babashka to include both the `src`
and `test`
folders
in the classpath and start a socket REPL by running:
`$ bb --classpath src:test socket-repl 1666`
If there is no `--classpath`
argument, the `BABASHKA_CLASSPATH`
environment
variable will be used. If that variable isn’t set either, babashka will use
`:deps`
and `:paths`
from `bb.edn`
.
Also see the babashka.classpath namespace which allows dynamically adding to the classpath.
The namespace babashka.deps integrates
tools.deps with babashka and allows
you to set the classpath using a `deps.edn`
map.
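For instance, a script can add a dependency to its own classpath at runtime; a minimal sketch using `babashka.deps/add-deps` with the same deps map format as `bb.edn`:

```clojure
;; Add medley to the classpath at runtime, then require and use it.
(require '[babashka.deps :as deps])

(deps/add-deps '{:deps {medley/medley {:mvn/version "1.3.0"}}})

(require '[medley.core :as m])
(prn (m/index-by :id [{:id 1} {:id 2}]))
;; prints {1 {:id 1}, 2 {:id 2}}
```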
### Invoking a main function
A main function can be invoked with `-m`
or `--main`
like shown above. When
given the argument `foo.bar`
, the namespace `foo.bar`
will be required and the
function `foo.bar/-main`
will be called with command line arguments as strings.
Since babashka 0.3.1 you may pass a fully qualified symbol to `-m`
:
```
$ bb -m clojure.core/prn 1 2 3
"1" "2" "3"
```
so you can execute any function as a main function, as long as it accepts the number of provided arguments.
When invoking `bb` with a main function, the expression `(System/getProperty "babashka.main")` will return the name of the main function.
### Preloads
The environment variable `BABASHKA_PRELOADS`
allows you to define code that
will be available in all subsequent usages of babashka.
```
BABASHKA_PRELOADS='(defn foo [x] (+ x 2))'
BABASHKA_PRELOADS=$BABASHKA_PRELOADS' (defn bar [x] (* x 2))'
export BABASHKA_PRELOADS
```
Note that you can concatenate multiple expressions. Now you can use these functions in babashka:
```
$ bb '(-> (foo *input*) bar)' <<< 1
6
```
You can also preload an entire file using `load-file`
:
`export BABASHKA_PRELOADS='(load-file "my_awesome_prelude.clj")'`
Note that `*input*`
is not available in preloads.
### Running a REPL
Babashka supports running a REPL, a socket REPL and an nREPL server.
#### REPL
To start a REPL, type:
`$ bb repl`
To get history with up and down arrows, use rlwrap:
`$ rlwrap bb repl`
#### Socket REPL
To start a socket REPL on port `1666`
:
```
$ bb socket-repl 1666
Babashka socket REPL started at localhost:1666
```
Now you can connect with your favorite socket REPL client:
```
$ rlwrap nc 127.0.0.1 1666
Babashka v0.0.14 REPL.
Use :repl/quit or :repl/exit to quit the REPL.
Clojure rocks, Bash reaches.
bb=> (+ 1 2 3)
6
bb=> :repl/quit
$
```
The `--socket-repl`
option takes options similar to the `clojure.server.repl`
Java property option in Clojure:
`$ bb socket-repl '{:address "0.0.0.0" :accept clojure.core.server/repl :port 1666}'`
Editor plugins and tools known to work with a babashka socket REPL:
#### pREPL
Launching a prepl can be done as follows:
`$ bb socket-repl '{:address "0.0.0.0" :accept clojure.core.server/io-prepl :port 1666}'`
or programmatically:
```
$ bb -e '(clojure.core.server/io-prepl)'
(+ 1 2 3)
{:tag :ret, :val "6", :ns "user", :ms 0, :form "(+ 1 2 3)"}
```
#### nREPL
To start an nREPL server:
`$ bb nrepl-server 1667`
or programmatically:
```
$ bb -e "(babashka.nrepl.server/start-server\!) (deref (promise))"
Started nREPL server at 0.0.0.0:1667
```
Then connect with your favorite nREPL client:
```
$ lein repl :connect 1667
Connecting to nREPL at 127.0.0.1:1667
user=> (+ 1 2 3)
6
user=>
```
Editor plugins and tools known to work with the babashka nREPL server:
The babashka nREPL server currently does not write an `.nrepl-port`
file at
startup. Using the following `nrepl`
task, defined in a `bb.edn`
, you can
accomplish the same:
```
{:tasks
{nrepl
{:requires ([babashka.fs :as fs]
[babashka.nrepl.server :as srv])
:task (do (srv/start-server! {:host "localhost"
:port 1339})
(spit ".nrepl-port" "1339")
(-> (Runtime/getRuntime)
(.addShutdownHook
(Thread. (fn [] (fs/delete ".nrepl-port")))))
(deref (promise)))}}}
```
The `babashka.nrepl.server`
API is exposed since version 0.8.157.
##### Debugging the nREPL server
To debug the nREPL server from the binary you can run:
`$ BABASHKA_DEV=true bb nrepl-server 1667`
This will print all the incoming messages.
To debug the nREPL server from source:
```
$ git clone https://github.com/borkdude/babashka --recurse-submodules
$ cd babashka
$ BABASHKA_DEV=true clojure -A:main --nrepl-server 1667
```
#### REPL server port
For the socket REPL, pREPL, or nREPL, if a randomized port is needed, 0 can be used anywhere a port argument is accepted.
### Input and output flags
In one-liners the `*input*`
value may come in handy. It contains the
input read from stdin as EDN by default. If you want to read in text,
use the `-i`
flag, which binds `*input*`
to a lazy seq of lines of text.
If you want to read multiple EDN values, use the `-I`
flag. The `-o`
option prints the result as lines of text. The `-O`
option prints the
result as lines of EDN values.
`*input*` is only available in the `user` namespace, designed for one-liners. For writing scripts, see Scripts.
The following illustrates the combination of options for commands of the form:

```
echo "{{Input}}" | bb {{Input flags}} {{Output flags}} "*input*"
```

(The table in the original book lists, for each combination of input, input flags, and output flag, the resulting `*input*` value and output.)
When combined with the `--stream`
option, the expression is executed for
each value in the input:
```
$ echo '{:a 1} {:a 2}' | bb --stream '*input*'
{:a 1}
{:a 2}
```
#### Scripts
When writing scripts instead of one-liners on the command line, it is
not recommended to use `*input*`
. Here is how you can rewrite to
standard Clojure code.
#### EDN input
Reading a single EDN value from stdin:
```
(ns script
(:require [clojure.edn :as edn]))
(edn/read *in*)
```
Reading multiple EDN values from stdin (the `-I`
flag):
```
(ns script
(:require [clojure.edn :as edn]
[clojure.java.io :as io]))
(let [reader (java.io.PushbackReader. (io/reader *in*))]
(take-while #(not (identical? ::eof %)) (repeatedly #(edn/read {:eof ::eof} reader))))
```
#### Text input
Reading text from stdin can be done with `(slurp *in*)`
. To get a lazy
seq of lines (the `-i`
flag), you can use:
```
(ns script
(:require [clojure.java.io :as io]))
(line-seq (io/reader *in*))
```
#### Output
To print to stdout, use `println`
for text and `prn`
for EDN values.
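The difference matters mostly for strings and data: `println` writes human-readable text, while `prn` writes EDN that can be read back. A small sketch:

```clojure
(println "hello")   ; prints: hello
(prn "hello")       ; prints: "hello"

;; prn/pr-str output round-trips through the EDN reader:
(require '[clojure.edn :as edn])
(prn (= {:a 1} (edn/read-string (pr-str {:a 1}))))  ; prints: true
```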
### Uberscript
The `--uberscript`
option collects the expressions in
`BABASHKA_PRELOADS`
, the command line expression or file, the main
entrypoint and all required namespaces from the classpath into a single
file. This can be convenient for debugging and deployment.
Here is an example that uses a function from the clj-commons/fs library.
Let’s first set the classpath:
`$ export BABASHKA_CLASSPATH=$(clojure -Spath -Sdeps '{:deps {clj-commons/fs {:mvn/version "1.6.307"}}}')`
Write a little script, say `glob.clj`
:
```
(ns glob (:require [me.raynes.fs :as fs]))
(run! (comp println str)
(fs/glob (first *command-line-args*)))
```
For testing, we’ll make a file which we will find using the glob function:
`$ touch README.md`
Now we can execute the script which uses the library:
```
$ time bb glob.clj '*.md'
/private/tmp/glob/README.md
bb glob.clj '*.md' 0.03s user 0.01s system 88% cpu 0.047 total
```
Producing an uberscript with all required code:
`$ bb uberscript glob-uberscript.clj glob.clj`
To prove that we don’t need the classpath anymore:
```
$ unset BABASHKA_CLASSPATH
$ time bb glob-uberscript.clj '*.md'
/private/tmp/glob/README.md
bb glob-uberscript.clj '*.md' 0.03s user 0.02s system 93% cpu 0.049 total
```
Caveats:
- *Dynamic requires*. Building uberscripts works by running top-level `ns` and `require` forms. The rest of the code is not evaluated. Code that relies on dynamic requires may not work in an uberscript.
- *Resources*. The usage of `io/resource` assumes a classpath, so when this is used in your uberscript, you still have to set a classpath and bring the resources along.
If any of the above is problematic for your project, using an uberjar is a good alternative.
#### Carve
Uberscripts can be optimized by cutting out unused vars with carve.
```
$ wc -l glob-uberscript.clj
583 glob-uberscript.clj
$ carve --opts '{:paths ["glob-uberscript.clj"] :aggressive true :silent true}'
$ wc -l glob-uberscript.clj
105 glob-uberscript.clj
```
Note that the uberscript became 82% shorter (from 583 to 105 lines). This has a beneficial effect on execution time:
```
$ time bb glob-uberscript.clj '*.md'
/private/tmp/glob/README.md
bb glob-uberscript.clj '*.md' 0.02s user 0.01s system 84% cpu 0.034 total
```
### Uberjar
Babashka can create uberjars from a given classpath and optionally a main method:
```
$ cat bb/foo.clj
(ns foo)
(defn -main [& args] (prn :hello))
$ cat bb.edn
{:paths ["bb"]}
$ bb uberjar foo.jar -m foo
$ bb foo.jar
:hello
```
### System properties
Babashka sets the following system properties:
- `babashka.version`: the version string, e.g. `"1.2.0"`
- `babashka.main`: the `--main` argument
- `babashka.file`: the `--file` argument (normalized using `.getAbsolutePath`)
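These can be read with `System/getProperty`; a small sketch:

```clojure
;; Print babashka-related system properties. babashka.main and babashka.file
;; are nil unless the script was invoked via -m/--main or with a file.
(doseq [prop ["babashka.version" "babashka.main" "babashka.file"]]
  (println prop "=>" (System/getProperty prop)))
```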
### Data readers
Data readers can be enabled by setting `*data-readers*`
to a hashmap of
symbols to functions or vars:
```
$ bb -e "(set! *data-readers* {'t/tag inc}) #t/tag 1"
2
```
To preserve good startup time, babashka does not scan the classpath for
`data_readers.clj`
files.
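Because the classpath is not scanned, readers that would normally come from a library's `data_readers.clj` have to be installed by hand. A sketch with a hypothetical `my/inst` tag:

```clojure
;; Install a tagged-literal reader at runtime, then use the tag in a later form.
(set! *data-readers*
      (assoc *data-readers* 'my/inst
             (fn [s] (java.time.Instant/parse s))))

(prn #my/inst "2021-05-08T14:14:56Z")
```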
### Reader conditionals
Babashka supports reader conditionals by taking either the `:bb`
or
`:clj`
branch, whichever comes first. NOTE: the `:clj`
branch behavior
was added in version 0.0.71, before that version the `:clj`
branch was
ignored.
```
$ bb -e "#?(:bb :hello :clj :bye)"
:hello
$ bb -e "#?(:clj :bye :bb :hello)"
:bye
$ bb -e "[1 2 #?@(:bb [] :clj [1])]"
[1 2]
```
### Invoking clojure
Babashka bundles deps.clj for invoking a
`clojure`
JVM process:
```
$ bb clojure -M -e "*clojure-version*"
{:major 1, :minor 10, :incremental 1, :qualifier nil}
```
See the clojure function in the babashka.deps namespace for programmatically invoking clojure.
## Project setup
### bb.edn
Since version 0.3.1, babashka supports a local `bb.edn`
file to manage a project.
### :paths and :deps
You can declare one or multiple paths and dependencies so they are automatically added to the classpath:
```
{:paths ["bb"]
:deps {medley/medley {:mvn/version "1.3.0"}}}
```
If we have a project that has a `deps.edn`
and would like to reuse those deps in `bb.edn`
:
`{:deps {your-org/your-project {:local/root "."}}}`
`bb.edn`
applies to the local project, and dependencies defined in
this files are never shared with other projects. This is typically
what you want when writing a script or tool. By contrast, `deps.edn`
is useful when creating libraries that are used by other projects.
Use a unique name to refer to your project’s `deps.edn`, the same name that you would otherwise use when referring to your project as a dependency.
Only pure Clojure libraries are supported in the `bb.edn` `:deps` map. Java libraries cannot be included due to the closed nature of the GraalVM classpath.
If we have a main function in a file called `bb/my_project/main.clj`
like:
```
(ns my-project.main
  (:require [medley.core :as m]))

(defn -main [& _args]
  (prn (m/index-by :id [{:id 1} {:id 2}])))
```
we can invoke it like:
```
$ bb -m my-project.main
{1 {:id 1}, 2 {:id 2}}
```
See Invoking a main function for more details on how to invoke a function from the command line.
The `:deps`
entry is managed by deps.clj
and requires a `java`
installation to resolve and download dependencies.
### :min-bb-version
Since version 0.3.6, babashka supports the `:min-bb-version`
where the minimal
babashka version can be declared:
```
{:paths ["src"]
:deps {medley/medley {:mvn/version "1.3.0"}}
:min-bb-version "0.3.7"}
```
When using an older bb version (that supports `:min-bb-version`
), babashka will
print a warning:
`WARNING: this project requires babashka 0.3.7 or newer, but you have: 0.3.6`
### :tasks
Since babashka 0.4.0 the `bb.edn`
file supports the `:tasks`
entry which
describes tasks that you can run in the current project. The tasks feature is
similar to what people use `Makefile`
, `Justfile`
or `npm run`
for. See Task runner for more details.
### Script-adjacent bb.edn
Since babashka 1.3.177 a `bb.edn`
file relative to the invoked file is
respected. This makes writing system-global scripts with dependencies easier.
Given a `bb.edn`
:
`{:deps {medley/medley {:mvn/version "1.3.0"}}}`
and a script `medley.bb`
:
```
#!/usr/bin/env bb
(ns medley
(:require [medley.core :as medley]))
(prn (medley/index-by :id [{:id 1}]))
```
Assuming that `medley.bb`
is executable (`chmod +x medley.bb`
), you can directly execute it in the current directory:
```
~/my_project $ ./medley.bb
{1 {:id 1}}
```
To execute this script from anywhere on the system, you just have to add it to the `PATH`
:
```
/tmp $ export PATH=$PATH:~/my_project # ensure script is on path
/tmp $ medley.bb # works, respects ~/my_project/bb.edn file with :deps
{1 {:id 1}}
```
Of course you can just call your script `medley`
without the `.bb`
extension.
#### Windows
On Windows bash shebangs are not supported. An alternative is to create a script-adjacent `.bat`
file, e.g. `medley.bat`:
```
@echo off
set ARGS=%*
set SCRIPT=%~dp0medley.bb
bb %SCRIPT% %ARGS%
```
Then add this script to your `%PATH%`
:
```
C:\Temp> set PATH=%PATH%;c:\my_project
C:\Temp> medley
{1 {:id 1}}
```
## Task runner
### Introduction
People often use a `Makefile`
, `Justfile`
, `npm scripts`
or `lein`
aliases in
their (clojure) projects to remember complex invocations and to create shortcuts
for them. Since version 0.4.0, babashka supports a similar feature as part of
the `bb.edn`
project configuration file. For a general overview of what’s
available in `bb.edn`
, go to Project setup.
The tasks configuration lives under the `:tasks`
key and can be used together
with `:paths`
and `:deps`
:
```
{:paths ["script"]
:deps {medley/medley {:mvn/version "1.3.0"}}
:min-bb-version "0.4.0"
:tasks
{clean (shell "rm -rf target")
...}
}
```
In the above example we see a simple task called `clean`
which invokes the
`shell`
command, to remove the `target`
directory. You can invoke this task from
the command line with:
`$ bb run clean`
Babashka also accepts a task name without explicitly mentioning `run`
:
`$ bb clean`
To make your tasks more cross-platform friendly, you can use the built-in
babashka.fs library. To use libraries in tasks,
use the `:requires`
option:
```
{:tasks
{:requires ([babashka.fs :as fs])
clean (fs/delete-tree "target")
}
}
```
Tasks accept arbitrary Clojure expressions. E.g. you can print something when executing the task:
```
{:tasks
{:requires ([babashka.fs :as fs])
clean (do (println "Removing target folder.")
(fs/delete-tree "target"))
}
}
```
```
$ bb clean
Removing target folder.
```
### Task-local options
Instead of naked expressions, tasks can be defined as maps with options. The
task expression should then be moved to the `:task`
key:
```
{:tasks
{
clean {:doc "Removes target folder"
:requires ([babashka.fs :as fs])
:task (fs/delete-tree "target")}
}
}
```
Tasks support the `:doc`
option which gives it a docstring which is printed
when invoking `bb tasks`
on the command line. Other options include:
- `:requires`: task-specific namespace requires.
- `:extra-paths`: add paths to the classpath.
- `:extra-deps`: add extra dependencies to the classpath.
- `:enter`, `:leave`: override the global `:enter`/`:leave` hook.
- `:override-builtin`: override the name of a built-in babashka command.
### Discoverability
When invoking `bb tasks`
, babashka prints a list of all tasks found in `bb.edn`
in the order of appearance. E.g. in the clj-kondo.lsp project it prints:
```
$ bb tasks
The following tasks are available:
recent-clj-kondo Detects most recent clj-kondo version from clojars
update-project-clj Updates project.clj with most recent clj-kondo version
java1.8 Asserts that we are using java 1.8
build-server Produces lsp server standalone jar
lsp-jar Copies renamed jar for upload to clj-kondo repo
upload-jar Uploads standalone lsp server jar to clj-kondo repo
vscode-server Copied lsp server jar to vscode extension
vscode-version Prepares package.json with up to date clj-kondo version
vscode-publish Publishes vscode extension to marketplace
ovsx-publish Publishes vscode extension to ovsx thing
publish The mother of all tasks: publishes everything needed for new release
```
### Command line arguments
Command line arguments are available as `*command-line-args*`
, just like in
Clojure. Since version `0.9.160`
, you can use
babashka.cli in tasks via the exec
function to deal with command line arguments in a concise way. See the chapter on babashka CLI.
Of course, you are free to parse command line arguments using the built-in
`tools.cli`
library or just handle them manually.
You can re-bind `*command-line-args*`
to ensure functions see a different set of
arguments:
```
{:tasks
{:init (do (defn print-args []
(prn (:name (current-task))
*command-line-args*)))
bar (print-args)
foo (do (print-args)
(binding [*command-line-args* (next *command-line-args*)]
(run 'bar)))}}
```
```
$ bb foo 1 2 3
foo ("1" "2" "3")
bar ("2" "3")
```
#### Terminal tab-completion
##### zsh
Add this to your `.zshrc`
to get tab-complete feature on ZSH.
```
_bb_tasks() {
local matches=(`bb tasks |tail -n +3 |cut -f1 -d ' '`)
compadd -a matches
_files # autocomplete filenames as well
}
compdef _bb_tasks bb
```
##### bash
Add this to your `.bashrc`
to get tab-complete feature on bash.
```
_bb_tasks() {
COMPREPLY=( $(compgen -W "$(bb tasks |tail -n +3 |cut -f1 -d ' ')" -- ${COMP_WORDS[COMP_CWORD]}) );
}
# autocomplete filenames as well
complete -f -F _bb_tasks bb
```
##### fish
Add this to your `.config/fish/completions/bb.fish`
to get tab-complete feature on Fish shell.
```
function __bb_complete_tasks
if not test "$__bb_tasks"
set -g __bb_tasks (bb tasks |tail -n +3 |cut -f1 -d ' ')
end
printf "%s\n" $__bb_tasks
end
complete -c bb -a "(__bb_complete_tasks)" -d 'tasks'
```
### Run
You can execute tasks using `bb <task-name>`
. The babashka `run`
subcommand can
be used to execute with some additional options:
- `--parallel`: invoke task dependencies in parallel.

  ```
  {:tasks
   {:init (def log (Object.))
    :enter (locking log
             (println (str (:name (current-task)) ":")
                      (java.util.Date.)))
    a (Thread/sleep 5000)
    b (Thread/sleep 5000)
    c {:depends [a b]}
    d {:task (time (run 'c))}}}
  ```

  ```
  $ bb run --parallel d
  d: #inst "2021-05-08T14:14:56.322-00:00"
  a: #inst "2021-05-08T14:14:56.357-00:00"
  b: #inst "2021-05-08T14:14:56.360-00:00"
  c: #inst "2021-05-08T14:15:01.366-00:00"
  "Elapsed time: 5023.894512 msecs"
  ```
Also see Parallel tasks.
-
`--prn`
: print the result from the task expression:`{:tasks {sum (+ 1 2 3)}}`
`$ bb run --prn sum 6`
Unlike scripts, babashka tasks do not print their return value.
### Hooks
The task runner exposes the following hooks:
#### :init
The `:init` hook is for expressions that are executed before any of the tasks. It is typically used for defining helper functions and constants:
```
{:tasks
{:init (defn env [s] (System/getenv s))
print-env (println (env (first *command-line-args*)))
}
}
```
```
$ FOO=1 bb print-env FOO
1
```
#### :enter, :leave
The `:enter` hook is executed before each task. This is typically used to print the name of a task, which can be obtained using the `current-task` function:
```
{:tasks
{:init (defn env [s] (System/getenv s))
:enter (println "Entering:" (:name (current-task)))
print-env (println (env (first *command-line-args*)))
}
}
```
```
$ FOO=1 bb print-env FOO
Entering: print-env
1
```
The `:leave` hook is similar to `:enter` but is executed after each task.
Both hooks can be overridden as task-local options. Setting them to `nil` disables them for specific tasks (see Task-local options).
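As an illustration (the task names here are made up), a `bb.edn` with a global `:leave` hook that is disabled for one specific task via a task-local option:

```
{:tasks
 {:enter (println "Entering:" (:name (current-task)))
  :leave (println "Leaving:" (:name (current-task)))
  ;; task-local :leave set to nil disables the global hook for this task only
  quiet {:leave nil
         :task (println "no leave message after this")}
  loud (println "leave message follows this")}}
```

Running `bb quiet` would then print only the "Entering:" line, while `bb loud` prints both.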
### Tasks API
The `babashka.tasks` namespace exposes the following functions: `run`, `shell`, `clojure` and `current-task`. They are implicitly imported, thus available without a namespace prefix.
#### run
Tasks provide the `run` function to explicitly invoke another task:
```
{:tasks
{:requires ([babashka.fs :as fs])
clean (do
(println "Removing target folder.")
(fs/delete-tree "target"))
uberjar (do
(println "Making uberjar")
(clojure "-X:uberjar"))
uberjar:clean (do (run 'clean)
(run 'uberjar))}
}
```
When running `bb uberjar:clean`, first the `clean` task is executed and then the `uberjar` task:
```
$ bb uberjar:clean
Removing target folder.
Making uberjar
```
The `clojure` function in the above example executes a Clojure process using deps.clj. See clojure for more info.
The `run` function accepts an additional map with options:
##### :parallel
The `:parallel` option executes dependencies of the invoked task in parallel (when possible). See Parallel tasks.
#### shell
Both `shell` and `clojure` return a process object which contains the `:exit` code among other info. By default these functions throw an exception when a non-zero exit code is returned, and they inherit stdin/stdout/stderr from the babashka process.
```
{:tasks
{
ls (shell "ls foo")
}
}
```
```
$ bb ls
ls: foo: No such file or directory
Error while executing task: ls
$ echo $?
1
```
You can opt out of this behavior by using the `:continue` option:
```
{:tasks
{
ls (shell {:continue true} "ls foo")
}
}
```
```
$ bb ls
ls: foo: No such file or directory
$ echo $?
0
```
When you want to redirect output to a file instead, you can provide the `:out` option:
`(shell {:out "file.txt"} "echo hello")`
To capture output as a string, set `:out` to `:string` and get the `:out` key from the resulting map. In most cases, you probably want to `trim` away the trailing newline as well:
`(->> "echo hello" (shell {:out :string}) :out clojure.string/trim)`
To run a program in another directory, you can use the `:dir` option:
`(shell {:dir "subproject"} "ls")`
To set environment variables with `shell` or `clojure`:
`(shell {:extra-env {"FOO" "BAR"}} "printenv FOO")`
Other supported options are similar to those of `babashka.process/process`.
The process is executed synchronously: babashka will wait for the process to finish before executing the next expression. If this doesn’t fit your use case, you can use `babashka.process/process` directly instead. These two invocations are roughly equivalent:
```
(require '[babashka.process :as p :refer [process]]
'[babashka.tasks :as tasks])
(tasks/shell {:dir "subproject"} "npm install")
(-> (process {:dir "subproject" :inherit true} "npm install")
(p/check))
```
Note that the first string argument to `shell` is tokenized (broken into multiple parts) and the trailing arguments are not.
Correct:
`(shell "npm install" "-g" "nbb")`
Not correct (`-g nbb` within the same string):
`(shell "npm install" "-g nbb")`
Note that the varargs signature plays well with feeding `*command-line-args*`:
`(apply shell "npm install" *command-line-args*)`
Note that `shell` does not invoke a shell but just shells out to an external program. As such, `shell` does not understand bash-specific syntax. The following does not work: `(shell "rm -rf target/*")`. To invoke a specific shell, you should do that explicitly with:
`(shell "bash -c" "rm -rf target/*")`
Also see the docstring of `shell` here.
#### clojure
The `clojure` function starts a Clojure process using deps.clj. The interface is exactly the same as the clojure CLI. E.g. to evaluate an expression:
`{:tasks {eval (clojure "-M -e '(+ 1 2 3)'")}}`
or to invoke clj-kondo as a library:
`{:tasks {eval (clojure {:dir "subproject"} "-M:clj-kondo")}}`
The `clojure` task function behaves similarly to `shell` with respect to the exit code, return value and supported options, except when it comes to features that do not start a process, but only do some printing. E.g. you can capture the classpath using:
`(with-out-str (clojure "-Spath"))`
because this operation doesn’t start a process but prints to `*out*`.
To run a `clojure` task in another directory:
`{:tasks {eval (clojure {:dir "subproject"} "-M:clj-kondo")}}`
#### current-task
The `current-task` function returns a map representing the currently running task. This function is typically used in the `:enter` and `:leave` hooks.
### Dependencies between tasks
Dependencies between tasks can be declared using `:depends`:
```
{:tasks {:requires ([babashka.fs :as fs])
-target-dir "target"
-target {:depends [-target-dir]
:task (fs/create-dirs -target-dir)}
-jar-file {:depends [-target]
:task "target/foo.jar"}
jar {:depends [-target -jar-file]
:task (when (seq (fs/modified-since -jar-file
(fs/glob "src" "**.clj")))
(spit -jar-file "test")
(println "made jar!"))}
uberjar {:depends [jar]
:task (println "creating uberjar!")}}}
```
The `fs/modified-since` function returns a seq of all files newer than a target, which can be used to prevent rebuilding artifacts when not necessary.
Alternatively you can use the `:init` hook to define vars, require namespaces, etc.:
```
{:tasks {:requires ([babashka.fs :as fs])
:init (do (def target-dir "target")
(def jar-file "target/foo.jar"))
-target {:task (fs/create-dirs target-dir)}
jar {:depends [-target]
:task (when (seq (fs/modified-since jar-file
(fs/glob "src" "**.clj")))
(spit jar-file "test")
(println "made jar!"))}
uberjar {:depends [jar]
:task (println "creating uberjar!")}}}
```
It is common to define tasks that only serve as helpers to other tasks. To not expose these tasks in the output of `bb tasks`, you can start their name with a hyphen.
### Parallel tasks
The `:parallel` option executes dependencies of the invoked task in parallel (when possible). This can be used to speed up execution, but also to have multiple tasks running in parallel for development:
```
dev {:doc "Runs app in dev mode. Compiles cljs, less and runs JVM app in parallel."
     :task (run '-dev {:parallel true})}                ;; (1)
-dev {:depends [dev:cljs dev:less dev:backend]}         ;; (2)
dev:cljs {:doc "Runs front-end compilation"
          :task (clojure "-M:frontend:cljs/dev")}
dev:less {:doc "Compiles less"
          :task (clojure "-M:frontend:less/dev")}
dev:backend {:doc "Runs backend in dev mode"
             :task (clojure (str "-A:backend:backend/dev:" platform-alias)
                            "-X" "dre.standalone/start")}
```

1. The `dev` task invokes the (private) `-dev` task in parallel.
2. The `-dev` task depends on three other tasks, which are executed simultaneously.
### Invoking a main function
Invoking a main function can be done by providing a fully qualified symbol:
```
{:tasks
{foo-bar foo.bar/-main}}
```
You can use any fully qualified symbol, not just ones that end in `-main` (e.g. `foo.bar/baz` is fine). You can also have multiple main functions in one namespace. The namespace `foo.bar` will be automatically required and the function will be invoked with `*command-line-args*`:
`$ bb foo-bar 1 2 3`
### REPL
To get a REPL within a task, you can use `clojure.main/repl`:
`{:tasks {repl (clojure.main/repl)}}`
Alternatively, you can use `babashka.tasks/run` to invoke a task from a REPL.
For REPL- and linting-friendliness, it’s recommended to move task code longer than a couple of lines to a `.clj` or `.bb` file.
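As an illustration (all file, namespace and task names here are hypothetical), a longer task body can live in a file on the classpath and be exposed as a task:

`bb.edn`:

```
{:paths ["bb"]
 :tasks {deploy my.tasks/deploy}}
```

`bb/my/tasks.clj`:

```
(ns my.tasks)

(defn deploy [& _args]
  ;; the longer task logic lives here, REPL- and linter-friendly
  (println "deploying..."))
```

The task body is now ordinary Clojure code that can be loaded at a REPL and linted with clj-kondo.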
### Naming
#### Valid names
When running a task, babashka assembles a small program which defines vars bound to the return values of tasks. This brings the limitation that you can only choose names for your tasks that are valid as var names. You can’t name your task `foo/bar` for this reason. If you want to use delimiters to indicate some sort of grouping, you can do it like `foo-bar`, `foo:bar` or `foo_bar`.
Names starting with a `-` are considered "private" and are not listed in the `bb tasks` output.
#### Conflicting file / task / subcommand names
`bb <option>` is resolved in the order of file > task > subcommand.
Escape hatches in case of conflicts:
- execute a relative file as `bb ./foo`
- execute a task as `bb run foo`
- execute a subcommand as `bb --foo`
When choosing a task name that overrides a babashka built-in subcommand, you have to provide the `:override-builtin` option to get rid of the warning that appears when running babashka:
```
$ bb -Sdeps '{:tasks {help {:task (prn :help)}}}' help
[babashka] WARNING: task(s) 'help' override built-in command(s).
:help
```
```
$ bb -Sdeps '{:tasks {help {:task (prn :help) :override-builtin true}}}' help
:help
```
#### Conflicting task and clojure.core var names
You can name a task similar to a core var, let’s say: `format`. If you want to refer to the core var, it is recommended to use the fully qualified `clojure.core/format` in that case, to avoid conflicts in `:enter` and `:leave` expressions and when using the `format` task as a dependency.
### Syntax
Because `bb.edn` is an EDN file, you cannot use all of Clojure’s syntax in expressions. Most notably:
- You cannot use `#(foo %)`, but you can use `(fn [x] (foo x))`
- You cannot use `@(foo)`, but you can use `(deref foo)`
- You cannot use `#"re"`, but you can use `(re-pattern "re")`
- Single quotes are accidentally supported in some places, but are better avoided: `{:task '(foo)}` does not work, but `{:task (quote (foo))}` does work. When requiring namespaces, use the `:requires` feature in favor of doing it manually using `(require '[foo])`.
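Putting these rules together, a task that would use shorthand reader syntax in a script can be written with the long forms in `bb.edn` instead (the task itself is illustrative):

```
{:tasks
 {:requires ([babashka.fs :as fs])
  ;; in a script this might be (filter #(re-find #"\.clj$" %) ...)
  list-clj {:task (->> (fs/list-dir ".")
                       (map str)
                       (filter (fn [f] (re-find (re-pattern "\\.clj$") f)))
                       (run! println))}}}
```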
## Babashka CLI
In version `0.9.160` of babashka, the babashka CLI library was added as a built-in library, together with task integration.
### -x
For invoking functions from the command line, you can use the new `-x` flag (a pun on Clojure’s `-X`, of course!):
```
bb -x clojure.core/prn --hello there
{:hello "there"}
```
What we see in the above snippet is that a map `{:hello "there"}` is constructed by babashka CLI and then fed to the `prn` function. After that, the result is printed to the console.
What if we want to influence how things are parsed by babashka CLI and provide some defaults? This can be done using metadata. Let’s create a `bb.edn` and make a file available on the classpath:
`bb.edn`:
`{:paths ["."]}`
`tasks.clj`:
```
(ns tasks
{:org.babashka/cli {:exec-args {:ns-data 1}}})
(defn my-function
{:org.babashka/cli {:exec-args {:fn-data 1}
:coerce {:num [:int]}
:alias {:n :num}}}
[m] m)
```
Now let’s invoke:
```
$ bb --prn -x tasks/my-function -n 1 2
{:ns-data 1, :fn-data 1, :num [1 2]}
```
As you can see, the namespace options are merged with the function options. Defaults can be provided with `:exec-args`, like you’re used to from the clojure CLI.
### exec
What about task integration? Let’s adapt our `bb.edn`:
```
{:paths ["."]
:tasks {doit {:task (let [x (exec 'tasks/my-function)]
(prn :x x))
:exec-args {:task-data 1234}}
}}
```
and invoke the task:
```
$ bb doit --cli-option :yeah -n 1 2 3
:x {:ns-data 1, :fn-data 1, :task-data 1234, :cli-option :yeah, :num [1 2 3]}
```
As you can see, it works similarly to `-x`, but you can provide another set of defaults on the task level with `:exec-args`. Executing a function through babashka CLI is done using the `babashka.tasks/exec` function, available by default in tasks.
To add `:exec-args` that should be evaluated, you can pass an extra map to `exec` as follows:
```
{:paths ["."]
:tasks {doit {:task (let [x (exec 'tasks/my-function {:exec-args {:foo (+ 1 2 3)}})]
(prn :x x))
:exec-args {:task-data 1234}}
}}
```
```
$ bb doit --cli-option :yeah -n 1 2 3
:x {:ns-data 1, :fn-data 1, :task-data 1234, :cli-option :yeah, :num [1 2 3] :foo 6}
```
## Libraries
### Built-in namespaces
In addition to `clojure.core`, the following libraries / namespaces are available in babashka. Some are available through pre-defined aliases in the `user` namespace, which can be handy for one-liners. If not all vars are available, they are enumerated explicitly. If some important var is missing, an issue or PR is welcome.
From Clojure:
- `clojure.core`
- `clojure.core.protocols`: `Datafiable`, `Navigable`
- `clojure.data`
- `clojure.datafy`
- `clojure.edn` aliased as `edn`
- `clojure.math`
- `clojure.java.browse`
- `clojure.java.io` aliased as `io`: `as-relative-path`, `as-url`, `copy`, `delete-file`, `file`, `input-stream`, `make-parents`, `output-stream`, `reader`, `resource`, `writer`
- `clojure.java.shell` aliased as `shell`
- `clojure.main`: `demunge`, `repl`, `repl-requires`
- `clojure.pprint`: `pprint`, `cl-format`
- `clojure.set` aliased as `set`
- `clojure.string` aliased as `str`
- `clojure.stacktrace`
- `clojure.test`
- `clojure.walk`
- `clojure.zip`
Additional libraries:
- `babashka.cli`: CLI arg parsing
- `babashka.http-client`: making HTTP requests
- `babashka.process`: shelling out to external processes
- `babashka.fs`: file system manipulation
- `bencode.core` aliased as `bencode`: `read-bencode`, `write-bencode`
- `cheshire.core` aliased as `json`: dealing with JSON
- `clojure.core.async` aliased as `async`
- `clojure.data.csv` aliased as `csv`
- `clojure.data.xml` aliased as `xml`
- `clojure.tools.cli` aliased as `tools.cli`
- `clj-yaml.core` aliased as `yaml`
- `cognitect.transit` aliased as `transit`
- `hiccup.core` and `hiccup2.core`
- `clojure.test.check`, `clojure.test.check.generators`, `clojure.test.check.properties`
- `rewrite-clj.parser`, `rewrite-clj.node`, `rewrite-clj.zip`, `rewrite-clj.paredit`
- `selmer.parser`
- `timbre`: logging
- `edamame`: Clojure parser
Check out the babashka toolbox and projects page for libraries that are not built-in, but which you can load as an external dependency in `bb.edn`.
See the build page for built-in libraries that can be enabled via feature flags, if you want to compile babashka yourself.
A selection of Java classes are available; see `babashka/impl/classes.clj` in babashka’s git repo.
### Babashka namespaces
#### babashka.classpath
Available functions:
- `add-classpath`
- `get-classpath`
- `split-classpath`
##### add-classpath
The function `add-classpath` can be used to add to the classpath dynamically:
```
(require '[babashka.classpath :refer [add-classpath]]
'[clojure.java.shell :refer [sh]]
'[clojure.string :as str])
(def medley-dep '{:deps {medley {:git/url "https://github.com/borkdude/medley"
:sha "91adfb5da33f8d23f75f0894da1defe567a625c0"}}})
(def cp (-> (sh "clojure" "-Spath" "-Sdeps" (str medley-dep)) :out str/trim))
(add-classpath cp)
(require '[medley.core :as m])
(m/index-by :id [{:id 1} {:id 2}]) ;;=> {1 {:id 1}, 2 {:id 2}}
```
##### get-classpath
The function `get-classpath` returns the classpath as set by `--classpath`, `BABASHKA_CLASSPATH` and `add-classpath`.
##### split-classpath
Given a classpath, returns a seq of strings as the result of splitting the classpath by the platform-specific path separator.
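A minimal illustration of the three functions together (the exact output depends on your setup; on Unix-like systems the separator is `:`):

```
(require '[babashka.classpath :as cp])

(cp/add-classpath "src:test")
(cp/split-classpath (cp/get-classpath))
;; e.g. ("src" "test")
```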
#### babashka.deps
Available functions:
- `add-deps`
- `clojure`
- `merge-deps`
##### add-deps
The function `add-deps` takes a deps edn map like `{:deps {medley/medley {:mvn/version "1.3.0"}}}`, resolves it using deps.clj and then adds to the babashka classpath accordingly.
Example:
```
(require '[babashka.deps :as deps])
(deps/add-deps '{:deps {medley/medley {:mvn/version "1.3.0"}}})
(require '[medley.core :as m])
(m/index-by :id [{:id 1} {:id 2}])
```
Optionally, `add-deps` takes a second arg with options. Currently the only option is `:aliases`, which will affect how deps are resolved:
Example:
```
(deps/add-deps '{:aliases {:medley {:extra-deps {medley/medley {:mvn/version "1.3.0"}}}}}
{:aliases [:medley]})
```
##### clojure
The function `clojure` takes a sequential collection of arguments, similar to the clojure CLI. The arguments are then passed to deps.clj. The `clojure` function returns `nil` and prints to `*out*` for commands like `-Stree` and `-Spath`. For `-M`, `-X` and `-A` it invokes `java` with `babashka.process/process` (see babashka.process) and returns the associated record. For more details, read the docstring with:
```
(require '[clojure.repl :refer [doc]])
(doc babashka.deps/clojure)
```
Example:
The following script passes through command line arguments to clojure, while adding the medley dependency:
```
(require '[babashka.deps :as deps])
(def deps '{:deps {medley/medley {:mvn/version "1.3.0"}}})
(def clojure-args (list* "-Sdeps" deps *command-line-args*))
(if-let [proc (deps/clojure clojure-args)]
(-> @proc :exit (System/exit))
(System/exit 0))
```
#### babashka.wait
Contains the functions `wait-for-port` and `wait-for-path`.
Usage of `wait-for-port`:
```
(wait/wait-for-port "localhost" 8080)
(wait/wait-for-port "localhost" 8080 {:timeout 1000 :pause 1000})
```
Waits for a TCP connection to be available on the given host and port. The options map supports `:default`, `:timeout` and `:pause`. If `:timeout` is provided and reached, `:default`'s value (if any) is returned. The `:pause` option determines the time waited between retries.
Usage of `wait-for-path`:
```
(wait/wait-for-path "/tmp/wait-path-test")
(wait/wait-for-path "/tmp/wait-path-test" {:timeout 1000 :pause 1000})
```
Waits for a file path to be available. The options map supports `:default`, `:timeout` and `:pause`. If `:timeout` is provided and reached, `:default`'s value (if any) is returned. The `:pause` option determines the time waited between retries.
The namespace `babashka.wait` is aliased as `wait` in the `user` namespace.
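A typical use of `wait-for-port` is starting a server in the background and blocking until it accepts connections before continuing (the port and server command below are illustrative):

```
(require '[babashka.process :refer [process]]
         '[babashka.wait :as wait])

;; start a web server in the background
(process ["python3" "-m" "http.server" "1777"])

;; wait at most 10 seconds; :default is returned on timeout
(let [res (wait/wait-for-port "localhost" 1777 {:timeout 10000 :default :timed-out})]
  (when (= :timed-out res)
    (println "server did not start in time")
    (System/exit 1)))
```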
#### babashka.signal
Contains the function `signal/pipe-signal-received?`. Usage:
`(signal/pipe-signal-received?)`
Returns true if a `PIPE` signal was received. Example:
```
$ bb -e '((fn [x] (println x) (when (not (signal/pipe-signal-received?)) (recur (inc x)))) 0)' | head -n2
1
2
```
The namespace `babashka.signal` is aliased as `signal` in the `user` namespace.
#### babashka.http-client
The `babashka.http-client` library is for making HTTP requests. See babashka.http-client for how to use it.
#### babashka.process
The `babashka.process` library. See the process repo for API docs.
#### babashka.fs
The `babashka.fs` library offers file system utilities. See the fs repo for API docs.
#### babashka.cli
The `babashka.cli` library allows you to turn functions into CLIs. See the cli repo for API docs and check out the babashka CLI chapter on how to use it from the command line or with tasks.
### Projects
Babashka is able to run Clojure projects from source, if they are compatible with the subset of Clojure that sci is capable of running.
Check this page for projects that are known to work with babashka.
Do you have a library that is compatible with babashka? Add the official badge to give some flair to your repo!
## Pods
Pods are programs that can be used as a Clojure library by babashka. Documentation is available in the library repo.
A list of available pods can be found here.
### Pod registry
Since bb 0.2.6 pods can be obtained via the pod-registry.
This is an example script which uses the fswatcher pod to watch a directory for changes:
```
#!/usr/bin/env bb
(require '[babashka.pods :as pods])
(pods/load-pod 'org.babashka/fswatcher "0.0.5")
(require '[pod.babashka.fswatcher :as fw])
(fw/watch "." prn {:delay-ms 5000})
(println "Watching current directory for changes... Press Ctrl-C to quit.")
@(promise)
```
### Pods in bb.edn
Since bb 0.8.0, pods can be declared in `bb.edn`:
```
{:paths ["bb"]
:pods {org.babashka/go-sqlite3 {:version "0.2.3"}}}
```
Given the file `bb/my_project/db.clj`
:
```
(ns my-project.db
(:require [pod.babashka.go-sqlite3 :as sqlite]))
(defn -main [& _args]
(prn (sqlite/query ":memory:" ["SELECT 1 + 1 AS sum"])))
```
you can then execute the main function, without calling `load-pod` manually:
```
$ bb -m my-project.db
[{:sum 2}]
```
## Style
A note on style. Babashka recommends the following:
### Explicit requires
Use explicit requires with namespace aliases in scripts, unless you’re writing one-liners.
Do this:
```
$ ls | bb -i '(-> *input* first (str/includes? "m"))'
true
```
But not this:
script.clj:
`(-> *input* first (str/includes? "m"))`
Rather do this:
script.clj:
```
(ns script
(:require [clojure.java.io :as io]
[clojure.string :as str]))
(-> (io/reader *in*) line-seq first (str/includes? "m"))
```
Some reasons for this:
-
Linters like clj-kondo work better with code that uses namespace forms, explicit requires, and known Clojure constructs
-
Editor tooling works better with namespace forms (sorting requires, etc).
-
Writing compatible code gives you the option to run the same script with
`clojure`
## Child processes
For child processes, the babashka process library is recommended. It is built into babashka. Check out the README which gives a good introduction into the library.
## Recipes
### Running tests
Babashka bundles `clojure.test`. To run tests you can write a test runner script. Given the following project structure:
```
.
├── src
│ └──...
└── test
└── your
├── test_a.clj
└── test_b.clj
```
```
#!/usr/bin/env bb

(require '[clojure.test :as t]
         '[babashka.classpath :as cp])

(cp/add-classpath "src:test")                    ;; (1)

(require 'your.test-a 'your.test-b)              ;; (2)

(def test-results
  (t/run-tests 'your.test-a 'your.test-b))       ;; (3)

(let [{:keys [fail error]} test-results]
  (when (pos? (+ fail error))
    (System/exit 1)))                            ;; (4)
```

1. Add sources and tests to the classpath
2. Require the test namespaces
3. Run all tests in the test namespaces
4. Exit the test script with a non-zero exit code when there are failures or errors
### Main file
In Python scripts there is a well-known pattern to check if the current file was the file invoked from the command line, or loaded from another file: the `__name__ == "__main__"` pattern. In babashka this pattern can be implemented with:
`(= *file* (System/getProperty "babashka.file"))`
Combining this with a conditional invocation of `-main` creates a script file that is safe to load at a REPL, and easy to invoke at the CLI.
```
#!/usr/bin/env bb
;; Various functions defined here
(defn -main [& args]
;; Implementation of main
)
(when (= *file* (System/getProperty "babashka.file"))
(apply -main *command-line-args*))
```
This can be exceedingly handy for editing complex scripts interactively, while not being able to adjust how they are invoked by other tools.
### Shutdown hook
Adding a shutdown hook allows you to execute some code before the script exits.
```
$ bb -e '(-> (Runtime/getRuntime) (.addShutdownHook (Thread. #(println "bye"))))'
bye
```
This also works when the script is interrupted with ctrl-c.
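A common use is cleaning up temporary resources regardless of how the script exits. As a sketch:

```
(require '[babashka.fs :as fs])

(def workdir (fs/create-temp-dir))

;; remove the temp dir on normal exit and on ctrl-c
(-> (Runtime/getRuntime)
    (.addShutdownHook (Thread. (fn [] (fs/delete-tree workdir)))))

;; do work in workdir
(spit (str (fs/file workdir "scratch.txt")) "hello")
```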
### Printing returned values
Babashka doesn’t print a returned `nil`, as lots of scripts end in something side-effecting.
```
$ bb -e '(:a {:a 5})'
5
$ bb -e '(:b {:a 5})'
$
```
If you really want to print the `nil`, you can use `(prn ..)` instead.
#### HTTP over Unix sockets
This can be useful for talking to Docker:
```
(require '[clojure.java.shell :refer [sh]])
(require '[cheshire.core :as json])
(-> (sh "curl" "--silent"
"--no-buffer" "--unix-socket"
"/var/run/docker.sock"
"http://localhost/images/json")
:out
(json/parse-string true)
first
:RepoTags) ;;=> ["borkdude/babashka:latest"]
```
### Core.async
In addition to `future`, `pmap`, `promise` and friends, you may use the `clojure.core.async` namespace for asynchronous scripting. The following example shows how to get the first available value from two different processes:
```
bb -e '
(defn async-command [& args]
(async/thread (apply shell/sh "bash" "-c" args)))
(-> (async/alts!! [(async-command "sleep 2 && echo process 1")
(async-command "sleep 1 && echo process 2")])
first :out str/trim println)'
process 2
```
Caveat: currently the `go` macro is available for compatibility with JVM programs, but the implementation maps to `clojure.core.async/thread`, and the single-exclamation-mark operations (`<!`, `>!`, etc.) map to the double-exclamation-mark operations (`<!!`, `>!!`, etc.). It will not "park" threads, like on the JVM.
Examples like the following may still work, but will take a lot more system resources than on the JVM and will break down for some high value of `n`:
```
(require '[clojure.core.async :as async])
(def n 1000)
(let [cs (repeatedly n async/chan)
begin (System/currentTimeMillis)]
(doseq [c cs] (async/go (async/>! c "hi")))
(dotimes [_ n]
(let [[v _] (async/alts!! cs)]
(assert (= "hi" v))))
(println "Read" n "msgs in" (- (System/currentTimeMillis) begin) "ms"))
```
### Interacting with an nREPL server
Babashka comes with the nrepl/bencode library, which allows you to read and write bencode messages to a socket. A simple example which evaluates a Clojure expression on an nREPL server started with `lein repl`:
```
(ns nrepl-client
(:require [bencode.core :as b]))
(defn nrepl-eval [port expr]
(let [s (java.net.Socket. "localhost" port)
out (.getOutputStream s)
in (java.io.PushbackInputStream. (.getInputStream s))
_ (b/write-bencode out {"op" "eval" "code" expr})
bytes (get (b/read-bencode in) "value")]
(String. bytes)))
(nrepl-eval 52054 "(+ 1 2 3)") ;;=> "6"
```
### Running from Cygwin/Git Bash
On Windows, `bb` can be invoked from the bash shell directly:
```
$ bb -e '(+ 1 2 3)'
6
```
However, creating a script that invokes `bb` via a shebang leads to an error if the script is not in the current directory. Suppose you had the following script named `hello` on your path:
```
#!/usr/bin/env bb
(println "Hello, world!")
```
```
$ hello
----- Error --------------------------------------------------------------------
Type: java.lang.Exception
Message: File does not exist: /cygdrive/c/path/to/hello
```
The problem here is that the shell is passing a Cygwin-style path to `bb`, but `bb` can’t recognize it because it wasn’t compiled with Cygwin.
The solution is to create a wrapper script that converts the Cygwin-style path to a Windows-style path before invoking `bb`. Put the following into a script called `bbwrap` somewhere on your Cygwin path, say in `/usr/local/bin/bbwrap`:
```
#!/bin/bash
SCRIPT=$1
shift
# quote to survive paths containing spaces
bb.exe "$(cygpath -w "$SCRIPT")" "$@"
```
Make sure to fix your original script to invoke `bbwrap` instead of `bb` directly:
```
#!/usr/bin/env bbwrap
(println "Hello, world!")
```
## Differences with Clojure
Babashka is implemented using the Small Clojure Interpreter. This means that a snippet or script is not compiled to JVM bytecode, but executed form by form by a runtime which implements a substantial subset of Clojure. Babashka is compiled to a native binary using GraalVM. It comes with a selection of built-in namespaces and functions from Clojure and other useful libraries. The data types (numbers, strings, persistent collections) are the same. Multi-threading is supported (`pmap`, `future`).
Differences with Clojure:
- A pre-selected set of Java classes are supported. You cannot add Java classes at runtime.
- Interpretation comes with overhead. Therefore loops are slower than in Clojure on the JVM. In general, interpretation yields slower programs than compiled programs.
- No `deftype`, `definterface` and unboxed math.
- `defprotocol` and `defrecord` are implemented using multimethods and regular maps. Ostensibly they work the same, but under the hood there are no Java classes that correspond to them.
- Currently `reify` works only for one class at a time.
- The `clojure.core.async/go` macro is not (yet) supported. For compatibility it currently maps to `clojure.core.async/thread`. More info here.
## Resources
Check out the list of resources in babashka’s README.md.
### Books
#### Babashka Babooka
If you’re a fan of Clojure for the Brave and True, you might enjoy Babashka Babooka, a book by the same author, Daniel Higginbotham!
## Contributing
Visit Babashka book’s Github repository and make an issue and/or PR.
## License
Copyright © 2020-2021 Michiel Borkent
Licensed under CC BY-SA 4.0.
40,251,477 |
https://www.nngroup.com/articles/content-design-systems/
|
Content Standards in Design Systems
|
Anna Kaley
|
Efficiency and consistency are always at the top of UX practitioners' minds, especially for teams facing tight deadlines or resource constraints. While design systems commonly prioritize UI design components and patterns, incorporating content standards is equally important for producing high-quality, consistent content. This article outlines what your design system should include to effectively integrate content standards.
## What Are Content Standards?
Content standardsare the guidelines and best practices for scalable content design and management. They include rules for structuring the content, as well as editorial procedures and policies.
Structuring content into adaptable content blocks facilitates the efficient reuse of elements across various content types and channels. Editorial guidelines help to maintain a coherent content process and consistent voice, tone, and writing style.
## Supporting Content and Design Collaboration
Unlike UI components in a design system, content standards aren’t followed through mere duplication. Content designers aren’t just reusing the exact phrases or the same formulaic sentence structure.
Instead, **content standards ensure that every piece of content, though unique, feels part of a cohesive whole **and aligns with the company’s overarching brand identity and user experience.
Content standards also help bridge the common collaboration gap between visual and content designers. By using the interface patterns and components outlined in the design system, instead of filling these with meaningless lorem ipsum text in early design, UI and content design can happen in parallel.
## Steps for Setting and Integrating Content Standards
### 1. Scope and Prioritize
You can create content standards even if you don’t have a design system; they’ll be ready for integration once it’s established. Begin with a manageable scope for what content standards to create; you don’t have to make them all at once. Prioritize standards for areas critical to your organization’s goals or that qualitative research identified as problematic.
### 2. Collaborate and Advocate
Work collaboratively with the designers and developers who own and manage the company’s design system. Address any concerns about adding content standards by emphasizing their benefits: consistency and time savings in content design.
### 3. Familiarize Yourself with Tools
Determine what tools and processes currently support your organization’s design system. If adding content standards risks overwhelming the product development team, consider establishing a separate but related content system or style guide.
Draw inspiration from organizations that publish impressive content standards, and make sure you give these systems due credit.
### 4. Set Structure and Specifications
Determine a structure for your content standards. Some guidelines will be global, and others will be specific to components, patterns, channels, content types, or user groups.
### 5. Audit and Inventory
Audit current content to identify inconsistencies and areas for improvement. Looking at what already exists will inform the development of new standards or the refinement of existing ones. Save examples of what to do and avoid.
## What to Include in Content Standards
Content standards typically fall into two categories:

- Content-strategy and process standards
- Content-creation and design standards
### Global and Unique Standards
Content strategists can outline global standards that apply to all content, regardless of where the content appears in the user’s experience. They can also set specific standards for various content types (e.g., social media, product content, help content) or design-system patterns (e.g., buttons and links in a banner).
The advantage of setting global standards is that there’s no need to repeat those for every communication channel or UI component — only what differs or varies needs outlining.
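To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical rule names and values) of how global standards plus component-specific overrides might be represented and merged at lookup time:

```python
# Hypothetical content-standards structure: global rules apply everywhere;
# a component only records the rules it overrides.
GLOBAL_STANDARDS = {
    "voice": "friendly and direct",
    "sentence_case": True,
    "max_button_chars": 25,
}

COMPONENT_STANDARDS = {
    "banner_button": {"max_button_chars": 15},       # tighter limit in banners
    "error_message": {"voice": "calm and specific"},  # different voice for errors
}

def standards_for(component: str) -> dict:
    """Return the global standards with any component-specific overrides applied."""
    return {**GLOBAL_STANDARDS, **COMPONENT_STANDARDS.get(component, {})}
```

With this shape, each channel or component stores only what differs from the global rules, mirroring the principle above.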
For example, Sainsbury’s design system includes design foundations, component guidelines, and an entire section dedicated to content standards. Some standards, labeled as *Content foundations,* apply globally to all content, while other sections focus on writing or content design for specific parts of the experience or components. The design system also covers process elements and tools to accelerate content design while maintaining quality and consistency.
### Who Can Contribute to Content Standards
Content strategists, UX writers, and content designers should be able to ask questions about content standards and provide input on their applicability and practicality. It’s also important to get the perspective of UI designers and engineers to ensure consistency, adaptability, and technical feasibility.
Anytime the standards are updated, communicate the changes to product teams using the design system and content standards. Ideally, outline the changes or updates in the design system itself.
## Benefits of Content Standards in Design Systems
### Efficiency in Decision Making
UX writers and content designers often face pressure to develop content quickly or are brought into design processes too late to work effectively. Content designers and writers are also usually underrepresented on product teams compared to their design and development peers.
While content standards can't fully solve these process and organizational issues, they can help teams **speed up decision making** during content-design work. They can also help evangelize practices that promote **a unified and collaborative approach** to design and content strategy.
Component examples should use accurate content that represents what content designers should and shouldn’t do. These can empower UI designers to include a high-level first pass at messaging goals or actual copy, thus lightening the workloads of busy content designers.
### Disambiguating Content Strategy and Execution
Standards are a crucial component of any content strategy. They bridge the gap between strategic planning and consistent, efficient execution. Incorporating content-strategy guidelines into the design system can demystify this often-ambiguous topic, ensuring clarity and coherence.
### Providing a Single Source of Truth
When they exist, content standards are frequently found in isolated documents or PDFs. While some standards are better than none, embedding these standards within component design systems can significantly improve collaboration between UI and content designers.
A single, accessible reference for all design-related tasks accelerates the design process and promotes high-quality content. In contrast, relying on scattered resources, such as separate PDFs, can make applying standards more cumbersome and inefficient in real-world design work.
### Enabling the Use of AI-Based Content Tools
Establishing content standards is also essential before you can leverage AI-based tools to automate, scale, and expedite the content-design process. Many AI-based tools allow you to set your standards directly within them to ensure that the guidelines are readily available to team members working in platforms like Figma, Word, Google Docs, or your CMS.
Content standards in design systems can also guide how designers and UX writers should — or shouldn't — use AI tools in their projects to ensure a balanced and ethical output.
## Adoption, Adherence, and Maintenance
For content standards to be widely adopted and used, they should have a clear, concise, and well-organized structure. The goal is to **provide straightforward, actionable direction** that all team members can follow quickly and easily. Don’t get tired of talking about the content standards in the design system, either. Reference them often, direct people to specific sections, and share them across digital workplace tools.
If you’re having difficulty getting people to adopt and adhere to the standards, communicate that their purpose isn’t to stifle creativity or introduce unnecessary bottlenecks; it's the opposite!
Content standards aim to **alleviate common content pain points** and **ensure consistency and quality** efficiently, from the start. If you have clearly outlined standards, with examples, teams won’t have to debate content-design decisions or word usage and will move forward faster. Share success stories from projects and teams that have effectively implemented content standards.
### Roles, Responsibilities, and Training
Define roles and processes for maintaining content standards and regularly review guidelines and components. Many teams **review content standards annually**, with specific components or categories (such as process, workflow, and tools) being reviewed more often or as needed.
Ensure that all stakeholders, from designers to content creators, understand the content standards and how to apply them. **Regular training sessions** and communication can help with successful implementation.
### Showing the Value of Content Standards
To tangibly demonstrate the benefits of adding content standards to your design system, start tracking product and content teams’ **time savings as concrete evidence of improvement.**
Other measurable indicators that highlight the value of incorporating content standards into the design system include the following metrics.
**Organizational and product metrics** (parentheses indicate the desirable direction of change):

- Product and content-team satisfaction *(increased)*
- Number of projects delayed *(decreased)*
- Number of late-stage content requests *(decreased)*
- Usage of the content standards *(increased)*
- Instances of team members ignoring standards and suggestions *(decreased)*
**User-focused metrics:**

- User satisfaction *(increased)*
- User engagement *(increased)*
- Error rates *(decreased)*
- Conversion rates *(increased)*
## Conclusion
As organizations grow and evolve, so will their content needs. Design your content standards to be scalable and flexible, allowing for updates and adjustments as needed without sacrificing consistency or quality.
While design-system components and patterns provide the skeleton of user experiences, content standards breathe meaning into them. The real art lies not in applying rigid templates but in the skillful adaptation of content standards to communicate effectively with people to deliver what they need.
*Source: Nielsen Norman Group (nngroup.com), 2024-05-03. Summary: Content standards in design systems support a holistically consistent user experience and efficient collaboration between writers, content, and UI designers.*
21,928,709 | https://owwly.com/product/Site-Activity-Index-29/post/Site-Activity-Index-measure-website-changes-79 | Owwly - Ultimate AI tools list
Owwly is a directory for discovering startups and AI tools. Products listed at the time of capture:

- Fireflies: Automate your meeting notes (🦄 AI)
- Appspector: Remote debugging platform for iOS and Android apps (😱 Code)
- Fig: The next-generation command line (😱 Code)
- Linke: Create short & bio links that alert you when something happens (💬 Collaboration)
- Pirsch: Simple, privacy-friendly and open-source alternative to Google Analytics (📊 Analytics)
- UX Jobs Club: Remote & onsite jobs for Product and UX Designers (🎉 Inspirations)
- Overloop: Sales automation, like a boss! (💡 Automation)
- Writings: An all-in-one writing app (✂️ Productivity)
- Invideo: Online video editor (🎥 Video)
- Maildroppa: Privacy-first email-marketing app for solopreneurs (✉️ Email)
*Source: Owwly (owwly.com), captured 2024-10-12.*
2,369,568 | http://techcrunch.com/2011/03/25/groupons-real-u-s-revenue-numbers-for-february/ | Groupon's "Real" U.S. Revenue Numbers For February | TechCrunch | Erick Schonfeld
Two days ago, I published the chart below with monthly estimates of Groupon’s U.S. revenues. The chart shows a startling 30 percent falloff in February from the month before. As I noted in the post:
Again, these are just estimates based on the equivalent of scraping Groupon’s site, and thus could be missing something.
Well, at least for February, it looks like those numbers are way off. The post obviously caused some ripple effects to the extent that Groupon had to start addressing the issue with potential hires. As a result, it knocked loose the real revenue numbers for February and January. Groupon wouldn’t comment on the revenue numbers when I asked them about it, but according to a source, Groupon is now privately countering the numbers in my post: instead of $62 million in U.S. revenues, the company did $103 million in February. And that is up from $92 million in January (compared to the $89 million in the original data below).
I did some checking around, and I’ve been able to confirm that these two numbers (the $103 million and the $92 million) are right. I was also able to confirm that the 60/40 mix between U.S. and international revenues is about right.
But getting back to the cause of the drop. My original source on the data cautioned that there is a lag time between when the data is published and collected, and it is “definitely possible” that could account for the drop in February. Note that both January numbers are pretty close. The real discrepancy is with February. Also, if Groupon changed the way it published the pages in February, that too could have changed the numbers.
Other external guesstimates such as Yipit’s also point to a drop, but again, the more I learn about how this data is collected, the clearer it is that these are all imperfect methods. Groupon, of course, brings this speculation upon itself by being so tightlipped about its financials. That will change only if and when it files for an IPO.
*Source: TechCrunch (techcrunch.com), 2011-03-25.*
35,001,415 | https://www.stedi.com/blog/relative-performance-tradeoffs-of-aws-native-provisioning-methods | Relative performance tradeoffs of AWS-native provisioning methods | Stedi - Modern EDI
# Relative performance tradeoffs of AWS-native provisioning methods
Feb 23, 2023
Engineering
There are many different ways to provision AWS services, and we use several of them to address different use cases at Stedi. We set out to benchmark the performance of each option – direct APIs, Cloud Control, CloudFormation, and Service Catalog.
When compared to direct service APIs, we found that:
- Cloud Control introduced an additional ~5 seconds of deployment latency
- CloudFormation introduced an additional ~13 seconds of deployment latency
- Service Catalog introduced an additional ~33 seconds of deployment latency
This additional latency can make day-to-day operations quite painful.
## How we provision resources at Stedi
Each AWS service has its own APIs for CRUD of various resources, but since AWS services are built by many different teams, the ergonomics of these APIs vary greatly – as an example, you would use the Lambda `CreateFunction` API to create a function vs the EC2 `RunInstances` API to create an EC2 instance.

To make it easier for developers to work with these disparate APIs in a uniform fashion, AWS launched the Cloud Control API, which exposes five normalized verbs (`CreateResource`, `GetResource`, `UpdateResource`, `DeleteResource`, `ListResources`) to manage the lifecycle of various services. Cloud Control provides a convenient way of working with many different AWS services in the same way.
That said, we rarely use the ‘native’ service APIs or Cloud Control APIs directly. Instead, we typically define resources using CDK, which synthesizes AWS CloudFormation templates that are then deployed by the CloudFormation service.
Over the past year, we’ve also begun to use AWS Service Catalog for certain use cases. Service Catalog allows us to define a set of CloudFormation templates in a single AWS account, which are then shared with many other AWS accounts for deployment on-demand. Service Catalog handles complexity such as versioning and governance, and we’ve been thrilled with the higher-order functionality it provides.
## Expectations
We expect to pay a performance penalty as we move ‘up the stack’ of value delivery – it would be unreasonable to expect a value-add layer to offer identical performance as the underlying abstractions. Cloud Control offers added value (in the form of normalization) over direct APIs; CloudFormation offers added value over direct APIs or Cloud Control (in the form of state management and dependency resolution); Service Catalog offers added value over CloudFormation (in the form of versioning, governance, and more).
Any performance hit can be broken into two categories: *essential* latency and *incidental* latency. Essential latency is the latency required to deliver the functionality, and incidental latency is the latency introduced as a result of a chosen implementation. The theoretical minimum performance hit, then, is equal to the essential latency, and the actual performance hit is equal to the essential latency plus the incidental latency.
It requires substantial investment to achieve something approaching essential latency, and such an investment isn’t sensible in anything but the most latency-sensitive use cases. But as an AWS customer, it’s reasonable to expect that the actual latency of AWS’s various layers of abstraction is within some margin that is difficult to perceive in the normal course of development work – in other words, we expect the unnecessary latency to be largely unnoticeable.
## Reality
To test the relative performance of each provisioning method, we ran a series of performance benchmarks for managing Lambda Functions and SQS Queues. Here is a summary of the P50 (median) results:
- Cloud Control was *744% (~5 seconds)* and *1,259% (500 ms)* slower than Lambda and SQS direct APIs, respectively.
- CloudFormation was *1,736% (~13 seconds)* and *21,076% (8 seconds)* slower than Lambda and SQS direct APIs, respectively.
- Service Catalog was *4,339%* and *86,771% (~33 seconds, in both cases)* slower than Lambda and SQS direct APIs, respectively.
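The percentage deltas can be recomputed from the P50 columns in the tables below; the quick script here does exactly that (small differences from the published percentages are expected, since the article presumably computed them from unrounded measurements):

```python
# Reported P50 latencies in milliseconds, taken from the benchmark tables below.
p50 = {
    "lambda": {"direct": 744, "cloud_control": 6278,
               "cloudformation": 13654, "service_catalog": 33013},
    "sqs":    {"direct": 38, "cloud_control": 516,
               "cloudformation": 8047, "service_catalog": 33011},
}

def delta_pct(value: float, baseline: float) -> float:
    """Percentage slowdown of `value` relative to `baseline`."""
    return (value - baseline) / baseline * 100

for service, timings in p50.items():
    base = timings["direct"]
    for method, ms in timings.items():
        if method != "direct":
            print(f"{service}/{method}: +{delta_pct(ms, base):.0f}%")
```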
The full results are below.
We experimented with Service Catalog to determine what is causing its staggeringly poor performance. According to CloudTrail logs, Service Catalog is triggering the underlying CloudFormation stack create/update/delete, and then sleeping for 30 seconds before polling every 30 seconds until it’s finished. In practice, this means that Service Catalog can *never* take less than 30 seconds to complete an operation, and if the CloudFormation stack isn’t finished within 30 seconds, then Service Catalog can’t finish in under a minute.
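The observed schedule implies a simple model of Service Catalog's minimum completion time: an initial 30-second sleep, then a poll every 30 seconds, with completion reported at the first poll at or after the underlying stack finishes. A small sketch of that model:

```python
import math

POLL_INTERVAL = 30  # seconds: Service Catalog's observed initial sleep and poll cadence

def service_catalog_latency(stack_seconds: float) -> float:
    """Earliest time (seconds) Service Catalog can report completion,
    given the underlying CloudFormation operation takes `stack_seconds`."""
    # Completion is only observable at the 30 s poll boundaries,
    # and never before the first poll at t = 30 s.
    return POLL_INTERVAL * max(1, math.ceil(stack_seconds / POLL_INTERVAL))
```

So a 5-second stack change is reported after 30 seconds, and a 31-second one after 60 seconds, matching the behavior described above.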
## Conclusion
Our hope is that AWS tracks provisioning latency for each of these options internally and takes steps towards improving them – ideally, each provisioning method only introduces the minimum latency overhead necessary to provide its corresponding functionality.
## Full results
### Lambda
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50----|-P90----|-P99----|-P10---|-P50---|-P90---|-P99--|
| Lambda | 464 | 744 | 2,301 | 5,310 | | | | |
| Cloud Control | 6,098 | 6,278 | 7,206 | 12,971 | 1214% | 744% | 213% | 144% |
| CloudFormation | 13,054 | 13,654 | 14,591 | 15,906 | 2713% | 1736% | 534% | 200% |
| Service Catalog | 32,797 | 33,013 | 33,389 | 34,049 | 6967% | 4339% | 1351% | 541% |
```
Methodology:

- Change an existing function's code via different services, which involves first calling `UpdateFunctionCode` then polling `GetFunction`.
- In the case of CloudFormation and Service Catalog, the new code value was passed in as a parameter rather than changing the template.
- The "Wait" timings represent how long it took the resource to stabilize, determined by polling the applicable service operation every 50 milliseconds.
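The stabilization polling used in this methodology can be factored into a generic helper. The sketch below injects the status getter so the same loop works for any service; the commented boto3 usage is illustrative only (it assumes credentials and an existing function named `my-fn`):

```python
import time

def wait_until_stable(get_status, is_stable, interval=0.05, timeout=300):
    """Poll `get_status()` every `interval` seconds until `is_stable(status)`
    is true; return the elapsed time in seconds. Mirrors the 50 ms polling
    described in the methodology."""
    start = time.monotonic()
    while True:
        status = get_status()
        if is_stable(status):
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError(f"not stable after {timeout}s (last status: {status!r})")
        time.sleep(interval)

# Illustrative boto3 usage (sketch, not run here):
# lam = boto3.client("lambda")
# lam.update_function_code(FunctionName="my-fn", ZipFile=new_code)
# elapsed = wait_until_stable(
#     lambda: lam.get_function(FunctionName="my-fn")["Configuration"]["LastUpdateStatus"],
#     lambda s: s != "InProgress",
# )
```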
### SQS
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50------|-P90----|-P99----|-P10-----|-P50-----|-P90-----|-P99-----|
| SQS | 34 | 38 | 45 | 51 | | | | |
| Cloud Control | 444 | 516 | 669 | 1,023 | 1,205% | 1,259% | 1,382% | 1,904% |
| CloudFormation | 7,417 | 8,047 | 8,766 | 11,398 | 21,714% | 21,076% | 19,337% | 22,239% |
| Service Catalog | 32,785 | 33,011 | 33,320 | 33,659 | 96,327% | 86,771% | 73,780% | 65,873% |
```
Methodology:

- Change an existing queue's visibility timeout attribute via different services, which involves calling `SetQueueAttributes`.
- In the case of CloudFormation and Service Catalog, the new visibility timeout value was passed in as a parameter rather than changing the template.
- The "Wait" timings represent how long it took the resource to stabilize, determined by polling the applicable service operation every 50 milliseconds.
There are many different ways to provision AWS services, and we use several of them to address different use cases at Stedi. We set out to benchmark the performance of each option – direct APIs, Cloud Control, CloudFormation, and Service Catalog.
When compared to direct service APIs, we found that:
Cloud Control introduced an additional ~5 seconds of deployment latency
CloudFormation introduced an additional ~13 seconds of deployment latency
Service Catalog introduced an additional ~33 seconds of deployment latency.
This additional latency can make day-to-day operations quite painful.
## How we provision resources at Stedi
Each AWS service has its own APIs for CRUD of various resources, but since AWS services are built by many different teams, the ergonomics of these APIs vary greatly – as an example, you would use the Lambda `CreateFunction`
API to create a function vs the EC2 `RunInstances`
API to create an EC2 instance.
To make it easier for developers to work with these disparate APIs in a uniform fashion, AWS launched the Cloud Control API, which exposes five normalized verbs (`CreateResource`
, `GetResource`
, `UpdateResource`
, `DeleteResource`
, `ListResources`
) to manage the lifecycle of various services. Cloud Control provides a convenient way of working with many different AWS services in the same way.
That said, we rarely use the ‘native’ service APIs or Cloud Control APIs directly. Instead, we typically define resources using CDK, which synthesizes AWS CloudFormation templates that are then deployed by the CloudFormation service.
Over the past year, we’ve also begun to use AWS Service Catalog for certain use cases. Service Catalog allows us to define a set of CloudFormation templates in a single AWS account, which are then shared with many other AWS accounts for deployment on-demand. Service Catalog handles complexity such as versioning and governance, and we’ve been thrilled with the higher-order functionality it provides.
## Expectations
We expect to pay a performance penalty as we move ‘up the stack’ of value delivery – it would be unreasonable to expect a value-add layer to offer identical performance as the underlying abstractions. Cloud Control offers added value (in the form of normalization) over direct APIs; CloudFormation offers added value over direct APIs or Cloud Control (in the form of state management and dependency resolution); Service Catalog offers added value over CloudFormation (in the form of versioning, governance, and more).
Any performance hit can be broken into two categories: *essential* latency and *incidental* latency. Essential latency is the latency required to deliver the functionality, and incidental latency is the latency introduced as a result of a chosen implementation. The theoretical minimum performance hit, then, is equal to the essential latency, and the actual performance hit is equal to the essential latency plus the incidental latency.
It requires substantial investment to achieve something approaching essential latency, and such an investment isn’t sensible in anything but the most latency-sensitive use cases. But as an AWS customer, it’s reasonable to expect that the actual latency of AWS’s various layers of abstraction is within some margin that is difficult to perceive in the normal course of development work – in other words, we expect the unnecessary latency to be largely unnoticeable.
## Reality
To test the relative performance of each provisioning method, we ran a series of performance benchmarks for managing Lambda Functions and SQS Queues. Here is a summary of the P50 (median) results:
Cloud Control was
*744% (~5 seconds)*and*1,259% (500 ms)*slower than Lambda and SQS direct APIs, respectively.CloudFormation was
*1,736%**(~13 seconds)*and*21,076% (8 seconds)*slower than Lambda and SQS direct APIs, respectively.Service Catalog was
*4,339%*and*86,771% (~33 seconds, in both cases)*slower than Lambda and SQS direct APIs, respectively.
The full results are below.
We experimented with Service Catalog to determine what is causing its staggeringly poor performance. According to CloudTrail logs, Service Catalog is triggering the underlying CloudFormation stack create/update/delete, and then sleeping for 30 seconds before polling every 30 seconds until it’s finished. In practice, this means that Service Catalog can *never* take less than 30 seconds to complete an operation, and if the CloudFormation stack isn’t finished within 30 seconds, then Service Catalog can’t finish in under a minute.
## Conclusion
Our hope is that AWS tracks provisioning latency for each of these options internally and takes steps towards improving them – ideally, each provisioning method only introduces the minimum latecy overhead necessary to provide its corresponding functionality.
## Full results
### Lambda
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50----|-P90----|-P99----|-P10---|-P50---|-P90---|-P99--|
| Lambda | 464 | 744 | 2,301 | 5,310 | | | | |
| Cloud Control | 6,098 | 6,278 | 7,206 | 12,971 | 1214% | 744% | 213% | 144% |
| CloudFormation | 13,054 | 13,654 | 14,591 | 15,906 | 2713% | 1736% | 534% | 200% |
| Service Catalog | 32,797 | 33,013 | 33,389 | 34,049 | 6967% | 4339% | 1351% | 541
```
Methodology:
Change an existing function's code via different services, which involves first calling UpdateFunctionCode then polling GetFunction.
In the case of CloudFormation and Service Catalog, the new code value was passed in as a parameter rather than changing the template.
The "Wait" timings represent how long it took the resource to stabilize. This was determined by polling the applicable service operation every 50 milliseconds.
### SQS
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50------|-P90----|-P99----|-P10-----|-P50-----|-P90-----|-P99-----|
| SQS | 34 | 38 | 45 | 51 | | | | |
| Cloud Control | 444 | 516 | 669 | 1,023 | 1,205% | 1,259% | 1,382% | 1,904% |
| CloudFormation | 7,417 | 8,047 | 8,766 | 11,398 | 21,714% | 21,076% | 19,337% | 22,239% |
| Service Catalog | 32,785 | 33,011 | 33,320 | 33,659 | 96,327% | 86,771% | 73,780% | 65,873
```
Methodology:
Change an existing queue's visibility timeout attribute via different services, which involves calling SetQueueAttributes.
In the case of CloudFormation and Service Catalog, the new visibility timeout value was passed in as a parameter rather than changing the template.
The "Wait" timings represent how long it took the resource to stabilize. This was determined by polling the applicable service operation every 50 milliseconds.
There are many different ways to provision AWS services, and we use several of them to address different use cases at Stedi. We set out to benchmark the performance of each option – direct APIs, Cloud Control, CloudFormation, and Service Catalog.
When compared to direct service APIs, we found that:
Cloud Control introduced an additional ~5 seconds of deployment latency
CloudFormation introduced an additional ~13 seconds of deployment latency
Service Catalog introduced an additional ~33 seconds of deployment latency.
This additional latency can make day-to-day operations quite painful.
## How we provision resources at Stedi
Each AWS service has its own APIs for CRUD of various resources, but since AWS services are built by many different teams, the ergonomics of these APIs vary greatly – as an example, you would use the Lambda `CreateFunction`
API to create a function vs the EC2 `RunInstances`
API to create an EC2 instance.
To make it easier for developers to work with these disparate APIs in a uniform fashion, AWS launched the Cloud Control API, which exposes five normalized verbs (`CreateResource`
, `GetResource`
, `UpdateResource`
, `DeleteResource`
, `ListResources`
) to manage the lifecycle of various services. Cloud Control provides a convenient way of working with many different AWS services in the same way.
That said, we rarely use the ‘native’ service APIs or Cloud Control APIs directly. Instead, we typically define resources using CDK, which synthesizes AWS CloudFormation templates that are then deployed by the CloudFormation service.
Over the past year, we’ve also begun to use AWS Service Catalog for certain use cases. Service Catalog allows us to define a set of CloudFormation templates in a single AWS account, which are then shared with many other AWS accounts for deployment on-demand. Service Catalog handles complexity such as versioning and governance, and we’ve been thrilled with the higher-order functionality it provides.
## Expectations
We expect to pay a performance penalty as we move ‘up the stack’ of value delivery – it would be unreasonable to expect a value-add layer to offer identical performance as the underlying abstractions. Cloud Control offers added value (in the form of normalization) over direct APIs; CloudFormation offers added value over direct APIs or Cloud Control (in the form of state management and dependency resolution); Service Catalog offers added value over CloudFormation (in the form of versioning, governance, and more).
Any performance hit can be broken into two categories: *essential* latency and *incidental* latency. Essential latency is the latency required to deliver the functionality, and incidental latency is the latency introduced as a result of a chosen implementation. The theoretical minimum performance hit, then, is equal to the essential latency, and the actual performance hit is equal to the essential latency plus the incidental latency.
It requires substantial investment to achieve something approaching essential latency, and such an investment isn’t sensible in anything but the most latency-sensitive use cases. But as an AWS customer, it’s reasonable to expect that the actual latency of AWS’s various layers of abstraction is within some margin that is difficult to perceive in the normal course of development work – in other words, we expect the unnecessary latency to be largely unnoticeable.
## Reality
To test the relative performance of each provisioning method, we ran a series of performance benchmarks for managing Lambda Functions and SQS Queues. Here is a summary of the P50 (median) results:
Cloud Control was
*744% (~5 seconds)*and*1,259% (500 ms)*slower than Lambda and SQS direct APIs, respectively.CloudFormation was
*1,736%**(~13 seconds)*and*21,076% (8 seconds)*slower than Lambda and SQS direct APIs, respectively.Service Catalog was
*4,339%*and*86,771% (~33 seconds, in both cases)*slower than Lambda and SQS direct APIs, respectively.
The full results are below.
We experimented with Service Catalog to determine what is causing its staggeringly poor performance. According to CloudTrail logs, Service Catalog is triggering the underlying CloudFormation stack create/update/delete, and then sleeping for 30 seconds before polling every 30 seconds until it’s finished. In practice, this means that Service Catalog can *never* take less than 30 seconds to complete an operation, and if the CloudFormation stack isn’t finished within 30 seconds, then Service Catalog can’t finish in under a minute.
## Conclusion
Our hope is that AWS tracks provisioning latency for each of these options internally and takes steps towards improving them – ideally, each provisioning method only introduces the minimum latecy overhead necessary to provide its corresponding functionality.
## Full results
### Lambda
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50----|-P90----|-P99----|-P10---|-P50---|-P90---|-P99--|
| Lambda | 464 | 744 | 2,301 | 5,310 | | | | |
| Cloud Control | 6,098 | 6,278 | 7,206 | 12,971 | 1214% | 744% | 213% | 144% |
| CloudFormation | 13,054 | 13,654 | 14,591 | 15,906 | 2713% | 1736% | 534% | 200% |
| Service Catalog | 32,797   | 33,013 | 33,389 | 34,049 | 6967% | 4339% | 1351% | 541% |
```
Methodology:

- Change an existing function's code via different services, which involves first calling UpdateFunctionCode and then polling GetFunction.
- In the case of CloudFormation and Service Catalog, the new code value was passed in as a parameter rather than changing the template.
- The "Wait" timings represent how long it took the resource to stabilize. This was determined by polling the applicable service operation every 50 milliseconds.
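For illustration, a minimal version of this measurement loop might look like the sketch below; `trigger_update` and `is_stable` are hypothetical stand-ins for the service-specific calls (e.g. UpdateFunctionCode and a GetFunction-based stability check), so this shows the shape of the benchmark rather than the exact harness we ran.

```python
import time

POLL_INTERVAL = 0.050  # 50 milliseconds, matching the methodology above

def measure_stabilization_ms(trigger_update, is_stable) -> float:
    """Trigger a change, then poll every 50 ms until the resource stabilizes.

    trigger_update: callable that starts the change (e.g. UpdateFunctionCode)
    is_stable: callable returning True once the change is visible
               (e.g. GetFunction reporting the new code and an Active status)
    Returns the elapsed wall-clock time in milliseconds.
    """
    start = time.monotonic()
    trigger_update()
    while not is_stable():
        time.sleep(POLL_INTERVAL)
    return (time.monotonic() - start) * 1000.0
```

The same loop works for any of the provisioning methods; only the two callables change, which is what makes the per-service deltas directly comparable.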
### SQS
```
| | Absolute | | | | Delta | | | |
|-Service---------|-P10------|-P50------|-P90----|-P99----|-P10-----|-P50-----|-P90-----|-P99-----|
| SQS | 34 | 38 | 45 | 51 | | | | |
| Cloud Control | 444 | 516 | 669 | 1,023 | 1,205% | 1,259% | 1,382% | 1,904% |
| CloudFormation | 7,417 | 8,047 | 8,766 | 11,398 | 21,714% | 21,076% | 19,337% | 22,239% |
| Service Catalog | 32,785   | 33,011   | 33,320 | 33,659 | 96,327% | 86,771% | 73,780% | 65,873% |
```
Methodology:

- Change an existing queue's visibility timeout attribute via different services, which involves calling SetQueueAttributes.
- In the case of CloudFormation and Service Catalog, the new visibility timeout value was passed in as a parameter rather than changing the template.
- The "Wait" timings represent how long it took the resource to stabilize. This was determined by polling the applicable service operation every 50 milliseconds.
Backed by
Stedi is a registered trademark of Stedi, Inc. All names, logos, and brands of third parties listed on our site are trademarks of their respective owners (including “X12”, which is a trademark of X12 Incorporated). Stedi, Inc. and its products and services are not endorsed by, sponsored by, or affiliated with these third parties. Our use of these names, logos, and brands is for identification purposes only, and does not imply any such endorsement, sponsorship, or affiliation.
| true | true | true | There are many different ways to provision AWS services, and we use several of them to address different use cases at Stedi. We set out to benchmark the performance of each option – direct APIs, Cloud Control, CloudFormation, and Service Catalog. | 2024-10-12 00:00:00 | 2024-10-01 00:00:00 | website | stedi.com | stedi.com | null | null |

1,056,812 | http://blogs.discovermagazine.com/cosmicvariance/2010/01/15/24-questions-for-elementary-physics/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |

15,181,913 | https://theintercept.com/2017/09/04/undercover-in-north-korea-all-paths-lead-to-catastrophe/ | Undercover in North Korea: "All Paths Lead to Catastrophe" | Jon Schwarz |
__The most alarming__ aspect of North Korea’s latest nuclear test, and the larger standoff with the U.S., is how little is known about how North Korea truly functions. For 70 years it’s been sealed off from the rest of the world to a degree hard to comprehend, especially at a time when people in Buenos Aires need just one click to share cat videos shot in Kuala Lumpur. Few outsiders have had intimate contact with North Korean society, and even fewer are in a position to talk about it.
One of the extremely rare exceptions is novelist and journalist Suki Kim. Kim, who was born in South Korea and moved to the U.S. at age 13, spent much of 2011 teaching English to children of North Korea’s elite at the Pyongyang University of Science and Technology.
Kim had visited North Korea several times before and had written about her experiences for Harper’s Magazine and the New York Review of Books. Incredibly, however, neither Kim’s North Korean minders nor the Christian missionaries who founded and run PUST realized that she was there undercover to engage in some of history’s riskiest investigative journalism.
Although all of PUST’s staff was kept under constant surveillance, Kim kept notes and documents on hidden USB sticks and her camera’s SIM card. If her notes had been discovered, she almost certainly would have been accused of espionage and faced imprisonment in the country’s terrifying labor camps. In fact, of the three Americans currently detained in North Korea, two were teachers at PUST. Moreover, the Pentagon has in fact used a Christian NGO as a front for genuine spying on North Korea.
But Kim was never caught, and she returned to the U.S. to write her extraordinary 2014 book, “Without You, There Is No Us.” The title comes from the lyrics of an old North Korean song; the “you” is Kim Jong-il, Kim Jong-un’s father.
Kim’s book is particularly important for anyone who wants to understand what happens next with North Korea. Her experience made her extremely pessimistic about every aspect of the country, including the regime’s willingness to renounce its nuclear weapons program. North Korea functions, she believes, as a true cult, with all of the country’s pre-cult existence now passed out of human memory.
Most ominously, her students, all young men in their late teens or early 20s, were firmly embedded in the cult. With the Kim family autocracy now on its third generation, you’d expect the people who actually run North Korea to have abandoned whatever ideology they started with and degenerated into standard human corruption. But PUST’s enrollees, their children, did not go skiing in Gstaad on school breaks; they didn’t even appear to be able to travel anywhere within North Korea. Instead they studied the North Korea ideology of “juche,” or worked on collective farms.
Unsurprisingly, then, Kim’s students were shockingly ignorant of the outside world. They didn’t recognize pictures of the Taj Mahal or Egyptian pyramids. One had heard that everyone on earth spoke Korean because it was recognized as the world’s most superior language. Another believed that the Korean dish naengmyeon was seen as the best food on earth. And all of Kim’s pupils were soaked in a culture of lying, telling her preposterous falsehoods so often that she writes, “I could not help but think that they – my beloved students – were insane.” Nonetheless, they were still recognizably human and charmingly innocent and for their part, came to adore their teachers.
Overall, “Without You, There Is No Us” is simply excruciatingly sad. All of Korea has been the plaything of Japan, the U.S., the Soviet Union, and China, and like most Korean families, Kim has close relatives who ended up in North Korea when the country was separated and have never been seen again. Korea is now, Kim says, irrevocably ruptured:
It occurred to me that it was all futile, the fantasy of Korean unity, the five thousand years of Korean identity, because the unified nation was broken, irreparably, in 1945 when a group of politicians drew a random line across the map, separating families who would die without ever meeting again, with all their sorrow and anger and regret unrequited, their bodies turning to earth, becoming part of this land … behind the children of the elite who were now my children for a brief time, these lovely, lying children, I saw very clearly that there was no redemption here.
The Intercept spoke recently to Kim about her time in North Korea and the insight it gives her on the current crisis.
**JON SCHWARZ:** I found your book just overwhelmingly sorrowful. As an American, I can’t imagine being somewhere that’s been brutalized by not just one powerful country, but two or three or four. Then the government of North Korea and, to a lesser degree, the government of South Korea used that suffering to consolidate their own power. And then maybe saddest of all was to see these young men, your students, who were clearly still people, but inside a terrible system and on a path to doing terrible things to everybody else in North Korea.

**SUKI KIM:** Right, because there’s no other way of being in that country. We don’t have any other country like that. People so easily compare North Korea to Cuba or East Germany or even China. But none of them have been like North Korea – this amount of isolation, this amount of control. It encompasses every aspect of dictatorship-slash-cult.
What I was thinking about when I was living there is it’s almost too late to undo this. The young men I was living with had never known any other way.
The whole thing begins with the division of Korea in 1945. People think it began with the Korean War, but the Korean War only happened because of the 1945 division [of Korea by the U.S. and Soviet Union at the end of World War II]. What we’re seeing is Korea stuck in between.
**JS:** Essentially no Americans know what happened between 1945 and the start of the Korean War. And few Americans know what happened during the war. [Syngman Rhee, the U.S.-installed ultra right-wing South Korean dictator, massacred tens of thousands of South Koreans before North Korea invaded in 1950. Rhee’s government executed another 100,000 South Koreans in the war’s early months. Then the barbaric U.S. air war against North Korea killed perhaps one-fifth of its population.]
**SK:** This “mystery of North Korea” that people talk about all the time – people should be asking why Korea is divided and why there are American soldiers in South Korea. These questions are not being asked at all. Once you look at how this whole thing began, it makes some sense why North Korea uses this hatred of the United States as a tool to justify and uphold the Great Leader myth. Great Leader has always been the savior and the rescuer who was protecting them from the imperialist American attack. That story is why North Korea has built their whole foundation not only on the juche philosophy but hatred of the United States.
**JS:** Based on your experience, how do you perceive the nuclear issue with North Korea?
**SK:** Nothing will change because it’s an unworkable problem. It’s very dishonest to think this can be solved. North Korea will never give up its nuclear weapons. Never.
The only way North Korea can be dealt with is if this regime is not the way it is. No agreements are ever honored because North Korea just doesn’t do that. It’s a land of lies. So why keep making agreements with someone who’s never going to honor those agreements?
And ultimately what all the countries surrounding North Korea want is a regime change. What they’re doing is pretending to have an agreement saying they do not want a regime change, but pursuing regime change anyway.
Despite it all you have to constantly do engagement efforts, throwing information in there. That’s the only option. There’s no other way North Korea will change. Nothing will ever change without the outside pouring some resources in there.
**JS:** What is the motivation of the people who actually call the shots in North Korea to hold onto the nuclear weapons?
**SK:** They don’t have anything else. There’s literally nothing else they can rely on. The fact they’re a nuclear power is the only reason anyone would be negotiating with them at this point. It’s their survival.
Regime change is what they fear. That’s what the whole country is built on.
**JS:** Even with a different kind of regime, it’s hard to argue that it would be rational for them to give up their nuclear weapons, after seeing what happened to Saddam Hussein and Moammar Gadhafi.

**SK:** This is a very simple equation. There is no reason for them to give up nuclear weapons. Nothing will make them give them up.
**JS:** I’ve always believed that North Korea would never engage in a nuclear first strike just out of self-preservation. But your description of your students did honestly give me pause. It made me think the risk of miscalculation on their part is higher than I realized.
**SK:** It was paradoxical. They could be very smart, yet could be completely deluded about everything. I don’t see why that would be different in the people who run the country. The ones that foreigners get to meet, like diplomats, are sophisticated and can talk to you on your level. But at the same time they also have this other side where they have really been raised to think differently, their reality is skewed. North Korea is the center of the universe, the rest of the world kind of doesn’t exist. They’ve been living this way for 70 years, in a complete cult.
My students did not know what the internet was, in 2011. Computer majors, from the best schools in Pyongyang. The system really is that brutal, for everyone.
**JS:** Even their powerful parents seemed to have very little ability to make any decisions involving their children. They couldn’t have their children come home, they couldn’t come out and visit.
**SK:** You would expect that exceptions were always being made [for children of elites], but that just wasn’t true. They couldn’t call home. There was no way of communicating with their parents at all. There are literally no exceptions made. There is no power or agency.
I also found it shocking that they had not been anywhere within their own country. You would think that of all these elite kids, at least some would have seen the famous mountains [of North Korea]. None of them had.
That absoluteness is why North Korea is the way it is.
**JS:** What would you recommend if you could create the North Korea policy for the U.S. and other countries?

**SK:** It’s a problem that no one has been able to solve.
It’s not a system that they can moderate. The Great Leader can’t be moderated. You can’t be a little bit less god. The Great Leader system has to break.
But it’s impossible to imagine. I find it to be a completely bleak problem. People have been deprived of any tools that they need, education, information, intellectual volition to think for themselves.
[Military] intervention is not going to work because it’s a nuclear power. I guess it has to happen in pouring information into North Korea in whatever capacity.
But then the population are abused victims of a cult ideology. Even if the Great Leader is gone, another form of dictatorship will take its place.
Every path is a catastrophe. This is why even defectors, when they flee, usually turn into devout fundamentalist Christians. I’d love to offer up solutions, but everything leads to a dead end.
One thing that gave me a small bit of hope is the fact that Kim Jong-un is more reckless than the previous leader [his father Kim Jong-il]. To get your uncle and brother killed within a few years of rising to power, that doesn’t really bode well for a guy who’s only there because of his family name. His own bloodline is the only thing keeping him in that position. You shouldn’t be killing your own family members, that’s self-sabotage.
**JS:** Looking at history, it seems to me that normally what you’d expect is that eventually the royal family will get too nuts, the grandson will be too crazy, and the military and whatever economic powers there are going to decide, well, we don’t need this guy anymore. So we’re going to get rid of this guy and then the military will run things. But that seems impossible in North Korea: You must have this family in charge, the military couldn’t say, oh by the way, the country’s now being run by some general.
**SK:** They already built the brand, Great Leader is the most powerful brand. That’s why the assassination of [Kim Jong-un’s older half-brother and the original heir to the Kim dynasty] Kim Jong-nam was really a stupid thing to do. Basically that assassination proved that this royal bloodline can be murdered. And that leaves room open for that possibility. Because there are other bloodline figures for them to put in his place. He’s not the only one. So to kill [Jong-nam] set the precedent that this can happen.
**JS:** One small thing I found particularly appalling was the buddy system with your students, where everyone had a buddy and spent all their time with their buddy and seemed like the closest of friends – and then your buddy was switched and you never spent time with your old buddy again.

**SK:** The buddy system is just to keep up the system of surveillance. It doesn’t matter that these are 19-year-old boys making friends. That’s how much humanity is not acknowledged or valued. There’s a North Korean song which compares each citizen to a bullet in this great weapon for the Great Leader. And that’s the way they live.

**JS:** I was also struck by your description of the degeneration of language in North Korea. [Kim writes that “Each time I visited the DPRK, I was shocked anew by their bastardization of the Korean language. Curses had taken root not only in their conversation and speeches but in their written language. They were everywhere – in poems, newspapers, in official Workers’ Party speeches, even in the lyrics of songs. … It was like finding the words *fuck* and *shit* in a presidential speech or on the front page of the New York Times.”]
**SK:** Yes, I think the language does reflect the society. Of course, the whole system is built around the risk of an impending war. So that violence has changed the Korean language. Plus these guys are thugs, Kim Jong-un and all the rest of them, that’s their taste and it’s become the taste of the country.
**JS:** Authoritarians universally seem to have terrible taste.
**SK:** It’s interesting to be analyzing North Korea in this period of time in America because there are a lot of similarities. Look at Trump’s nonstop tweeting about “fake news” and how great he is. That’s very familiar, that’s what North Korea does. It’s just endless propaganda. All these buildings with all these slogans shouting at you all the time, constantly talking about how the enemies are lying all the time.
Those catchy one-liners, how many words are there in a tweet? It’s very similar to those [North Korean] slogans.
This country right now, where you’re no longer able to tell what’s true or what’s a lie, starting from the top, that’s North Korea’s biggest problem. America should really look at that, there’s a lesson.
**JS:** Well, I felt bad after I read your book and I feel even worse now.
**SK:** To be honest, I wonder if tragedies have a time limit – not to fix them, but to make them less horrifying. And I feel like it’s just too late. If you wipe out humanity to this level, and have three generations of it … when you see the humanity of North Koreans is when the horror becomes that much greater. You see how humanity can be so distorted and manipulated and violated. You face the devastation of what’s truly at stake.
*This interview has been edited for length and clarity.*
| true | true | true | Suki Kim engaged in risky investigative journalism when she worked in North Korea as a teacher. What she discovered is frightening. | 2024-10-12 00:00:00 | 2017-09-04 00:00:00 | article | theintercept.com | The Intercept | null | null |

25,121,654 | https://www.macrumors.com/2020/11/16/m1-macbook-pro-cinebench-benchmark/ | Apple Silicon M1 MacBook Pro Earns 7508 Multi-Core Score in Cinebench Benchmark | Juli Clover |
# Apple Silicon M1 MacBook Pro Earns 7508 Multi-Core Score in Cinebench Benchmark
The new M1 Macs are now arriving to customers, and one of the first people to get the new M1 13-inch MacBook Pro (8-core CPU, 8-core GPU, 8GB unified memory, and 512GB of storage) has run the much-anticipated Cinebench R23 benchmark on it to give us a better idea of performance.
Cinebench is a more intensive multi-thread test than Geekbench 5, testing performance over a longer period of time, and it can provide a clearer overview of how a machine will work in the real world.
The M1 MacBook Pro earned a multi-core Cinebench score of 7508, and a single-core score of 1498, which is similar in performance to some of Intel's 11th-generation chips.
Comparatively, a 2019 16-inch MacBook Pro with 2.3GHz Core i9 chip earned a multi-core score of 8818, according to a *MacRumors* reader who benchmarked his machine with the new R23 update that came out last week. The 2.6GHz low-end 16-inch MacBook Pro earned a single-core score of 1113 and a multi-core score of 6912 on the same test, and the high-end prior-generation MacBook Air earned a single-core score of 1119 and a multi-core score of 4329.
Other Cinebench R23 scores can be found on the CPU Monkey website for both multi-core and single-core performance.
It's worth noting that the new M1 Macs are lower performance machines that aren't meant for heavy duty rendering tasks. The M1 MacBook Pro replaces the low-end machine, while the MacBook Air has always been more of a consumer machine than a Pro machine.
Apple does have plans for higher-end Pro machines with Apple Silicon chips, but the company has said that it will take around two years to transition the entire Mac lineup to Arm-based chips. The Cinebench scores for the MacBook Air bode well for future Macs that are expected to get even higher performance M-series chips.
| true | true | true | The new M1 Macs are now arriving to customers, and one of the first people to get the new M1 13-inch MacBook Pro with 8-core CPU, 8-core GPU, and 8GB... | 2024-10-12 00:00:00 | 2020-11-16 00:00:00 | article | macrumors.com | MacRumors.com | null | null |

21,696,347 | https://www.fastcompany.com/90436584/trumps-other-quid-pro-quo-is-washingtons-dirtiest-secret | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |

28,103,264 | https://www.economist.com/science-and-technology/predict-and-survive/21803319 | Predicting viral evolution may let vaccines be prepared in advance | null |
# Predicting viral evolution may let vaccines be prepared in advance
## New techniques could programme people’s immune systems against future pathogens
GENERALLY, IMMUNE systems mount responses only against pathogens that have already infected the bodies they are protecting. Science, though, can shorten the path to immunity by vaccination. This involves presenting the immune system with harmless or lookalike versions of dangerous pathogens so that it may create antibodies and killer cells hostile to the real thing in advance of any actual infection, thereby reducing its danger.
This article appeared in the Science & technology section of the print edition under the headline “Predict and survive”
| true | true | true | New techniques could programme people’s immune systems against future pathogens | 2024-10-12 00:00:00 | 2021-08-05 00:00:00 | Article | economist.com | The Economist | null | null |

15,461,491 | https://medium.com/@mrmnmly/what-disqus-can-learn-from-stackoverflow-and-how-it-can-help-fighting-with-internet-trolls-and-fd2f7ac833d3 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |

34,860,525 | https://www.businessinsider.com/florida-sat-college-board-alternative-classical-christian-desantis-western-thought-2023-2 | Florida state officials are weighing 'classical and Christian' alternative to the SAT: report | John L Dorman |
- Florida officials have been holding talks to use the CLT as an alternative to the SAT, per the Miami Herald.
- Jeremy Tate, the CLT's founder, told the Herald that the SAT had become "increasingly ideological."
- The discussion over a new test comes as Gov. DeSantis floated ditching state support for AP courses.
Top Florida officials have been holding talks with the founder and chief executive of an education testing company that backers have said is centered on the "great classical and Christian tradition," according to The Miami Herald.
The potential of such a test being used in Florida has come into closer view as Republican Gov. Ron DeSantis has tussled with the College Board in recent weeks over the curriculum of its pilot Advanced Placement (AP) African American Studies course, with the governor on Tuesday floating the idea that the state could pull its support for the rigorous, college-level AP courses.
The Classic Learning Test, which is billed by Classic Learning Initiatives as being "steeped in content that is intellectually richer and more rigorous than other standardized tests and college entrance exams," is largely utilized in private schools and home-school environments.
Jeremy Tate, the founder, told the Herald that the test was intended to be an alternative to the SAT, which is administered by the College Board and has long been the standard in US high schools for students applying to colleges and universities. (In recent years, many universities have made the SAT optional, notably after the coronavirus pandemic.)
Tate told the newspaper that the SAT had become "increasingly ideological" partly because it had "censored the entire Christian-Catholic intellectual tradition."
On the testing company's site, it stated that Tate was working as a high school English instructor when he came to the conclusion that "transcendent, moral, and ethical ideas had been gutted from the classroom," with high-stakes testing being part of the reason.
Tate told the Herald that he had held meetings with Ray Rodrigues, the chancellor of the state university system of Florida, and lawmakers to see if they could make the test more readily available to high school students in the state.
"We're thrilled they like what we're doing," Tate told the Herald. "We're talking to people in the administration, again, really, almost every day right now."
DeSantis has not specifically mentioned the Classic Learning Test as one of the sources he had in mind as a SAT alternative, but he stated that he'd like to look at "other vendors."
Florida Department of Education Senior Chancellor Henry Mack on Thursday expressed interest in using the Classic Learning Test.
"Not only do we need to build anew by returning to the foundations of our democracy, but CLT also offers the opportunity for all our colleges & universities to rightsize their priorities," he said on Twitter.
Rodrigues confirmed with the Herald that he had talks with Classic Learning Test this week so he could get more information about the test.
"As you know the State University System is the largest university system in the country that still requires an entrance exam as part of our admissions process. We currently accept SAT and ACT. Adding another option for our students could be a method of improvement," Rodrigues told the Herald.
Tate told the Herald that conservatives may prefer the CLT, but he didn't want the assessment to become ideological.
"We don't want to be a Trumpy or conservative test," he said.
DeSantis' office did not immediately respond to Insider's request for comment.
| true | true | true | The Classic Learning Test, billed as "intellectually richer" than other standardized tests, is largely used in private schools and home-schooling. | 2024-10-12 00:00:00 | 2023-02-18 00:00:00 | https://i.insider.com/60f03c6fbb790e0018207b5c?width=1200&format=jpeg | article | businessinsider.com | Insider | null | null |

28,408,315 | https://github.com/nornagon/jonesforth/blob/master/jonesforth.S | jonesforth/jonesforth.S at master · nornagon/jonesforth | Nornagon |
| true | true | true |
Mirror of JONESFORTH. Contribute to nornagon/jonesforth development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2012-06-06 00:00:00
|
https://opengraph.githubassets.com/cb639174e722647c62348174fb109d27e4422865815fad89a842d6f412051aae/nornagon/jonesforth
|
object
|
github.com
|
GitHub
| null | null |
36,566,360 |
https://www.thefinalhop.com/unleashing-the-power-of-sherlock-a-tool-for-social-media-username-search/
|
Unleashing the Power of Sherlock - A Tool for Social Media Username Search
|
TFH
|
Exploring Sherlock: A Comprehensive Guide to Tracking Social Media Usernames Across Platforms
### Introduction
Welcome, tech enthusiasts, to another deep dive into the fascinating world of technology. Today, we're exploring a tool that's revolutionizing the way we track online presence - Sherlock. This powerful tool, hosted on GitHub, allows you to hunt down social media accounts by username across a multitude of social networks. Whether you're a cybersecurity professional conducting OSINT (Open Source Intelligence) or a curious individual wanting to understand the breadth of a particular username's online presence, Sherlock is a tool that deserves your attention.
### What is Sherlock?
Sherlock is a Python-based tool that serves as a comprehensive solution for finding usernames across a wide array of social networks. It's not just a simple search tool; it's a powerful instrument designed for OSINT (Open Source Intelligence) purposes. OSINT refers to the process of gathering data from publicly available sources to be used in an intelligence context. In the case of Sherlock, it's about tracking a user's online presence.
The tool works by taking a username and searching for its occurrences on various platforms. These platforms range from popular social media sites like Facebook, Instagram, and Twitter, to less common ones, giving you a broad spectrum of the user's online presence. Sherlock is designed to automate this process, which would be incredibly time-consuming to do manually.
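The core mechanic can be sketched in a few lines of Python. The site list and URL patterns below are illustrative assumptions for demonstration, not Sherlock's actual data file:

```python
# Illustrative sketch of the idea behind Sherlock: each supported site has a
# profile-URL pattern, and the tool probes those URLs for a given username.
# The sites and patterns here are assumptions, not Sherlock's real site list.
SITES = {
    "GitHub": "https://github.com/{}",
    "Reddit": "https://www.reddit.com/user/{}",
}

def candidate_urls(username):
    """Build the profile URLs a Sherlock-style tool would go on to check."""
    return {site: pattern.format(username) for site, pattern in SITES.items()}

for site, url in candidate_urls("user123").items():
    print(f"{site}: {url}")
```

A real run would then issue an HTTP request per URL and treat a 200 response (or a site-specific page marker) as a hit.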
But Sherlock goes beyond just identifying whether a username exists on a platform. It also provides additional information where available, such as the user's profile picture, the number of followers they have, and other public details. This can provide a more detailed picture of a user's online activity and presence.
Moreover, Sherlock is designed with user-friendliness in mind. It doesn't require extensive knowledge of programming or cybersecurity. With a few simple commands, anyone can start using Sherlock to explore the digital footprint of a specific username.
In essence, Sherlock is more than just a tool; it's a gateway to understanding the vast digital landscape and the various identities that inhabit it. Whether you're a cybersecurity professional, a researcher, or just a curious individual, Sherlock offers a unique perspective on the interconnected world of social media.
### How to Install and Use Sherlock
Installing and using Sherlock is a straightforward process. First, clone the repository from GitHub using `$ git clone https://github.com/sherlock-project/sherlock.git`. After changing the working directory to Sherlock, install the necessary requirements with `$ python3 -m pip install -r requirements.txt`.

To use Sherlock, simply type `$ python3 sherlock --help` to display all the available options. For example, to search for a single user, run `python3 sherlock user123`. To search for multiple users, separate the usernames with spaces, like `python3 sherlock user1 user2 user3`. The accounts found will be stored in an individual text file named after the corresponding username (e.g., `user123.txt`).
### Docker Support
Sherlock also supports Docker. If Docker is installed, you can build an image and run Sherlock as a container. The results can be accessed using specific Docker commands, and there's also support for Docker Compose.
### Contributing to Sherlock
The Sherlock project welcomes contributions from the community. Whether it's the addition of new site support, bringing back site support for sites that have been removed due to false positives, or any other enhancements, every contribution is greatly valued.
### Conclusion
In the realm of online investigations, Sherlock stands out as a game-changer. Its ability to track usernames across various social networks makes it an invaluable tool for anyone interested in OSINT. With its easy installation process, user-friendly commands, and Docker support, Sherlock is not just a tool but a powerful ally in the digital world. As we continue to navigate the ever-evolving landscape of the internet, tools like Sherlock will undoubtedly play a crucial role in shaping our understanding of online identities. So, whether you're a seasoned cybersecurity professional or a tech enthusiast, it's time to embrace Sherlock and unlock a new level of online exploration.
Tweet
| true | true | true |
Exploring Sherlock: A Comprehensive Guide to Tracking Social Media Usernames Across Platforms Introduction Welcome, tech enthusiasts, to another deep dive into the fascinating world of technology. Today, we're exploring a tool that's revolutionizing the way we track online presence - Sherlock. This powerful tool, hosted on GitHub, allows you to
|
2024-10-12 00:00:00
|
2023-07-02 00:00:00
|
article
|
thefinalhop.com
|
The Final Hop
| null | null |
|
15,788,257 |
https://bugcrowd.com/atlassian
|
Atlassian | Bugcrowd
| null |
| true | true | true |
Learn more about Atlassian’s Public Bug Bounty engagement powered by Bugcrowd, the leader in crowdsourced security solutions.
|
2024-10-12 00:00:00
| null |
website
|
bugcrowd.com
|
Bugcrowd
| null | null |
|
22,023,936 |
https://twitter.com/AukeHoekstra/status/1064529619951513600
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |
13,560,668 |
https://davidwalsh.name/spread-operator
|
6 Great Uses of the Spread Operator
|
David Walsh
|
# 6 Great Uses of the Spread Operator
Thanks to ES6 and the likes of Babel, writing JavaScript has become incredibly dynamic, from new language syntax to custom parsing like JSX. I've become a big fan of the spread operator, three dots that may change the way you complete tasks within JavaScript. The following is a listing of my favorite uses of the spread operator within JavaScript!
## Calling Functions without Apply
To this point we've called `Function.prototype.apply`, passing an array of arguments, to call a function with a given set of parameters held by an array:

```js
function doStuff(x, y, z) { }

var args = [0, 1, 2];

// Call the function, passing args
doStuff.apply(null, args);
```
With the spread operator we can avoid `apply` altogether and simply call the function with the spread operator before the array:

```js
doStuff(...args);
```

The code is shorter, cleaner, and there's no need for a useless `null`!
## Combine Arrays
There have always been a variety of ways to combine arrays, but the spread operator gives us a new method for combining arrays:

```js
arr1.push(...arr2);    // adds arr2 items to end of arr1
arr1.unshift(...arr2); // adds arr2 items to beginning of arr1
```
If you'd like to combine two arrays and place elements at any point within the array, you can do as follows:
```js
var arr1 = ['two', 'three'];
var arr2 = ['one', ...arr1, 'four', 'five'];
// ["one", "two", "three", "four", "five"]
```
Shorter syntax than other methods while adding positional control!
## Copying Arrays
Getting a copy of an array is a frequent task, something we've used `Array.prototype.slice` to do in the past, but we can now use the spread operator to get a copy of an array:

```js
var arr = [1, 2, 3];
var arr2 = [...arr]; // like arr.slice()
arr2.push(4);
```
Remember: objects within the array are still by reference, so not everything gets "copied", per se.
## Convert arguments or NodeList to Array
Much like copying arrays, we've used `Array.prototype.slice` to convert `NodeList` and `arguments` objects to true arrays, but now we can use the spread operator to complete that task:

```js
[...document.querySelectorAll('div')]
```
You can even get the arguments as an array from within the signature:
```js
var myFn = function(...args) {
  // ...
};
```

Don't forget you can also do this with `Array.from`!
## Using `Math` Functions
Of course the spread operator "spreads" an array into separate arguments, so it pairs naturally with any function that accepts an arbitrary number of arguments.
```js
let numbers = [9, 4, 7, 1];
Math.min(...numbers); // 1
```
The `Math` object's set of functions is a perfect example of the spread operator as the only argument to a function.
## Destructuring Fun
Destructuring is a fun practice that I'm using a ton on my React projects, as well as other Node.js apps. You can use destructuring and the rest operator together to extract the information into variables as you'd like them:
```js
let { x, y, ...z } = { x: 1, y: 2, a: 3, b: 4 };
console.log(x); // 1
console.log(y); // 2
console.log(z); // { a: 3, b: 4 }
```
The remaining properties are assigned to the variable after the spread operator!
ES6 has not only made JavaScript more efficient but also more fun. Modern browsers all support the new ES6 syntax, so if you haven't taken the time to play around, you definitely should. If you prefer to experiment regardless of environment, be sure to check out my post Getting Started with ES6. In any case, the spread operator is a useful feature in JavaScript that you should be aware of!
Two more:
- Convert iterables to Arrays (not just `arguments` and `NodeList`)
- Eliminate duplicates from an Array
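Both of those fit in a line each; a small sketch:

```javascript
// Spread any iterable (here a string) into a true array
const chars = [...'abc'];
console.log(chars);              // ['a', 'b', 'c']

// Eliminate duplicates by round-tripping through a Set
const unique = [...new Set([1, 2, 2, 3, 1])];
console.log(unique);             // [1, 2, 3]
```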
I love that unique function. You can even use an iteratee function to determine uniqueness (like Lodash's `uniqBy`).

@Axel – great share on eliminating duplicates from arrays. Very helpful.
The spread operator can also be used to convert a string to a character array.
Thanks for sharing :-)
Regarding browser support (ES5) – neither TypeScript nor Babel seems to handle the `NodeList`-to-array conversion, and `Array.from` does not work in IE (ref: https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/from)

Note that the `...` operator used with arrays was added in ES6, but using it with objects as shown is only a stage-3 addition. It's likely to land, and Babel/etc have supported it for a while, but it's not quite standard. Should probably have at least a footnote in that section.

> You can even get the arguments as an array from within the signature:
AKA rest parameters.
Extending your first example you can use it as a (very hacky looking) way for conditional function arguments:
This is really ugly but can be helpful at times
Pardon me, but what is the difference between your approach,
and:
Which I find cleaner than both of your mentions, but again, I don't know if this one differs in some way.
Hm, I have to admit that my example did not clearly highlight what I wanted to achieve with it.
What I wanted to show is that with the spread operator, you can conditionally spread or not spread values. This doesn’t get clear in an example with just one spreaded item though.
Maybe it’s clearer to see the advantages when you have multiple items:
That said, just for the record: Your approach is almost equivalent to my original comment.
The only difference would be if somebody checked for the number of arguments passed:
Remember, just because you can and it's handy doesn't mean you should. Depending on your use case, and if perf is something you have to worry about, test things first. In the slice vs. spread case, slice is still much faster. In a simple test with an array of 100 ints, slice reported 4,420,016 ops/sec while the spread operator reported 219,263 ops/sec. That's quite a large gap.
Use it like an optional/null check for nested properties
That I did not know, Thanks
super helpful.
btw @Axel not linking to http://2ality.com/2015/01/es6-set-operations.html is an understatement ;)
Just a note on naming, it sounds like the ES6 spec calls it the "spread element". Is it an operator?
https://stackoverflow.com/questions/36989741/how-can-i-concatenate-two-arrays-in-javascript#comment61533834_36989849
One of my favorites is conditionally including properties
Creating objects with values from a string can be very useful as well:
You’re definitely right not to let us forget about Array.from. People can be so keen on using the spreads they forget this is more useful than ever. It should always be preferred when you’re trying to get a new array mapped from an iterator.
`[...myNodeList].map(mapFn)` will make an array, then map it. `Array.from(myNodeList, mapFn)` will iterate directly into the mapper.

I was surprised how clean this approach reads when used consistently too. E.g. `Array.from(document.querySelectorAll('p'), ({ textContent }) => textContent)` is very semantic.
| true | true | true |
Thanks to ES6 and the likes of Babel, writing JavaScript has become incredibly dynamic, from new language syntax to custom parsing like JSX. I've become
|
2024-10-12 00:00:00
|
2017-01-30 00:00:00
|
article
|
davidwalsh.name
|
David Walsh Blog
| null | null |
|
3,927,430 |
http://pseudony.ms/blags/index-your-tweets.html
|
How to Get Your Tweets Indexed by Google
| null |
# How to Get Your Tweets Indexed by Google
Even though it makes me feel like a child casting magic spells, I feel the need to preface this post with a disclaimer: the opinions expressed here are my own, and not those of my employer.
Wish your hilarious tweets were indexed by Google? Wish your beautiful, smiling face would show up next to those search results? Wish no more! This post will walk you through how to get your tweets indexed by Google.
## The Problem
Most of my tweets do not show up in searches, and even when they do they look like:
## The Solution
- Scraped my public tweets.
- Posted them on a site I control.
- On my Google+ profile, I added a link to my site to the "Contributor to" section.
- On my site, I added author metadata pointing to my Google+ profile.
Hurrah:
## Scraping Public Tweets
Thanks to Twitter's swank API, scraping is easy. We don't even need to puzzle out OAuth to make requests for the user timeline:
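A minimal sketch of that kind of request, assuming the old unauthenticated v1 `user_timeline` JSON endpoint this post dates from (long since retired), with only the fields worth mirroring kept:

```python
import json
from urllib.parse import urlencode

# Historical sketch: the unauthenticated v1 endpoint below no longer exists,
# so this only illustrates the shape of the original approach.
def timeline_url(screen_name, count=200):
    base = "https://api.twitter.com/1/statuses/user_timeline.json"
    return base + "?" + urlencode({"screen_name": screen_name, "count": count})

def parse_tweets(raw_json):
    """Keep only the fields worth mirroring on your own site."""
    return [{"id": t["id"], "text": t["text"]} for t in json.loads(raw_json)]

sample = '[{"id": 1, "text": "casting magic spells"}]'
print(timeline_url("pseudonym"))
print(parse_tweets(sample))
```

Fetching the URL (with `urllib.request` or similar) and feeding the response body to `parse_tweets` would give you the list of tweets to render on your site.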
## Hosting Tweets on Your Site
I'll be honest, I don't really know what to say here. However you like to make internet, put up the tweets you scraped in the last step. Static site generating, posting on tumblr, posting on wordpress, using coldfusion to do whatever people in the stone age used coldfusion for—go nuts!
There are only two rules: you should include the text of the tweet so the Googlybots will be able to see it, and you should make sure your site doesn't do any weird robots.txt voodoo or rate limiting to scare away bots. Don't worry, having no robots.txt, or a normal robots.txt you copied from somewhere like HTML5 Boilerplate, will be fine--you'd have to go out of your way to scare away the precious search engine gremlins.
## Linking to Your Site From Your Profile
Pretty simple: add a link to your fancy new Twitter mirror in the Contributor To section on your profile.
One odd step here: make sure a face is clearly visible in your profile picture. It's a step I ignored from the directions for authorship that set me back a month or two on getting this working. Don't be dumb like me.
## Linking to Your Profile From Your Site
Just when you thought the rest of this howto was useless, I have a chance to earn my keep again. To avoid adding any ugly links to your tweets, we can put a link to your profile in the `<head>` of your document:
Mmm... delicious metadata. No bulky links, no fuss. At this point you should be able to verify that everything worked by pasting your url into the rich snippet testing tool:
## Wait for it…
If your website is anything like mine, it doesn't update often and it doesn't get a lot of visitors. That means those precious search engine crawlers we made a beautiful nest for are not in a huge rush to scour your site for updates.
When they finally do show up, you will be able to share your witticisms with the world outside of twitter:
| true | true | true | null |
2024-10-12 00:00:00
|
2012-05-04 00:00:00
| null | null | null | null | null | null |