id (int64, 3 to 41.8M) | url (string, 1 to 1.84k chars) | title (string, 1 to 9.99k chars, nullable) | author (string, 1 to 10k chars, nullable) | markdown (string, 1 to 4.36M chars, nullable) | downloaded (bool) | meta_extracted (bool) | parsed (bool) | description (string, 1 to 10k chars, nullable) | filedate (string, 2 values) | date (string, 9 to 19 chars, nullable) | image (string, 1 to 10k chars, nullable) | pagetype (string, 365 values) | hostname (string, 4 to 84 chars, nullable) | sitename (string, 1 to 1.6k chars, nullable) | tags (string, 0 values) | categories (string, 0 values) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
684,685 |
http://www.smoothware.com/danny/weavemirror.html
|
Daniel Rozin Weave Mirror
| null |
DANIEL ROZIN INTERACTIVE ART
Mechanical Mirrors: The mechanical mirrors are made of various materials but share the same behavior and interaction; any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors and computers on board and produce a soothing sound as the viewer interacts with them. |
|
Weave Mirror - 2007 |
| true | true | true | null |
2024-10-12 00:00:00
|
2007-01-01 00:00:00
| null | null | null | null | null | null |
29,098,411 |
https://wyounas.com/2021/05/10-tips-on-how-to-write-well/
|
10 Tips on How to Write Well
|
Waqas Younas
|
# 10 Tips on How to Write Well
David Ogilvy was an advertising genius and founded the famous advertising agency, Ogilvy & Mather.
In 1982, he sent a memo to all his employees, entitled “How to Write.” I enjoyed reading it and I hope you enjoy reading it as well. Here is the memo.
## David Ogilvy’s Memo: How to write
The better you write, the higher you go in Ogilvy & Mather. People who *think* well, *write* well.
Woolly minded people write woolly memos, woolly letters and woolly speeches.
Good writing is not a natural gift. You have to *learn* to write well. Here are 10 hints:
1. Read the Roman-Raphaelson book on writing. Read it three times.
2. Write the way you talk. Naturally.
3. Use short words, short sentences and short paragraphs.
4. Never use jargon words like *reconceptualize*, *demassification*, *attitudinally*, *judgmentally*. They are hallmarks of a pretentious ass.
5. Never write more than two pages on any subject.
6. Check your quotations.
7. Never send a letter or a memo on the day you write it. Read it aloud the next morning—and then edit it.
8. If it is something important, get a colleague to improve it.
9. Before you send your letter or your memo, make sure it is crystal clear what you want the recipient to do.
10. If you want ACTION, *don’t write*. Go and *tell* the guy what you want.
David
*Source:* **The Unpublished David Ogilvy: A Selection of His Writings from the Files of His Partners**
| true | true | true |
10 Tips on How To Write Well. To write well follow this famous advice from a well known advertising genius,
|
2024-10-12 00:00:00
|
2021-05-10 00:00:00
| null |
article
|
wyounas.com
|
Waqas Younas | Personal Website
| null | null |
8,001,116 |
http://vellum.nytlabs.com/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,379,427 |
http://9to5mac.com/2011/07/18/family-ties-earn-this-smart-cover-knock-off-a-samsung-certification-and-a-place-on-their-store-shelves/
|
Family ties earn this Smart Cover knock-off a Samsung certification and a place on their store shelves (UPDATE: product pulled) - 9to5Mac
| null |
*[UPDATE July 19, 2011 8:10 Eastern]: The article has been updated with a comment from Samsung included at the bottom. In addition, an Asian Economy story establishing family bonds between the case maker’s CEO and Samsung’s chairman, provided in the comments, has been added.*
Apple is suing “the copyist” Samsung because they *“imitate the appearance of Apple’s products to capitalize on Apple’s success.”* Be that as it may, the similarities between the two tech giants’ gadgets are nothing compared to what other Asian knockoffs are doing for a living. Like Anymode Corp., which is in the business of designing, manufacturing and selling a blatant Smart Cover rip-off, pictured above and below. Conveniently dubbed the Smart Case – obviously because Apple trademarked it – the accessory comes in five pastel color choices. It too can prop a tablet upwards and it folds like Apple’s accessory as well. The Smart Case is designed exclusively for Samsung’s Galaxy Tab 10.1 – and not by coincidence, warns our reader Jun.
Apparently Sang-yong Kim, the Anymode CEO, was *“born in Samsung family”.* Jun tells us – and you’re free to take it at face value – that the Anymode CEO *“is nephew of the Samsung’s chairman Kun-Hee Lee”,* a claim we were unable to verify at the time of this writing. UPDATE: This *Asian Economy* article establishes family bonds between Sang-yong Kim and Kun-Hee Lee. The 69-year-old chairman of Samsung Electronics stepped down in April 2008 amid the slush funds scandal, but returned at the group’s helm in March of last year. He is credited with improving the quality of Samsung’s design and products. Anymode is not even attempting to conceal the Samsung link. The company describes itself on a LinkedIn page as…
…a privately held firm founded in Korea in 2007, with *“strong affiliation with Samsung Electronics for key accessories supplier globally”*. And with strong business partnership with Samsung Electronics, *“business has been grow rapidly and becoming one of fastest growing company that covers international retail channels with key distributors world-wide”*, their business profile on LinkedIn boasts. Youngbo Engineering is Anymode’s parent company where Sang-yong Kim is also president and CEO. Youngbo too provides solutions for Samsung, including mobile phone accessories such as earphones, batteries, Bluetooth items and other products, a *BusinessWeek* profile has established. Furthermore, reader Jun explains, Anymode is selling its products in Samsung’s A/S center all over Korea. One final thing: Samsung has apparently certified the Smart Case, as evident in the use of official branding in the two bottom shots. Somebody call Apple legal…
UPDATE 1: Following *9to5Mac’s* story, Samsung Electronics on its official blog acknowledged that the Smart Case has been pulled from the Anymode sales web site due to an *“oversight”* in approving the “Designed for Samsung Mobile” mark. Samsung did not certify Anymode’s Smart Case and the product has not been sold, the statement reads:
As a general practice, Samsung Electronics reviews and approves all accessories produced by partners before they are given the “Designed for Samsung Mobile” mark. In this case, approval was not given to Anymode for the accessory to feature this official designation. We are working with Anymode to address this oversight and the product has already been removed from the Anymode sales website. The product has not been sold.
UPDATE 2: A reader translated a paragraph from the aforementioned *Asian Economy* article divulging family ties:
Sang-yong Kim, CEO of the Anymode Corp., is the first son of Son-hee Lee, the third daughter of former Samsung’s chairman Beoung-chul, Lee. He ran the company through selling the Samsung mobile phone’s accessories and is also nephew of current chairman Gun-hee, Lee and in relation as a cousin of Jae-young, Lee, CEO of the Samsung Electronics.
###### Related articles
- Samsung has sold 3 million Galaxy S IIs in 55 days, 20 million phones in a year (9to5google.com)
- Is a Samsung QWERTY Slider on its way to Verizon Wireless? (9to5google.com)
- Gloomy prognosis for Samsung in spite of impressive phone sales (9to5google.com)
| true | true | true |
[UPDATE July 19, 2011 8:10 Eastern]: The article has been updated with a comment from Samsung included at the bottom....
|
2024-10-12 00:00:00
|
2011-07-18 00:00:00
|
http://9to5mac.com/wp-content/uploads/sites/6/2011/07/samsung-smart-case-for-galaxy-tab-image-001.jpg
|
article
|
9to5mac.com
|
9To5Mac
| null | null |
11,055,165 |
https://medium.com/@johnjiang/how-handy-tries-desperately-to-reduce-churn-3cdd51472ee2
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,806,313 |
https://github.com/fifthsegment/GateSentry
|
GitHub - fifthsegment/Gatesentry: 🌟 Elevate Network Safety with Gatesentry! A powerful Proxy & DNS server combo, adept at blocking harmful content. Ensure a secure and focused online space for kids and adults alike. Dive into a world of enhanced security and productivity now! #SecureNetwork #FocusedBrowsing
|
Fifthsegment
|
An open source proxy server (supports SSL filtering / MITM) + DNS Server with a nice frontend.
Usages:
- Privacy Protection: Users can use Gatesentry to prevent tracking by various online services by blocking tracking scripts and cookies.
- Parental Controls: Parents can configure Gatesentry to block inappropriate content or websites for younger users on the network.
- Bandwidth Management: By blocking unnecessary content like ads or heavy scripts, users can save on bandwidth, which is especially useful for limited data plans.
- Enhanced Security: Gatesentry can be used to block known malicious websites or phishing domains, adding an extra layer of security to the network.
- Access Control: In a corporate or institutional setting, Gatesentry can be used to restrict access to non-work-related sites during work hours.
- Logging and Monitoring: Track and monitor all the requests made in the network to keep an eye on suspicious activities or to analyze network usage patterns.
- Custom Redirects (via DNS): Redirect specific URLs to other addresses, useful for local development or for redirecting deprecated domains.
There are two ways to run Gatesentry: using the Docker image or using the single-file binary directly.
- **Running with Docker:** Use the docker-compose.yml file from the root of this repo as a template, copy and paste it to any directory on your computer, then run the following command in a terminal:
`docker compose up`
- **Downloading Gatesentry:** Navigate to the 'Releases' section of this repository. Identify and download the appropriate file for your operating system, named either gatesentry-linux or gatesentry-mac.
- **Installation:**
**For macOS and Linux:** Locate the downloaded Gatesentry binary file on your system. Open a terminal window and navigate to the directory containing the downloaded binary. Run the following command to grant execution permissions to the binary file:
`chmod +x gatesentry-{platform}`
Replace `{platform}` with your operating system (linux or mac). Then execute the binary file to start the server.
**Running as a Service (Optional):** If you want Gatesentry to keep running in the background on your machine, install it as a service:
`./gatesentry-{platform} -service install`
Next, on Linux you can use your system service runner to start or stop it, for example on Ubuntu:
`service gatesentry start # starts the service`
`service gatesentry stop # stops the service`
**For Windows:** The installer (GatesentrySetup.exe) contains instructions.
**Running as a Service:** The installer (GatesentrySetup.exe) should automatically install a service. You can look for it by searching for gatesentry in your Service Manager (open it by running `services.msc`).
- **Launching the Server:** Execute the Gatesentry binary file to start the server. Upon successful launch, the server will begin listening for incoming connections on port 10413.
By default Gatesentry uses the following ports
Port | Purpose |
---|---|
10413 | For proxy |
10786 | For the web based administration panel |
53 | For the built-in DNS server |
80 | For the built-in webserver (showing DNS block pages) |
Open a modern web browser of your choice and enter the following URL in the address bar: http://localhost:10786. The Gatesentry User Interface will load, providing access to various functionalities and settings.
```
Username: admin
Password: admin
```
Use the above credentials to log in to the Gatesentry system for the first time. For security reasons, it is highly recommended to change the default password after the initial login.
Note: Ensure your system’s firewall and security settings allow traffic on ports 10413 and 10786 so that the Gatesentry server and user interface remain accessible.
This guide refers specifically to the Gatesentry software and uses the `gatesentry-{platform}` filename convention for clarity.
Gatesentry ships with a built-in DNS server, which can be used to block domains. The server currently forwards requests to Google DNS for resolution; this can be modified from inside the `application/dns/server/server.go` file.
To set it up:
`./setup.sh`
To run it:
`./run.sh`
| true | true | true |
🌟 Elevate Network Safety with Gatesentry! A powerful Proxy & DNS server combo, adept at blocking harmful content. Ensure a secure and focused online space for kids and adults alike. Dive into a...
|
2024-10-12 00:00:00
|
2016-02-13 00:00:00
|
https://opengraph.githubassets.com/f8f2e735eb36aeb83035734aaf209cc119e9a81350dded29b7519c7118ed64c9/fifthsegment/Gatesentry
|
object
|
github.com
|
GitHub
| null | null |
15,280,756 |
https://medium.com/@raiderrobert/keeping-up-with-the-valley-on-the-incessant-busyness-that-everyone-in-tech-feels-necessary-to-145bb75afcf8
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,854,298 |
https://www.military.com/daily-news/2018/04/09/55-years-after-thresher-disaster-navy-still-keeps-secrets-sub-loss.html
|
55 Years After Thresher Disaster, Navy Still Keeps Secrets on Sub Loss
|
Marty Callaghan
|
*Marty Callaghan is a news editor for Military.com.*
The worst submarine disaster in U.S. Navy history happened on the morning of April 10, 1963, when the nuclear-powered USS Thresher (SSN 593) was lost with 129 crew members and civilian employees on board.
A Naval Court of Inquiry (NCOI) convened to investigate the disaster concluded the probable cause of the Thresher's loss was "major flooding" -- a finding that has since been challenged by naval and submarine experts. After more than a half-century, all but 18 pages of testimony from key witnesses remains closed to the public.
Retired Navy Capt. Jim Bryant, who served on board three Thresher-class subs and commanded the USS Guardfish (SSN 612), recently authored a new analysis of the submarine disaster, highlighting discrepancies between the NCOI's findings and evidence available for its investigation at the time. He raises concerns about the court's accuracy in recording the last understandable message sent by the sub, at about 9:12 a.m., pieced together from the testimony of several witnesses:
*"Experiencing minor difficulties. Have positive up angle. Am attempting to blow. Will keep you informed."*
In his analysis, Bryant said, "Thresher's difficulties were anything but minor by the time Skylark received that message."
The USS Skylark (ASR 20) was the submarine rescue ship that accompanied the Thresher for its sea trials about 200 miles off the Massachusetts coast.
Bryant's paper, excerpted and paraphrased below, faults the Navy for not being forthcoming enough regarding the historic disaster.
"The NCOI report cannot be accepted verbatim. It is not an acceptable reference for defining the sequence of events that occurred as the Thresher lost control and sank," Bryant said in his analysis.
"The boat was below test depth of about 1,300 feet and its nuclear reactor had just shut down. The Thresher had negative buoyancy and there was no power to drive it back to the surface," he continued.
The Thresher tried to blow its main ballast tanks with no effect. According to Bryant, it would take the crew at least another 20 minutes to restore main propulsion -- time they did not have.
The Thresher kept sinking until its hull imploded at a depth of about 2,400 feet, releasing energy equivalent to the explosive force of about 22,000 pounds of TNT. The hull collapsed in 47 milliseconds, about one-twentieth of a second.
The Thresher's crushed and shattered hull was later found just off the Continental Shelf, at a depth of more than 8,000 feet.
Bryant said the Thresher's final descent and implosion was recorded on paper time-frequency plots in great detail by the Navy's underwater Sound Surveillance System (SOSUS).
"All of the data recorded by SOSUS was available to the Naval Court of Inquiry," he said. "But it wasn't used effectively because the court didn't trust it. If the NCOI had thoroughly understood the acoustic data, it could have ruled out major flooding as a cause of the disaster, since the resonances created by such an event were not detected."
Bryant said the court did hear the testimony (in closed session) of a single acoustics expert: Navy Lt. Bruce Rule, analysis officer for the SOSUS Evaluation Center in Norfolk, Va. He went on to become the lead acoustic analyst for the Office of Naval Intelligence.
Rule analyzed the acoustic data from the Thresher during its final dive. Not only did he discount a major flooding incident, Bryant said, he indicated that the sub's nuclear power plant shut down completely at a critical moment -- from an electrical failure -- when all the main coolant pumps stopped.
Rule said the NCOI softened its conclusion by stating that the Thresher’s main coolant pumps *"slowed or stopped,"* a phrase that would deflect blame from Adm. Hyman G. Rickover, who created the Navy’s nuclear propulsion program.
"In fact, I was aggressively confronted by a couple of Navy commanders who challenged my data," Rule said. "I don't recall their names, but I do remember their vicious -- and unsuccessful -- attempt to get me to change my testimony."
Because the court of inquiry didn't trust the SOSUS data, Bryant said, it relied heavily on the Skylark's underwater telephone communications log and testimony from crew members in defining the tragic sequence of events.
The NCOI interviewed many witnesses about underwater communications with the Thresher during its final dive, he said. Yet the Navy has released only a small portion of that testimony since 1963.
"We have no way of comparing the original words from witnesses with the language of the NCOI's final report on the Thresher's loss," Bryant wrote.
As far as Bryant is concerned, it is time for the Navy to release all remaining documents related to the Thresher disaster.
"The entire NCOI report, especially all of the testimony, should be made available to scholars and the public at large," he wrote. "That report is sitting in a federal records center, waiting for more than a decade to be transferred to the National Archives."
In other words, he argues the Navy should comply with the spirit of Executive Order 13526, issued in December 2009. It created the National Declassification Center to facilitate the timely and systematic release of classified material.
Bryant said that even a small gesture, such as releasing the unclassified Sea Trial Agenda, would demonstrate a concern for transparency and provide greater insight for historians.
"To date," he said, "formal requests for Thresher's Sea Trial Agenda have been repeatedly and systematically deferred by the Navy."
*For more information, see "Thresher Disaster: New Analysis" by Capt. Jim Bryant, USN (ret.), a research paper currently under review for publication by the Naval Engineers Journal. A 3,000-word article based on this paper is tentatively scheduled for publication in U.S. Naval Institute Proceedings magazine.*
*-- The opinions expressed in this op-ed are those of the author and do not necessarily reflect the views of Military.com. If you would like to submit your own commentary, please send your article to [email protected] for consideration.*
| true | true | true |
The doomed submarine's hull imploded at a depth of 2,400 feet with a force of about 22,000 pounds of TNT.
|
2024-10-12 00:00:00
|
2018-04-09 00:00:00
|
article
|
military.com
|
Military.com
| null | null |
|
15,508,380 |
https://interestingengineering.com/googles-ai-now-creates-code-better-than-its-creators
|
Google's AI Now Creates Code Better Than its Creators
|
Shelby Rogers
|
Google’s AI Now Creates Code Better Than its Creators
Google’s mysterious AutoML program develops neural networks of its own. The company recently announced that the AI had duplicated itself with a more efficient code.
Google’s automated machine learning system recently crafted machine-learning codes more efficient than the codes that built its own system. The (robot) student has now become the teacher. For the AutoML program, it seems as if humans are no longer a necessity.
The project originally started in May as artificial intelligence that would help Google create other AI systems. It was a matter of time before the system out-crafted the master craftsmen; AutoML was made for that.
“Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” said Google CEO Sundar Pichai last week. Pichai briefly touched on the AutoML program at a launch event for the new Pixel 2 smartphones and other gadgets. “We want to enable hundreds of thousands of developers to be able to do it.”
To get a sense of how ‘smart’ AutoML is, note that Google openly admits the system is more efficient than its team of 1,300 people tasked with creating AutoML. Granted, not everyone listed on Google’s research page specializes in AI, but it does include some of the smartest software engineers in the company. Alphabet, Google’s parent company, employs over 27,000 people in Research and Development.
Some of the program’s successes have made headlines. In addition to mastering its own code, AutoML broke a record by categorizing images by content. It scored an accuracy of 82 percent. AutoML also beat out a human-built system in marking the location of multiple objects in an image field. Those processes could be integral to the future of virtual reality and augmented reality.
However, nothing else is really known about AutoML. Unlike Alphabet’s DeepMind AI, AutoML doesn’t have a lot of information available about it other than brief statements from Pichai and other researchers. Google’s research team did dedicate a blog post on its website earlier this year. It described the intricacies of the AutoML system:
“In our approach (which we call “AutoML”), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round,” the researchers wrote. “We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually, the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.”
The Future for AIs Smarter than Humanity
AutoML’s system of neural networks and its improved efficiency could ease the traditional difficulties other developers have had in creating neural networks. It will become increasingly easier for AIs to develop new systems. But where does that leave humans? Ideally, humans would serve as ‘mediators’ or as checks and balances. Researchers are concerned that AIs pick up the unconscious biases of their creators, and a biased AI developing even more biased AIs would be a disaster. Thus, human software engineers would spend the time they normally spend on development refining these new AIs instead.
Ultimately, Pichai and the research team hope that AutoML could be used beyond Google.
“Going forward, we’ll work on careful analysis and testing of these machine-generated architectures to help refine our understanding of them,” the researchers said. “If we succeed, we think this can inspire new types of neural nets and make it possible for non-experts to create neural nets tailored to their particular needs, allowing machine learning to have a greater impact to everyone.”
| true | true | true |
Google's mysterious AutoML program develops neural networks of its own. The company recently announced that the AI had duplicated itself with a more efficient code.
|
2024-10-12 00:00:00
|
2017-10-18 00:00:00
|
article
|
interestingengineering.com
|
Interesting Engineering
| null | null |
|
27,358,420 |
https://www.ksl.com/article/50177514/large-north-american-meat-plants-stop-slaughter-after-jbs-cyberattack
|
US says ransomware attack on meatpacker JBS likely from Russia
|
Tom Polansek; Mark Weinraub; Reuters
|
CHICAGO (Reuters) — The White House said on Tuesday that Brazil's JBS SA has informed the U.S. government that a ransomware attack against the company that has disrupted meat production in North America and Australia originated from a criminal organization likely based in Russia.
JBS is the world's largest meatpacker and the incident caused its Australian operations to shut down on Monday and has stopped livestock slaughter at its plants in several U.S. states.
The ransomware attack follows one last month on Colonial Pipeline, the largest fuel pipeline in the United States, that crippled fuel delivery for several days in the U.S. Southeast.
White House spokeswoman Karine Jean-Pierre said the United States has contacted Russia's government about the matter and that the FBI is investigating.
"The White House has offered assistance to JBS and our team at the Department of Agriculture have spoken to their leadership several times in the last day," Jean-Pierre said.
"JBS notified the administration that the ransom demand came from a criminal organization likely based in Russia. The White House is engaging directly with the Russian government on this matter and delivering the message that responsible states do not harbor ransomware criminals," Jean-Pierre added.
If the outages continue, consumers could see higher meat prices during summer grilling season in the United States and meat exports could be disrupted at a time of strong demand from China.
JBS said it suspended all affected systems and notified authorities. It said its backup servers were not affected.
"On Sunday, May 30, JBS USA determined that it was the target of an organized cybersecurity attack, affecting some of the servers supporting its North American and Australian IT systems," the company said in a Monday statement.
"Resolution of the incident will take time, which may delay certain transactions with customers and suppliers," the company's statement said.
The company, which has its North American operations headquartered in Greeley, Colorado, controls about 20% of the slaughtering capacity for U.S. cattle and hogs, according to industry estimates.
Two kill and fabrication shifts were canceled at JBS's beef plant in Greeley due to the cyberattack, representatives of the United Food and Commercial Workers International Union Local 7 said in an email. JBS Beef in Cactus, Texas, also said on Facebook it would not run on Tuesday, updating an earlier post that had said the plant would run as normal.
JBS Canada said in a Facebook post that shifts had been canceled at its plant in Brooks, Alberta, on Monday and one shift so far had been canceled on Tuesday.
A representative in Sao Paulo said the company's Brazilian operations were not impacted.
(Reporting by Caroline Stauffer, Tom Polansek, Mark Weinraub in Chicago; Additional reporting by Ana Mano in Sao Paulo; Editing by Chizu Nomiyama, Will Dunham and Nick Zieminski)
| true | true | true |
JBS canceled shifts at large U.S. and Canadian meat plants on Tuesday after the company was hit by a cyberattack over the weekend, threatening to disrupt food supply chains and further inflate food prices.
|
2024-10-12 00:00:00
|
2021-06-01 00:00:00
|
article
|
ksl.com
|
ksl.com
| null | null |
|
10,522,524 |
https://freedom.press/blog/2015/11/us-officials-have-no-problem-leaking-classified-information-about-surveillance
|
US officials have no problem leaking classified information about surveillance—as long as it fits their narrative
|
Trevor Timm Executive Director
|
In the past few days there have been a flurry of stories about the Russian plane that crashed in the Sinai peninsula, which investigators reportedly think may have been caused by a bomb. Notably, anonymous US officials have been leaking to journalists that they believe ISIS is involved, and it’s a perfect illustration of the US government’s rank hypocrisy when it comes to the Edward Snowden disclosures.
Why do US officials allegedly have a “feeling” that ISIS was involved? According to multiple reports, US intelligence agencies have been intercepting ISIS communications discussing “something big” in the region last week.
CNN published a report on Tuesday based on anonymous sources that ISIS was likely responsible despite the fact that “no formal conclusion has been reached by the U.S. intelligence community and that U.S. officials haven't seen forensic evidence from the crash investigation”:
The signs pointing to ISIS, another U.S. official said, are partially based on monitoring of internal messages of the terrorist group. Those messages are separate from public ISIS claims of responsibility, that official said.
Huh, weren’t we told by Snowden’s critics that it was terrible and traitorous when sources tell journalists that the US has surveillance capabilities that, in addition to collecting information on millions of innocent people, also target alleged terrorists?
Just today, the Daily Beast reported this:
The U.S. intelligence community intercepted a signal from an ISIS-affiliated group in the Sinai Peninsula before a Russian jet crashed there on Saturday that warned of "something big in the area," two officials told The Daily Beast.
An adviser familiar with the U.S. intelligence said a call was made between members of Wilayat Sinai, which a U.S. official said Thursday was one of the "most potent" branches of ISIS. The conversation did not mention downing an airplane, but a defense official said comments could be tied to the crash. (emphasis mine)
Here, the leak is more specific: the little-known name of the subgroup targeted by surveillance (Wilayat Sinai), including their general location (Sinai) and the time of the interception (sometime before the crash). Now look at this tweet by NBC News, which drills down on the specificity even more:
And just in case anyone wants to pretend that every other surveillance capability of US intelligence is classified but somehow this investigation is not, the New York Times clarified in their article on Wednesday:
“There’s not one thing that we know what is saying to us, ‘This is a bomb,’ ” said one of the American officials, who like others spoke on the condition of anonymity because
they were discussing intelligence considered preliminary and classified. “It’s just all indications of this or that, and not clear right now.” (emphasis mine)
So many people criticized Edward Snowden for allegedly leaking information showing that the US targeted suspected terrorists in Pakistan and Yemen with their surveillance capabilities. Keep in mind, Snowden did not publish any of this information himself; it was the decision of major newspapers that found the information was newsworthy. It was also vague information that was months or years old, and in the vast majority of cases not the focal point of the stories—which was the information collected on millions of innocent people at the same time.
In this case, US officials have no problem at all leaking classified information about top secret surveillance capabilities which target terrorists, since it fits within their narrative. It’s also more specific information that’s more timely, involving an investigation that is still ongoing. Even the most virulent commentators who claim that Snowden was a traitor for leaking classified information had no problem publishing similarly leaked information about this potential terrorist attack.
We can almost be certain that there will be no leak investigation and no one will be punished—despite the fact that by the government’s own interpretation of the law, this is clearly illegal. (Not that we believe anyone should be prosecuted for leaking, but if the US is going to prosecute, they should do so uniformly and not cherry-pick who they want.)
This has happened over and over since the Snowden revelations started and we can only assume it’ll happen again. That’s because the US government’s policy on leaks has never really been about enforcing the law, or about leaks being so damaging to national security. It’s about controlling the story the media tells.
| true | true | true |
In the past few days there have been a flurry of stories about the Russian plane that crashed in the Sinai peninsula, which investigators reportedly think may have been caused by a bomb. Notably, anonymous US officials have been leaking to journalists that they believe ISIS is involved, and it’s …
|
2024-10-12 00:00:00
|
2015-11-06 00:00:00
|
article
|
freedom.press
|
Freedom of the Press
| null | null |
|
1,617,186 |
http://dealbook.blogs.nytimes.com/2010/08/19/intel-to-buy-mcafee-for-7-7-billion/?hp
|
DealBook
|
Lauren Hirsch
|
### Executives and Research Disagree About Hybrid Work. Why?
Companies like Amazon have required a return to the office five days a week despite findings showing benefits to employers that allow some remote days.
By
JPMorgan Chase, Wells Fargo and BlackRock reported strong quarterly results to kick off earnings season, but concerns linger about the strength of the consumer.
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren HirschEphrat Livni and
The superstorm is expected to inflict costly and lasting damage in Florida, as the Federal Reserve is already keeping an eye on upcoming inflation data.
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren Hirsch and
How Google Was Forced to Open Up
A federal judge ordered the tech giant to let rival app stores onto its Android smartphone platform, adding to its growing list of legal headaches .
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren Hirsch and
Two Tech Moguls Split on Trump
Elon Musk appeared onstage at a rally for the former president this weekend, but Silicon Valley was abuzz about Ben Horowitz’s U-turn on backing the Republican candidate.
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren Hirsch and
What an Escalating Middle East Conflict Could Mean for the Global Economy
The biggest risk is a sustained increase in oil prices.
By Sarah Kessler and
Jobs and Oil Prices Are Keeping Markets on Edge
Friday’s jobs report could bolster the view that the American economy is holding steady, but an oil price shock could undercut that sense of calm.
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren Hirsch and
The Union Leader Behind the Ports Strike
As president of the International Longshoremen’s Association, Harold Daggett is taking advantage of organized labor’s resurgence to drive a hard bargain.
By Andrew Ross SorkinRavi MattuBernhard WarnerSarah KesslerMichael J. de la MercedLauren Hirsch and
Elon Musk’s Mindset: ‘It’s a Weakness to Want to Be Liked’
In an interview, the tech billionaire slams advertisers for pulling back from X and discusses his emotional state.
By Andrew Ross SorkinEvan RobertsElaine ChenDan Powell and
Kamala Harris on Polling and Polarization
In an interview, the vice president discusses the extent to which she follows polls and why social division is like a virus.
By Andrew Ross SorkinEvan RobertsElaine ChenDan Powell and
Jamie Dimon on Why He Thinks We Are Living in One of the Most Dangerous Times
The JP Morgan chief on E.S.G., the dire state of the global economy and Elon Musk.
By Andrew Ross SorkinEvan RobertsElaine ChenDan Powell and
Bob Iger of Disney on Culture Wars and Streaming
The chief executive talks about returning to the company’s roots while adapting to changing times.
By Andrew Ross SorkinEvan RobertsElaine ChenDan Powell and
How Andrew Ross Sorkin Gets Business and World Leaders to Open Up
The many sides of Elon Musk, the challenges of political interviews, warming up guests beforehand — we take you behind the scenes of the DealBook Summit.
By Andrew Ross SorkinLulu Garcia-NavarroEvan RobertsElaine Chen and
At the DealBook Summit, Leaders Contend With an ‘Existential Moment’
Even leaders who usually display unrestrained confidence expressed anxiety about the state of the world.
By
The 2024 Election Will Be Unlike Any Other. Is the Media Ready?
Journalists are facing “deep fakes,” sagging trust, global unrest and an unprecedented Trump campaign being run “from the courthouse steps.”
By
Addressing the Tensions Between China and the Rest of the World
U.S.-China trade is at a record high, but businesses and governments are wrestling with how to balance national security and commercial interests.
By
In the Creator Economy, There Is Money to Be Made
People from all types of backgrounds have become stars — and it’s a trend that’s expected to get even bigger and make them even richer in years to come.
By
Silicon Valley Confronts a Grim New A.I. Metric
Where do you fall on the doom scale — is artificial intelligence a threat to humankind? And if so, how high is the risk?
By
A district court ruled that Sweden’s constitution prevented it from taking a side in a labor dispute between Tesla and local unions that has dragged on for 11 months.
By Melissa Eddy
Israel and Iran are fighting at a time when prices are under pressure because of weak demand in China and concerns about oversupply.
By Stanley Reed
A state law allowing high schoolers to earn from endorsements, if they commit to attending a public university in Missouri, has helped Mizzou attract blue-chip players.
By Joe Drape
Ryan Salame, an FTX executive, and Michelle Bond, a crypto policy advocate, were once a Washington power couple. Now they both face prison time.
By David Yaffe-Bellany
Among deal makers with fortunes at stake, the consequences of a Harris or a Trump win are increasingly murky.
By Rob Copeland
The Justice Department could push for the tech giant to sell off a business to end its lock on online search. But a move would be tough to pull off.
By Andrew Ross Sorkin, Ravi Mattu, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Lauren Hirsch and Ephrat Livni
Struggling landlords and developers are seeking leeway on coverage from their lenders — mostly in vain.
By Emily Flitter
European Union officials say the duties are meant to protect the region’s automakers from what they say are unfair trade practices in China.
By Melissa Eddy and Jenny Gross
Neither Benjamin Clymer, its founder, nor the Watches of Switzerland Group would disclose terms, but they stressed that coverage would continue to be independent.
By Victoria Gomelsky
The San Francisco company is gathering the billions its executives believe they will need to continue building new A.I. technology.
By Michael J. de la Merced and Mike Isaac
| true | true | true |
Making sense of the latest news in finance, markets and policy — and the power brokers behind the headlines.
|
2024-10-12 00:00:00
|
2017-05-24 00:00:00
|
article
|
nytimes.com
|
The New York Times
| null | null |
|
3,029,504 |
http://www.technologyreview.com/energy/38636/?ref=rss
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,566,843 |
http://www.macobserver.com/tmo/article/apple_to_eu_set_frand_licensing_standards/
|
Apple to EU: Set FRAND Licensing Standards
|
Jeff Gamet
|
Apple is hoping to convince the EU’s European Telecommunications Standards Institute to set guidelines for how patents that cover industry standards are licensed. In a letter to the organization, Apple’s legal team said that the lack of licensing standards has led to lawsuits that otherwise could’ve been prevented.
“It is apparent that our industry suffers from a lack of consistent adherence to FRAND principles in the cellular standards arena,” Apple’s intellectual property boss, Bruce Watrous, said.
Apple says it’s time for FRAND licensing standards
FRAND, or fair, reasonable and nondiscriminatory, refers to patents that cover necessary components in industry standard technologies. Companies that hold FRAND patents are expected to license the technology on fair and reasonable terms, although there aren’t any standards in place to clearly denote which patents should be licensed as FRAND, nor are there any guidelines to say how much patent holders should charge for those licenses.
Apple’s letter doesn’t stop with claiming there is a licensing issue. It goes on to suggest a solution. “Apple’s letter then moves on to propose a solution based on three specific principles: appropriate royalty rate; common royalty base; no injunction,” Florian Mueller of *Foss Patents* said.
The Mac and iPhone maker’s letter was penned in November 2011, although it came to light a day after word that Motorola Mobility is looking to take 2.25 percent of Apple’s iPhone sales in patent licensing fees. Apple, however, claims it is already covered by licensing fees Qualcomm pays for the chips used in the iPhone.
Apple is hoping to get Motorola Mobility’s licensing agreements with other phone makers into court so it can determine whether or not 2.25 percent is reasonable, since it could end up paying out more than US$1 billion in licensing fees based on 2011’s iPhone sales.
Motorola is asking for licensing fees based on the value of the finished product — a prospect that Apple isn’t happy with. Instead, Apple is suggesting licensing fees should be paid based on the value of the components the patents cover.
“This common base, as between two negotiating parties, should be no higher than the industry average sales price for a basic communications device that is capable of both voice and data communications,” Mr. Watrous said in Apple’s letter. In other words, paying licensing fees on the finished product isn’t reasonable when instead companies should pay based on individual parts and technologies.
The European Telecommunications Standards Institute has not commented on Apple’s letter.
| true | true | true |
Apple is hoping to convince the EU’s European Telecommunications Standards Institute to set guidelines for how patents that cover industry standards are licensed. In a letter to the organization, Apple’s legal team said that the lack of licensing standards has led to lawsuits that otherwise could’v
|
2024-10-12 00:00:00
|
2012-02-08 00:00:00
|
website
|
macobserver.com
|
The Mac Observer
| null | null |
|
37,244,203 |
https://www.sqlite.org/optoverview.html
|
1. Introduction
| null |
This document provides an overview of how the query planner and optimizer for SQLite works.
Given a single SQL statement, there might be dozens, hundreds, or even thousands of ways to implement that statement, depending on the complexity of the statement itself and of the underlying database schema. The task of the query planner is to select the algorithm that minimizes disk I/O and CPU overhead.
Additional background information is available in the indexing tutorial document. The Next Generation Query Planner document provides more detail on how the join order is chosen.
Prior to analysis, transformations are made to shift all join constraints into the WHERE clause, as described below.
SQLite makes no distinction between join constraints that occur in the WHERE clause and constraints in the ON clause of an inner join, since that distinction does not affect the outcome. However, there is a difference between ON clause constraints and WHERE clause constraints for outer joins. Therefore, when SQLite moves an ON clause constraint from an outer join over to the WHERE clause it adds special tags to the Abstract Syntax Tree (AST) to indicate that the constraint came from an outer join and from which outer join it came. There is no way to add those tags in pure SQL text. Hence, the SQL input must use ON clauses on outer joins. But in the internal AST, all constraints are part of the WHERE clause, because having everything in one place simplifies processing.
After all constraints have been shifted into the WHERE clause, The WHERE clause is broken up into conjuncts (hereafter called "terms"). In other words, the WHERE clause is broken up into pieces separated from the others by an AND operator. If the WHERE clause is composed of constraints separated by the OR operator (disjuncts) then the entire clause is considered to be a single "term" to which the OR-clause optimization is applied.
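For example, a clause such as
... WHERE a=5 AND b>10
is analyzed as two separate terms, while
... WHERE a=5 OR b>10
is treated as a single term and handled by the OR-clause optimization described later in this document.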
All terms of the WHERE clause are analyzed to see if they can be satisfied using indexes. To be usable by an index a term must usually be of one of the following forms:
column = expression
column IS expression
column > expression
column >= expression
column < expression
column <= expression
expression = column
expression IS column
expression > column
expression >= column
expression < column
expression <= column
column IN (expression-list)
column IN (subquery)
column IS NULL
column LIKE pattern
column GLOB pattern
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms.
The initial columns of the index must be used with
the **=** or **IN** or **IS** operators.
The right-most column that is used can employ inequalities.
For the right-most
column of an index that is used, there can be up to two inequalities
that must sandwich the allowed values of the column between two extremes.
It is not necessary for every column of an index to appear in a WHERE clause term in order for that index to be used. However, there cannot be gaps in the columns of the index that are used. Thus for the example index above, if there is no WHERE clause term that constrains column c, then terms that constrain columns a and b can be used with the index but not terms that constrain columns d through z. Similarly, index columns will not normally be used (for indexing purposes) if they are to the right of a column that is constrained only by inequalities. (See the skip-scan optimization below for the exception.)
In the case of indexes on expressions, whenever the word "column" is used in the foregoing text, one can substitute "indexed expression" (meaning a copy of the expression that appears in the CREATE INDEX statement) and everything will work the same.
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,2,3) AND c IS NULL AND d='hello'
The first four columns a, b, c, and d of the index would be usable since those four columns form a prefix of the index and are all bound by equality constraints.
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,2,3) AND c>12 AND d='hello'
Only columns a, b, and c of the index would be usable. The d column would not be usable because it occurs to the right of c and c is constrained only by inequalities.
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,2,3) AND d='hello'
Only columns a and b of the index would be usable. The d column would not be usable because column c is not constrained and there can be no gaps in the set of columns that are usable by the index.
For the index above and WHERE clause like this:
... WHERE b IN (1,2,3) AND c NOT NULL AND d='hello'
The index is not usable at all because the left-most column of the index (column "a") is not constrained. Assuming there are no other indexes, the query above would result in a full table scan.
For the index above and WHERE clause like this:
... WHERE a=5 OR b IN (1,2,3) OR c NOT NULL OR d='hello'
The index is not usable because the WHERE clause terms are connected by OR instead of AND. This query would result in a full table scan. However, if three additional indexes were added that contained columns b, c, and d as their left-most columns, then the OR-clause optimization might apply.
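As a minimal sketch of that last point (the index names here are illustrative, not part of the original example schema), the three additional indexes could look like this:
CREATE INDEX idx_ex1_b ON ex1(b);
CREATE INDEX idx_ex1_c ON ex1(c);
CREATE INDEX idx_ex1_d ON ex1(d);
With those in place, each OR-connected term names the left-most column of some index, so the OR-clause optimization described later in this document might then apply.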
If a term of the WHERE clause is of the following form:
expr1 BETWEEN expr2 AND expr3
Then two "virtual" terms are added as follows:
expr1 >= expr2 AND expr1 <= expr3
Virtual terms are used for analysis only and do not cause any byte-code
to be generated.
If both virtual terms end up being used as constraints on an index,
then the original BETWEEN term is omitted and the corresponding test
is not performed on input rows.
Thus if the BETWEEN term ends up being used as an index constraint
no tests are ever performed on that term.
On the other hand, the
virtual terms themselves never cause tests to be performed on
input rows.
Thus if the BETWEEN term is not used as an index constraint and
instead must be used to test input rows, the *expr1* expression is
only evaluated once.
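As an illustration consistent with the rule above, using the ex1 table and idx_ex1 index from earlier, a WHERE clause such as
... WHERE a=5 AND b BETWEEN 10 AND 20
gains the virtual terms b>=10 AND b<=20, so the equality on column a and the range on column b can both be satisfied by idx_ex1, and the original BETWEEN test is then skipped for rows found through the index.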
WHERE clause constraints that are connected by OR instead of AND can be handled in two different ways.
If a term consists of multiple subterms containing a common column name and separated by OR, like this:
column = expr1 OR column = expr2 OR column = expr3 OR ...
Then that term is rewritten as follows:
column IN (expr1, expr2, expr3, ...)
The rewritten term then might go on to constrain an index using the
normal rules for **IN** operators. Note that *column* must be
the same column in every OR-connected subterm,
although the column can occur on either the left or the right side of
the **=** operator.
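For example, under this rule a clause written as
... WHERE a=5 OR a=7 OR a=9
is treated as if it were
... WHERE a IN (5,7,9)
and can therefore constrain the left-most column of the idx_ex1 index described earlier.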
If and only if the previously described conversion of OR to an IN operator does not work, the second OR-clause optimization is attempted. Suppose the OR clause consists of multiple subterms as follows:
expr1 OR expr2 OR expr3
Individual subterms might be a single comparison expression like
**a=5** or **x>y** or they can be
LIKE or BETWEEN expressions, or a subterm
can be a parenthesized list of AND-connected sub-subterms.
Each subterm is analyzed as if it were itself the entire WHERE clause
in order to see if the subterm is indexable by itself.
If __every__ subterm of an OR clause is separately indexable
then the OR clause might be coded such that a separate index is used
to evaluate each term of the OR clause. One way to think about how
SQLite uses separate indexes for each OR clause term is to imagine
that the WHERE clause were rewritten as follows:
rowid IN (SELECT rowid FROM table WHERE expr1
          UNION SELECT rowid FROM table WHERE expr2
          UNION SELECT rowid FROM table WHERE expr3)
The rewritten expression above is conceptual; WHERE clauses containing OR are not really rewritten this way. The actual implementation of the OR clause uses a mechanism that is more efficient and that works even for WITHOUT ROWID tables or tables in which the "rowid" is inaccessible. Nevertheless, the essence of the implementation is captured by the statement above: Separate indexes are used to find candidate result rows from each OR clause term and the final result is the union of those rows.
Note that in most cases, SQLite will only use a single index for each table in the FROM clause of a query. The second OR-clause optimization described here is the exception to that rule. With an OR-clause, a different index might be used for each subterm in the OR-clause.
For any given query, the fact that the OR-clause optimization described here can be used does not guarantee that it will be used. SQLite uses a cost-based query planner that estimates the CPU and disk I/O costs of various competing query plans and chooses the plan that it thinks will be the fastest. If there are many OR terms in the WHERE clause or if some of the indexes on individual OR-clause subterms are not very selective, then SQLite might decide that it is faster to use a different query algorithm, or even a full-table scan. Application developers can use the EXPLAIN QUERY PLAN prefix on a statement to get a high-level overview of the chosen query strategy.
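For example, prefixing a statement with EXPLAIN QUERY PLAN (shown here against the ex1 example table used earlier) reports the strategy the planner actually selected:
EXPLAIN QUERY PLAN SELECT * FROM ex1 WHERE a=5 AND b=7;
The output contains one row per table access, showing whether a full-table SCAN or a SEARCH using a named index was chosen. The same prefix can be used on OR-heavy queries to check whether the OR-clause optimization or some other plan was picked.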
A WHERE-clause term that uses the LIKE or GLOB operator can sometimes be used with an index to do a range search, almost as if the LIKE or GLOB were an alternative to a BETWEEN operator. There are many conditions on this optimization:
The LIKE operator has two modes that can be set by a pragma. The default mode is for LIKE comparisons to be insensitive to differences of case for latin1 characters. Thus, by default, the following expression is true:
'a' LIKE 'A'
If the case_sensitive_like pragma is enabled as follows:
PRAGMA case_sensitive_like=ON;
Then the LIKE operator pays attention to case and the example above would evaluate to false. Note that case insensitivity only applies to latin1 characters - basically the upper and lower case letters of English in the lower 127 byte codes of ASCII. International character sets are case sensitive in SQLite unless an application-defined collating sequence and like() SQL function are provided that take non-ASCII characters into account. If an application-defined collating sequence and/or like() SQL function are provided, the LIKE optimization described here will never be taken.
The LIKE operator is case insensitive by default because this is what the SQL standard requires. You can change the default behavior at compile time by using the SQLITE_CASE_SENSITIVE_LIKE command-line option to the compiler.
The LIKE optimization might occur if the column named on the left of the operator is indexed using the built-in BINARY collating sequence and case_sensitive_like is turned on. Or the optimization might occur if the column is indexed using the built-in NOCASE collating sequence and the case_sensitive_like mode is off. These are the only two combinations under which LIKE operators will be optimized.
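As a minimal sketch of the second combination (the table and index names here are hypothetical, not taken from this document), a column collated with NOCASE can be indexed and used with the default case-insensitive LIKE:
CREATE TABLE t(name TEXT COLLATE NOCASE);
CREATE INDEX idx_t_name ON t(name);
-- With case_sensitive_like left at its default (off), a prefix pattern such as
-- name LIKE 'abc%' might be optimized into a range search on idx_t_name.
Under the first combination, the column would instead use the default BINARY collation and case_sensitive_like would be turned on.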
The GLOB operator is always case sensitive. The column on the left side of the GLOB operator must always use the built-in BINARY collating sequence or no attempt will be made to optimize that operator with indexes.
The LIKE optimization will only be attempted if the right-hand side of the GLOB or LIKE operator is either a string literal or a parameter that has been bound to a string literal. The string literal must not begin with a wildcard; if the right-hand side begins with a wildcard character then this optimization is not attempted. If the right-hand side is a parameter that is bound to a string, then this optimization is only attempted if the prepared statement containing the expression was compiled with sqlite3_prepare_v2() or sqlite3_prepare16_v2(). The LIKE optimization is not attempted if the right-hand side is a parameter and the statement was prepared using sqlite3_prepare() or sqlite3_prepare16().
Suppose the initial sequence of non-wildcard characters on the right-hand
side of the LIKE or GLOB operator is *x*. We are using a single
character to denote this non-wildcard prefix but the reader should
understand that the prefix can consist of more than 1 character.
Let *y* be the smallest string that is the same length as *x* but which
compares greater than *x*. For example, if *x* is
`'hello'` then
*y* would be `'hellp'`.
The LIKE and GLOB optimizations consist of adding two virtual terms
like this:
column >= x AND column < y
Under most circumstances, the original LIKE or GLOB operator is still
tested against each input row even if the virtual terms are used to
constrain an index. This is because we do not know what additional
constraints may be imposed by characters to the right
of the *x* prefix. However, if there is only a single
global wildcard to the right of *x*, then the original LIKE or
GLOB test is disabled.
In other words, if the pattern is like this:
column LIKE x%
column GLOB x*
then the original LIKE or GLOB tests are disabled when the virtual terms constrain an index because in that case we know that all of the rows selected by the index will pass the LIKE or GLOB test.
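As an illustration of this rule, assuming a column indexed under the built-in BINARY collating sequence with case_sensitive_like turned on, a term such as
... WHERE column LIKE 'hello%'
is treated as the virtual range constraint column>='hello' AND column<'hellp', and because the only wildcard is the trailing one, the original LIKE test is skipped for rows found through the index.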
Note that when the right-hand side of a LIKE or GLOB operator is a parameter and the statement is prepared using sqlite3_prepare_v2() or sqlite3_prepare16_v2() then the statement is automatically reparsed and recompiled on the first sqlite3_step() call of each run if the binding to the right-hand side parameter has changed since the previous run. This reparse and recompile is essentially the same action that occurs following a schema change. The recompile is necessary so that the query planner can examine the new value bound to the right-hand side of the LIKE or GLOB operator and determine whether or not to employ the optimization described above.
The general rule is that indexes are only useful if there are WHERE-clause constraints on the left-most columns of the index. However, in some cases, SQLite is able to use an index even if the first few columns of the index are omitted from the WHERE clause but later columns are included.
Consider a table such as the following:
```sql
CREATE TABLE people(
  name TEXT PRIMARY KEY,
  role TEXT NOT NULL,
  height INT NOT NULL, -- in cm
  CHECK( role IN ('student','teacher') )
);
CREATE INDEX people_idx1 ON people(role, height);
```
The people table has one entry for each person in a large organization. Each person is either a "student" or a "teacher", as determined by the "role" field. The table also records the height in centimeters of each person. The role and height are indexed. Notice that the left-most column of the index is not very selective - it only contains two possible values.
Now consider a query to find the names of everyone in the organization that is 180cm tall or taller:
SELECT name FROM people WHERE height>=180;
Because the left-most column of the index does not appear in the WHERE clause of the query, one is tempted to conclude that the index is not usable here. However, SQLite is able to use the index. Conceptually, SQLite uses the index as if the query were more like the following:
SELECT name FROM people WHERE role IN (SELECT DISTINCT role FROM people) AND height>=180;
Or this:
SELECT name FROM people WHERE role='teacher' AND height>=180 UNION ALL SELECT name FROM people WHERE role='student' AND height>=180;
The alternative query formulations shown above are conceptual only. SQLite does not really transform the query. The actual query plan is like this: SQLite locates the first possible value for "role", which it can do by rewinding the "people_idx1" index to the beginning and reading the first record. SQLite stores this first "role" value in an internal variable that we will here call "$role". Then SQLite runs a query like: "SELECT name FROM people WHERE role=$role AND height>=180". This query has an equality constraint on the left-most column of the index and so the index can be used to resolve that query. Once that query is finished, SQLite then uses the "people_idx1" index to locate the next value of the "role" column, using code that is logically similar to "SELECT role FROM people WHERE role>$role LIMIT 1". This new "role" value overwrites the $role variable, and the process repeats until all possible values for "role" have been examined.
We call this kind of index usage a "skip-scan" because the database engine is basically doing a full scan of the index but it optimizes the scan (making it less than "full") by occasionally skipping ahead to the next candidate value.
SQLite might use a skip-scan on an index if it knows that the first one or more columns contain many duplicate values. If there are too few duplicates in the left-most columns of the index, then it would be faster to simply step ahead to the next index entry, and thus do a full table scan, than to do a binary search on the index to locate the next left-column value.
The only way that SQLite can know that there are many duplicates in the left-most columns of an index is if the ANALYZE command has been run on the database. Without the results of ANALYZE, SQLite has to guess at the "shape" of the data in the table, and the default guess is that there are an average of 10 duplicates for every value in the left-most column of the index. Skip-scan only becomes profitable (it only gets to be faster than a full table scan) when the number of duplicates is about 18 or more. Hence, a skip-scan is never used on a database that has not been analyzed.
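As a rough sketch of how this plays out with the people table above (assuming it has been populated with many rows per role), running ANALYZE supplies the duplicate counts that make a skip-scan eligible, and the chosen plan can then be inspected:
```sql
-- Gather the statistics the planner needs before it will consider a skip-scan.
ANALYZE;

-- With only two distinct "role" values and many rows for each, the planner
-- may now pick a skip-scan on people_idx1 even though "role" is unconstrained.
-- A skip-scan plan typically reads something like:
--   SEARCH people USING INDEX people_idx1 (ANY(role) AND height>?)
EXPLAIN QUERY PLAN
SELECT name FROM people WHERE height>=180;
```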
SQLite implements joins as nested loops. The default order of the nested loops in a join is for the left-most table in the FROM clause to form the outer loop and the right-most table to form the inner loop. However, SQLite will nest the loops in a different order if doing so will help it to select better indexes.
Inner joins can be freely reordered. However outer joins are neither commutative nor associative and hence will not be reordered. Inner joins to the left and right of an outer join might be reordered if the optimizer thinks that is advantageous but outer joins are always evaluated in the order in which they occur.
SQLite treats the CROSS JOIN operator specially. The CROSS JOIN operator is commutative, in theory. However, SQLite chooses to never reorder tables in a CROSS JOIN. This provides a mechanism by which the programmer can force SQLite to choose a particular loop nesting order.
When selecting the order of tables in a join, SQLite uses an efficient polynomial-time graph algorithm described in the Next Generation Query Planner document. Because of this, SQLite is able to plan queries with 50- or 60-way joins in a matter of microseconds.
Join reordering is automatic and usually works well enough that programmers do not have to think about it, especially if ANALYZE has been used to gather statistics about the available indexes, though occasionally some hints from the programmer are needed. Consider, for example, the following schema:
CREATE TABLE node( id INTEGER PRIMARY KEY, name TEXT ); CREATE INDEX node_idx ON node(name); CREATE TABLE edge( orig INTEGER REFERENCES node, dest INTEGER REFERENCES node, PRIMARY KEY(orig, dest) ); CREATE INDEX edge_idx ON edge(dest,orig);
The schema above defines a directed graph with the ability to store a name at each node. Now consider a query against this schema:
SELECT * FROM edge AS e, node AS n1, node AS n2 WHERE n1.name = 'alice' AND n2.name = 'bob' AND e.orig = n1.id AND e.dest = n2.id;
This query asks for all information about edges that go from nodes labeled "alice" to nodes labeled "bob". The query optimizer in SQLite has basically two choices on how to implement this query. (There are actually six different choices, but we will only consider two of them here.) The pseudocode below demonstrates these two choices.
Option 1:
```
foreach n1 where n1.name='alice' do:
  foreach n2 where n2.name='bob' do:
    foreach e where e.orig=n1.id and e.dest=n2.id
      return n1.*, n2.*, e.*
    end
  end
end
```
Option 2:
```
foreach n1 where n1.name='alice' do:
  foreach e where e.orig=n1.id do:
    foreach n2 where n2.id=e.dest and n2.name='bob' do:
      return n1.*, n2.*, e.*
    end
  end
end
```
The same indexes are used to speed up every loop in both implementation options. The only difference in these two query plans is the order in which the loops are nested.
So which query plan is better? It turns out that the answer depends on what kind of data is found in the node and edge tables.
Let the number of alice nodes be M and the number of bob nodes be N. Consider two scenarios. In the first scenario, M and N are both 2 but there are thousands of edges on each node. In this case, option 1 is preferred. With option 1, the inner loop checks for the existence of an edge between a pair of nodes and outputs the result if found. Because there are only 2 alice and bob nodes each, the inner loop only has to run four times and the query is very quick. Option 2 would take much longer here. The outer loop of option 2 only executes twice, but because there are a large number of edges leaving each alice node, the middle loop has to iterate many thousands of times. It will be much slower. So in the first scenario, we prefer to use option 1.
Now consider the case where M and N are both 3500. Alice nodes are abundant. This time suppose each of these nodes is connected by only one or two edges. Now option 2 is preferred. With option 2, the outer loop still has to run 3500 times, but the middle loop only runs once or twice for each outer loop and the inner loop will only run once for each middle loop, if at all. So the total number of iterations of the inner loop is around 7000. Option 1, on the other hand, has to run both its outer loop and its middle loop 3500 times each, resulting in 12 million iterations of the middle loop. Thus in the second scenario, option 2 is nearly 2000 times faster than option 1.
So you can see that depending on how the data is structured in the table, either query plan 1 or query plan 2 might be better. Which plan does SQLite choose by default? As of version 3.6.18, without running ANALYZE, SQLite will choose option 2. If the ANALYZE command is run in order to gather statistics, a different choice might be made if the statistics indicate that the alternative is likely to run faster.
SQLite almost always picks the best join order automatically. It is very rare that a developer needs to intervene to give the query planner hints about the best join order. The best policy is to make use of PRAGMA optimize to ensure that the query planner has access to up-to-date statistics on the shape of the data in the database.
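A minimal sketch of that policy using only the documented commands:
```sql
-- Cheap and incremental: run periodically, or just before closing a
-- long-lived connection, so statistics stay reasonably current.
PRAGMA optimize;

-- After a large bulk load or schema change, a full ANALYZE rebuilds all of
-- the sqlite_stat tables in one pass.
ANALYZE;
```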
This section describes techniques by which developers can control the join order in SQLite, to work around any performance problems that may arise. However, the use of these techniques is not recommended, except as a last resort.
If you do encounter a situation where SQLite is picking a suboptimal join order even after running PRAGMA optimize, please report your situation on the SQLite Community Forum so that the SQLite maintainers can make new refinements to the query planner such that manual intervention is not required.
SQLite provides the ability for advanced programmers to exercise control over the query plan chosen by the optimizer. One method for doing this is to fudge the ANALYZE results in the sqlite_stat1 table.
Programmers can force SQLite to use a particular loop nesting order for a join by using the CROSS JOIN operator instead of just JOIN, INNER JOIN, NATURAL JOIN, or a "," join. Though CROSS JOINs are commutative in theory, SQLite chooses to never reorder the tables in a CROSS JOIN. Hence, the left table of a CROSS JOIN will always be in an outer loop relative to the right table.
In the following query, the optimizer is free to reorder the tables of FROM clause any way it sees fit:
SELECT * FROM node AS n1, edge AS e, node AS n2 WHERE n1.name = 'alice' AND n2.name = 'bob' AND e.orig = n1.id AND e.dest = n2.id;
In the following logically equivalent formulation of the same query, the substitution of "CROSS JOIN" for the "," means that the order of tables must be N1, E, N2.
SELECT * FROM node AS n1 CROSS JOIN edge AS e CROSS JOIN node AS n2 WHERE n1.name = 'alice' AND n2.name = 'bob' AND e.orig = n1.id AND e.dest = n2.id;
In the latter query, the query plan must be option 2. Note that you must use the keyword "CROSS" in order to disable the table reordering optimization; INNER JOIN, NATURAL JOIN, JOIN, and other similar combinations work just like a comma join in that the optimizer is free to reorder tables as it sees fit. (Table reordering is also disabled on an outer join, but that is because outer joins are not associative or commutative. Reordering tables in OUTER JOIN changes the result.)
See "The Fossil NGQP Upgrade Case Study" for another real-world example of using CROSS JOIN to manually control the nesting order of a join. The query planner checklist found later in the same document provides further guidance on manual control of the query planner.
Each table in the FROM clause of a query can use at most one index (except when the OR-clause optimization comes into play) and SQLite strives to use at least one index on each table. Sometimes, two or more indexes might be candidates for use on a single table. For example:
CREATE TABLE ex2(x,y,z); CREATE INDEX ex2i1 ON ex2(x); CREATE INDEX ex2i2 ON ex2(y); SELECT z FROM ex2 WHERE x=5 AND y=6;
For the SELECT statement above, the optimizer can use the ex2i1 index to lookup rows of ex2 that contain x=5 and then test each row against the y=6 term. Or it can use the ex2i2 index to lookup rows of ex2 that contain y=6 then test each of those rows against the x=5 term.
When faced with a choice of two or more indexes, SQLite tries to estimate the total amount of work needed to perform the query using each option. It then selects the option that gives the least estimated work.
To help the optimizer get a more accurate estimate of the work involved
in using various indexes, the user may optionally run the ANALYZE command.
The ANALYZE command scans all indexes of the database where there might be a choice between two or more indexes and gathers statistics on the selectiveness of those indexes. The statistics gathered by this scan are stored in special database tables whose names all begin with "**sqlite_stat**".
The content of these tables is not updated as the database
changes so after making significant changes it might be prudent to
rerun ANALYZE.
The results of an ANALYZE command are only available to database connections
that are opened after the ANALYZE command completes.
The various **sqlite_stat***N* tables contain information on how
selective the various indexes are. For example, the sqlite_stat1
table might indicate that an equality constraint on column x reduces the
search space to 10 rows on average, whereas an equality constraint on
column y reduces the search space to 3 rows on average. In that case,
SQLite would prefer to use index ex2i2 since that index is more selective.
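The statistics behind that decision can be inspected directly. A small sketch follows; the numeric values in the comments are invented for illustration:
```sql
-- Populate the statistics tables, then look at the per-index entries for ex2.
ANALYZE;
SELECT tbl, idx, stat FROM sqlite_stat1 WHERE tbl='ex2';
-- Example output (values invented):
--   ex2 | ex2i1 | 10000 10
--   ex2 | ex2i2 | 10000 3
-- The first number in "stat" is the estimated row count; each later number is
-- the average number of rows matched by an equality constraint on the indexed
-- column, so here ex2i2 (about 3 rows) is the more selective index.
```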
*Note: Disqualifying WHERE clause terms this way is not recommended.
This is a work-around.
Only do this as a last resort to get the performance you need. If you
find a situation where this work-around is necessary, please report the
situation on the SQLite Community Forum so
that the SQLite maintainers can try to improve the query planner such
that the work-around is no longer required for your situation.*
Terms of the WHERE clause can be manually disqualified for use with
indexes by prepending a unary **+** operator to the column name. The
unary **+** is a no-op and will not generate any byte code in the prepared
statement.
However, the unary **+** operator will prevent the term from
constraining an index.
So, in the example above, if the query were rewritten as:
SELECT z FROM ex2 WHERE +x=5 AND y=6;
The **+** operator on the **x** column will prevent that term from
constraining an index. This would force the use of the ex2i2 index.
Note that the unary **+** operator also removes
type affinity from
an expression, and in some cases this can cause subtle changes in
the meaning of an expression.
In the example above,
if column **x** has TEXT affinity
then the comparison "x=5" will be done as text. The **+** operator
removes the affinity. So the comparison "**+x=5**" will compare the text
in column **x** with the numeric value 5 and will always be false.
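A small sketch of that pitfall, using an invented table so the statements can be run as-is:
```sql
-- x is declared TEXT, so it has TEXT affinity; the value is stored as '5'.
CREATE TABLE aff_demo(x TEXT, y, z);
INSERT INTO aff_demo VALUES('5', 6, 'row1');

SELECT z FROM aff_demo WHERE  x = 5;  -- TEXT affinity converts 5 to '5': returns row1
SELECT z FROM aff_demo WHERE +x = 5;  -- affinity removed: text vs. number, returns nothing
```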
Consider a slightly different scenario:
CREATE TABLE ex2(x,y,z); CREATE INDEX ex2i1 ON ex2(x); CREATE INDEX ex2i2 ON ex2(y); SELECT z FROM ex2 WHERE x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
Further suppose that column x contains values spread out between 0 and 1,000,000 and column y contains values that span between 0 and 1,000. In that scenario, the range constraint on column x should reduce the search space by a factor of 10,000 whereas the range constraint on column y should reduce the search space by a factor of only 10. So the ex2i1 index should be preferred.
SQLite will make this determination, but only if it has been compiled with SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4. The SQLITE_ENABLE_STAT3 and SQLITE_ENABLE_STAT4 options causes the ANALYZE command to collect a histogram of column content in the sqlite_stat3 or sqlite_stat4 tables and to use this histogram to make a better guess at the best query to use for range constraints such as the above. The main difference between STAT3 and STAT4 is that STAT3 records histogram data for only the left-most column of an index whereas STAT4 records histogram data for all columns of an index. For single-column indexes, STAT3 and STAT4 work the same.
The histogram data is only useful if the right-hand side of the constraint is a simple compile-time constant or parameter and not an expression.
Another limitation of the histogram data is that it only applies to the left-most column on an index. Consider this scenario:
```sql
CREATE TABLE ex3(w,x,y,z);
CREATE INDEX ex3i1 ON ex3(w, x);
CREATE INDEX ex3i2 ON ex3(w, y);
SELECT z FROM ex3 WHERE w=5 AND x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
```
Here the inequalities are on columns x and y, which are not the left-most index columns. Hence, the histogram data, which is only collected for the left-most column of each index, is useless in helping to choose between the range constraints on columns x and y.
When doing an indexed lookup of a row, the usual procedure is to do a binary search on the index to find the index entry, then extract the rowid from the index and use that rowid to do a binary search on the original table. Thus a typical indexed lookup involves two binary searches. If, however, all columns that were to be fetched from the table are already available in the index itself, SQLite will use the values contained in the index and will never look up the original table row. This saves one binary search for each row and can make many queries run twice as fast.
When an index contains all of the data needed for a query and when the original table never needs to be consulted, we call that index a "covering index".
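Continuing the ex2 example from above, here is a sketch (the index name and the exact plan wording are assumptions) of how an index becomes a covering index for that query:
```sql
-- Every column the query touches (x, y, z) is stored in this index, so the
-- ex2 table itself never needs to be consulted.
CREATE INDEX ex2i3 ON ex2(x, y, z);

-- The plan typically reports something like:
--   SEARCH ex2 USING COVERING INDEX ex2i3 (x=? AND y=?)
EXPLAIN QUERY PLAN
SELECT z FROM ex2 WHERE x=5 AND y=6;
```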
SQLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, SQLite does the same cost analysis described above and chooses the index that it believes will result in the fastest answer.
SQLite will also attempt to use indexes to help satisfy GROUP BY clauses and the DISTINCT keyword. If the nested loops of the join can be arranged such that rows that are equivalent for the GROUP BY or for the DISTINCT are consecutive, then the GROUP BY or DISTINCT logic can determine if the current row is part of the same group or if the current row is distinct simply by comparing the current row to the previous row. This can be much faster than the alternative of comparing each row to all prior rows.
If a query contains an ORDER BY clause with multiple terms, it might be that SQLite can use indexes to cause rows to come out in the order of some prefix of the terms in the ORDER BY but that later terms in the ORDER BY are not satisfied. In that case, SQLite does block sorting. Suppose the ORDER BY clause has four terms and the natural order of the query results in rows appearing in order of the first two terms. As each row is output by the query engine and enters the sorter, the outputs in the current row corresponding to the first two terms of the ORDER BY are compared against the previous row. If they have changed, the current sort is finished and output and a new sort is started. This results in a slightly faster sort. Even bigger advantages are that many fewer rows need to be held in memory, reducing memory requirements, and outputs can begin to appear before the core query has run to completion.
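A brief sketch using the people table from earlier: an index that matches both the WHERE constraint and the ORDER BY lets rows come out of the index already sorted, so no separate sorting pass is needed. (The index name is invented for this example.)
```sql
-- Ordered first by height, so a range scan on height>=180 also delivers
-- rows in ORDER BY height order.
CREATE INDEX people_idx2 ON people(height, name);

-- When the index satisfies both steps, the plan contains no
-- "USE TEMP B-TREE FOR ORDER BY" line.
EXPLAIN QUERY PLAN
SELECT name FROM people WHERE height>=180 ORDER BY height;
```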
When a subquery occurs in the FROM clause of a SELECT, the simplest behavior is to evaluate the subquery into a transient table, then run the outer SELECT against the transient table. Such a plan can be suboptimal since the transient table will not have any indexes and the outer query (which is likely a join) will be forced to either do a full table scan on the transient table or else construct a query-time index on the transient table, neither of which is likely to be particularly fast.
To overcome this problem, SQLite attempts to flatten subqueries in the FROM clause of a SELECT. This involves inserting the FROM clause of the subquery into the FROM clause of the outer query and rewriting expressions in the outer query that refer to the result set of the subquery. For example:
SELECT t1.a, t2.b FROM t2, (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
Would be rewritten using query flattening as:
SELECT t1.x+t1.y AS a, t2.b FROM t2, t1 WHERE z<100 AND a>5
There is a long list of conditions that must all be met in order for query flattening to occur; some of those constraints are now obsolete, but they are retained in the documentation to preserve the numbering of the other constraints.
Casual readers are not expected to understand all of these rules. The point here is that flattening rules are subtle and complex. There have been multiple bugs over the years caused by over-aggressive query flattening. On the other hand, performance of complex queries and/or queries involving views tends to suffer if query flattening is more conservative.
Query flattening is an important optimization when views are used as each use of a view is translated into a subquery.
SQLite implements FROM-clause subqueries in one of three ways: by flattening the subquery into the outer query, by evaluating the subquery into a transient table and running the outer query against that table, or by implementing the subquery as a co-routine.
This section describes the third technique: implementing the subquery as a co-routine.
A co-routine is like a subroutine in that it runs in the same thread as the caller and eventually returns control back to the caller. The difference is that a co-routine also has the ability to return before it has finished, and then resume where it left off the next time it is called.
When a subquery is implemented as a co-routine, byte-code is generated to implement the subquery as if it were a standalone query, except instead of returning rows of results back to the application, the co-routine yields control back to the caller after each row is computed. The caller can then use that one computed row as part of its computation, then invoke the co-routine again when it is ready for the next row.
Co-routines are better than storing the complete result set of the subquery in a transient table because co-routines use less memory. With a co-routine, only a single row of the result needs to be remembered, whereas all rows of the result must be stored for a transient table. Also, because the co-routine does not need to run to completion before the outer query begins its work, the first rows of output can appear much sooner, and if the overall query is abandoned before it has finished, less work is done overall.
On the other hand, if the result of the subquery must be scanned multiple times (because, for example, it is just one table in a join) then it is better to use a transient table to remember the entire result of the subquery, in order to avoid computing the subquery more than once.
As of SQLite version 3.21.0 (2017-10-24), the query planner will always prefer to use a co-routine to implement FROM-clause subqueries that contain an ORDER BY clause and that are not part of a join when the result set of the outer query is "complex". This feature allows applications to shift expensive computations from before the sorter to after the sorter, which can result in faster operation. For example, consider this query:
SELECT expensive_function(a) FROM tab ORDER BY date DESC LIMIT 5;
The goal of this query is to compute some value for the five most recent entries in the table. In the query above, the "expensive_function()" is invoked prior to the sort and thus is invoked on every row of the table, even rows that are ultimately omitted due to the LIMIT clause. A co-routine can be used to work around this:
SELECT expensive_function(a) FROM ( SELECT a FROM tab ORDER BY date DESC LIMIT 5 );
In the revised query, the subquery implemented by a co-routine computes the five most recent values for "a". Those five values are passed from the co-routine up into the outer query where the "expensive_function()" is invoked on only the specific rows that the application cares about.
The query planner in future versions of SQLite might grow smart enough to make transformations such as the above automatically, in both directions. That is to say, future versions of SQLite might transform queries of the first form into the second, or queries written the second way into the first. As of SQLite version 3.22.0 (2018-01-22), the query planner will flatten the subquery if the outer query does not make use of any user-defined functions or subqueries in its result set. For the examples shown above, however, SQLite implements each of the queries as written.
Queries that contain a single MIN() or MAX() aggregate function whose argument is the left-most column of an index might be satisfied by doing a single index lookup rather than by scanning the entire table. Examples:
SELECT MIN(x) FROM table; SELECT MAX(x)+1 FROM table;
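For this optimization to apply, the aggregated column needs to be the left-most column of some index. A minimal sketch, with the table and index invented for illustration:
```sql
CREATE TABLE readings(x, y);
CREATE INDEX readings_x ON readings(x);

SELECT MIN(x) FROM readings;    -- one lookup at the low end of readings_x
SELECT MAX(x)+1 FROM readings;  -- one lookup at the high end of readings_x
```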
When no indexes are available to aid the evaluation of a query, SQLite might create an automatic index that lasts only for the duration of a single SQL statement. Automatic indexes are also sometimes called "Query-time indexes". Since the cost of constructing the automatic or query-time index is O(NlogN) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if SQLite expects that the lookup will be run more than logN times during the course of the SQL statement. Consider an example:
```sql
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT * FROM t1, t2 WHERE a=c;
```
In the query above, if both t1 and t2 have approximately N rows, then without any indexes the query will require O(N*N) time. On the other hand, creating an index on table t2 requires O(NlogN) time and using that index to evaluate the query requires an additional O(NlogN) time. In the absence of ANALYZE information, SQLite guesses that N is one million and hence it believes that constructing the automatic index will be the cheaper approach.
An automatic query-time index might also be used for a subquery:
```sql
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT a, (SELECT d FROM t2 WHERE c=b) FROM t1;
```
In this example, the t2 table is used in a subquery to translate values of the t1.b column. If each table contains N rows, SQLite expects that the subquery will run N times, and hence it will believe it is faster to construct an automatic, transient index on t2 first and then use that index to satisfy the N instances of the subquery.
The automatic indexing capability can be disabled at run-time using the automatic_index pragma. Automatic indexing is turned on by default, but this can be changed so that automatic indexing is off by default using the SQLITE_DEFAULT_AUTOMATIC_INDEX compile-time option. The ability to create automatic indexes can be completely disabled by compiling with the SQLITE_OMIT_AUTOMATIC_INDEX compile-time option.
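A short sketch of the run-time control:
```sql
PRAGMA automatic_index;        -- query the current setting (1 = enabled)
PRAGMA automatic_index = OFF;  -- disable automatic indexes for this connection
```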
In SQLite version 3.8.0 (2013-08-26) and later, an SQLITE_WARNING_AUTOINDEX message is sent to the error log every time a statement is prepared that uses an automatic index. Application developers can and should use these warnings to identify the need for new persistent indexes in the schema.
Do not confuse automatic indexes with the internal indexes (having names
like "sqlite_autoindex_*table*_*N*") that are sometimes
created to implement a PRIMARY KEY constraint or UNIQUE constraint.
The automatic indexes described here exist only for the duration of a
single query, are never persisted to disk, and are only visible to a
single database connection. Internal indexes are part of the implementation
of PRIMARY KEY and UNIQUE constraints, are long-lasting and persisted
to disk, and are visible to all database connections. The term "autoindex"
appears in the names of internal indexes for legacy reasons and does
not indicate that internal indexes and automatic indexes are related.
An automatic index is almost the same thing as a hash join. The only difference is that a B-Tree is used instead of a hash table. If you are willing to say that the transient B-Tree constructed for an automatic index is really just a fancy hash table, then a query that uses an automatic index is just a hash join.
SQLite constructs a transient index instead of a hash table in this instance because it already has a robust and high performance B-Tree implementation at hand, whereas a hash-table would need to be added. Adding a separate hash table implementation to handle this one case would increase the size of the library (which is designed for use on low-memory embedded devices) for minimal performance gain. SQLite might be enhanced with a hash-table implementation someday, but for now it seems better to continue using automatic indexes in cases where client/server database engines might use a hash join.
If a subquery cannot be flattened into the outer query, it might still be possible to enhance performance by "pushing down" WHERE clause terms from the outer query into the subquery. Consider an example:
CREATE TABLE t1(a INT, b INT); CREATE TABLE t2(x INT, y INT); CREATE VIEW v1(a,b) AS SELECT DISTINCT a, b FROM t1; SELECT x, y, b FROM t2 JOIN v1 ON (x=a) WHERE b BETWEEN 10 AND 20;
The view v1 cannot be flattened because it is DISTINCT. It must instead be run as a subquery with the results being stored in a transient table, then the join is performed between t2 and the transient table. The push-down optimization pushes down the "b BETWEEN 10 AND 20" term into the view. This makes the transient table smaller, and helps the subquery to run faster if there is an index on t1.b. The resulting evaluation is like this:
SELECT x, y, b FROM t2 JOIN (SELECT DISTINCT a, b FROM t1 WHERE b BETWEEN 10 AND 20) WHERE b BETWEEN 10 AND 20;
The WHERE-clause push-down optimization cannot always be used. For example, if the subquery contains a LIMIT, then pushing down any part of the WHERE clause from the outer query could change the result of the inner query. There are other restrictions, explained in a comment in the source code on the pushDownWhereTerms() routine that implements this optimization.
Do not confuse this optimization with the optimization by a similar name in MySQL. The MySQL push-down optimization changes the order of evaluation of WHERE-clause constraints such that those that can be evaluated using only the index and without having to find the corresponding table row are evaluated first, thus avoiding an unnecessary table row lookup if the constraint fails. For disambiguation, SQLite calls this the "MySQL push-down optimization". SQLite does do the MySQL push-down optimization too, in addition to the WHERE-clause push-down optimization. But the focus of this section is the WHERE-clause push-down optimization.
An OUTER JOIN (either a LEFT JOIN, a RIGHT JOIN, or a FULL JOIN) can sometimes be simplified. A LEFT or RIGHT JOIN can be converted into an ordinary (INNER) JOIN, or a FULL JOIN might be converted into either a LEFT or a RIGHT JOIN. This can happen if there are terms in the WHERE clause that guarantee the same result after simplification. For example, if any column in the right-hand table of the LEFT JOIN must be non-NULL in order for the WHERE clause to be true, then the LEFT JOIN is demoted to an ordinary JOIN.
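A small sketch of such a demotion, using invented tables: the WHERE clause below can only be true when the right-hand side actually matched, so the LEFT JOIN behaves exactly like an inner join and may be simplified.
```sql
CREATE TABLE parent(id INTEGER PRIMARY KEY, v1);
CREATE TABLE child(id INTEGER PRIMARY KEY, v2);

-- Parent rows with no matching child would see child.v2 as NULL, and
-- NULL = 'x' is never true, so those rows are filtered out regardless;
-- the planner can therefore treat this as an ordinary JOIN.
SELECT parent.v1, child.v2
  FROM parent LEFT JOIN child ON parent.id = child.id
 WHERE child.v2 = 'x';
```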
The theorem prover that determines whether a join can be simplified is imperfect. It sometimes returns a false negative. In other words, it sometimes fails to prove that reducing the strength of an OUTER JOIN is safe when in fact it is safe. For example, the prover does not know the datetime() SQL function will always return NULL if its first argument is NULL, and so it will not recognize that the LEFT JOIN in the following query could be strength-reduced:
SELECT urls.url FROM urls LEFT JOIN (SELECT * FROM (SELECT url_id AS uid, max(retrieval_time) AS rtime FROM lookups GROUP BY 1 ORDER BY 1) WHERE uid IN (358341,358341,358341) ) recent ON u.source_seed_id = recent.xyz OR u.url_id = recent.xyz WHERE DATETIME(recent.rtime) > DATETIME('now', '-5 days');
It is possible that future enhancements to the prover might enable it to recognize that NULL inputs to certain built-in functions always result in a NULL answer. However, not all built-in functions have that property (for example coalesce()) and, of course, the prover will never be able to reason about application-defined SQL functions.
Sometimes a LEFT or RIGHT JOIN can be completely omitted from a query without changing the result. Broadly, this requires that no column of the table being dropped is referenced anywhere outside the join's own ON or USING clause, and that removing the join cannot change the number of rows in the result (for example, because the join matches at most one row per input row).
OUTER JOIN elimination often comes up when OUTER JOINs are used inside of views, and then the view is used in such a way that none of the columns on the right-hand table of the LEFT JOIN or on the left-hand table of a RIGHT JOIN are referenced.
Here is a simple example of omitting a LEFT JOIN:
CREATE TABLE t1(ipk INTEGER PRIMARY KEY, v1); CREATE TABLE t2(ipk INTEGER PRIMARY KEY, v2); CREATE TABLE t3(ipk INTEGER PRIMARY KEY, v3); SELECT v1, v3 FROM t1 LEFT JOIN t2 ON (t1.ipk=t2.ipk) LEFT JOIN t3 ON (t1.ipk=t3.ipk)
The t2 table is completely unused in the query above, and so the query planner is able to implement the query as if it were written:
SELECT v1, v3 FROM t1 LEFT JOIN t3 ON (t1.ipk=t3.ipk)
As of this writing, only LEFT JOINs are eliminated. This optimization has not yet been generalized to work with RIGHT JOINs, as RIGHT JOIN is a relatively new addition to SQLite. That asymmetry will probably be corrected in a future release.
When a WHERE clause contains two or more equality constraints connected by the AND operator such that all of the affinities of the various constraints are the same, then SQLite might use the transitive property of equality to construct new "virtual" constraints that can be used to simplify expressions and/or improve performance. This is called the "constant-propagation optimization".
For example, consider the following schema and query:
CREATE TABLE t1(a INTEGER PRIMARY KEY, b INT, c INT); SELECT * FROM t1 WHERE a=b AND b=5;
SQLite looks at the "a=b" and "b=5" constraints and deduces that if those two constraints are true, then it must also be the case that "a=5" is true. This means that the desired row can be looked up quickly using a value of 5 for the INTEGER PRIMARY KEY.
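A quick sketch of the effect on the query above; the exact plan wording varies between SQLite versions:
```sql
-- With the derived constraint a=5 available, the lookup can go straight
-- through the INTEGER PRIMARY KEY, e.g.:
--   SEARCH t1 USING INTEGER PRIMARY KEY (rowid=?)
EXPLAIN QUERY PLAN
SELECT * FROM t1 WHERE a=b AND b=5;
```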
*This page last modified on 2024-07-24 12:16:13 UTC *
Source: https://lifebeyondfife.com/hiring-advice-for-bootcamp-graduates/
# Hiring advice for bootcamp graduates
I’m currently trying to hire as many software engineers as I can, and more and more I’m seeing applications from candidates who retrained via an intense, engineering bootcamp. I want to hire every single one of them. In general whenever I’m interviewing someone I want them to succeed regardless, but there’s a lack of diversity of thought in tech and when I see someone with a background in marketing, customer support, or even a short order cook, I get excited about what they could teach me.
Anecdotally, I see the tech enthusiastic clique who started coding when they were a teenager, or perhaps even earlier, as the most populous cohort. Solving programming problems is our crossword puzzle, or sudoku. With data from the Stack Overflow Developer Survey 2021, I can say more definitively that over 80% of industry professionals have a university degree, over 90% identify as male, and over 60% are white. It’s my belief that getting people from different paths, different walks of life, with different behaviour types, builds a more diverse, and thus a stronger team (as explained in this book).
*“I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.”* — Bruce Lee
I know why bootcamps arrange their syllabus the way they do. They prioritise breadth. Résumés of bootcamp graduates are impressive because of how many modern technologies are listed, as well as how relevant these are to modern internet economy companies. The result suggests a software engineer who can work anywhere in the stack whether they're updating a web frontend in Vue, a mobile app in React Native, or fixing a backend service in Java or Ruby. This breadth, which is necessary to make someone marketable for the full range of engineering jobs available, comes at the expense of depth.
It’s an industry joke that no-one knows how to program, and we all just use Stack Overflow. Don’t get me wrong, I’ve used Stack Overflow many times, especially when I’m using a language or technology that I’m not an expert in, and won’t be using regularly. However, I learned to code without an IDE; I wrote programs from scratch, again and again until I moved what was a System 2 skill to a System 1 skill. Software engineering as a career path has seen exponential take up over the last few decades, so at any time, the majority of all software engineers in existence are the most inexperienced of all of us. It’s the job of more experienced engineers to teach the correct way of doing things and stop the bad habits; treating software engineering solely as hacking together tech with glue code you copied from Stack Overflow is one of them.
A penny dropped for me in my early twenties seeing this Matt Groening comic. Sometimes the best way of doing something is the easiest — it just takes a lot of work to make something easy. This is the fundamental idea behind Bruce Lee's 10,000 kicks quote above, or Peter Norvig's Teach Yourself Programming in Ten Years. Given that not even students on full bachelor's degrees spend ten years learning to program, I'm not advocating it takes that long to be employable, but this learning through repetition cannot be skipped.
Employers pay for someone who can solve customer problems with code. This can be done by someone who has invested time in three separate activities. First, they must understand the rules of a programming language or technology; they need to know the syntax of how to perform common tasks e.g. writing a C-style `for`
loop to iterate over a collection. At high school, this was taught to me as Knowledge & Understanding. Second, they have practiced the core concepts from the previous step multiple times again and again in new and varied situations. I was taught this as Problem Solving. Finally, the successful practitioner takes the skills from the previous two steps and can apply it to larger, real-world problems e.g. handle all the UI, API, service, and database changes to add a new type of product category to an e-commerce site. This third classification was known as Practical.
Bootcamps have to teach multiple technologies so they cover the knowledge and understanding element. They also have to be able to show real-world results so they cover the practical side of things too. There isn’t enough time to make people experts in only 3 months, so they rush the problem solving, leaving their graduates massively underfitting when exposed to new problems.
I can see evidence of this in prospective engineer’s git activity. It’s full on and frenetic for the months where they’re working on their projects and assignments. But once the grading stops, the coding stops, and this is absolutely the worst thing to do. The ability to draw Binky in the easy way needs constant practice, and similarly, bootcamp graduates need to pick a preferred technology and keep coding to backfill their problem solving gap.
The biggest tech companies have candidates falling over themselves to join, which is a nice problem to have. Interviewers have taken a cookie-cutter template of what their “best” employees look like and, in an effort to “raise the bar”, they make the questions harder and harder. I wish I’d written this blog post because it’s just so perfect: companies should hire for weakness, not talent. If Google had done the same, they would have almost certainly offered Max a job.
With that said, I am a 20+ year computer science veteran and not all theoretical computer science topics are esoteric nonsense to lord over industry novices. A few years ago an outage occurred in a service under my responsibility. It took a lot of investigation into the telemetry and logs to discover but it came back to a long (multiple hundred lines of code) function. This is bad programming practice. It had a single value that was expensive to calculate evaluated multiple times. This is bad programming practice. This recalculation was done in a nested for loop, which was harder to spot owing to the lengthy function and multiple layers of nesting. This is bad… you get the idea.
I want programmers joining my teams to be able to break problems down as easily as tying their shoes. And I want the programs to run in linear time so that the service code doesn’t fall over when large inputs are passed in. This is what bootcamps gloss over by necessity, because they do not have the time. The good news is, the remediation is simple, it’s just hard work that needs to be applied repeatedly.
## Big-O Complexity (Knowledge & Understanding)
For simple algorithms, you may be able to construct a formula determining how many instructions it will take to complete for a given input (for non-simple algorithms, let's save the Halting Problem for another day). Discard all but the largest term of sums for that formula and you have the Big-O time complexity of the algorithm. For an input of *n* items, a linear complexity, *O(n)*, is ideal, and *O(log₂(n))* is even better. If you know the input is manageably small, *O(n·log₂(n))*, or possibly even *O(n²)*, might be acceptable. Anything exponential, e.g. *O(2ⁿ)*, should be a red flag that another approach is needed. Premature optimisation and trying to squeeze clock cycle efficiency out of code is unnecessary in almost all cases, but this low bar of algorithmic efficiency isn't theoretical navel-gazing but necessary to create responsive products that respect users' time.
## Data Structures (Knowledge & Understanding)
Choose the correct data structure when solving a problem and the code pretty much writes itself (see this discussion on a relevant quote from Linus Torvalds). It's obvious to me someone hasn't practiced enough problem solving when they struggle to consider what data should be persisted to break the problem down, or how it could be represented. Lists, stacks, heaps, hashtables (or maps, dictionaries, objects, whatever you know them as), trees, graphs… they all have their uses. Most will get by with just lists and maps, which is why Python and JavaScript both make creating these data structures extremely easy (`[]` or `{}`).
## Data Structure Operations and Big-O (Knowledge & Understanding)
Combining the previous two topics, software engineers should know the common operations available to different data structures e.g. retrieve an object from a map. They should also know the Big-O time complexities of these operations so they can evaluate the algorithms they create and validate they have an acceptable complexity in total. They should know the Big-O complexity of general operations as well e.g. sorting algorithms. Create a working solution however inefficient to begin with is great, but then challenge yourself, “Can this be done better?”
## Apply The Above (Problem Solving)
This is the most important paragraph of this whole post. Find programming problems and solve them. Find more and solve more. Keep going, this is practicing one kick 10,000 times. This is your route to being able to easily convince anyone that you're a good programmer. My personal recommendation in recent years is Advent of Code. There are several years of problems open to all (2020, 2019, 2018, 2017, 2016, 2015), with a subreddit showing solutions in every programming language imaginable.
## Success As A Software Engineer
There is always more to learn than you could possibly know. Early stage software engineering professionals especially will have to learn a specific tech stack for each product they develop; they'll have to learn the subtly bespoke ways each team plans and collaborates on coding tasks. You'll rarely join a company and hit the ground running with nothing new to learn. But solid fundamentals in programming problem solving, data structures, and space and time complexity analysis make you eminently malleable for any software role.
Source: https://support.google.com/accounts/answer/1187538
# Sign in with backup codes
If you can’t sign into your Google Account with your normal 2-Step Verification, you can use a backup code for the second step. Create backup codes to use in case you lose your phone, change your phone number, or otherwise can't get codes by text, call, or Google Authenticator.
**Important: **
- To use backup codes, make sure 2-Step Verification is on.
- After you use a backup code to sign in, that code becomes inactive.
- You can get a new set of 10 backup codes whenever you want. When you create a new set of codes, the old set automatically becomes inactive.
- Do not share your backup codes with anyone. Google never asks for a backup code other than at sign in.
## Create & find a set of backup codes
To store your backup codes somewhere safe, like where you keep your passport or other important documents, you can print a copy of them.
- Go to your Google Account.
- On the left, click
**Security**. - Under "How you sign in to Google," click
**2-Step Verification**. You may need to sign in. - Under "Backup codes," click Continue .
- From here you can:
  - **Get backup codes:** To add backup codes, click **Get backup codes**.
  - **Create a new set of backup codes and inactivate old ones:** To create new codes, click **Refresh**.
  - **Delete your backup codes:** To delete and automatically inactivate your backup codes, click **Delete**.
  - **Download your backup codes:** Click **Download Codes**.
  - **Print your backup codes:** Click **Print**.
**Tips:**
- If you lose or run out of codes, or if you think your backup codes were stolen, you can create a new set. To create a new set of codes, click **Refresh**.
- When you create new codes, your old set automatically becomes inactive.
## Find your lost backup code
Search your computer for:
`Backup-codes-username.txt`
with your username. For example, if your username is google123, search for: `Backup-codes-google123.txt`
. You'll need the codes downloaded to your computer for this to work.## Sign in with a backup code
- Find your backup codes.
- Sign in to your Google Account.
- Click
**Try another way**. - Click
**Enter one of your 8-digit backup codes**. - Enter one of your unused backup codes.
**Tip:** As each code can be used only once, you might want to mark the code as used.
Source: https://techcrunch.com/2019/05/30/once-poised-to-kill-the-mouse-and-keyboard-leap-motion-plays-its-final-hand/
# Once poised to kill the mouse and keyboard, Leap Motion plays its final hand
By Lucas Matney, TechCrunch
The company sought to completely change how we interact with computers, but now Leap Motion is selling itself off.
Apple reportedly tried to get their hands on the hand-tracking tech, which Leap Motion rebuffed, but now the hyped nine-year-old consumer startup is being absorbed into the younger, enterprise-focused UltraHaptics. The Wall Street Journal first reported the deal this morning; we’ve heard the same from a source familiar with the deal.
The report further detailed that the purchase price was a paltry $30 million, nearly one-tenth the company’s most recent valuation. CEO Michael Buckwald will also not be staying on with the company post-acquisition, we’ve learned.
Leap Motion raised nearly $94 million off of their mind-bending demos of their hand-tracking technology, but they were ultimately unable to ever zero in on a customer base that could sustain them. Even as the company pivoted into the niche VR industry, the startup remained a solution in search of a problem.
In 2011, when we first covered the startup, then called OcuSpec, it had raised $1.3 million in seed funding from Andreessen Horowitz and Founders Fund. At the time, Buckwald told us that he was building motion-sensing tech that was “radically more powerful and affordable than anything currently available,” though he kept many details under wraps.
As the company first began to showcase its tech publicly, an unsustainable amount of hype began to build for the pre-launch module device that promised to replace the keyboard and mouse for a PC. The device was just a hub of infrared cameras; the magic was in the software that could build skeletal models of a user’s hands and fingers with precision. Leap Motion’s demos continued to impress; the team landed a $12.8 million Series A in 2012 and went on to raise a $30 million Series B the next year.
In 2013, we talked with an ambitious Buckwald as the company geared up to ship their consumer product the next year.
The launch didn’t go as well as planned for Leap Motion, which sold 500,000 of the modules to consumers. The device was hampered by poor developer support and a poorly unified control system. In the aftermath, the company laid off a chunk of employees and began to more seriously focus its efforts on becoming the main input for virtual reality and augmented reality headsets.
Leap Motion nabbed $50 million in 2017 after having pivoted wholly to virtual reality.
The company began building its own AR headset, all while it was continuing to hawk its tech to headset OEMs, but at that point the company was burning through cash and losing its lifelines.
The company's sale to UltraHaptics, a startup that has long been utilizing Leap Motion's tech to integrate its ultrasonic haptic feedback solution, really just represents what a poor job Leap Motion did of isolating its customer base and its unwillingness to turn away from consumer markets.
Hand-tracking may still end up changing how we interact with our computers and devices, but Leap Motion and its later investors won’t benefit from blazing that trail.
Source: http://depressedprogrammer.wordpress.com/2008/04/20/worst-captcha-ever/
# Worst Captcha Ever
By Arsenalist
I’m trying to download a file from the evil Rapidshare (who make you wait about 2 painful minutes before giving you the file) and just after the wait time is over, I get a Captcha looking like this:
WTF man? I mean, does it really need to be this hard? Are you telling me that it has come down to us hiding domestic animals in our captcha characters in order to hold off bots? Plus, there are only 3 “letters” in that image but it's asking for four.
This is a sad day for the HCI crowd.
On an unrelated note, here’s my cat.
dudei haz ur capcha
My nameBesides… there is only 3 letters there 😛
g62y8
dude2irony is that text in the captcha reminds of word ‘pussy’…
JimThis captcha is already cracked 😀
AlanDUDE The same thing happened to me. Its rubbish. It took me 6 fail attempts and my brothers help to find the cats. I missed my RS Pro account. 😦 To bad i can’t buy another one.
Timo ZimmermannFULLACK.
btw. they got happy hours without 2 minutes waiting…
HoweverJust use CaptchaSolver.
Shanti BrafordIt took me some Norwegian brothers to help find the cats. (I am an American, and very proud of it, thank you very much.)
Several fail attempts (uncountable) before my Euro bros helped me out in a pinch.
harshadA proud day for AI (have captcha crackers advance THAT much already)
jasonbNo sh*t. They changed it up and it took me 3 attempts BEFORE I actually read the instructions. I pretty much keep searching when I find a site using “Rapid”Share.
hmmmmmman i would have entered cu*t
woops!
* = n
nathanPretty obviously 62y8, but thats beside the point; Shit is ridonkulous.
AndrewHoly moly — I was just about to blog this same site for the same reason. Amazing, isn’t it?
Worse: If you get it wrong you have to sit through the “non-paying-user” waiting period all over again.
spacebrewJust wait until the bots can see the cats. And determine that they are indeed cats, just twisted at different angles. Then where will we be?
Clearly, we need to start eradicating the cats.
Benive gotten way worst dude, although I totally agree rapidshare and its lolcats are not so funny =[
Adam BardMy favorite bit is that OCR (optical recognition) schemes are actually BETTER than people at recognizing single characters. Some decent, easily-accessible OCR could pick out all the letters, and even without matching the cats guess at the 4 letters and get in 1/15 of the time – a perfectly good ratio for a bot.
tonDon’t forget that those b#$t@rds also throttle the hell out of your connection when downloading files. I’ve consistently seen absolute drivel like my connection falling from 500 kbps to 50 kbps!
jorgelol I notice that like 4 days ago, I should have posted it and I’ll be in the frontpage :p
This IS the MOST annoying thing. I fail it like 2 times before realizing I had to look for cats.
P6S2Y8Well it is probably harder than other captchas because you are doing it one-handed.
In the case of rapid-share files (mostly pr0n), I guess they should make it easy for anybody, to let hackers and bots onto their site so they can lose business and lessen the bandwidth provided to actual users.
Why not post something constructive. This captcha serves everybody and IS NOT that hard, if you aren’t downloading with your pants around your ankles.
arsenalistPost authorP6S2Y8, you appear to be a dick. It’s meant as a joke, obviously its not the “Worst Captcha Ever”, its just funny that there are cats in there.
And I was downloading Arsenal highlights for my other blog, not porn:
http://rapidshare.com/files/108847926/Arsenal_V_Reading_19.04.08_MotD_highlights_f54.avi
http://arsenalist.com/2008/04/19/arsenal-vs-reading-highlights-english-premier-league/
Stop being a little c*nt.
garethi like you blog it is very oldskool
billYou mean
8 my PUSSY
joe bobarsenalist isn’t saying that he couldn’t find the four characters with the cat on them, he’s saying he couldn’t find four **letters** with the cat on them — in fact, there aren’t even four letters to choose from.
WonziYou guys are getting too caught up in this. The fact there are cats in a captcha is in itself ridiculous. The misuse of the word ‘letters’ is just adding to the ridiculousness.
hfirst. i lub yer kitty.
second. i think i just showed everyone at my work that Captcha. I’m glad it wasn’t me. I would have thrown things at the screen.
hanushhtwo day before the capcha was not this much out of focus
Som time yesterday I saw at rapid share that “Happy Hours Activated”
el NinjaEverytime im going to download something Rapidshare is on happy hour.
Oh I love Rapidshare (not something i would say last week)
lilyputtsYeppers, some are so bad.. Just bypass and move on..seems they are the ones who really does not want any one to get it, or get it write!
Michele
lilyputts.com
wesoh, rapidshare. having a premium account is about the most rewarding thing possible, though.
IGoByChadCan we have captchas that draw blood and require finger-prints yet?
DanielEranI immediately saw “pussy 8,” and thought that was the joke. Apparently it was just me.
Pingback: I haz worst CAPTCHA — winandmac.com
Pingback: First Magazine® // Technology & the Web // Worst Captcha Ever
BryanOMG. Seriously! I came across thus the other day too! WTF man? Captchas are supposed to be hard for robots, not humans! … I do like cats though. 🙂
JordanAfter 10 minutes of analyzing…. 6 2 Y 8 !!!!! ??, i think?
blogitforuNo sure about this
alPGSY?
SanjeevMwuhahahhahahahaha!!!!!!!!!!!
I’ve been bloody angered by that cat on the letter!!!!!!!!I’ve had worser captchas!!!!!!!!!!
N hey, ur cat luks cute!!!!!!!!!!
KimmyYour cat is so cute. 🙂
Evil CarbonThat is an evil CAPTCHA
Global Warming Alarmists Beware… http://www.EvilCarbon.com
ChiyaHaha it took me a while to figure out what they meant by showing the picture of a cat. That would be tricky to see whether it has a cat in the background.
Your cat is cute.
TheDeeZoneThat is ridculus. Are they trying to limit it just to members or something.
RiverstyxxxI think your kitty is cute 🙂
SWFlashwhat for captchas to files? >:l it possible to comments (no spam allowed) but files? …
mahdi yusufi have seen this alot, i would have wrote about it too if i knew it would get me on digg front page.
Zack KatzIs this an IQ test?
DaveMoraYou counted 3, they want 4. I only see 1 damn I would have never been able to download anything from that site.
Pingback: FuzzLinks.com » Worst Captcha Ever
benjiFor those of you who are having trouble…
There are letters with DOGS on them, subtract those and dont be afraid to use the numbers with cats on them.
This took me like 40 f*cking minutes.
Cody SortoreWow… That’s freaking hilarious! And if the guy is correct about it already being cracked, even better! Just goes to prove that security measures only stop the stupid hackers, or normal users. I was reading an article on game management recently and some of the top selling companies don’t use any sort of CD protection for their games because their customers don’t like it, and it doesn’t stop anyone.
Cody Sortore@ benji dogs? I can see maybe one dog, but the first letter looks like it’s got a chicken on it, and the “2” appears to have a monkey on it… or something.
myndziThe worst part isn’t how bad captchas have gotten, it’s how overused they’ve become. Why do you need a captcha to download a file? What’s going to happen if a bot does it instead? … the file gets downloaded anyway?
They’re not registering accounts via the captcha so you get no special privileges. I can’t see RapidShare having much of a bot problem, nor can I see such problems with most of the sites I’ve seen captchas on. Hopefully it turns out to be a fad that will just go away.
NathanI think someone heard “Kitten Auth” was much better. Then thought this horrible idea was what was ment by it.
rapidshareleechers complain about cat-cha? if you don’t like the wavy impossible to read cat-cha then don’t use rapidshare
mephistophelesif you install noscript for firefox and block the scripts from rapidshare.com, you can return and try the captcha again by pressing back on your browser without waiting for another two minutes.
however, i’ve never had problems with finding the cats – my trouble with the old rapidshare captchas was distinguishing between the capital C and G.
RuggyI’m in ur captcha humpin ur letterz!
PussyCatcher6, 2, Y, and 8 all have the cat, there are four letters, not three, you blind fucking moron. And stop bitching about the slight inconvenience you have to put up with for downloading stolen shit from rapidshare.
TargetXThat is indeed way too difficult lol!
Your cat looks cute though 🙂
laurelineby the way… you’re right but there are indeed four of them.
mei got almost the same thing! And it was freaking hard.
MarcoFunny! Posted the same kind of article a couple of days ago, reviewing the same stupid CAPTCHA of RapidShare 🙂 .
namePGSY, don’t be dumb…
myndziPussycatcher: 6, 2, and 8 are not letters.
JCCute kitty!
farah2008at least it is easy to read 🙂
rockDude, that’s easy.. they really screw you up when they put O and 0 in the same font and sizes. I tried to enter the right letter but got it wrong 50% of the time.
Pingback: Worst Captcha Ever · Acne Care
DanIt is obvious that this at first human glance looks like pussy 8, which si scode for Octopussy, 4 letters, related to Octopussy…
BOND
solved!
JAckYeah its solid, I hate it on Rapidshare especilly
Brucee6SY8 – the brilliant programmer changed the cats to dogs on the other “letters / (numbers)” which is what he meant. Look at the “P” its not the cat image.
I Am Dali“Plus, there are only 3 “letters” in that image but its asking for four.”
You my friend have passed the test.
businessI’m glad you guys feel my pain. This is ridiculous.
HungCute cat.
andrewSomething tells me you are at least mildly retarded if this presents a problem to you.
Chin YingCatcha
hahaomg, rapidshare kicks ass, but you guys wouldnt know, just get a premium account, it is so worth it.
enjoy
louisville designwhoa!
chrisWow my internet days are at an end. I cant see the dang cat in any but one.
Jacob ParkerI had that, took 5 tries to get it
Geeks are SexyBy the way, your cat is fat 😉
RHSThe cats have bigger heads, and there are 4 of them… Still a piss off!
ElijahThat was a little cute
NickI’ve got one better.
Check this out 🙂
Leandro Corrêaman…
why do they make iilegal downloads so difficult?
AndrewThat’s why I hate RapidShare. Seriously. Use torrents or soemthing.
adude they made it like that cause (if you read another dugg page) spammers have created bots that can decipher a normal captcha code in under 6 seconds. and rapidshare awards free month subscriptions to those whose files have been downloaded a large number of times, so if they kept the old captcha system people could just bot-download their own files and it would cost rapidshare more money than it should
MikNice pussy, it looks well fed.
microchpI would prefer this one: http://www.defectiveyeti.com/iacaptchas/ That captcha seems more realistic and useful.
HollyJust don’t download from rapidshare!
mattthere is 4 of them.. look closer!!!!!!!!@#!@$@#!%#$%^$%^#$%!#$%#$^
letters/numbers are : 62Y8
Williams.LI saw this same story a week before on another site…
http://www.hack5.blogspot.com
I think there is a entry in digg too…
Anyway thanks…nice find…
engtechI love how captcha has evolved to the point where we need computers to solve them.
engtechHey, we’re kitty twins.
Riddlerwhy dont you… i dont know… suck it the fuck up and dont complain about rapidshare and just buy a dam account?
or if you really hate the whole wait time, use theyre automatic downloader, or RapGet, or some other auto downloader
and buy an account so u dont have to wait 2 minutes every time
Nandakaduck cat dog cat cat cat
anonymousthere are 4. but you need to twist your head. hard one man!!
forex killeri found 4 cats
waterpupYeah it says “Pussy” so that’s 5 letters and no numbers inless you feel like adding a 69 to it. Sounds good to me
AnonymousNowlooking at this one, it becomes apparent that “letters” means “letters or numbers.”
THus, this captcha is designed to fool even humans.
I was wondering why in the 4 times I have tried to download a rapidshare file I’ve been unable to crack the capcha.
Pingback: Worst Captcha Ever | Waterpup
Pingback: Random Musings » Blog Archive » Worst Captcha Ever
Pingback: Business and financial blog » Blog Archive » Worst Captcha Ever
subcorpusseriously … get a pro account … and stop this madness …
hehe 😛
JoshI ALWAYS have this stupid captcha thing on rapidshare, and I end up redoing it about 3 times. I emailed them about it, but I doubt I’ll get a response.
LynnI’s go with “pCsy”…the cat’s back make the C look lmost like a 6 or G.
kitchententBad Kitty.
CAPTCHA KITTYCAPTCHA KITTY IS BLOCKING UR ACCESS!
Jordan MeeterHaha wow stop crying, that captcha is easy! And the cats only make it harder for computers to solve…
necoI’d much rather have the cats climbing on readable letters than this one I got: http://insanesecurity.com/article.php?story=20080422125639914
mimhaha , very funny and cruel indeed
Goronmon“And stop bitching about the slight inconvenience you have to put up with for downloading stolen shit from rapidshare.”
It’s funny because it’s true.
Pingback: pligg.com
RizzlerThat’s hilarious. And I hate Rapidshare’s tease at making you buy.
In actuality though, the only bad part of buying a rapidshare pro account is that it’s done in Euros. Not saying anything bad about the Euros, but the American dollar… I bought an account one month for 6.99Euros… and it was $10 roughly. Bought another a month or so later, it was $11.50. It’s a shock to see the expression of the value of the dollar changing like that. And I’m not saying it isn’t either. Just kind of shocking.
davecompletely agree! it took me 5 attempts to enter yesterday! bloody rapidshare!
jiveCaptcha is outliving its usefulness more and more every year.
navinUm, your kitty is rather well fed.
DSLCaptcha often sucks, especially when your comment got lost after reload.
VDSLAmazing how fast the digg counter goes up…
J saintI totally agree that this is the worst ever. I can never get the code to work. Thank god someone put the new nin song on a torrent or I never would have heard it!
Dick JohnsonI agree with arsenalist, the other day I went to download a file from rapidshare and I couldn’t tell what the hell the letters even were let alone which ones had the right cat in them. I had to try about 4 different times (meaning I had to wait about 8 minutes total with their retarded wait time) just to get my file.
In conclusion, I hope the people who run rapidshare die in a fire.
Pingback: LiberalFamily.com
cpwotoni think that 6 is actually a G
jaafthey only wants $$$$$
thats why they are using this worst thing.
MJI downloaded a file today and it took me like 4 attempts to get it right!! That is definitely the worst captcha ever!!!! I mean, the purpose is to keep bots off, not humans.
A.HoYeah it’s nuts
Cemetary HëdRapidshit sucks!
¬¬
wweadam62Y8 are the symbols with cats. Simple.
– http://wweadam.com
RhonersenGood thing you are not paying for it. …
Pingback: CAPTCHA’s are a pain in the arse « Ladgeful
VijayInitially, the captcha words were quite simple. As time passes by, they’re getting more difficult and unrecognizable. I think its time someone found a better alternative
BryanDownloading stolen porn or games from Rapidshare are we?
Go fuck yourself
HexadecimalMan, that sucks. I hate rapidshare so much. I pretty much make sure that’s my last resort if I can’t find the file elsewhere.
streaky“omg, rapidshare kicks ass, but you guys wouldnt know, just get a premium account, it is so worth it.”
Which is of course exactly what they want to do.
Pretty sure that what they did there is illegal under EU disability regs also – though I guess you could argue no discrimination due to that fact people without disabilities can’t figure it out either.
But then rapidshare always were epic fail.
Daniel SmithI think the P and S have a “dog”. BTW, the instructions are faulty; they ask for “four letters” and clearly (!) there are numbers mixed in.
You know, I wonder if this is supposed to be like that joke about the bus? All these people get on and off the bus and then you’re asked a question unrelated to the number of people that get on and off. I wonder if this is like that since there are only 4 “letters” displayed (although I’m not sure if that one is a 6 or a G…)
I hate RapidShare too…
cht086LOLz
foufgaSo I have a question… first day reading your blog (you were on the front page of WP)… but how did you format your css for code blocks to look so amazing?
ultimoAdiosPlease enter all letters having three (instead of 4 ) letters with a cat, might be more appropriate. But hey! we made mistakes.
pirateI found this on bayimg. now you can has:
http://bayimg.com/EAJlfAABh
arsenalistPost authorfoufga, it’s just a wordpress.com feature, check it out here:
http://faq.wordpress.com/2007/09/03/how-do-i-post-source-code/
Pingback: Ponderings For 2008-04-22 » Ponderings Of Guy
Pingback: » RapidShare nuprotėjo su savo katinais. Gal kas bandėt neseniai ką … blogeriai.lt
BoboIf you really see only 3 letters with a cat instead of 4 (6, 2, Y, and 8), maybe you are a bot.
darkmelancholy27>.< rapidshare’s no good, man.
afrodream 'n' beadedA friend of mine called it a retarded captcha. I guess its retarded too. Whats on the hell is this. Do we have to bleed and crack our head just to fill a form or submit it. Mamamia captcha with domestic images plus 1 ( one )
Pingback: Worst Captcha Ever « Ganesha Speaks
YuusouAt least they now show you what the cat looks like. Before they just said please enter all the letters with a cat in them.
sathienWTF!? I saw this on the front page and was like “Oh, could be funny!” – But apparently you are just stupid, since this is one of the easiest Captchas I’ve ever seen. Seriously, I don’t know what you’re actually complaining about. Perhaps you’re just an attention whore.
Pingback: vândpupăză » Blog Archive » Cats to save CAPTCHA
WendelFunny how some people think “2” is a “letter” (at least in my language, “2” is called “number”, or “digit”, not “letter”)
Also, yes, sometimes it is friggin’ hard to tell the cats apart – but at least now they show an example of the cat, that makes things easier. When I first saw that cat-cha (without examples), I’ve tried 5~10 times because I could never tell if they were cats, dogs, or cats distorted until they looked like dogs. -_-‘
Also, I guess some guys on the comments are bots: #66 said the captcha is “6SY8”, and #20 said “P6S2Y8” (!?)
atomashlol. at least tt made you laugh.
ok maybe it made us laugh only.=)
Lee KelleherI saw “pussy 8” straight-away too! Gotta be some kind of joke!
slackermageeI agree with the earlier post, the thing is 62Y8. The P and the S look like they have dogs in. Still a stupid system.
Pingback: Worst Captcha ever - TechEnclave
B FirstIts because they want you to frustrate you into a paid user.
SRiSucks! Too bad we don’t have the captcha crackers to program a better version
andreawell, no it’s 4 characters, but to be fair it took me about 3 times before i figured out how the hell to download!
RaphI DESPISE the new captcha. Its horrible. Finding kitties. Urgh.
Justin Loshmeh, people should start using recaptcha (http://recaptcha.net) its a lot easier too lol.
Rahul Patilits G2Y8… damn easy lol 😛
Dick PullerYour cat is far easier to see than their cats. I definitely count ONE cat in your cat’s picture. I can’t figure out how many cats there are in the CAPTCHA.
Pingback: Worst Captcha Ever | DLLOZ
DAFTAt least ya’ll don’t live in New Zealand, where our high speed internet is about as fast as a a snail cross bred with a tortoise. So bad even that as an election year bribe one of our major parties has said that they’re going to put $150 million NZ into speeding internet up!
So while you’re deciphering the captcha, I’m wating two or three hours for gmail to load. PFFFFT
jeremyI agree with all your points, and your cat looks suprised to be sold out in such a way… evil rapidshare.
VTCAPTHCAS suck,.. but something has to be done about the spammers.
BeyondRandomlol when I first glanced I thought it spelled out “PUSSY” but now I see what your talking about. Captcha’s are getting out of hand!!
Eduardo EstrellaThat is just too funny
foufgaarsenalist: Thanks!
emersondirective seen worse, but my eyes are starting to hurt
listofoptionsya know if ya use “download them all” on firefox there’s no time limit?
riraraThere are worst than that and all of them are of free stuff, just purchase the paid and it would be easier … pfffff
Pingback: Il peggior captcha di sempre | daniele rollo
DavidThe trick is to look for the tail of the cat to determine the letters 😀
ChipI see 4 cats.
ZimHaven’t you seen the mathematic captcha? If you can’t recognize a kitty you won’t solve hard equations. It’s a problem.
Pingback: Algorithm Blogs » Blog Archive » Breaking Rapidshare’s Annoying Captcha the Easy Way
Pingback: Chat Marchet News Digest » Worst Captcha Ever
SparcManWhat is the big deal anyways?? Just use the Captcha auto-fill plugin for Firefox. Geeez!
Orikuuido6 2 Y 8
There are four letters with the cat.
reiciellohaha I blogged about the same thing geez damn that evil rapidshare XD
JensssWoot, didn’t know the bots are so smart today :p.
Rapidshare sucks, you always have to wait and most times it didn’t work for me.
Antisocial NetworkWhat a load of shit
Pingback: Month in Review: April 2008 | The Girls Entertainment Network
Pingback: Rapidshare Sucks. « Gay Hacker
Pingback: Rapidshare Sucks. » lolcat.us
asimovI think the last character is an ‘S’ not an ‘8’ 🙂
Pingback: Worst Captcha Ever « Outofbound’s Weblog
Pingback: Worst Captcha Ever
shook_1meaw
Pingback: Worst Captcha Ever « Ganesha Speaks
CRXlol frankly speaking, I tried up to 5 times failed with the captcha..
1st expression was like: heh ? *so I guess I have to input ‘all the codes’
2nd expression was like: what the..*ok maybe I type them all wrong..
3rd expression was like: now what? * i did type all the codes perfectly and I did double triple check on the codes there
4th expression was like: argh…wtf now man *is there a sound-speak option so he can read what should I type in the captcha submit box*
5th expression was like: alrite it’s gettin on my nerv now, so what is the ‘word by word’ mannual to properly execute the captcha system by rapidshare on that particular page
Pingback: Worst Captcha Ever · P812
Pingback: Say NO !
Pingback: Bookmarks about Captcha
Dizi izleDogs had eat the cats.There is no capthcha anymore 🙂
dblackshellwasn’t it an April’s fool day joke?
Pingback: Top 10 Worst Captchas | The best interesting news online
GlagnorI’M IN UR CAPTCHAZ MESSIN UP UR LETTERZ
ucopi have experience it, it painful to wait for several minute and then try open my eye widely to recognize symbol..
Aliso bad, what a captcha.
Anthony DamascoI completely agree, I normally keep guessing the number until i get it right
5942This is terrible.
http://www.softballnews.net
neohaaaaaaaaaaaaaa
thnaks but it is for a year ago
Kücük Hanim OyunlarıThank you
Hockey RulesHaha that is ridiculous lol. WTF lol.
dpatrickcaldwellI hate captchas like that. I actually found your post because I was writing an article on captchas today (http://dpatrickcaldwell.blogspot.com/2009/02/completely-automated-public-turing-test.html). I think they have good intentions, but I think they’re more frustrating to legit users than they are to spammers.
RaiulBaztepoHello!
Very Interesting post! Thank you for such interesting resource!
PS: Sorry for my bad english, I’v just started to learn this language 😉
See you!
Your, Raiul Baztepo
PiterKokonizHello ! ^_^
My name is Piter Kokoniz. Just want to tell, that I like your blog very much!
And want to ask you: what was the reasson for you to start this blog?
Sorry for my bad english:)
Thank you:)
Piter.
Pingback: Impossible CAPTCHA : It Doesn’t Really Matter if You are Human or Not
mlumaadi don’t like it, you know
Mr TruthI’ll probably get spammed and hate threats for this *puts on spam and hate proof armor* but I think by making it so hard for humans we are giving in to the fear the “bots” want us to have.
They really want to spoil the rest of the apples and the community is going right along with it.
Putting away liberty for Safety sucks. Either freedom is on or off there is no middle ground.
If the bots really wanted to they will find away around it and probably do and we don’t know it cause Rapidshare can’t look bad you know.
It’s essentially a cash cow or an other words. ‘Greed’ If you can’t read the letters you’ll pay them $$$…………or so they think. 😛
Unfortunately all the bot haters which have only a SLIGHTLY larger brain then the spam bots they hate will probably skim my message just to make me look bad and make themselves look like an ass.
*locks and loads the cyber gun*
Mike Bolderspam this spam is
goes through here easily spammer win.
Pingback: SEO Article Directory » Impossible CAPTCHA : It Doesn’t Really Matter if You are Human or Not
MistyIt’s a lolcat
JoHeh. I have seen worse:
A captcha consisting of a single image and a hardcoded correct answer.
Frank Xiepretty cat
SWFlashif you use captcha solver it show up like “PUSSY 8”. someone create better solver??? please
WilheminaDear Friends, Happy April Fool’s Day!
Three guys were fishing in a lake one day, when an angel appeared in the boat.
When the three astonished men had settled down enough to speak, the first guy asked the angel humbly, “I’ve suffered from back pain ever since I took shrapnel in the Vietnam War… Could you help me?”
“Of course,” the angel said, and when he touched the man’s back, the man felt relief for the first time in years.
The second guy wore very thick glasses and had a hard time reading and driving. He asked if the angel could do anything about his poor eyesight.
The angel smiled, removed the man’s glasses and tossed them into the lake. When they hit the water, the man’s eyes cleared and he could see everything distinctly.
When the angel turned to the third guy, the guy put his hands out defensively – “Don’t touch me!” he cried, “I’m on a disability pension.”
Happy April Fool’s Day!
Pingback: Add Captcha | Open Cart Know How
| true | true | true |
I’m trying to download a file from the evil Rapidshare (who make you wait about 2 painful minutes before giving you the file) and just after the wait time is over, I get a Captcha looking lik…
|
2024-10-12 00:00:00
|
2008-04-20 00:00:00
|
article
|
wordpress.com
|
Depressed Programmer
| null | null |
|
28,139,265 |
https://www.bbc.com/news/business-58163917
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
253,452 |
http://www.techcrunch.com/2008/07/22/are-facebook-ads-going-to-zero-lookery-lowers-its-gaurantee-to-75-cent-cpms/
|
Are Facebook Ads Going to Zero? Lookery Lowers Its Guarantee to 7.5-Cent CPMs. | TechCrunch
|
Erick Schonfeld
|
Nobody can make money on social network ads. Even Google (which controls a lot of the inventory on MySpace) is having a hard time. How worthless are these ads? Lookery, an ad network for social apps on Facebook and elsewhere, is renewing a promotion, guaranteeing 15 cents per thousand page impressions to app developers who sign up. With two ads per page, that comes to 7.5 cents per thousand ad impressions (CPMs). Back in January, Lookery was guaranteeing 12.5 cents per thousand ad impressions. So that means Lookery has cut its ad rates nearly in half.
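The arithmetic behind the headline, as a quick sketch (only the figures quoted above are used, and the January guarantee is read as a per-thousand-impression rate):

```python
# Lookery's renewed guarantee, per the paragraph above.
guarantee_per_1000_pages = 0.15   # $0.15 per 1,000 page impressions
ads_per_page = 2

effective_cpm = guarantee_per_1000_pages / ads_per_page
print(f"Guarantee per 1,000 ad impressions: ${effective_cpm:.3f}")   # $0.075

january_cpm = 0.125               # January guarantee, per 1,000 ad impressions
cut = 1 - effective_cpm / january_cpm
print(f"Cut versus January: {cut:.0%}")                              # 40%, i.e. 'nearly in half'
```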
Other social app ad networks, such as Social Media, are commanding CPM ad rates of around 50 cents by focusing on higher-quality inventory. Lookery is not so picky, and thus is probably more reflective of the what the majority of Facebook apps can expect to get (85 percent of its inventory is from Facebook).
Promoting a guarantee to starving app developers who have no other options is working for Lookery. When it offered its first guarantee in January, it was serving 140 million ad impressions per month. Now it is serving about three billion per month. (Social Media serves two billion).
Lookery is hoping all of those pennies will add up, but it isn’t counting on it. CEO Scott Rafer says the ad network is running at break even in terms of gross profits. But his plan is to use it to “bootstrap a data services business.” To that end, he is beginning to collect age and gender audience metrics from all the publishers in the Lookery network. For instance, the Facebook app Friendzii (which seems like it is geared towards people with no friends who are hoping to meet some) is actually most popular among 35-to-44-year olds.
If Lookery can’t sell ads to marketers, maybe it can sell the data.
| true | true | true |
Nobody can make money on social network ads. Even Google (which controls a lot of the inventory on MySpace) is having a hard time. How worthless are these
|
2024-10-12 00:00:00
|
2008-07-22 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
3,796,161 |
http://www.wired.com/threatlevel/2012/04/shady-companies-nsa/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
31,608,108 |
https://www.nytimes.com/2022/06/02/business/media/tosca-musk-passionflix.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,108,216 |
https://medium.com/better-programming/build-a-swiftui-animal-crossing-application-part-1-aaf3528c1df
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,527,411 |
https://ruffle.rs/#
|
An open source Flash Player emulator
|
Ruffle Contributors
|
Made to run natively on all modern operating systems and browsers, Ruffle brings Flash content back to life with no extra fuss.
- **Safe to use** - Using the guarantees of Rust and WASM, we avoid the security pitfalls Flash was known for.
- **Easy to install** - Whether you're a user or a website owner, we've made it as easy as possible to get up and running.
- **Free and open source** - Licensed MIT/Apache 2.0, you're free to use Ruffle how you please!
| true | true | true |
Ruffle is a Flash Player emulator written in Rust. Ruffle targets both desktop and the web using WebAssembly.
|
2024-10-12 00:00:00
|
2024-10-12 00:00:00
| null | null | null |
Ruffle
| null | null |
22,092,095 |
https://hindenburgresearch.com/opera-phantom-of-the-turnaround/
|
Opera: Phantom of the Turnaround – 70% Downside
| null |
*Initial Disclosure: After extensive research, we have taken a short position in shares of Opera. All APR extrapolations/calculations were based on a 365 day year. This report represents our opinion, and we encourage every reader to do their own due diligence. Please see our full disclaimer at the bottom of the report.*
When a new management team takes over a declining business, it can become a race against the clock to cash out. This is what we think is going on at Opera, a company based around a once-popular web browser that is now seeing its userbase erode.
In the year and a half since its IPO, Opera’s browser has been squeezed by Chrome and Safari, with market share down about 30% globally. Operating metrics have tightened, and the company’s previously healthy positive operating cash flow has swung to negative $12 million in the last twelve months (LTM) and negative $24.5 million year to date.
With its browser business in decline, cash flow deteriorating (and balance sheet cash finding its way into management’s hands…more on this later), Opera has decided to embark on a dramatic business pivot: predatory short-term lending in Africa and Asia.
The pivot is not new for Opera’s Chairman/CEO, who was recently involved with another public lending company that saw its stock decline more than 80% in the two years since its IPO amidst allegations of illegal and predatory lending practices.
Opera has scaled its “Fintech” segment from non-existent to 42% of its revenue in just over a year, providing a fresh narrative and “growth” numbers to distract from declining legacy metrics. But with defaults comprising ~50% of lending revenue, this new endeavor strikes us more as short-term window dressing than a long-term fix.
Furthermore, Opera’s short-term loan business appears to be in open, flagrant violation of the Google Play Store’s policies on short-term and misleading lending apps. Given that the vast majority of Opera’s loans are disbursed through Android apps, **we think this entire line of business is at risk of disappearing or being severely curtailed** when Google notices and ultimately takes corrective action.
Meanwhile, Opera has exhibited a troubling pattern of raising large amounts of cash (almost $200 million over the past 1.5 years), and then directing portions of it to entities owned or influenced by its Chairman/CEO through a slew of questionable related-party transactions. For example, Opera paid $9.5 million to acquire its OKash lending app from an entity that Hong Kong corporate records show was 100% owned by the Chairman/CEO, and it later announced a $30 million cash investment into another of his businesses. We detail these transactions below.
We have a 12-month price target of $2.60 on Opera, representing ~70% downside.
We take the midpoint of the company’s $43 million annual adjusted EBITDA expectations and assign multiples to its business units weighted by contribution. We apply a 7x EBITDA multiple to its browser & news segment (despite the steep profit decline) and a 2x EBITDA multiple to its lending apps, in-line with Chinese peers. We do not assign a multiple to its licensing segment, which the company has stated it expects to “significant decline”. The company has about $134 million in cash (no debt) which we add.
We then apply a 15% discount to account for risks relating to its fintech division, which we believe will be significantly curtailed over the next 12 months (for reasons we explain) and risks relating to management and cash dissipating via questionable related party transactions.
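The sum-of-the-parts math above can be sketched in a few lines. Only the $43 million EBITDA midpoint, the 7x/2x/0x multiples, the $134 million of cash and the 15% discount come from the report; the segment split of that EBITDA and the share count are not spelled out here, so the values below are placeholders only:

```python
# Rough sum-of-the-parts sketch; segment weights and share count are assumptions.
total_ebitda = 43e6     # midpoint of company guidance
cash = 134e6            # balance-sheet cash, no debt
discount = 0.15         # risk discount applied above

segment_weights = {     # assumed EBITDA contribution split (placeholder)
    "browser_news": 0.60,
    "fintech_lending": 0.30,
    "licensing": 0.10,
}
multiples = {           # multiples stated above
    "browser_news": 7.0,
    "fintech_lending": 2.0,
    "licensing": 0.0,
}

business_value = sum(total_ebitda * w * multiples[s] for s, w in segment_weights.items())
equity_value = (business_value + cash) * (1 - discount)

shares_outstanding = 110e6   # placeholder; substitute the actual ADS count
print(f"Implied equity value: ${equity_value / 1e6:.0f}M")
print(f"Implied per-share target: ${equity_value / shares_outstanding:.2f}")
```

With these placeholder inputs the output lands near the $2.60 target, but that reflects the assumed split and share count rather than a reconstruction of the exact model.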
Opera is a browser and mobile app business that has existed since the early days of the internet.
The browser business emerged in 1995 and maintained a niche share of the market over the years. In November 2016, Opera’s browser and apps division was acquired by a China-based consortium including Kunlun Tech and Qihoo 360. Kunlun Tech is a publicly traded Chinese company focused on online game development and is led by Opera’s chairman and CEO, Yahui Zhou. Qihoo is a popular, but controversial (1,2,3,4), browser company in China led by Opera Director Hongyi Zhou.
When taking Opera public in July 2018, the browser and apps business was growing gross profit at 30-40%. The business had 40% EBITDA margins and it was generating positive cash flow [pg. S-8].
Within months of transitioning to new management, Opera’s growth and profitability in the browser and mobile ad business began to decline rapidly. The decline has continued post-IPO. Opera’s global browser market share has dropped from 5%+ pre-acquisition to just over 2% most recently.
In Opera’s strongest market, Africa, the declines were even more pronounced. Opera’s browser market share in Africa hit highs of ~40% prior to its acquisition by new management and has plunged below 12% as of the most recent period. Opera’s browser share has quickly been squeezed out by Google on one side, and Safari on the other, as Android and Apple have both developed stronger footholds on the continent.
This deterioration has contributed to “Browser and News” segment gross profit declines of 22.6%, from $76 million to $59 million in the most recent y/y period [Q3 2019 report].
On the cash flow side, the company generated negative $24.5 million in operating cash flow in the nine months ended September 30, compared to positive cash flow of $21.7 million for the comparable 2018 period.
Coinciding with these challenges, Opera launched a mobile app based short-term lending business, now labeled under its “Fintech” segment, that scaled from no revenue in 2018 to 42.5% of Opera’s revenue in Q3 2019.
As we will show, Opera’s apps have entered the African and Asian markets offering short-term loans with sky-high interest rates ranging from ~365%-876% per annum.
As one former employee of an Opera lending app described to us, in many cases **“these (loans) are for people (who) could not even afford their basic needs.”** Another employee described a desperate Kenyan borrowing market, stating:
“Most Kenyans, they are low income earners. And apparently most of them they don’t have enough even for their families.”
In all loan businesses, giving money away is easy and growth can be as fast as a company wants – until, of course, *the loans need to be paid back*. In its latest quarter, Opera reported that its credit losses reached $20 million, an astounding ~50% of its $39.9 million Fintech segment revenue for the quarter.
While mobile app loans can be a lot less profitable (bottom line) than the traditional search & advertising businesses due to high incidences of non-performing loans, Opera had nonetheless given itself the ability to report high revenue growth (top line) and project a more optimistic future.
The short-term lending business was initially launched in Kenya and showed immediate growth from $6.5 million in Q1 2019 [Q1 Results] to $11.6 million in Q2 2019 [Q2 Results] to $39.9 million in Q3 2019 [Q3 Results].
The apps have improved reported net income as well, but largely through non-cash valuation increases. Year to date (YTD) net income was $35.9 million, with $26.2 million (73%) stemming largely from level 3 asset markups among its Fintech apps.
We think Opera’s lending business will fail purely on economics: default rates, competition across dozens of similar apps and user turnover will continue to take its toll on cash flow and profitability despite any top line revenue growth.
**And we think Opera’s Chairman/CEO Yahui Zhou knows this drill well, having recently lived it. Zhou has a close association with another lending business, Qudian (NYSE:QD), which has plummeted more than 80% since its IPO ~2 years ago due to the same types of concerns we are raising about Opera. We dig into the striking parallels between these two companies later in our report.**
Beyond basic economic unsustainability, we have found several additional issues that we think could lead to a near term evisceration of the company’s newfound predatory lending business.
Opera has 4 apps that collectively offer lending products in Kenya, India, and Nigeria, mostly through Google’s Android operating system. Google/Android has over 84% market share in Kenya, over 94% market share in India, and over 79% market share in Nigeria, making it the overwhelmingly dominant platform that individuals in these markets use for personal loan apps. Opera’s access to the Google Play store is therefore critical to the success of its lending apps.
We have found clear evidence that all 4 of Opera’s lending apps are in black and white violation of Google’s rules on short-term lending and deceptive/misleading content. We will demonstrate this evidence in this report. We have also reached out to Google for comment on our findings.
We believe this is a significant risk to Opera investors. Without the support of Google, we have a hard time imagining this predatory lending business survives. We also have a hard time imagining Google takes no action when they realize the extent of the violations and the havoc these apps have created in the lives of some of the world’s most vulnerable users in Africa and India. The social consequences of these mass-default products appear to be mounting, as we will detail.
Historically, Google had relatively vague policies against harmful financial products, stating:
“We don’t allow apps that expose users to deceptive or harmful financial products and services.”
In August 2019, Google updated its policies in response to a proliferation of predatory lending taking place on its app ecosystem. The updated policies were much more specific, prohibiting “short-term personal loans” (defined as loans less than 60 days). [Source 1, Source 2, Source 3]
The updated policy reads:
“We do not allow apps that promote personal loans which require repayment in full in 60 days or less from the date the loan is issued (we refer to these as “short-term personal loans”). This policy applies to apps which offer loans directly, lead generators, and those who connect consumers with third-party lenders.”
Opera’s mobile loan business operates through four Android apps: (1) OKash and (2) OPesa in Kenya, (3) CashBean in India, and (4) OPay in Nigeria.
We had consultants test Opera’s lending apps in December 2019 and January 2020 and found that **all four of its apps were in black and white violation of Google’s rule**, as we will show. **In fact, none of the loan products offered across Opera’s apps appear to be in compliance with this policy**, despite these rules going into effect over 4 months ago.
**About 2 months after Google instituted its personal loan policy change, Opera’s Chief Financial Officer, Frode Jacobsen, was asked about the company’s loan profile. Jacobsen stated on the company’s November 2019 conference call that its loan duration was still about 2 weeks:**

“So our loans in India tends to be a bit bigger, in the $50; whereas in Kenya, it’s in the $30. So while duration of loans, it’s about the same with an average of about 2 weeks, as you mentioned.”
This is corroborated by Opera’s most recent prospectus, dated September 2019 (after the rule change). Disclosures show that Opera’s entire microlending business provides loans between 7 to 30 days, which all fall outside of Google’s policies [pg. F-11]:
“The Group currently provides loans to consumers with a duration of between 7 to 30 days.”
The same prospectus fails to mention Google’s rule change.
When we first discovered that Opera’s lending apps were in flagrant violation of Google’s rules, we wondered how they had not been banned or been required to bring their terms into compliance. The reason, we think, is because each app claims to be in compliance with the new policies in their respective Google app descriptions, but then offers prohibited loans once users have downloaded and signed up for the apps.
For example, here is the Google Play app description for OKash, which clearly states that its loans range from 91 days to 365 days, which would place it in compliance with Google’s policies. Even the example image shows a loan offered with a term of 360 days:
But these products don’t appear to exist at all. An email to the company’s OKash app division confirms that loans range from 15 days to 29 days in duration:
We further confirmed this by having a local consultant apply for a loan through the OKash app. They were given a 2-week loan:
In addition to OKash, we reviewed the claimed loan length on the Google Play Store versus actual loan length for Opera’s other lending apps. The pattern is clear. Here is the summary of our findings for all 4 apps, with more individual details following:
OPesa’s app description similarly presents its loan terms as being between 91 days to 365 days, despite no evidence that it ultimately provides any loans of those lengths.
There wasn’t even much sleuth work required for OPesa. After all, the app’s own FAQ page shows that it offers a loan term of 14 days with an origination fee of 16.8%, which equates to an APR of ~438%.
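The APR figures quoted here and throughout this section follow from simple annualization of a flat fee over a short loan term, using the 365-day year noted in the report's opening disclosure. A minimal sketch:

```python
def simple_apr(fee_rate: float, term_days: int) -> float:
    """Annualize a flat fee charged over a short loan term (365-day year)."""
    return fee_rate / term_days * 365

print(f"{simple_apr(0.168, 14):.0%}")      # OPesa: 16.8% fee on a 14-day loan -> ~438%
print(f"{simple_apr(0.01 * 15, 15):.0%}")  # 1% per day for 15 days           -> 365%
print(f"{simple_apr(0.024, 1):.0%}")       # 2.4% per day late fee            -> 876%
```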
To confirm this, we had our consultant download the app in mid-December. They were offered a repayment term of 15 days.
For CashBean (India), the app description once again makes the same 91 day to 365 day loan term claim.
Yet recent online reviews for the app consistently show the loan term to be 15 days:
We emailed the company to see if they offer loans for any term other than 15 days and have not heard back as of this writing.
OPay’s app description also makes the exact same 91 day to 365 day loan term claim.
We also inquired about the company’s loan terms via an email to [email protected]. They replied that the loan’s rate and term would be “an origination fee charged of 1.2% per day **for a fixed term of 15 days**”:
In a separate email chain, OPay ultimately provided us a loan application that offered only loans with a 7-day tenure:
And so, despite Opera having 4 different apps across 3 countries, it appears that each violates Google’s Play Store rules and skirts compliance using the exact same technique. We have a hard time believing that to be accidental.
Google’s short-term lending policies were updated in August 2019 and would have had a materially negative impact on Opera’s lending model, had the company chose to abide by them.
Amortizing high-interest rate loans over a longer period of time to “high-risk” borrowers would change the borrower profile, impact default rates and create a headwind for the segment.
We believe most law-abiding companies would have promptly disclosed this rule change and reassured the market on their plans for adapting to it.
Instead, Opera appears to have disregarded the new policies entirely and disclosed nothing about the change to the market, even when it launched a secondary offering in mid-September that raised net proceeds of about $82 million.
The above documentation clearly shows that Opera is violating the Google Play Store rules on short-term loans. A further review shows the company to be in violation of additional rules.
For example, Google’s policies require that the metadata of apps be accurate:
Furthermore, Google’s personal loan policies require certain metadata to be included in app descriptions. This includes:
Opera’s apps flagrantly violate each of these terms in its app descriptions.
Kenya, Nigeria, and India are home to some of the world’s most disadvantaged individuals. Borrowers turning to short-term loans may lack access to traditional bank loans. These people are more likely to be vulnerable and fall for misleading offers.
We think Opera is taking advantage of these people by claiming to offer low rates and longer-term loans then gouging borrowers with sky high rates and shorter terms.
We reviewed all of Opera’s lending apps and found that each claimed to offer low rates in its app descriptions. Based on our research, however, none of its apps charge the stated interest rates and all ultimately charge obscenely high rates:
Additionally, we saw what appeared to be a 3-step pattern of ‘bait and switch’ on loan terms in each app:

1. The Google Play description advertises low rates and loan terms of 91 to 365 days.
2. The in-app application screens then present somewhat worse terms, typically loans of roughly 28 to 90 days at higher effective rates.
3. The loan actually disbursed is a 7 to 15 day loan at a triple-digit APR, with the real terms only visible in the fine print at the end of the process.
A former employee of OKash described to us how unemployed individuals were often totally unaware of the high interest rates they were paying, until it was too late:
“Most people are not educated. You see when you are downloading an app and opening an account there are those terms and conditions. Most people never read the terms and conditions.
So when you are telling a person you are expected to pay a 1% fee (per day) after you failed to pay the loan back…by the time that person finds out it’s like 20 days.“
“…Now the rates had gone high and these are for (unemployed) people who could not even afford their basic needs.”
We took a granular look at what Opera’s apps claim they offer in the way of interest rates, versus what is happening in practice.
OPay’s app description says loans are offered with a ‘maximum’ interest rate of 24% a year in the form of an origination fee:
When users get into the app, however, they see options for 60 to 90-day loans with origination fees that correspond to a 91% APR.
But when users actually apply, a process that requires providing personal information and paying a fee, they appear to instead be granted 15-day loans for significantly lower amounts. See an example from one user below:
We saw multiple recent users on social media complaining of these same ‘bait and switch’ tactics [1,2,3]:
As to the actual interest rates, we received loan documents from OPay as part of our research. The fine print at the bottom of the contract shows that the **interest rate is 1% a day, plus another 1% per day if a user is late**. In other words: 365% per year, or 730% for late borrowers:
An email to OPay customer service detailed an even higher 1.2% per day fee for 15 days and a 2% fee per day for late loan repayments.
Given that these rates aren’t presented until the end of the borrowing process, **users likely don’t realize they have taken a 2-week loan at absolutely crippling interest rates until the very last minute**.
2. **OPesa (Kenya)**
OPesa’s lending app description shows a reasonable 12% APR with no service fee:
When our consultant tested OPesa in late December, the screen prompting them to apply for a loan showed a less attractive loan term of 70 days with service fees that suggested an APR of about 86%:
Once they put in all of their personal information, **the loan terms were again worsened to a 15-day loan with an APR of 438%!**
In the fine print of the OPesa terms of service, **we see that late loans are charged at a rate of 2.4% per day, an APR of 876%.**
The only other place we see the real OPesa terms are the app’s FAQ page, which contradicts the terms presented in the app description and the loan application process.
3. **OKash (Kenya)**
And here is the app description for Opera’s OKash, showing that the maximum interest rate is 24% per year with no origination fees.
When it came time to apply, the app first suggested that our consultant could get a loan for 28-70 days, at an implied APR of 84% through origination fees:
But then the actual loan term for our consultant ended up being 2 weeks, at an APR of 365%.
These rates were also corroborated by our email exchange with OKash support shown earlier, which confirmed that the actual interest rate is approximately 1% per day (365% APR), charged in the form of an up-front origination fee.
According to the app’s Terms of Service, late users are charged 2% per day if late, an APR of 730%!
4. **CashBean (India)**
Lastly, Opera’s CashBean description suggests a “maximum” APR of 33%.
Once our consultant began the loan application process by inputting their basic information, the app once again lured them in with a suggestion that they could get a loan “duration up to 61 days”:
Our consultant was unable to procure a loan, but we found numerous user reviews that claimed 15 day loan terms at 15%, which doubles if late (365%-730% APRs). Here are two recent examples:
We also learned of a ruthless OKash and OPesa collection policy that was only recently changed as of June 2019, according to a former employee.
If a user was late to repay, the app had previously indiscriminately texted or called contacts in the user’s phone as part of loan collection efforts. This process began immediately after a loan repayment was delayed, according to user reviews.
Numerous users reported that friends, family, employers, and other contacts were harassed and threatened through Opera’s apps when a borrower was late. A Kenyan news article provided one example of the threatening messages used to elicit payment:
“‘Hello, kindly inform XX to pay the OKash loan of Sh2560 TODAY before we proceed and take legal action to retrieve the debt,’ says the text message the service provider sends to people in one’s contact list.”
This type of public shaming and pressure obviously created devastating social consequences for the borrower. See several examples below from user reviews:
In another example, the apps threatened to place friends or family of a borrower on a national credit blacklist if they didn’t convince the actual borrower to pay:
Additionally, complaints about violation of privacy were found on social media, like Twitter:
We reached out to two Kenyan credit agencies (CRB’s) to ask whether it is actually possible to blacklist a person for simply being a contact in a person’s phone who owes money and have not heard back.
A former OKash employee told us that this practice has been discontinued as of June 2019 “because it was said it was illegal.”
These mass-default lending products can have crippling societal consequences.
For example, several OKash employees described to us a treacherous cycle of fees, which in Kenya can result in the borrower being loaded with debt and *unable to get a job*. Most government and corporate employers require a certificate from the Kenyan credit agencies, known as Credit Reference Bureaus (CRBs), showing good credit. In the absence of such a certificate, many Kenyans are rendered unemployable.
One former OKash employee told us:
“Before you take a loan, they usually ask for your ID number. And for us, using your ID number – there’s usually something called CRB, Credit Reference Bureau…after 30 days they usually forward names of people who haven’t paid to the CRB – the Credit Bureau is where now the Kenyan government can come to ensure that most people pay their loans.”
She continued:
“You cannot get a job if you have a negative CRB – credit score or something. So that’s how businesses ensure that you pay back.“
When we asked another OKash employee about the risks facing the business he said:
“University students. Most of them they are getting into loans. And most of them they don’t have even stable income. That’s a very risky part for the business because you end up having a big number of defaulters of which in the end its very young people who are not yet employed at any point.”
He later described the consequence of these defaults:
“When you are applying for a job in Kenya you need CRB certificate. So that’s also a very major concern…the main problem again, again the issue I was telling you about, the young people, the age group that is around from 18 to around 28…Those are the most guys who are in CRB right now.”
We were also told of how common it is for university students to download multiple lending apps and borrow from each of them to pay off loans from the other, effectively running thousands of mini Ponzi schemes in order to (temporarily) avoid default while they struggle to pay exorbitant interest rates and afford basics.
Opera’s sudden pivot to “micro finance” is not Chairman/CEO Zhou’s first foray into short-term lending or listed companies on US exchanges. Zhou was a director of Chinese lending business Qudian (NYSE:QD) from February 2016 to February 2017, before it went public [Pg. 212]. Kunlun, a China-based company that Zhou controls, was one of the largest investors in the company and owned 19.7% of it when it went public [F-1 Pg. 8].
Qudian raised $900 million in its IPO in late 2017, the biggest U.S. listing ever by a Chinese Fintech firm. **The company IPO’d at $24 and has already cratered to about $4.35 as of this writing, a more than 80% decline in a little over two years.**
According to a detailed class action lawsuit, Qudian was alleged to have engaged in flagrantly illegal and deceptive lending practices. The parallels to Opera’s current business are striking. The second amended class action complaint against Qudian alleged, among other things:
1. The company falsely stated that it exited making loans to college students in response to a ban by the Chinese government:
“Qudian nevertheless continued to actively operate and promote its business of lending to college students up to the IPO and continued these illegal practices even after the IPO.”
2. The company “employed illegal and unethical means in violation of Chinese laws and regulations, such as threatening students and calling their teachers, parents, or spouses to exert pressure on the borrowers. **A former company employee reported that the methods used were so humiliating to young vulnerable students that at least one committed suicide.”**
3. The company lent at exorbitant and illegal rates:
“The Company had charged or attempted to charge overdue borrowers a daily interest rate as high as 5% as “penalties” for overdue loans, which, on an annualized basis is 1,825%, i.e., 76 times higher than allowed under the Chinese law.”
The allegations have not been proven and the lawsuit is still pending before the court.
Beyond the obvious similarities to Opera’s new predatory lending business, Qudian’s financial metrics also show parallels. The company had periods of massive revenue growth (at one point almost 500% y/y) yet its delinquency metrics, competition, and partner/regulatory hurdles ultimately have begun to catch up with it.
**Once again, it is easy to give money away** – the hard part is making sure it comes back (particularly when lending to the world’s most disadvantaged at sky-high rates).
We think Opera’s new ‘high growth’ lending business should raise alarm bells for investors. Opera’s willingness to engage in deceptive, predatory lending to some of the world’s most vulnerable people should say something about the approach embraced by management.
As we were told by a former OKash employee in Kenya, the “issue is repayment”:
“Now the issue is repayment. All these citizens are usually able – they can borrow, but most of them don’t have the ability to pay back. Most people who are borrowing from the app – most people are not employed. So when you’re dealing for example – when I was there – when I used to call people during time for repayment, most of them would tell you that they’re not working, so it’s a 50/50 kind of market.”
If Google becomes active about enforcing policies against deceptive and harmful short-term loan apps, we think Opera’s apps are prime examples. We have reached out to Google for comment on these apps and whether they violate its terms and will update this report should we hear back.
Management’s pivot to misleading and deceptive lending to the world’s most disadvantaged raises the question of what other behavior the company condones.
Regarding Opera’s “OKash” Kenyan lending app, we see that the company paid $9.5 million for an entity that Hong Kong records show was 100% owned by the Chairman/CEO, despite company disclosures stating otherwise.
In December 2018, Opera paid $9.5 million to acquire a Hong Kong entity called Tenspot Pesa Limited, which actually owned OKash. [Pg. 92]
Opera stated that it had acquired Tenspot from OPay, a separate company that Opera has a 19.9% stake in. Per the company’s annual results:
“In late December, Opera acquired OKash from Opay for a consideration of $9.5 million.”
This disclosure made sense given Opera’s other public statements on OKash. When Opera initially announced the launch of OKash in March 2018 (about 9 months before Opera acquired it) it stated that it was launched by “OPay, the FinTech company part of the Opera Group.”
We pulled the Hong Kong corporate records for Tenspot. **The annual report filing from just 3 weeks prior to Opera’s acquisition states that OPay did not own the entity. Rather, Opera’s Chairman/CEO Yahui Zhou owned 100% of the entity:**
In fact, every Hong Kong corporate document we found showed that the entity was owned 100% by Chairman/CEO Yahui Zhou – from inception until the acquisition by Opera. Here are the articles of association for Tenspot, around the time of its creation in November 2017, showing the same:
How could Tenspot be wholly owned by 2 separate parties at the same time?
We reached out to Opera’s investor relations to confirm whether it bought OKash from OPay (as company disclosures stated), or from Chairman/CEO Zhou directly, as Hong Kong entity records state. The investor relations rep wrote that he was “not sure on the specifics of how the entity was set up” but nonetheless re-affirmed “Okash was purchased from OPay (so OPay received the cash not Yahui Zhou).”
Opera’s investor relations contact also wrote in relation to the OKash transaction that “The business was basically a license when Opera took over”.
This makes sense. Hong Kong corporate records show that Tenspot was created in late 2017, just months before the OKash app was launched, suggesting that the entity was created for the purpose of owning OKash or perhaps a license for OKash to operate.
But as part of the disclosures around the transaction we see that Opera had actually lent at least $2 million dollars to Tenspot, thus it appears to have been funded by Opera since the beginning. [Pg. F-55]
Opera also looks to have *operated* OKash since the beginning. An Opera filing dated 5 months prior to the transaction [F1 Pg. 51] shows that the company owned 80% of the local operating entity for OKash, O-Play Kenya Limited [pg. 52]:
In other words, Opera funded an entity that it then later purchased for $9.5 million, in order to own a business that it already operated. This strikes us as peculiar.
If the Hong Kong corporate records are correct, and our suspicion is that they are, the “acquisition”, once unpacked, appears to simply be a cash withdrawal by Chairman/CEO Zhou, from the public company and its shareholders.
On November 5th, 2018, Opera announced it had invested $30 million in cash into Chairman/CEO Yahui Zhou’s private karaoke app, StarMaker. The investment gave Opera a 19.35% stake in Zhou’s company, valuing the app business at about $155 million.
Zhou had acquired StarMaker in late 2016 for an undisclosed sum. StarMaker’s financials have not been disclosed, but Opera’s most recent 20-F states that it was generating losses [Pg. F-56]. We asked Opera’s investor relations about StarMaker’s financials and they replied that StarMaker was growing revenue and is now profitable.
The app looks to be fairly popular, boasting over 50 million users largely in India, Indonesia, and the Middle East. Nonetheless, investors may wonder: what on earth does a karaoke app have to do with the strategic long-term success of Opera’s browser business (or even with its predatory lending business)?
Opera’s investor relations told us there is “No integration today with Opera and right now viewed as an investment (versus strategic).”
Keep in mind that Opera’s July 2018 IPO raised $107 million in net proceeds [Pg. 1], so the investment into Zhou’s StarMaker app and the investment into the OKash entity represented about 37% of its newly raised cash, out the door, within just about 5 months.
The StarMaker deal also included “an option to increase its ownership to 51% in the second half of the year 2020”, which we estimate would translate into another $49 million cash investment, assuming the valuation remains constant.
We hope the company discloses StarMaker’s financials to investors before Zhou makes an executive decision on that investment.
StarMaker ran into some immediate controversy following the Opera investment. In 2018, prior to Opera’s investment, StarMaker appears to have partnered with a crypto ICO called ICST. The ICST token (short for Individual Content and Skill Token) was intended to give content creators a better ability to monetize their intellectual property via the blockchain.
StarMaker was the first partner for the ICO, and StarMaker owner (and Opera Chairman/CEO) Yahui Zhou was the ICO’s key investor/advisor.
The plan was for ICST to transform StarMaker’s revenue model into a tokenized business, and then later branch out into other apps. Yahui was quoted as having a grand vision for the project:
“I see an opportunity to make a great investment, disrupt an entire industry and help creators earn what they deserve.”
ICST’s whitepaper detailed the partnership with StarMaker and described how its revenue model would be reliant on the new token.
The backing of Zhou and StarMaker helped the ICO raise an estimated $2.5 million USD by June 2018. Opera made its $30 million investment into StarMaker several months later, on November 5th. Four days after Opera’s investment, the CEO of ICST was arrested, with the corresponding DoJ indictment alleging he had stolen ICST funds, among other misdeeds.
See count 8 from the indictment alleging illicit transfers of ICST funds in August, months prior to Opera’s investment:
At the time Opera announced its investment, there was no disclosure of any missing ICO funds or any public signs of trouble with the partnership. The token is now valued at zero.
Token buyers have claimed they were cheated by Yahui Zhou and that StarMaker should make good on their losses. Zhou has stated that the claims are baseless and has subsequently distanced himself from the project. Opera’s most current financial statements have begun to disclose the potential for litigation relating to the StarMaker crypto currency partnership. [Pg. S-25]
All told, we find the nature and timing of Opera’s $30 million related-party karaoke app investment to be unusual. At best, it raises questions about the Chairman/CEO’s judgement relating to this crypto karaoke misadventure.
Usually, when a company wants help with its marketing, it hires a marketing company.
Contrary to what’s typical, Opera has instead directed over $31 million of marketing cash to another of Chairman/CEO Yahui Zhou’s related companies, 360 Mobile Security.
Opera has had a marketing relationship with 360 Mobile since mid-2016, when a deal to acquire Opera by current management was already in the works. The agreement called for 360 Mobile Security to negotiate and manage its advertising/media services.
360 Mobile Security describes itself as a security company. **We could find no other examples of the company acting as an advertising agency. We reached out to 360 Mobile to ask whether it had any other marketing clients and have not heard back as of this writing.**
The original service agreement had billed Opera at an annualized rate of about $10 million.
That rate seems to have stepped up considerably as of late. Two months after Opera’s IPO, flush with investor cash, an amended agreement called for a prepayment of $10 million to 360 Mobile Security.
**That prepayment has steadily increased, with $18.4 million in prepayments due to the related party as of Opera’s September 2019 secondary offering prospectus [Pg. F-17]:**

**The same prospectus indicates that Opera has paid out almost $13 million of marketing and distribution expenses to 360 Mobile Security in the first half of 2019.**
We find this combined $31 million in cash out the door in expenses and prepayments concerning, and it raises further questions for us about Opera’s cash payments to related parties.
Beyond questioning Opera’s related party transactions, we also noticed some issues with the company’s reported financials.
When numbers from prior periods start to suddenly move around without explanation, it suggests there could be an internal controls issue. Most companies that restate financials provide detail on restatements in order to assure investors that any mistakes or issues won’t happen again.
Opera has seemingly taken a different approach. Over the past several quarters, the company has apparently restated past financials without disclosing why. The result has been that year over year and quarter over quarter numbers have appeared better than otherwise.
Take, for example, the most recent Q3 2019 quarter. Opera reported that Q3 2018 revenue was $42.795 million:
But when we checked the Q3 2018 numbers reported at the time, **we see that revenue had actually been $44.7 million. Where did the other ~$2 million go?**
In another example, in its Q2 2019 report, Opera stated that Q1 2019 revenue had been $49.8 million.
**But when we checked the Q1 2019 numbers reported 3 months earlier, we see that revenue had actually been $51.3 million**:
In both cases, the revising down of the previous period made the year over year and quarter over quarter growth rates more impressive on the headline numbers.
We checked to see if any recent accounting changes could have been applied retroactively and been the source of these silent restatements. The company has adopted several accounting methodology changes as of January 1, 2018, but none appear to influence the revenue numbers from Q1 2018 and beyond. [Pg. F-20]
We contacted Opera investor relations about the quiet restatements. IR had no explanation for the restatement of the Q3 ’18 numbers but said they would get back to us on the specific reason.
The Q2 ’19 restatement was explained as follows:
“We changed our methodology as it related to microlending once we had more data. In Q1 we recognized 100% of late fees. The data showed us that about a minority of late fees are recoverable, so in Q2 we started recognizing only what our historical data showed we could recover.”
We appreciate the answer from Opera, but we think these types of changes should be disclosed to investors without the need for prompting.
We also think this answer shows that the lending segment may be employing aggressive revenue recognition practices. To recognize 100% of late fees as revenue suggests that every late borrower, no matter how impoverished, would be able to pay back late fees at a rate of 730%+ per annum. This is obviously an absurd notion given the massive loan default rates, and makes us question the revenue/default recognition methodologies in the overall segment.
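To make the annualization concrete, here is a rough back-of-the-envelope illustration; the 2% daily late fee is an assumed figure chosen only to show the arithmetic, not a disclosed OKash rate:

$$
2\%\ \text{per day} \times 365\ \text{days} \approx 730\%\ \text{per annum (simple, uncompounded)}
$$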
Opera’s deteriorating legacy business, declining financials (except revenue), bizarre business pivot, and related party transactions suggest to us that we may not be witnessing the miraculous “turnaround” story that the company would want investors to believe.
We believe Opera’s foray into predatory microlending – a business that has already led Qudian shareholders to over 80% losses since the company’s IPO – will result in a tab that will also be coming due soon for Opera shareholders.
We also believe that Google, once it realizes the abuses that it is (likely inadvertently) facilitating, will eventually curtail or eliminate Opera’s lending practices.
Beyond witnessing the ‘microfinance’ playbook in Qudian’s collapse, we’ve also seen a litany of other US listed China based management teams engaging in extensive related party transactions. In many cases, insiders are able to enrich themselves while the result is far less favorable for shareholders.
Put simply: We feel like we’ve seen this opera before – and the final act ends poorly for shareholders.
**Disclosure: We are short shares of Opera**
*Additional disclaimer: Use of Hindenburg Research’s research is at your own risk. In no event should Hindenburg Research or any affiliated party be liable for any direct or indirect trading losses caused by any information in this report. You further agree to do your own research and due diligence, consult your own financial, legal, and tax advisors before making any investment decision with respect to transacting in any securities covered herein. You should assume that as of the publication date of any short-biased report or letter, Hindenburg Research (possibly along with or through our members, partners, affiliates, employees, and/or consultants) along with our clients and/or investors has a short position in all stocks (and/or options of the stock) covered herein, and therefore stands to realize significant gains in the event that the price of any stock covered herein declines. Following publication of any report or letter, we intend to continue transacting in the securities covered herein, and we may be long, short, or neutral at any time hereafter regardless of our initial recommendation, conclusions, or opinions. This is not an offer to sell or a solicitation of an offer to buy any security, nor shall any security be offered or sold to any person, in any jurisdiction in which such offer would be unlawful under the securities laws of such jurisdiction. Hindenburg Research is not registered as an investment advisor in the United States or have similar registration in any other jurisdiction. To the best of our ability and belief, all information contained herein is accurate and reliable, and has been obtained from public sources we believe to be accurate and reliable, and who are not insiders or connected persons of the stock covered herein or who may otherwise owe any fiduciary duty or duty of confidentiality to the issuer. However, such information is presented “as is,” without warranty of any kind – whether express or implied. Hindenburg Research makes no representation, express or implied, as to the accuracy, timeliness, or completeness of any such information or with regard to the results to be obtained from its use. All expressions of opinion are subject to change without notice, and Hindenburg Research does not undertake to update or supplement this report or any of the information contained herein.*
Comments are closed.
Great note. What % do management own? (65?) Have they sold any since IPO? What is the float ex management and top 5 institutional holders?
Wonderful stuff. Assuming Google bans them ASAP, what prevents them from opening a new company under Opera with a different brand and offering the same loans? Google won’t know until someone flags it, and then the same drill repeats.
OPay, Opera’s African fintech startup, has confirmed that the company will shut down some of its businesses. These include its B2C and B2B eCommerce platforms, OMall and OTrade respectively; its food delivery service, OFood; its logistics delivery service, OExpress; as well as its ride-hailing services, ORide and OCar.
Anna Lena
| true | true | true | null |
2024-10-12 00:00:00
|
2020-01-16 00:00:00
| null | null |
hindenburgresearch.com
|
hindenburgresearch.com
| null | null |
36,276,268 |
https://www.thetimes.co.uk/article/inside-wuhan-lab-covid-pandemic-china-america-qhjwwwvm0
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,357,360 |
http://techcrunch.com/2015/04/10/microsoft-drops-a-new-windows-10-build-bringing-the-love-to-lots-more-phones/
|
Microsoft Drops A New Windows 10 Build, Bringing The Love To Lots More Phones | TechCrunch
|
Alex Wilhelm
|
It’s New Build Day for Microsoft kids, as the Redmond software shop dropped new code that will bring Windows 10 to a host of smartphones that were previously not able to handle the beta operating system.
The build, number 10051, brings a host of new features to Windows 10 for phones, including refreshed core applications and the Project Spartan browser that has already cropped up for PCs running Windows 10.
Also in the mix is a new “universal” Maps application. Microsoft is working to bring all of computing under a single roof. That makes the Maps app important: Here is Microsoft showing off how apps will scale from small to large across various screen sizes.
According to the company, the new app “includes the best maps, aerial imagery, rich local search data, and voice-guided navigation experiences from both Bing Maps and HERE maps, integrated together for the first time into a single app for Windows.”
Below is a list of supported phones:
If you want the new build, make sure that you are on the fast release ring. Otherwise, you will have to sit tight.
The development pace of Windows 10 remains rapid. It’s been notable to watch the enthusiasm gap between the current Windows 10 release cycle, and what took place with Windows 8. That’s to say that people seem to care more this time around.
An example of this is my new Twitter nemesis, Gabe Aul from the Windows 10 team, who regularly racks up oodles of favorites for his oracular drippings concerning when new code might, or might not, land:
Riveting. You can check shots of the new build here.
| true | true | true |
It's New Build Day for Microsoft kids, as the Redmond software shop dropped new code that will bring Windows 10 to a host of smartphones that were
|
2024-10-12 00:00:00
|
2015-04-10 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
5,862,990 |
http://arstechnica.com/apple/2013/06/a-critical-look-at-the-new-mac-pro/
|
A critical look at the new Mac Pro
|
Dave Girard
|
Hell finally froze over yesterday and Apple announced a new Mac Pro at WWDC. At first glance, the new machine was as mysterious as it was terrifying to me and many other creative pros who have been waiting for ages for this thing to drop. But now that Apple has a full site page for the new machine and I’ve gotten some info from people familiar with its internals and with OS X 10.9, the Mac Pro has become less of a mystery.
But that’s also what’s freaking us out.
## The design
At 6.6" × 9.9" for its cylindrical stretched aluminum case, the new Mac Pro is tiny, and no other workstation-class Xeon desktop with a discrete workstation GPU—or two, in this case—looks anything like it. You get the feeling that the designers sat around coming up with ideas for the new Mac Pro and said, “If Darth Vader edited video, what would his computer look like?” Well... it would probably look like this:
If you haven’t seen the inside already, it’s a truly amazing bit of engineering, organized in a tube-like shape with a triangular arrangement of the motherboard elements along the exterior walls of the “thermal core,” a unibody-like heatsink that draws heat away from the GPU, CPU, and memory:
Even more unusually, the machine has only one (1!) fan that cools everything, wind-tunnel style:
So the Mac Pro will, I suspect, be a ridiculously quiet workstation as well. This is Apple engineering at its best, and I won’t have any concerns about using this for long sessions of V-Ray rendering or ZBrush sculpting. Detractors will say it’s going to overheat if you do anything serious, but Apple knows these things need to run around the clock for days on end. It didn’t put a dual workstation GPU in there and expect people not to use it extensively. More about that further on.
| true | true | true |
A graphics pro breaks down Apple’s new machine.
|
2024-10-12 00:00:00
|
2013-06-11 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
40,075,764 |
https://arstechnica.com/tech-policy/2024/04/feds-appoint-ai-doomer-to-run-us-ai-safety-institute/
|
Feds appoint “AI doomer” to run AI safety at US institute
|
Ashley Belanger
|
The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation.
Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may be risking encouraging non-scientific thinking that many critics view as sheer speculation.
There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano’s association" with effective altruism and "longtermism could compromise the institute’s objectivity and integrity."
NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible” and longtermists that "we should be doing much more to protect future generations," both of which are more subjective and opinion-based.
On the Bankless podcast, Christiano shared his opinions last year that "there's something like a 10–20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level."
| true | true | true |
Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.
|
2024-10-12 00:00:00
|
2024-04-17 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
23,105,262 |
https://www.air-wave.org/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
41,316,355 |
https://github.com/diffusionstudio/core
|
GitHub - diffusionstudio/core: The Video Creation Engine: Edit videos with code, featuring the fastest WebCodecs renderer for in-browser video processing.
|
Diffusionstudio
|
Diffusion Studio is an open-source, browser-based video editing library that allows developers to automate video editing workflows at scale, build custom editing applications, or seamlessly integrate video processing capabilities into existing projects.
Visit https://docs.diffusion.studio to view the full documentation.
💻 100% **client-side**
📦 Fully **extensible** with Pixi.js
🩸 Blazingly **fast** WebGPU/WebGL renderer
🏎️ **Cutting edge** WebCodecs export
`npm install @diffusionstudio/core`
Let's take a look at an example:
```
import * as core from '@diffusionstudio/core';
const source = await core.VideoSource // convenience function for fetch -> blob -> file
.from('https://diffusion-studio-public.s3.eu-central-1.amazonaws.com/videos/big_buck_bunny_1080p_30fps.mp4');
// create a video clip and trim it
const video = new core.VideoClip(source) // compatible with the File API
.subclip(0, 160); // The base unit is frames at 30 FPS
// create a text clip and add styles
const text = new core.TextClip({
text: 'Bunny - Our Brave Hero',
position: 'center',
stop: 80,
stroke: { color: '#000000' }
});
const composition = new core.Composition(); // 1920x1080
// this is how to compose your clips
await composition.add(video); // convenience function for
await composition.add(text); // clip -> track -> composition
// render video using webcodecs at 25 FPS
// use resolution: 2 to render at 4k
new core.Encoder(composition, { fps: 25 }).render();
```
This may look familiar to some. That is because the API is heavily inspired by **Moviepy** and Swift UI. It models the structure of popular video editing applications such as Adobe Premiere or CapCut. The current state can be visualized as follows:
Whereas each track contains zero or more clips of a single type in ascending chronological order.
A track will be created implicitly with `composition.add(clip)`; however, you can also create them manually like this:
```
const track = composition.createTrack('text');
await track.add(text0);
await track.add(text1);
await track.add(text2);
...
```
You can find more examples here, or give them a whirl on: https://examples.diffusion.studio
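If you want to start from a user-selected file rather than a remote URL, here is a minimal sketch. It reuses only the calls shown in the snippets above (`VideoClip`, `Composition`, `Encoder`) and leans on the note that `VideoClip` is compatible with the File API; the `#picker` input element, the frame counts, and the assumption that a `File` can be passed in directly are illustrative rather than taken from the official docs.

```
import * as core from '@diffusionstudio/core';

// Hypothetical file input on the host page:
// <input type="file" id="picker" accept="video/*">
const input = document.querySelector('#picker');

input.addEventListener('change', async () => {
  const file = input.files?.[0];
  if (!file) return;

  // VideoClip is described above as compatible with the File API,
  // so we pass the selected File directly and keep the first 150 frames.
  const video = new core.VideoClip(file).subclip(0, 150); // frames at 30 FPS

  const composition = new core.Composition();
  await composition.add(video);

  // Encode in the browser via WebCodecs, as in the quick-start example.
  new core.Encoder(composition, { fps: 30 }).render();
});
```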
**Remotion** is a React-based video creation tool that transforms the entire DOM into videos. It's particularly suited for beginners, as web developers can start creating videos using the skills they already have.
**Motion Canvas** is intended as a standalone editor for creating production-quality animations. It features a unique imperative API that adds elements to the timeline procedurally, rather than relying on keyframes like traditional video editing tools. This makes Motion Canvas ideal for crafting detailed, animated videos.
In contrast, **Diffusion Studio** is not a framework with a visual editing interface but a **video editing library** that can be integrated into existing projects. It operates entirely on the **client-side**, eliminating the need for additional backend infrastructure. Diffusion Studio is also dedicated to supporting the latest rendering technologies, including WebGPU, WebGL, and WebCodecs. If a feature you need isn't available, you can **easily extend** it using Pixi.js.
- **Video/Audio** trim and offset
- **Tracks & Layering**
- **Splitting** clips
- **Html & Image** rendering
- **Text** with multiple styles
- Web & Local **Fonts**
- **Custom Clips** based on Pixi.js
- **Filters**
- **Keyframe** animations
  - **Numbers, Degrees and Colors**
  - **Easing** (easeIn, easeOut etc.)
  - **Extrapolation** `'clamp' | 'extend'`
- **Realtime playback**
- **Hardware accelerated** encoding via WebCodecs
- **Dynamic render resolution and framerate**
Contributions to Diffusion Studio are welcome and highly appreciated. Simply fork this repository and run:
`npm install`
Before checking in a pull request please verify that all unit tests are still green by running:
`npm run test`
This project began in March 2023 with the mission of creating the "video processing toolkit for the era of AI." As someone passionate about video editing for over a decade, I saw Chrome’s release of Webcodecs and WebGPU without a feature flag as the perfect moment to build something new.
Currently, most browser-based video editors rely on server-side rendering, requiring time-consuming uploads and downloads of large video files. With Webcodecs, video processing can now be handled directly in the browser, making it faster and more efficient.
I’m excited to be part of the next generation of video editing technology.
✅ Supported 🧪 Experimental ❌ Not supported
| Browser | | Operating System | |
---|---|---|---|
Chrome | ✅ | Windows | ✅ |
Edge | ✅ | Macos | ✅ |
Firefox | ✅ | Linux | ✅ |
Safari | ✅ | ||
Opera | ✅ | ||
Brave | ✅ | ||
Vivaldi | ✅ |
| Browser | | Operating System | |
---|---|---|---|
Brave Android | ✅ | Android | ✅ |
Chrome Android | ✅ | iOS | 🧪 |
Firefox Android | 🧪 | ||
Opera Android | ✅ | ||
Safari iOS | 🧪 |
| | Demultiplexing | Multiplexing |
---|---|---|
Mp4 | ✅ | ✅ |
Webm | ✅ | ❌ |
Mov | ✅ | ❌ |
Mkv | ❌ | ❌ |
Avi | ❌ | ❌ |
| | Decoding | Encoding |
---|---|---|
Avc1 | ✅ | ✅ |
AAC | ✅ | ✅ (Chromium only) |
Opus | ✅ | ✅ |
Wav | ✅ | ✅ |
Hevc | ✅ | ❌ |
VP9 | ✅ | ❌ |
VP8 | ✅ | ❌ |
Mp3 | ✅ | ❌ |
Ogg | ✅ | ❌ |
| true | true | true |
The Video Creation Engine: Edit videos with code, featuring the fastest WebCodecs renderer for in-browser video processing. - diffusionstudio/core
|
2024-10-12 00:00:00
|
2024-07-30 00:00:00
|
https://opengraph.githubassets.com/0459fc07507981414d2d041c0892fd421416d670832c3055b22f789bae6b81c3/diffusionstudio/core
|
object
|
github.com
|
GitHub
| null | null |
8,735,733 |
http://techcrunch.com/2014/12/11/the-da-vinci-1-0-aio-is-the-future-of-all-in-one-3d-printers/
|
The da Vinci 1.0 AiO Is The Future Of All-In-One 3D Printers | TechCrunch
|
John Biggs
|
As we enter the second half of this, the Decade of 3D Printing, we are coming to a crossroads. On one hand the Rebel open source RepRap crowd are clamoring to keep 3D printing free, man, while the Imperial forces of 3D Systems and Stratasys – along with countless imitators all attempting to commercialize 3D printing and create the first popular home printer – are locked in a race to the bottom in order to gain market share and users. The resulting dichotomy pits amazingly advanced DIY printers that sometimes explode into a gush of melted plastic and sadness against amazingly advanced proprietary printers that also sometimes explode into a gush of melted plastic and sadness. The XYZPrinting da Vinci 1.0 AiO is firmly on the latter side.
The AiO is a closed box that contains a full ABS 3D printing system as well as a laser 3D scanner. A turntable under the built platform spins objects slowly as a laser takes in their contours and the resulting objects can be printed directly from the scanning software. It is literally a 3D copier with true object-in/object-out systems. In short, it is a Star Trekian replicator – within reason.
First, let’s take a moment to marvel at what this thing truly is. You can place an object into it and make a 3D copy of that object. If you really think about what that means, you realize that we have moved from the age of bits into the age of atoms. While the AiO might not be the best 3D printer in the world, it does bring 3D copying into your home or office. Let that sink in. A few years ago that was deemed impossible, the realm of science fiction. But no longer. But that’s not the most amazing thing. The most amazing thing about this printer is its $799 price tag. That’s right: $799 gets you a 7.8 × 7.8 × 7.5 inch build envelope in ABS as well as a 3D scanner. A good color laser printer cost that much in 2013.
But how does it work? Everything about the AiO is adequate. The prints are surprisingly smooth and detailed. A 3D print test I ran (below) passed with flying colors and a Mario star tree topper I printed looked like it could come out of the Nintendo Store. There was no clean-up – the printer prints onto a heated glass surface that is pre-calibrated to ensure excellent prints – and the machine is nearly silent except for the muffled motion of the motor and a small fan. I had no complaints regarding the printing process either although the software was a bit buggy on the Mac.
The scanner was good but required planning. Scanning shiny objects is not recommended and even some detail is lost on matte objects. I scanned a few objects using the machine including a matte plaster gargoyle and a porcelain elephant. You can check the gargoyle out here but the elephant didn’t make the cut. A little lion statue, however, looked great except for some missing pixels around the head. The results, while not perfect, were just fine for printing. Like the photocopiers of old, the quality of the 3D copies that come out of this machine is lacking. I can only imagine what would happen if I printed a copy of a copy of a copy. Perhaps I’d create the first 3D zine?
Put these two amazing features together and you get something truly special. Be forewarned, however: the AiO is actually huge, probably twice as big as a Makerbot and a little bigger than a home laser printer. It’s also limited in a few important ways.
When the AiO worked well, it was miraculous. Objects printed onto the glass substrate without sticking and came up like magic. If you’re familiar with 3D printing, trying to dig a plastic part off of a stubborn plate is disturbing at worst and impossible at best. These objects seemed to just slide off like cookies off of a Teflon cookie sheet. When trying to print the gargoyle, for example, the print failed spectacularly. Filament balled up into a smoky lump and started to stink. The plastic melted all over the nozzle and the resulting clog required a lot of digging with small tweezers to clear. Because the entire machine is inside a closed box, access to the print head is limited. This was a testament to the direction 3D printing is heading – all-in-one ease with proprietary consumables – as well as many of the pitfalls. Most hobbyists will bristle at having to deal with a hermetically sealed case and filament cartridge but, as HP and other printer makers well know, the money isn’t in the printer, it’s in the ink.
Therein lies the rub. The AiO uses a 1.75mm ABS filament but requires a special cartridge. This isn’t any ordinary box, however. Inside is a tiny EEPROM that tells the printer how much filament is left in the cartridge and, most important, prevents you from refilling the cartridge on your own. You can hack the cartridge to read “full” again. While the 600g cartridge costs a mere $30, it would still be nice to use your own filament if you have it. This requirement is the first inkling that we are entering an odd new world of DRM-protected 3D printing.
However, if you can accept the proprietary filament and/or are ready to refill the filament cartridges when (and let’s face it, this will probably happen) XYZPrinting stops making these cartridges or goes out of business, you might be in luck. You could also just wait for a more open 3D printer model that uses standard filament and offers slightly better scan quality, but for $799 you might be waiting for a while. In short the AiO is a fascinating, inexpensive, and impressive piece of technology that is well worth looking at if you’re into 3D printing and want to give it a try.
| true | true | true |
As we enter the second half of this, the Decade of 3D Printing, we are coming to a crossroads. On one hand the Rebel open source RepRap crowd are
|
2024-10-12 00:00:00
|
2014-12-11 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
40,091,769 |
http://news.bbc.co.uk/2/hi/uk_news/magazine/8487106.stm
|
The rise and fall of the waist
|
Elizabeth Diffin
|
By Elizabeth Diffin
BBC News Magazine
**The way a man wears his trousers may reveal his age, research says. But when it comes to waistband placement, history shows there is no golden rule.**
You no longer have to eye his hairline to determine a man's age. There's a new way to figure out just how old he is: take a look at his beltline. A survey from department store Debenhams (illustrated below) suggests that a man's waistband rises and falls throughout his life. Trousers bottom out at the age of 16 with below-the-hip styles and peak at 57, just seven inches below the armpit. Young boys may wear their trousers at their natural waist while being dressed by their parents, but they generally don't return to this style until they reach their late 20s.
Fashion history shows this seesaw isn't such a new thing - waistlines have been bouncing up and down for hundreds of years. In Henry VIII's time, men wore trousers called "cannons", whose bulkiness around the thigh drew the eye. The first true trousers in Western Europe - pantaloons - were high-waisted and used light-coloured fabric to elongate a man's figure. The invention of elastic braces in the 1840s meant that trousers continued to be kept hiked up, although waistcoats prevented waistbands from being seen. But even with the waistbands hidden from prying eyes, this ushered in a problem that continues until today: Men don't know where to wear their trousers.

**Ups and downs**
"Historically, braces are used to keep up trousers and undergarments," says Andrew Groves, course director of fashion design at the University of Westminster. "They hold the trouser so it doesn't really touch the body." By the turn of the 20th Century, with the advent of baggier lounge-style suits, the waistline dropped, ushering in a century of yo-yoing waistlines. "As fashion has inevitably speeded up, the waistband has shifted up and down seasonally," says Shaun Cole, the principal lecturer in history and culture at the London College of Fashion. In particular, Alexander McQueen's "bumsters" (revealingly low-cut trousers) and hip-hop music in the 1980s and 1990s influenced people to wear their trousers on their hips
or even lower.
"Young people of today are used to having boxer shorts hanging out," Mr Groves says. "They think they're wearing [their trousers] normally, but they're actually on the hip." That perception of where trousers should sit is at the root of the mockery of Simon Cowell, Britain's face of the so-called natural waist. But according to the Debenhams study, Mr Cowell, at age 50, is a bit old to wear his trousers there. "It just looks odd," Mr Groves says. "[But] it wasn't like he was making a fashion statement." Fashion tradition is also on Mr Cowell's side. It dictates that trousers be worn on the "natural waist", the narrowest part of the body between the chest and hip. Most suits are designed for the natural waist or slightly below, but jackets hide that fact. Mr Cole says that a man's body shape determines where he views his "waist" to be. Men entrenched in the gym and fitness culture may be hyper-aware of the natural waist. With a rise in obesity, overweight men may not know whether to wear them above or below their stomach. This confusion is also reflected in the Debenhams' survey, with research into what the clothing industry calls Under and Over Achievers. Although most men would prefer to fasten the waistband over their natural waist, the survey shows that 20% of older men will ignore their changing body shape and wear their trousers below their natural waistline, rather than buying a larger size.
Women led the way in low-cut styles like Alexander McQueen's bumsters
Many men may simply follow what their friends and acquaintances are doing, says Mr Groves, or follow a shop assistant's instructions. They also can be influenced by the fashions of their personal coming-of-age period and then carry them throughout life. Mr Groves has a hard time believing the waistband trends will continue indefinitely. "The idea of old people walking around with their pants hanging out is not pleasant," he says. "Are people suddenly going to pull their trousers up? I don't think so." Although there is no single rule for where a man's waistband should sit, Mr Groves offers a simple test: Don't expose your socks or your belly. "Pay attention to where [your trousers] join the rest of the body," he says. And to those who bemoan living in a country of lowering waist lines, he offers some encouragement: what goes around comes around. Fashion tends to be cyclical, so high waists may not be gone for long. Just give it another 10 years.
**Below is a selection of your comments.**
I am a 49 year old with a 30" waist and struggle to buy trousers off the peg that size. I am often directed to the boys section of shops, what are males waistlines coming to?
**Dave, Solihull** Strange comment about showing socks though. Anyone who sees something from the 1960s will see that there was always a 1" gap between trouser and shoe to match the 1" gap between cuff and shirt. Bond does it all the time. That is something that has vanished completely, but I think looks pretty smart.
**Peter, London** The height of one's waistband is not dependent on your age, as much as on the fashion of the day. In my grandfather's day, trousers such as the Oxford Bag were cut with a high waistband - he persisted in wearing what he found smart and comfortable. Fashion tradition dictates nothing: it merely reflects the average waistband. High-waisted trousers can be extremely flattering - although they are barely seen these days except amongst vintage enthusiasts or in military formal wear. Today I happen to be wearing high-waisted 1930s cut jeans - and they're great.
**PJ Ayres, London** I don't think it has anything to do with age at all - it all depends on the belly issue. There comes a time in a man's life when he has to make the decision that he will stick to until the day he dies - am I an up-and-over or an up-and-under? Whereas my Dad & my brother both chose the up-and-over camp, my hubby has gone to the up-and-under. I guess it all depends on where you want the belly bulge to be seen - in your shirt or in your trousers.
**Lily** This reaffirms my suspicion that I'm a middle-aged man in a 20-something's body. For a while, I've lamented the trouser styles widely available for my age that are to sit on (or slightly below) the natural waist. I find that by constantly sitting and getting up my shirt rides out and needs to be tucked in time and time again. If so many people would prefer a higher waistline, why are they so hard to find in shops. Perhaps a return to kidney-warmers and braces is too much to hope for?
**Rhys, Carmarthenshire** Sadly your graph is wrong: no line for the *below* the bum cheeks waistline.
**AndrewM, London** Trousers are made with a certain distance from where the legs join to the waistband and a certain length of leg.. You can't pull them up any further than this distance unless you want to be a soprano, or you can wear them lower. If the leg to waist is large and you wear them at your waist then the point of legs will be halfway down your thighs. It's nothing to do with your age, it's to do with the fashion of the trousers when you bought them and how long they last. You can't buy trousers that button round your nipples like you saw old men wearing 30 years ago, and they bought them years before that. It's like flat front and pleats, and straight and angled pockets. Fashion designers decree that something is not happening not the people who buy them.
**Robert, Glos UK** What makes me laugh is that everyone I've asked who wears their trousers down low say they do this to be different. But they all look the same.
**Mik Hatcher, Rochester** Nonsense. You get no choice in what's offered these days - and if that leaves a gut above the belt, that's life.
**Rahere, Smithfield** Big belly? Wear 'em high with a waistcoat or sweater or tank top or something to cover the waistband. A belly hanging over a waistband never looks good and can never be "fashionable". The cardinal crime IMO is not the relative height of the waistband, it's whether the inside leg has been correctly judged. People, men especially, tall men worst of all, who wear their trousers floating a few inches above their shoes and socks should be ritually de-bagged in the street and forced to wear their trousers on their head for such a grievous style crime. Unless your trousers are rolled up, if you can see your socks in the mirror when you are wearing trousers and standing up... don't. Go and buy a pair that fit properly!
**Richard, London**
| true | true | true |
The way a man wears his trousers may reveal his age, research says. But when it comes to waistband placement, history shows there is no golden rule.
|
2024-10-12 00:00:00
|
2010-01-29 00:00:00
| null | null | null |
BBC
| null | null |
9,412,999 |
http://blog.venturepact.com/7-project-management-tools-having-a-mammoth-user-base-of-70114000?utm_campaign=Content%20Curation%20Networks&utm_medium=social&utm_source=hackernews
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,820,187 |
https://www.theverge.com/2018/4/11/17226416/reddit-ceo-steve-huffman-racism-racist-slurs-are-okay
|
Reddit CEO says racism is permitted on the platform, and users are up in arms
|
Nick Statt
|
Reddit CEO Steve Huffman has found himself once again embroiled in a controversy surrounding his website’s policy on moderation. In a Reddit thread announcing the platform’s 2017 transparency report findings, in which Reddit identified and listed close to 1,000 suspected Russia-linked propaganda accounts that have been banned, Huffman replied to a straightforward question about the company’s rules around hate speech, which is a verbal attack based on race, religion, or another protected class.
“I need clarification on something: Is obvious open racism, including slurs, against reddits rules or not?” asked Reddit user chlomyster. “It’s not,” Huffman, who operates on Reddit under his original handle “spez,” responded.
Huffman elaborated on his point, adding:
“On Reddit, the way in which we think about speech is to separate behavior from beliefs. This means on Reddit there will be people with beliefs different from your own, sometimes extremely so. When users actions conflict with our content policies, we take action.”
Our approach to governance is that communities can set appropriate standards around language for themselves. Many communities have rules around speech that are more restrictive than our own, and we fully support those rules.
It’s a controversial approach, to say the least, and it has many Reddit users outraged that communities like the Trump-centric r/The_Donald are allowed to walk up to and over the line of racism repeatedly without any site action. Many Reddit users responded to Huffman by pointing out that hate speech does constitute behavior in a way, and that communities like r/The_Donald directly participated in the conversation and organizing of events like the Charlottesville, Virginia, white supremacist rally that resulted in Heather Heyer’s death. This conversation around Reddit’s light moderation has been simmering for quite some time, boiling over most recently last month when the company discussed its approach to Russia propaganda.
Huffman's position here is an evolving one. Nearly a decade ago, Huffman’s approach to hate speech mirrored that of other major social media platforms today, which is to ban it except in extremely narrow or uniquely circumstantial situations. For instance, Facebook’s policies on hate speech are well-documented, and saying something racist will typically lead to some type of disciplinary action. Other platforms like Twitter, YouTube, and Instagram all have hate speech policies as well that can result in suspensions or bans.
Huffman’s approach to hate speech has evolved over the last 10 years
“I guess I’m a little late to the party, but I banned him. We rarely ban non-spammers, but hate-speech used in that context is not something we tolerate,” Huffman wrote in a thread nine years ago about banning a user over hate speech. “This isn’t any change in policy: we’ve always banned hate speech, and we always will. It’s not up for debate. You can bitch and moan all you like, but me and my team aren’t going to be responsible for encouraging behaviors that lead to hate,” Huffman wrote in response to another user in the same thread.
Yet, when Huffman took over in 2015 for interim CEO Ellen Pao, who was pushed out of her position in part due to the platform’s toxic and vehement opposition to Pao’s leadership, his approach to hate speech had shifted. “While my personal views towards bigotry haven’t changed, my opinion of what Reddit should do about it has,” Huffman wrote on the topic nearly three years ago, a few weeks after he had returned to lead the company. “I don’t think we should silence people just because their viewpoints are something we disagree with. There is value in the conversation, and we as a society need to confront these issues. This is an incredibly complex topic, and I’m sure our thinking will continue to evolve.”
Reddit still takes hardline stances on calls to violence, threats, doxxing, and other activities that may lead to real-world harm. But Huffman has often been wishy-washy on moderating the more complex gray areas in between innocuous content and those extreme examples. That’s where hate speech, which is not illegal in the US, thanks to First Amendment protection, typically falls. For instance, in 2015, Reddit banned the fat-shaming community r/fatpeoplehate and the openly racist community r/coontown. Infamous situations prior to that included the banning of the community sharing leaked celebrity nude photos and a community dedicated to sharing so-called “creepshots” of underage girls.
Reddit takes action against some communities only when it seems it has to
More recently, Reddit took action against the artificial intelligence-generated fake porn community r/deepfakes as well as a handful of alt-right subreddits and Nazi boards. But each time it does this, Reddit cites a specific rule like the use of violent speech, doxxing, or the sharing of non-consensual pornography.
When it comes to raw speech, however, Huffman seems to be more permissive, which stands in stark contrast to other tech industry platforms, nearly all of which are grappling with hard questions about moderation these days. Just this week, Facebook CEO Mark Zuckerberg was grilled by Congress over the ongoing Cambridge Analytica data privacy scandal, and he was asked numerous questions about how the company plans to handle hate speech on its platform. The matter for Facebook is especially pressing, as ethnic violence in Myanmar has erupted, thanks in part to organizing and the spreading of propaganda on the social network.
Facebook’s approach seems largely centered on AI. Zuckerberg says his company is increasingly looking to automated algorithms that parse text, photos, and videos to do the work even tens of thousands of human moderators cannot. That work decides whether involving a piece of content breaks the company’s policies around fake news, hate speech, obscenity, and other inadmissible forms of content.
Huffman thinks banning speech won’t make it go away
Reddit’s approach, on the other hand, seems to be focused less on sweeping rules and more on case-by-case evaluations. That won’t do very much to calm critics who want it to ban communities like r/The_Donald or make the use of racial slurs a punishable offense. Huffman seems to take the free speech absolutism approach of letting sunlight disinfect the world of extremist viewpoints and bigotry, or to offload the work to subreddit admins and site moderators when applicable.
Yet that approach falls apart when it becomes inconvenient for Reddit as a company, like in the presence of a legal and PR nightmare resulting from letting neo-Nazis or illegal pornography run rampant on the site. As many Reddit users pointed out to Huffman in responses to the thread, a study published last year on the banning of r/fatpeoplehate and r/coontown, titled “You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban,” showed clear positive effects of banning hateful communities.
“Many more accounts than expected discontinued their use of the site; and, among those that stayed active, there was a drastic decrease (of at least 80 percent) in their hate speech use,” the study authors concluded. “Though many subreddits saw an influx of r/fatpeoplehate and r/coontown ‘migrants,’ those subreddits saw no significant changes in hate speech use. In other words, other subreddits did not inherit the problem.”
Banning hateful groups helps clean up communication platforms
Whatever Huffman’s evolving approach on the topic, it’s Reddit users that seem to be the most directly affected by the proliferation of hate speech on the platform.
“Spez, what qualifies as bannable hate speech to you? Because I kinda wonder if you’d be able to justify allowing some of the things on your platform that you do allow on your platform in front of Congress,” wrote user PostimusMaximus. “Zuckerberg is sitting over here getting grilled for not removing hate-speech fast enough due to AI limitations and yet you find yourself passing hate speech off as okay because you think it’s not a dangerous thing to allow on your platform or because you expect t_d [r/The_Donald] to self-moderate and hopefully if they troll long enough they’ll die out on their own.”
“I think aside from Russian interference you need to give a thorough answer explaining what the logic is here,” the user added, linking to specific Reddit threads filled with anti-Muslim hate speech on the Trump-centric subreddit. “You are literally letting users spread hate speech and pretend it’s politics in some weird sense of free speech as if it’s okay and nothing bad is happening.”
**Update 4/12, 1:33PM ET:** *Following publication of this story, Huffman issued a follow-up statement clarifying his position on racism and expanding on his views about Reddit moderation. Here is the statement in full:*
In the heat of a live AMA, I don’t always find the right words to express what I mean. I decided to answer this direct question knowing it would be a difficult one because it comes up on Reddit quite a bit. I’d like to add more nuance to my answer:
While the words and expressions you refer to aren’t explicitly forbidden, the behaviors they often lead to are.
To be perfectly clear, while racism itself isn’t against the rules, it’s not welcome here. I try to stay neutral on most political topics, but this isn’t one of them.
I believe the best defense against racism and other repugnant views, both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation, and empower our communities to do so on Reddit.
When it comes to enforcement, we separate behavior from beliefs. We cannot control people’s beliefs, but we can police their behaviors. As it happens, communities dedicated racist beliefs end up banned for violating rules we do have around harassment, bullying, and violence.
There exist repugnant views in the world. As a result, these views may also exist on Reddit. I don’t want them to exist on Reddit any more than I want them to exist in the world, but I believe that presenting a sanitized view of humanity does us all a disservice. It’s up to all of us to reject these views.
These are complicated issues, and we may not always agree, but I am listening to your responses, and I do appreciate your perspectives. Our policies have changed a lot over the years, and will continue to evolve into the future. Thank you.
| true | true | true |
Reddit’s Steve Huffman clarifies his more radical approach to free speech on the internet
|
2024-10-12 00:00:00
|
2018-04-11 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
28,353,162 |
https://jamanetwork.com/journals/jama/fullarticle/2783690
|
Body Mass Index Among Children and Adolescents During the COVID-19 Pandemic
|
Susan J Woolford; Margo Sidell; Xia Li; Veronica Else; Deborah R Young; Ken Resnicow; Corinna Koebnick; MD; MPH
|
The COVID-19 pandemic has been associated with weight gain among adults,1 but little is known about the weight of US children and adolescents. To evaluate pandemic-related changes in weight in school-aged youths, we compared the body mass index (BMI; calculated as weight in kilograms divided by height in meters squared) of youths aged 5 to 17 years during the pandemic in 2020 with BMI in the same period before the pandemic in 2019.
We conducted a retrospective cohort study using Kaiser Permanente Southern California (KPSC) electronic health record data. Youths between 5 and 17 years with continuous health care coverage were included if they had an in-person visit with at least 1 BMI measure before the pandemic (March 2019-January 2020) and another BMI measure during the pandemic (March 2020-January 2021 with at least 1 BMI after June 16, 2020, ie, about 3 months into the pandemic). Youths with complex chronic conditions were excluded.2,3 Race and ethnicity based on caregiver report or birth certificates were used to compare with the underlying population. Outcomes were the absolute distance of a youth’s BMI from the median BMI for sex and age,4 weight adjusted for height, and overweight or obesity (≥85th or ≥95th percentile of BMI for age, respectively).5,6 We fit mixed-effect and Poisson regression models accounting for repeated measures within each individual, using an autoregressive correlation structure and maximum likelihood estimation of covariance parameters to assess each outcome. Similar to an interrupted time-series design, we included a binary indicator representing the periods before or during the pandemic plus a calendar month by period interaction term. We divided youths into 3 age strata (5.0-<12, 12-<16, 16-<18 years) based on age at the start of the pandemic.
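The outcome definitions above reduce to simple arithmetic. The sketch below is illustrative only — it is not the authors' code — and the CDC reference values (the sex- and age-specific median BMI and the 85th/95th percentile cutoffs) are treated as inputs supplied by the caller:

```python
# Minimal sketch (not the authors' code) of the outcome definitions described above.
# The CDC reference values for a given sex and age are assumed to be supplied by the caller.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def distance_from_median(bmi_value: float, median_bmi_for_sex_age: float) -> float:
    """Absolute distance of a youth's BMI from the sex- and age-specific median."""
    return abs(bmi_value - median_bmi_for_sex_age)

def weight_class(bmi_value: float, p85: float, p95: float) -> str:
    """Classify using the 85th/95th BMI-for-age percentile cutoffs."""
    if bmi_value >= p95:
        return "obesity"
    if bmi_value >= p85:
        return "overweight"
    return "neither"

# Example: a hypothetical 10-year-old, with made-up reference values.
b = bmi(weight_kg=45.0, height_m=1.40)
print(round(b, 2), round(distance_from_median(b, 16.6), 2), weight_class(b, p85=19.4, p95=22.2))
```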
Models were adjusted for sex, race and ethnicity, state-subsidized health insurance, neighborhood education, neighborhood income, and number of parks in the census tract. Mixed-effects models also included BMI-for-age class at baseline. All analyses were performed with α = .05 for 2-sided tests using SAS version 9.4 (SAS Institute Inc). The KPSC institutional review board approved the study and granted a waiver for informed consent.
The cohort (n = 191 509) was racially and ethnically diverse (10.4% Asian and Pacific Islander, 50.4% Hispanic, 7.0% non-Hispanic Black, and 25.3% non-Hispanic White) with 49.6% girls, a mean age of 11.6 years (SD, 3.8 years), and a mean prepandemic BMI of 20.7 (SD, 5.4). The study population was comparable with the overall KPSC pediatric population with regard to sex, age, race and ethnicity, and socioeconomic factors. Before the pandemic, 38.9% of youths in the cohort were overweight or obese compared with 39.4% in the KPSC source population.
Youths gained more weight during the COVID-19 pandemic than before the pandemic (Table). The greatest change in the distance from the median BMI for age occurred among 5- through 11-year-olds with an increased BMI of 1.57, compared with 0.91 among 12- through 15-year-olds and 0.48 among 16- through 17-year-olds. Adjusting for height, this translates to a mean gain among 5- through 11-year-olds of 2.30 kg (95% CI, 2.24-2.36 kg) more during the pandemic than during the reference period, 2.31 kg (95% CI, 2.20-2.44 kg) more among 12- through 15-year-olds, and 1.03 kg (95% CI, 0.85-1.20 kg) more among 16- through 17-year-olds. Overweight or obesity increased among 5- through 11-year-olds from 36.2% to 45.7% during the pandemic, an absolute increase of 8.7% and relative increase of 23.8% compared with the reference period (Table). The absolute increase in overweight or obesity was 5.2% among 12- through 15-year-olds (relative increase, 13.4%) and 3.1% (relative increase, 8.3%) among 16- through 17-year-olds. Most of the increase among youths aged 5 through 11 years and 12 through 15 years was due to an increase in obesity.
Significant weight gain occurred during the COVID-19 pandemic among youths in KPSC, especially among the youngest children. These findings, if generalizable to the US, suggest an increase in pediatric obesity due to the pandemic.
Study limitations include the observational design and inclusion of only those with in-person appointments. However, the analyses benefited from longitudinal data with prepandemic BMI and in-person well-child visits resuming at 84% of prepandemic levels by June 2020. Furthermore, the sample was comparable in all relevant characteristics with the overall KPSC pediatric membership.
Research should monitor whether the observed weight gain persists and what long-term health consequences may emerge. Intervention efforts to address COVID-19 related weight gain may be needed.
**Corresponding Author:** Corinna Koebnick, PhD, Department of Research & Evaluation, Kaiser Permanente Southern California, 100 S Los Robles, Second Floor, Pasadena, CA 91101 ([email protected]).
**Accepted for Publication:** August 18, 2021.
**Published Online:** August 27, 2021. doi:10.1001/jama.2021.15036
**Author Contributions:** Dr Koebnick had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Woolford and Sidell shared equal first-author roles.
*Concept and design:* Woolford, Sidell, Resnicow, Koebnick.
*Acquisition, analysis, or interpretation of data:* Woolford, Sidell, Li, Else, Young, Koebnick.
*Drafting of the manuscript:* Woolford, Sidell, Resnicow, Koebnick.
*Critical revision of the manuscript for important intellectual content:* Woolford, Sidell, Li, Else, Young, Resnicow.
*Statistical analysis:* Woolford, Sidell, Li, Resnicow, Koebnick.
*Obtained funding:* Koebnick.
*Administrative, technical, or material support:* Else, Resnicow, Koebnick.
*Supervision:* Koebnick.
**Conflict of Interest Disclosures:** None reported.
**Funding/Support**: The current project was supported by Kaiser Permanente Community Benefits.
**Role of the Funder/Sponsor:** Kaiser Permanente had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
3. Feudtner C, Feinstein JA, Zhong W, Hall M, Dai D. Pediatric complex chronic conditions classification system version 2: updated for *ICD-10* and complex medical technology dependence and transplantation. *BMC Pediatr*. 2014;14:199. doi:10.1186/1471-2431-14-199
5. Kuczmarski RJ, Ogden CL, Guo SS, et al. 2000 CDC growth charts for the United States: methods and development. *Vital Health Stat 11*. 2002;11(246):1-190.
| true | true | true |
This study compares body mass index (BMI) of youths during the COVID-19 pandemic with BMI during the same period in 2019 to determine whether they experienced pandemic-related weight gain.
|
2024-10-12 00:00:00
|
2021-10-12 00:00:00
|
Article
|
jamanetwork.com
|
JAMA Network
| null | null |
|
11,938,989 |
http://eng.tapjoy.com/blog-list/moving-to-memsql
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
27,151,847 |
https://entrepreneurshandbook.co/starting-a-remote-first-company-dos-and-don-ts-2b9e03349347
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
41,322,461 |
https://www.theverge.com/2024/8/14/24220421/sonos-s2-app-relaunch
|
Exclusive: Sonos considers relaunching its old app
|
Chris Welch
|
Sonos has explored the possibility of rereleasing its previous mobile app for Android and iOS — a clear sign of what an ordeal the company’s hurried redesign has become. *The Verge* can report that there have been discussions high up within Sonos about bringing back the prior version of the app, known as S2, as the company continues toiling away at improving the performance and addressing bugs with the overhauled design that rolled out in May to a flood of negative feedback. (The new Sonos app currently has a 1.3-star review average on Google Play.)
Letting customers fall back to the older software could ease their frustrations and reduce at least some of the pressure on Sonos to rectify every issue with the new app. At least for now, the redesigned version is all that’s available, which makes it impossible for some customers to avoid its flaws. The situation has gotten substantially better with recent updates and the app has turned a corner for many, but there’s still plenty of work to be done.
CEO Patrick Spence has remained insistent that rebuilding the Sonos app from the ground up was the right choice and will make it possible for the company to innovate more frequently and expand into new product categories.
But he has also readily acknowledged that Sonos severely let down its customers. “While the redesign of the app was and remains the right thing to do, our execution — my execution — fell short of the mark,” he said during last week’s earnings call. He went on to say:
The app situation has become a headwind to existing product sales, and we believe our focus needs to be addressing the app ahead of everything else. This means delaying the two major new product releases we had planned for Q4 until our app experience meets the level of quality that we, our customers and our partners expect from Sonos.
One of those two delayed products is the successor to the Sonos Arc soundbar — codenamed Lasso — and sources tell *The Verge* that Sonos still hopes to release that product sometime in October. (Sonos’ fiscal year ends in late September, so October would bring the company into fiscal year 2025 and line up with Spence’s statement.)
Last week, Spence estimated that righting the ship is likely to cost between $20 and $30 million in the near term as Sonos works to assuage current customers and keep them from abandoning the company’s whole-home audio platform. The new app is being updated every two weeks with improvements, and Spence has said that cadence will continue through the fall. S2’s potential return would not change this. Restoring the old app could prove to be a technical headache since Sonos’ new software shifts a lot of core functionality to the cloud.
This has unquestionably become one of the most turbulent times in Sonos’ history. In the span of just a few months, the company has gone from a well-regarded consumer tech brand to a painful example of what can happen when leadership pushes on new projects too aggressively. Spence himself admitted that the app controversy has completely overshadowed the release of Sonos’ first-ever headphones, the Sonos Ace. Just today, Sonos laid off around 100 employees as the fallout from its rushed app makeover continues.
| true | true | true |
The S2 app could make a comeback while Sonos fixes the new one.
|
2024-10-12 00:00:00
|
2024-08-14 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
4,075,657 |
http://uxmyths.com/post/654047943/myth-people-dont-scroll
|
Myth #3: People don’t scroll - UX Myths
|
Zoltan Gocza; Zoltan Kollin; Uxmyths
|
# Myth #3: People don’t scroll
Although people weren't used to scrolling in the mid-nineties, nowadays it's absolutely natural to scroll. For continuous and lengthy content, like an article or a tutorial, scrolling provides even better usability than slicing the text into several separate screens or pages.
You don’t have to squeeze everything into the top of your homepage or above the fold. To make sure that people will scroll, you need to follow certain design principles and provide content that keeps your visitors interested. Also keep in mind that content above the fold will still get the most attention and is also crucial for users in deciding whether your page is worth reading at all.
## Many research findings prove that people do scroll:
- Chartbeat, a data analytics provider, analysed data from 2 billion visits and found that “66% of attention on a normal media page is spent below the fold.” - What You Think You Know About the Web Is Wrong
- Heatmap service provider ClickTale analyzed almost 100,000 pageviews. The result: people used the scrollbar on 76% of the pages, with 22% being scrolled all the way to the bottom regardless of the length of the page. That said, it's clear that the page top is still your most valuable screen real estate. - Unfolding the Fold and ClickTale Scrolling Report and Part 2
- The design agency Huge measured scrolling in a series of usability tests and found “that participants almost always scrolled, regardless of how they are cued to do so – and that’s liberating.” - Everybody Scrolls
- Usability expert Jakob Nielsen’s eye-tracking studies show that while attention is focused above the fold, people do scroll down, especially if the page is designed to encourage scrolling. - Scrolling and Attention
- On mobile, half of the users start scrolling within 10 seconds and 90% within 14 seconds. - Stats from MOVR (published in Luke Wroblewski's tweet)
- Upon reviewing the analytics data of TMZ.com, Milissa Tarquini found that the most clicked link on the homepage is at the very bottom. She also points out that polls and galleries at the bottom of AOL’s Money & Finance homepage get a lot of clicks in spite of their position. - Blasting the Myth of the Fold
- Another eye-tracking study conducted by CX Partners confirms that people do scroll if certain design guidelines are followed. - The myth of the page fold: evidence from user testing
- Usability studies by the Software Usability Research Laboratory show that users can read long, scrolling pages faster than paginated ones. Their studies confirm that people are accustomed to scrolling. - The Impact of Paging vs. Scrolling on Reading Online Text Passages
- Jared Spool’s usability tests from 1998 tell us that, even though people say they don’t like to scroll, they are willing to do so. Moreover, longer and scrollable pages even worked better for users. - As the Page Scrolls
- SURL conducted another usability study, confirming that people find both scrolling and paging natural on search results pages. - Paging vs. Scrolling: Looking for the Best Way to Present Search Results
- Luke Wroblewski provides small snippets of stats and advice on scrolling behavior - There’s No Fold
## More articles about scrolling:
- In July 2011, Apple removed the scrollbar from Mac OS X (it’s the default setting, though users can put it back). This clearly shows that people are so familiar with scrolling that they don’t even need the visual clue for it.
- Jared Spool’s article on design guidelines to encourage scrolling: Utilizing the Cut-off Look to Encourage Users To Scroll.
- Don’t miss Life below 600px, a witty article on the page fold
| true | true | true |
UX Myths collects the most frequent user experience misconceptions and explains why they don't hold true.
|
2024-10-12 00:00:00
|
2010-06-01 00:00:00
|
blog
|
uxmyths.com
|
UX Myths
| null | null |
|
39,689,458 |
https://www.youtube.com/watch?v=AItTqnTsVjA
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,725,797 |
https://www.truenas.com/docs/references/zfsdeduplication/
|
ZFS Deduplication
| null |
# ZFS Deduplication
10 minute read.
ZFS supports deduplication as a feature. Deduplication means that identical data is only stored once, and this can greatly reduce storage size. However, deduplication is a compromise and a balance between many factors, including cost, speed, and resource needs. It must be considered exceedingly carefully, and the implications understood, before being used in a pool.
Deduplication is one technique ZFS can use to store file and other data in a pool. If several files contain the same pieces (blocks) of data, or any other pool data occurs more than once in the pool, ZFS stores just one copy of it. In effect instead of storing many copies of a book, it stores one copy and an arbitrary number of pointers to that one copy. Only when no file uses that data, is the data actually deleted. ZFS keeps a reference table which links files and pool data to the actual storage blocks containing their data. This is the deduplication table (DDT).
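As a rough conceptual sketch (not ZFS's actual on-disk DDT format), a deduplicating store can be modeled as a table keyed by the hash of each block, holding one stored copy plus a reference count; a block is freed only when nothing points to it anymore:

```python
# Toy illustration of block-level deduplication (not ZFS's actual DDT format):
# one stored copy per unique block, plus a reference count per block.
import hashlib

class DedupStore:
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.table = {}   # digest -> {"data": bytes, "refs": int}

    def write(self, payload: bytes) -> list:
        """Split payload into blocks; store each unique block only once."""
        digests = []
        for i in range(0, len(payload), self.block_size):
            block = payload[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            entry = self.table.setdefault(digest, {"data": block, "refs": 0})
            entry["refs"] += 1          # another pointer to the same stored block
            digests.append(digest)
        return digests                  # acts like a list of block pointers

    def release(self, digests: list) -> None:
        """Drop references; free a block only when nothing points to it."""
        for digest in digests:
            entry = self.table[digest]
            entry["refs"] -= 1
            if entry["refs"] == 0:
                del self.table[digest]

store = DedupStore()
refs_a = store.write(b"same book" * 1000)
refs_b = store.write(b"same book" * 1000)   # a second identical copy adds no new blocks
print(len(store.table))                      # unique blocks are stored only once
```

Writing the same data twice adds pointers but no new blocks, which is the effect described above.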
The DDT is a fundamental ZFS structure. It is treated as part of the metadata of the pool. If a pool (or any dataset in the pool) has ever contained deduplicated data, the pool contains a DDT, and that DDT is as fundamental to the pool data as any of its other file system tables. Like any other metadata, DDT contents might be temporarily held in the ARC (RAM/memory cache) or L2ARC (disk cache) for speed and repeated use, but the DDT is not a disk cache. It is a fundamental part of the ZFS pool structure, how ZFS organizes pool data on its disks. Therefore, like any other pool data, if DDT data is lost, the pool is likely to become unreadable, so it is important that it is stored on redundant devices.
A pool can contain any mix of deduplicated data and non-deduplicated data, coexisting. Data is written using the DDT if deduplication is enabled at the time of writing, and is written non-deduplicated if deduplication is not enabled at the time of writing. Subsequently, the data remains as at the time it was written, until it is deleted.
The only way to convert existing current data to be all deduplicated or undeduplicated, or to change how it is deduplicated, is to create a new copy, while new settings are active. This can be done by copying the data within a file system, or to a different file system, or replicating using the Web UI replication functions. Data in snapshots is fixed, and can only be changed by replicating the snapshot to a different pool with different settings (which preserves its snapshot status), or copying its contents.
It is possible to stipulate that only certain datasets and volumes in a pool are deduplicated. The DDT encompasses the entire pool, but only data in those locations is deduplicated when written. Other data which does not deduplicate well, or where deduplication is inappropriate, is not deduplicated when written, saving resources.
The main benefit of deduplication is that, where appropriate, it can greatly reduce the size of a pool and the disk count and cost. For example, if a server stores files with identical blocks, it could store thousands or even millions of copies for almost no extra disk space. When data is read or written, it is also possible that a large block read or write can be replaced by a smaller DDT read or write, reducing disk I/O size and quantity.
The deduplication process is very demanding! There are four main costs to using deduplication: large amounts of RAM, requiring fast SSDs, CPU resources, and a general performance reduction. So the trade-off with deduplication is reduced server RAM/CPU/SSD performance and loss of top end I/O speeds in exchange for saving storage size and pool expenditures.
When data is not sufficiently duplicated, deduplication wastes resources, slows the server down, and has no benefit.
When data is already being heavily duplicated, then consider the costs, hardware demands, and impact of enabling deduplication *before* enabling on a ZFS pool.
High quality mirrored SSDs configured as a special vdev for the DDT (and usually all metadata) are strongly recommended for deduplication unless the entire pool is built with high quality SSDs. Expect potentially severe issues if these are not used as described below. NVMe SSDs are recommended whenever possible. SSDs must be large enough to store all metadata.
The deduplication table (DDT) contains small entries about 300-900 bytes in size. It is primarily accessed using 4K reads. This places extreme demand on the disks containing the DDT.
When choosing SSDs, remember that a deduplication-enabled server can have considerable mixed I/O and very long sustained access with deduplication. Try to find real-world performance data wherever possible. It is recommended to use SSDs that do not rely on a limited amount of fast cache to bolster weak continual bandwidth performance. Most SSDs' performance (latency) drops when the onboard cache is fully used and more writes occur. Always review the steady-state performance for 4K random mixed read/write.
Special vdev SSDs receive continuous, heavy I/O. HDDs and many common SSDs are inadequate. As of 2021, some recommended SSDs for deduplicated ZFS include Intel Optane 900p, 905p, P48xx, and better devices. Lower cost solutions are high quality consumer SSDs such as the Samsung EVO and PRO models. PCIe NVMe SSDs (NVMe, M.2 “M” key, or U.2) are recommended over SATA SSDs (SATA or M.2 “B” key).
When special vdevs cannot contain all the pool metadata, metadata is silently stored on other disks in the pool. When special vdevs become too full (about 85%-90% usage), ZFS cannot run optimally and the disks operate slower. Try to keep special vdev usage under 65%-70% capacity whenever possible. Try to plan how much future data you want to add to the pool, as this increases the amount of metadata in the pool. More special vdevs can be added to a pool when more metadata storage is needed.
Deduplication is memory intensive. When the system does not contain sufficient RAM, it cannot cache the DDT in memory when it is read, and system performance can decrease.
The RAM requirement depends on the size of the DDT and the amount of stored data to be added in the pool. Also, the more duplicated the data, the fewer entries and smaller DDT. Pools suitable for deduplication, with deduplication ratios of 3x or more (data can be reduced to a third or less in size), might only need 1-3 GB of RAM per 1 TB of data. The actual DDT size can be estimated by deduplicating a limited amount of data in a temporary test pool.
Use the tunable **vfs.zfs.arc.meta_min** (*type*=*LOADER*, *value*=*bytes*) to force ZFS to reserve no less than the given amount of RAM for metadata caching.
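As a back-of-the-envelope illustration of the sizing guidance above — a rough sketch under stated assumptions, not an official calculator — the rule-of-thumb RAM estimate and the byte value for a tunable such as **vfs.zfs.arc.meta_min** are simple arithmetic:

```python
# Rough DDT sizing arithmetic based on the rules of thumb above.
# Assumptions (not exact ZFS figures): the 1-3 GB of RAM per 1 TB of
# deduplicated pool data guidance from the text, taken at its upper end.

GIB = 1024 ** 3

def ddt_ram_estimate_gib(pool_tb: float, gb_per_tb: float = 3.0) -> float:
    """Upper-end rule of thumb: up to ~3 GB of RAM per TB of pool data."""
    return pool_tb * gb_per_tb

def arc_meta_min_bytes(reserved_gib: float) -> int:
    """Byte count you might plug into the vfs.zfs.arc.meta_min tunable."""
    return int(reserved_gib * GIB)

pool_tb = 40                                    # hypothetical 40 TB pool
print(ddt_ram_estimate_gib(pool_tb))            # ~120 GB upper bound
print(arc_meta_min_bytes(16))                   # reserving 16 GiB -> 17179869184
```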
Deduplication consumes extensive CPU resources and it is recommended to use a high-end CPU with 4-6 cores at minimum.
If deduplication is used in an inadequately built system, these symptoms might be seen:
**Cause**: Continuous DDT access is limiting the available RAM, or RAM usage is generally very high. This can also slow memory access if the system uses swap space on disks to compensate. **Solutions**:
- Install more RAM.
- Add a new **System > Tunable**: **vfs.zfs.arc.meta_min** with **Type**=**LOADER** and **Value**=**bytes**. This specifies the minimum RAM that is reserved for metadata use and cannot be evicted from RAM when new file data is cached.
**Cause**: The system must perform disk I/O to fetch DDT entries, but these are usually 4K I/O and the underlying disk hardware is unable to cope in a timely manner. **Solutions**: Add high quality SSDs as a special vdev and either move the data or rebuild the pool to use the new storage.
**Cause**: This is a byproduct of the disk I/O slowdown issue. Network buffers can become congested with incomplete demands for file data and the entire ZFS I/O system is delayed by tens or hundreds of seconds because huge amounts of DDT entries have to be fetched. Timeouts occur when networking buffers can no longer handle the demand. Because all services on a network connection share the same buffers, all become blocked. This is usually seen as file activity working for a while and then unexpectedly stalling. File and networked sessions then fail too. Services can become responsive when the disk I/O backlog clears, but this can take several minutes. This problem is more likely when high speed networking is used because the network buffers fill faster.
**Cause**: When ZFS has fast special vdev SSDs, sufficient RAM, and is not limited by disk I/O, then hash calculation becomes the next bottleneck. Most of the ZFS CPU consumption is from attempting to keep hashing up to date with disk I/O. When the CPU is overburdened, the console becomes unresponsive and the web UI fails to connect. Other tasks might not run properly because of timeouts. This is often encountered with pool scrubs and it can be necessary to pause the scrub temporarily when other tasks are a priority. **Diagnose**: An easily seen symptom is that console logins or prompts take several seconds to display. Generally, multiple entries with the command `kernel {z_rd_int_[NUMBER]}` can be seen using the CPU capacity, and the CPU is heavily (98%+) used with almost no idle. **Solutions**: Changing to a higher performance CPU can help but might have limited benefit. 40 core CPUs have been observed to struggle as much as 4 or 8 core CPUs. A usual workaround is to temporarily pause scrub and other background ZFS activities that generate large amounts of hashing. It can also be possible to limit I/O using tunables that control disk queues and disk I/O ceilings, but this can impact general performance and is not recommended.
| true | true | true |
Provides general information on ZFS deduplication in TrueNAS,hardware recommendations, and useful deduplication CLI commands.
|
2024-10-12 00:00:00
|
2024-09-19 00:00:00
|
/images/TrueNAS_Open_Enterprise_Storage.png
|
article
| null | null | null | null |
41,156,595 |
https://lovinggrace.dev/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
31,450,227 |
https://www.metabase.com/learn/debugging-sql/sql-logic
|
Redirecting…
| null |
Redirecting…
Click here if you are not redirected.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null |
metabase.com
|
metabase.com
| null | null |
32,330,115 |
https://www.quantamagazine.org/the-computer-scientist-trying-to-teach-ai-to-learn-like-we-do-20220802/
|
The Computer Scientist Trying to Teach AI to Learn Like We Do | Quanta Magazine
|
Allison Whitten August 2
|
# The Computer Scientist Challenging AI to Learn Better
## Introduction
Artificial intelligence algorithms are designed to learn in fits and starts. Instead of continuously updating their knowledge base with new information over time as humans do, algorithms can learn only during the training phase. After that, their knowledge remains frozen; they perform the task they were trained for without being able to keep learning as they do it. To learn even one new thing, algorithms must be trained again from scratch. It’s as if every time you met a new person, the only way you could learn her name would be to reboot your brain.
Training from scratch can lead to a behavior known as catastrophic forgetting, where a machine incorporates new knowledge at the cost of forgetting nearly everything it’s already learned. This situation arises because of the way that today’s most powerful AI algorithms, called neural networks, learn new things.
These algorithms are based loosely on our brains, where learning involves changing the strength of connections between neurons. But this process gets tricky. Neural connections also represent past knowledge, so changing them too much will cause forgetting.
Biological neural networks have evolved strategies over hundreds of millions of years to ensure that important information remains stable. But today’s artificial neural networks struggle to strike a good balance between new and old knowledge. Their connections are too easily overwritten when the network sees new data, which can result in a sudden and severe failure to recognize past information.
To help counter this, Christopher Kanan, a 41-year-old computer scientist at the University of Rochester, has helped establish a new field of AI research known as continual learning. His goal is for AI to keep learning new things from continuous streams of data, and to do so without forgetting everything that came before.
Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multi-player computer games. That got him wondering about the possibility of artificial general intelligence — a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.
Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others’ work, catastrophic forgetting is no longer quite as catastrophic.
*Quanta* spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.
**How does your training in philosophy impact the way you think about your work?**
It has served me very well as an academic. Philosophy teaches you, “How do you make reasoned arguments,” and “How do you analyze the arguments of others?” That’s a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.
My lab has been inspired by asking the question: Well, if we can’t do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don’t. You train them once. It’s a fixed entity after that. And that’s a fundamental thing that you’d have to solve if you want to make artificial general intelligence one day. If it can’t learn without scrambling its brain and restarting from scratch, you’re not really going to get there, right? That’s a prerequisite capability to me.
**How have researchers dealt with catastrophic forgetting so far?**
The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate.
In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences.
There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third far less common method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
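To make the replay idea concrete, here is a minimal, framework-free sketch of veridical replay in Python — an illustration of the mixing step only, not Kanan's code or any particular library. A bounded buffer keeps a uniform sample of past examples, and each training step combines the new batch with a draw from that buffer:

```python
# Minimal sketch of "veridical replay" (illustrative only, framework-free):
# keep a bounded reservoir of past examples and mix them into each new batch
# so that learning on new data always rehearses some old data as well.
import random

class ReplayBuffer:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.storage = []
        self.seen = 0

    def add(self, example) -> None:
        """Reservoir sampling keeps a uniform sample of everything seen so far."""
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = example

    def sample(self, k: int) -> list:
        return random.sample(self.storage, min(k, len(self.storage)))

def train_step(model_update, new_batch, buffer: ReplayBuffer, replay_k: int = 8):
    """Mix stored past examples with the new batch before updating the model."""
    mixed = list(new_batch) + buffer.sample(replay_k)
    model_update(mixed)                 # placeholder for the actual learner update
    for example in new_batch:
        buffer.add(example)

# Usage: a stream of (input, label) pairs arriving one batch at a time.
buffer = ReplayBuffer(capacity=500)
train_step(lambda batch: None, [("img_1", "cat"), ("img_2", "dog")], buffer)
```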
Unfortunately, though, replay isn’t a very satisfying solution.
**Why not?**
To learn something new, the neural network has to store at least some information about every concept that it learned in the past. And from a neuroscientific perspective, the hypothesis is that you and I have replay of a relatively recent experience — not something that happened in our childhoods — to prevent forgetting of that recent experience. Whereas in the way we do it in deep neural networks, that’s not true. It doesn’t necessarily have to store everything it has seen, but it has to store something about every task it learned in the past to use replay. And it’s unclear what it should store. So replay as it’s done today still seems like it’s not all the way there.
**If we could completely solve catastrophic forgetting, would that mean AI could learn new things continuously over time?**
Not exactly. I think the big, big, big open questions in the field of continual learning are not in catastrophic forgetting. What I’m really interested in is: How does past learning make future learning more efficient? And how does learning something in the future correct the learnings of the past? Those are things that not very many people are measuring, and I think doing so is a critical part of pushing the field forward because really, it’s not about just forgetting stuff. It’s about becoming a better learner.
That’s where I think the field is kind of missing the forest for the trees. Much of the community is setting up the problem in ways that don’t match either interesting biological questions or interesting engineering applications. We can’t just have everybody do the same toy problem forever. You’ve got to say: What’s our gauntlet task? How do we push things forward?
**Then why do you think most people are focusing on those simple problems?**
I can only speculate. Most work is done by students who are following past work. They are copying the setup of what others have done and showing some minor gains in performance with the same measurements. Making new algorithms is more likely to lead to a publication, even if those algorithms aren’t really enabling us to make significant progress in learning continually. What surprises me is that the same sort of work is produced by large companies who don’t have the same incentives, except for intern-driven work.
Also, this work is nontrivial. We need to establish the correct experiment and algorithmic setup to measure whether past learning helps future learning. The big issue is we don’t have good data sets for studying continual learning right now. I mean, we’re basically taking existing data sets that are used in traditional machine learning and repurposing them.
Essentially, in the dogma of machine learning (or at least whenever I start teaching machine learning), we have a training set, we have a test set — we train on the training set, we test on the test set. Continual learning breaks those rules. Your training set then becomes something that evolves as the learner learns. But we’re still limited to existing data sets. We need to work on this. We need a really good continual learning environment in which we can really push ourselves.
**What would the ideal continual learning environment look like?**
It’s easier to tell you what it’s not than what it is. I was on a panel where we identified this as a critical problem, but it’s not one where I think anybody immediately has the answer.
I can tell you the properties it might have. So for now, let’s assume the AI algorithms are not embodied agents in simulations. Then at the very least, ideally, we’re learning from videos, or something like that, like multimodal video streams, and hopefully doing more than just classification [of static images].
There are a lot of open questions about this. I was in a continual learning workshop a few years ago and some people like me were saying, “We’ve got to stop using a data set called MNIST, it’s too simple.” And then someone said, “OK, well, let’s do incremental learning of [the strategy-based video game] StarCraft.” And I’m doing that too now for various reasons, but I don’t think that really gets at it either. Life is a much richer thing than learning to play StarCraft.
**How has your lab tried to design algorithms that can keep learning over time?**
With my former student Tyler Hayes, I pioneered a continual learning task for analogical reasoning. We thought that would be a good area to study the idea of transfer learning, where you acquire skills and now need to use more complex skills to solve more complex problems. Specifically, we measured backward transfer — how well does learning something in the past help you in the future, and vice versa. And we found good evidence for transfer, much more significant than for a simple task like object recognition.
**Your lab also focuses on training algorithms to learn continuously from one example at a time, or from very small sets of examples. How does that help?**
A lot of continual learning setups still use very large batches of examples. So they will essentially say to the algorithm, “Here’s 100,000 things; learn them. Here’s the next 100,000 things; learn them.” That doesn’t really match what I would say is the real-world application, which is, “Here’s one new thing; learn it. Here’s another new thing; learn it.”
**If we want AI to learn more like us, should we also aim to replicate how humans learn different things at different ages, always refining our knowledge?**
I think that’s a very fruitful avenue for making progress in this field. People tell me that I’m just obsessed with development now that I have a child, but I can see that my daughter is capable of one-shot learning, where she sees me do something once and she can copy it immediately. And machine learning algorithms can’t do anything like that today.
It really opened my eyes. There’s got to be a lot more going on in our heads than in our modern neural networks. That’s why I think the field needs to go toward this idea of learning over time, where algorithms become better learners by building upon past experience.
**Do you think AI will ever really learn the same way humans do? **
I think they will. Definitely. It’s a lot more promising today because there’s just so many people working in the field. But we still need more creativity. So much of the culture in the machine learning community is a follow-the-leader approach.
I think of us as just biochemical machines, and eventually we’ll figure out how to make our algorithms for the correct architectures that I think will have more of our capabilities than they have today. There’s no convincing argument for me that says it’s impossible.
| true | true | true |
Christopher Kanan is building algorithms that can continuously learn over time — the way we do.
|
2024-10-12 00:00:00
|
2022-08-02 00:00:00
|
article
|
quantamagazine.org
|
Quanta Magazine
| null | null |
|
9,861,878 |
https://en.wikipedia.org/wiki/WD-40
|
WD-40 - Wikipedia
| null |
# WD-40
Product type | Water displacer |
---|---|
Owner | WD-40 Company |
Country | San Diego, California, United States |
Introduced | September 23, 1953 |
Website | www |
**WD-40** is an American brand and the trademark of a penetrating oil manufactured by the WD-40 Company based in San Diego, California.[1] Its formula was invented for the Rocket Chemical Company in 1953, before it became the WD-40 Company. WD-40 became available as a commercial product in 1961.[2] It acts as a lubricant, rust preventive, penetrant and moisture displacer. There are specialized products that perform better than WD-40 in many of these uses, but WD-40's flexibility has given it fame as a jack of all trades.[3] WD-40 stands for Water Displacement, 40th formula.
It is a successful product to this day, with steady growth in net income from $27 million in 2008 to $70.2 million in 2021.[4] In 2014, it was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.[5]
## History
Sources credit different people with inventing the WD-40 formula in 1953 as part of the Rocket Chemical Company (later renamed to WD-40 Company), in San Diego, California; the formula was kept as a trade secret and was never patented.[6]
According to Iris Engstrand, a historian of San Diego and California history at the University of San Diego, Iver Norman Lawson invented the formula,[7] while the WD-40 company website and other books and newspapers credit Norman B. Larsen. According to Engstrand, "(Iver Norman) Lawson was acknowledged at the time, but his name later became confused with company president Norman B. Larsen."[8][9][6] "WD-40" is abbreviated from the term "Water Displacement, 40th formula",[10] suggesting it was the result of the 40th attempt to create the product.[1] The spray, composed of various hydrocarbons, was originally designed to be used by Convair to protect the outer skin of the Atlas missile from rust and corrosion.[11][12] This outer skin also functioned as the outer wall of the missile's delicate balloon tanks. WD-40 was later found to have many household uses[1] and was made available to consumers in San Diego in 1958.[11]
In Engstrand's account, it was Iver Norman Lawson who came up with the water-displacing mixture after working at home and turned it over to the Rocket Chemical Company for the sum of $500 (equivalent to $5,700 in 2023). It was Norman Larsen, president of the company, who had the idea of packaging it in aerosol cans and marketed it in this way.[7]
It was written up as a new consumer product in 1961.[13] By 1965 it was being used by airlines including Delta and United; United, for example, was using it on fixed and movable joints of their DC-8 and Boeing 720s in maintenance and overhaul.[14] At that time, airlines were using a variant called WD-60 to clean turbines, removing light rust from control lines, and when handling or storing metal parts.[14] By 1969 WD-40 was being marketed to farmers and mechanics in England.[15] In 1973, WD-40 Company, Inc., went public with its first stock offering. Its NASDAQ stock symbol is (Nasdaq: WDFC).[16]
## Formulation
WD-40's formula is a trade secret.[17] The original copy of the formula was moved to a secure bank vault in San Diego in 2018.[18]
To avoid disclosing its composition, the product was not patented in 1953, and the window of opportunity for patenting it has long since closed.[12]
WD-40's main ingredients as supplied in aerosol cans, according to the US material safety data sheet information,[19] and with the CAS numbers interpreted:[20]
- 45–50% low vapor pressure aliphatic hydrocarbon (isoparaffin)
- <35% petroleum base oil (non-hazardous heavy paraffins)
- <25% aliphatic hydrocarbons (same CAS number as the first item, but flammable)
- 2–3% carbon dioxide (propellant)
The European formulation[21] is stated according to the REACH regulations:
- 60–80% hydrocarbons C9–C11 (n-alkanes, iso-alkanes, cyclics, <2% aromatics)
- 1–5% carbon dioxide
The Australian formulation[22] is stated:
- 50–60% naphtha (petroleum), hydrotreated heavy
- <25% petroleum base oils
- <10% naphtha (petroleum), hydrodesulfurized heavy (contains: 1,2,4-trimethyl benzene, 1,3,5-trimethyl benzene, xylene, mixed isomers)
- 2–4% carbon dioxide
In 2009, *Wired* published an article with the results of gas chromatography and mass spectrometry tests on WD-40, showing that the principal components were C9 to C14 alkanes and mineral oil.[23]
## References
1. "Q&A WD-40 CEO Garry Ridge explains company's slick success". *Los Angeles Times*. July 30, 2015. Archived from the original on September 5, 2015. Retrieved July 30, 2015.
2. "WD-40 COMPANY 2020 10-K". October 21, 2020. Retrieved June 8, 2021.
3. Davies, Adam (August 31, 2010). "The Case Against WD-40". *Popular Mechanics*. Archived from the original on June 19, 2022. Retrieved June 13, 2022.
4. "Statista - WD-40 Net Income, 2008-2021". March 19, 2022. Archived from the original on March 20, 2022. Retrieved March 20, 2022.
5. Sprekelmeyer, Linda, editor (2006). *These We Honor: The International Aerospace Hall of Fame*. Donning Co. Publishers. ISBN 978-1-57864-397-4.
6. Martin, Douglas (July 22, 2009). "Obituary: John Barry, Popularizer of WD-40, Dies at 84". *The New York Times*. Archived from the original on February 18, 2019. Retrieved February 26, 2017.
7. Engstrand, Iris H.W. (Fall 2014). "WD-40: San Diego's Marketing Miracle" (PDF). *The Journal of San Diego History*. 60 (4): 253–270. Archived (PDF) from the original on December 22, 2015. Retrieved March 7, 2017.
8. "WD-40 History – History and Timeline". WD-40 Company. Archived from the original on February 10, 2017. Retrieved April 10, 2017.
9. Mercer, Bobby (2011). *ManVentions: From Cruise Control to Cordless Drills – Inventions Men Can't Live Without*. Adams Media. pp. 181–. ISBN 978-1-4405-1075-5. Retrieved June 28, 2013.
10. "WD-40 History | Learn the Stories Behind the WD-40 Brand | WD-40". www.wd40.com. Archived from the original on December 9, 2020. Retrieved November 7, 2020.
11. "Our History". WD-40. Archived from the original on June 23, 2014. Retrieved April 20, 2011.
12. Martin, Douglas (July 22, 2009). "John S. Barry, Main Force Behind WD-40, Dies at 84". *The New York Times*. Archived from the original on February 18, 2019. Retrieved February 26, 2017.
13. *Changing Times* (pre-1986) 15.5 (May 1, 1961): p. 36.
14. "New Materials". *Aircraft Engineering and Aerospace Technology*. 37 (5): 165. May 1965. doi:10.1108/eb034021.
15. "New on the Market". *Farm & Country*. London. January 1969. p. 72.
16. "History". *WD-40*. January 2017. Archived from the original on February 18, 2020. Retrieved February 18, 2020.
17. "Explore myths, legends and fun facts". WD-40. 2023. Archived from the original on March 16, 2023. Retrieved March 16, 2023.
18. "WD-40 Company Enlists Armoured Security to Move Top-Secret Formula". WD-40 UK. September 14, 2018. Retrieved December 4, 2020. [dead link]
19. "SDSUSA" (PDF). www.wd40.com. March 5, 2019. Archived (PDF) from the original on September 11, 2012. Retrieved February 17, 2020.
20. "ChemIDplus". chem.nlm.nih.gov. Retrieved February 17, 2020.
21. "WD-40® Multi-Use Product". wd40.co.uk. March 7, 2017. Archived from the original on February 18, 2020. Retrieved February 17, 2020.
22. "WD-40® Multi-Use Product" (PDF). wd40.com.au. July 5, 2018. Archived (PDF) from the original on August 13, 2021. Retrieved August 7, 2020.
23. Di Justo, Patrick (April 20, 2009). "What's Inside WD-40? Superlube's Secret Sauce". *Wired*. Archived from the original on January 19, 2014. Retrieved April 24, 2014.
| true | true | true | null |
2024-10-12 00:00:00
|
2003-11-01 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
28,742,970 |
https://herget.me/investing-guide
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
22,768,756 |
http://josecamachocollados.com/book_embNLP_draft.pdf
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
20,152,704 |
https://reeftracks.org/scuber
|
The Great Reef Census
| null | null | true | true | false |
Every boat, every Citizen, every reef. The Great Reef Census is a world-first citizen science effort to survey the Great Barrier Reef. Join us for the journey.
|
2024-10-12 00:00:00
| null |
website
|
greatreefcensus.org
|
The Great Reef Census
| null | null |
|
37,289,435 |
https://riskmusings.substack.com/p/on-identifying-risks
|
On Identifying Risks
|
Stephanie Losi
|
At the dawn of this blog in summer 2023 (edit: I meant 2022!), I wrote about how to identify emerging risks. Things like underwater mortgages in the mid-2000s, a spate of odd respiratory disease cases in late 2019 and early 2020, and a pullback of home insurers from areas viewed as high-risk for climate change now.
I want to revisit these how-to points because I added them as an afterthought to the bottom of an essay last summer, and I think they deserve a bigger spotlight. (Most of you weren’t here then, either, and I’m glad you’re readers now! Thank you). These remain my guideposts for identifying emerging risks and at least trying to steer clear of or, when possible, mitigate them.
**So, how do we find needles in haystacks before they stab us? **
In general, important anomalous risks start under the radar. But an important anomaly *isn’t *“a needle in a haystack,” at least not for long. It’s often hundreds of needles, all surfacing at once, glinting in the sunlight if we’re willing to look and accept what we see. And there’s a window of time to identify the problem and act to mitigate it before the haystack bursts into flame.
These emerging, anomalous risks tend to have a few characteristics:
**1.They are ****unusual****. **They break from the patterns of the past. Maybe something is happening in a different location than usual, or at a different time of year (like low Antarctic sea ice levels at the height of winter now), or in greater numbers than expected. The key is the noticeable divergence from past patterns.
For example, in looking at climate data, scientists began to notice divergence from long-term range-bound fluctuations in the 1990s or even earlier. That was a flag, one we could have heeded as an opportunity to take early action.
**2. They are**** suddenly widespread in small numbers****.** Early in 2020, it was easy to think of the Covid outbreak in Wuhan, China, as contained by harsh lockdowns. But then cases popped up in Iran, in Italy, and all over the world. That was a massive flag, and global and local health organizations could have acted earlier. (It’s still uncertain whether we could have contained Covid, since it’s so contagious. But we squandered some opportunities to try before it evolved to make that impossible.)
With climate change, temperatures aren’t rising in lockstep everywhere, and that may always be true because climate isn’t a single monolithic entity; it fluctuates day to day, season to season, year to year, place to place. *But on average, more places are recording higher temperatures than ever before. Warming is widespread. *We could have acted earlier when we saw this trend, before we recorded the eight highest global average temperatures in history in the last eight years. Expect this trend to continue in 2023 and 2024.
(As a side note, despite the undeniable heat trends, corporations are still dawdling on real action and likely will keep dawdling until mandated to take action. What will be the tipping point that finally spurs massive action?)
**3. They seem ****relatively ignored or dismissed****. **The biggest risks may not be pre-built into models, or the assumptions used may not match what will actually happen. (I’d like to see companies’ 2019-era projections for office attendance during a pandemic.) It takes some time to adjust and account for emerging risks. Many poorly performing individual mortgage securities in 2006 and 2007 were largely ignored, buried under mounds of models that predicted better outcomes for large bundles of those same securities.
There are many other examples in history. By August 2001, a “stream of warnings” was flowing in to intelligence agencies (both unusual and relatively widespread chatter). It was identified and flagged up the chain. But that risk was ultimately dismissed, and we lost a possible opportunity to mitigate the 9/11 attacks.
Now we see multiple home insurers retreating from areas at high risk for wildfires and floods, which are likely to become more frequent because of climate change. Yet, if you’re not directly affected as a homeowner and don’t follow the financial press, you might not think too much about future insurance coverage in your day-to-day or even in your moving decisions. This 2022 Allied US Migration Report1 shows that the top inbound states were: Arizona, South Carolina, North Carolina, Tennessee, and Texas.
**Does climate change still qualify as an emerging risk? **
Well, it was an emerging risk in the 1990s, and now the haystack is on fire. Yet, climate change is still treated as an emerging risk by many people and companies, and the response—*how we will collectively deal with it*—is still very much emerging.
More broadly, I hope these steps will help you identify risks early and take action to avoid the worst impacts if you can. And the sooner we see risks and take action to mitigate them (which, at an individual level, may mean working directly on a problem, donating to organizations working directly on problems, or speaking up in favor of corporate or political action), the less severe the eventual impact of risk is likely to be.
Allied is a large moving company, so their data is likely a decent sample. So much has changed since the 2020 census that relying on that data already feels outdated.
This is a topic which interests me a lot, because I worked on an emerging risk system for pests and diseases affecting plants and animals. One of the things I learned looking at the issue was that there was often a lack of communication between people who were aware of something emerging and people needing to take action on it. But there was also a "too much information" problem, where there was so much coming in that was potentially of interest that the really crucial pieces of evidence got missed.
This is an interesting topic Stephanie. Modelling the thermodynamics and the flow of liquids and gases in a small and enclosed space is real hard. The planet doesn't qualify as a small and enclosed space. I think really talented and hard-working people are doing their thing. I think what I liked about this post is the sensibility that there are a whole lot of surprises ahead. Great advice to look for the anomalies and the unexpected. Love your writing. Always makes me think.
| true | true | true |
At the dawn of this blog in summer 2023 (edit: I meant 2022!), I wrote about how to identify emerging risks.
|
2024-10-12 00:00:00
|
2023-08-04 00:00:00
|
https://substackcdn.com/image/fetch/f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Friskmusings.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D901183796%26version%3D9
|
article
|
substack.com
|
Risk Musings
| null | null |
27,066,299 |
http://oceans.nautil.us/feature/692/the-largest-cells-on-earth
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,779,545 |
http://blog.ycombinator.com/sign-up-for-summer-yc-open-office-hours
|
Sign up for Summer YC Open Office Hours | Y Combinator
| null | null | true | true | false |
Starting today you can sign up for June, July and August YC Open Office Hours. What are Open Office Hours? Learn more here [http://www.ycopenofficehours.com/about/]. June 23: General Open Office Hours All founders are welcome to apply.Apply here by June 3 [https://apply.ycombinator.com/events/46] July 14: Black, Latino and Native American Founders Apply h [https://apply.ycombinator.com/events/47]ere by June 21 [https://apply.ycombinator.com/events/47] [https://apply.ycombinator.com/event
|
2024-10-12 00:00:00
|
2016-05-26 00:00:00
| null |
website
|
ycombinator.com
|
Y Combinator
| null | null |
3,647,994 |
http://samuelmullen.com/2012/02/advice-on-attracting-good-developers/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,206,596 |
http://www.theregister.co.uk/2013/02/12/spoof_zombie_apocalypse_warning/
|
Montana TV warns of ZOMBIE ATTACK in epic prank hack
|
John Leyden
|
This article is more than **1 year old**
# Montana TV warns of ZOMBIE ATTACK in epic prank hack
## Cops: 'Wait. What if ... ?'
Pranksters managed to hack a TV emergency alert system in Montana on Monday to broadcast an on-air audio warning about the supposed start of a zombie apocalypse.
Viewers of Great Falls, Montana, television station KRTV watching a Jerry Springer-style show (specifically the Teen Cheaters Take Lie Detectors segment of *The Steve Wilkos Show*) had their ears assaulted by an on-air warning that "bodies of the dead are rising from their graves and attacking the living". The video of two teenagers squaring up to each other was not interrupted but audio of their argument was replaced by the following brief, but chilling, message. The alert also featured a scrolling warning at the top of the screen naming various Montana counties as targets for the spoof announcement of doom.
Civil authorities in your area have reported that the bodies of the dead are rising from their graves and attacking the living. Follow the messages onscreen that will be updated as information becomes available. Do not attempt to approach or apprehend these bodies as they are considered extremely dangerous.
A slightly longer statement along the same lines was broadcast by another KRTV channel, interrupting a commercial break at the end of a weather report, as recorded by a YouTube clip here. Viewers of this clip were instructed to tune into 920AM on their battery-powered radio if electricity supplies became interrupted.
KRTV quickly repudiated the statement and launched an investigation into the incidents, which it blames on as yet unidentified hackers.
"Someone apparently hacked into the Emergency Alert System and announced on KRTV and the CW that there was an emergency in several Montana counties. This message did not originate from KRTV, and there is no emergency," the CBS affiliate station said in a short statement on the incident.
"Our engineers are investigating to determine what happened and if it affected other media outlets."
A brief video clip featuring the warning is embedded in a story by local paper *The Great Falls Tribune* about the incident, which it reports "prompted quite a few confused phone calls [to police on] Monday afternoon". Local police have yet to be called in to investigate the incident, the paper adds.
"We had four calls checking to see if it was true. And then I thought, 'Wait. What if?'” Lt. Shane Sorensen with the Great Falls Police Department told the paper. “We can report in the city, there have been no sightings of dead bodies rising from the ground.”
US Motorway signs have been hacked to warn of "zombies ahead" and similar incidents before but the epic KRTV hack takes this to another level. *El Reg*'s security desk has considered that it might be an elaborate promo for the current movie *Warm Bodies* or for the premiere of the second half of the third season of *The Walking Dead*, which arrived on US TV screens on Sunday. The zombie show drew a series best of 12.3 million US viewers at the weekend. ®
| true | true | true |
Cops: 'Wait. What if ... ?'
|
2024-10-12 00:00:00
|
2013-02-12 00:00:00
| null |
article
|
theregister.com
|
The Register
| null | null |
29,479,303 |
https://www.latimes.com/entertainment-arts/story/2021-12-07/thomas-guide-los-angeles-orange-county-new-2022
|
The Thomas Guide is back. Why seemingly obsolete map books will publish for 2022
|
James Bartlett
|
# The Thomas Guide is back. Why seemingly obsolete map books will publish for 2022
For decades a Thomas Guide was a driver’s well-thumbed tool for survival in the freeway-heavy, ever-expanding sprawl of Southern California — as essential as a spare tire, at least until traffic apps made paper books seemingly obsolete.
Younger Angelenos may never have heard of them, but the Thomas Guide lives on. The 2022 editions — the first in three years — are due out next week.
Guidebooks and fold-out maps were first published by three Thomas brothers in Oakland in 1915. Their first city map, Los Angeles, came in 1946. San Francisco and other California cities followed a few years later, and the operation expanded to cover regions across America and Canada.
L.A.’s Central Library has the only known complete collection of the rectangular, ring-bound guides. Recently retired map librarian Glen Creason recalled how the library offered free photocopies of the Hollywood and downtown pages because they were so often torn out by cheap (or maybe lost) motorists.
Then came GPS and traffic-beating apps.
Despite its huge database, the Thomas Guide stumbled in the digital age, and owner Rand McNally replaced cartographers and cut costs. The guides seemed destined to be lost to nostalgia, yet they didn’t disappear.
From his office outside San Antonio, Larry Thomas, now the majority owner of the Thomas Maps brand, said that at one time the Thomas Guide had seven or eight distributors in California alone and printed a Spanish-language edition too.
Only the L.A./Orange County and San Diego/Imperial counties guides are still produced today, but Thomas is expecting good sales.
“Private car owners are a very small part of our business now,” he said. “But California state legislation says that every police and fire vehicle must have a Thomas Guide on board. Fire roads often aren’t on GPS, and ambulances can’t get lost, as every second could be life-saving. They often buy laminated copies too, as they get so beat up.”
That accounts for 1,000 to 1,500 sales a year. As a distributor for Rand McNally and other map and atlas brands, Thomas also supplies custom maps with “extreme detail” for transit agencies, hospitals, animal control and others.
Thomas said he was “in the right time and place” to start his career as a distributor about 30 years ago — and that his last name was a lucky factor.
“I got the blessings of the original owners, the Thomas brothers, who said that my last name wouldn’t infringe their copyright,” he said, “so I could legitimately call myself Thomas Maps.”
| true | true | true |
Hate your traffic app? There's a reason that new editions of the beloved Thomas Guide are coming for L.A., Orange County and San Diego.
|
2024-10-12 00:00:00
|
2021-12-07 00:00:00
| null |
newsarticle
|
latimes.com
|
Los Angeles Times
| null | null |
12,685,423 |
https://medium.com/stareable-s-blog/the-weekly-binge-3691351e1be4#.l4gbhymyc
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,387,884 |
http://www.bigomaha.com/
|
Oxide | Drew Davies | Big Omaha
| null |
## Big Omaha
Big Omaha broke the mold on how to brand a technology and innovation conference.
- Naming
- Brand identity
- Logo
- Interior signage
- Interior design
- Website
- Ad / promo design
- Messaging / copywriting
- Print materials
- Illustration
- User experience (UX)
- Event promotion
Oxide collaborated with Silicon Prairie News to brand the first six years of the nation’s most passionate conference on innovation and entrepreneurship, helping the conference grow to the point of the founders being able to sell it.
In the first year of Silicon Prairie News’ conference on innovation and entrepreneurship in the heartland, Oxide branded the conference, developing the concept of “giant cow” as the perfect tongue-in-cheek symbol for Omaha. Since the conference’s content centers around the idea of taking a leap of faith, we used daredevils and risk-takers — interacting with the cow — to capture its spirit.
Each year we helped craft the experience of Big Omaha attendees, making sure return guests had something new and exciting to look forward to.
| true | true | true |
Oxide collaborated with Silicon Prairie News to brand the first six years of Big Omaha.
|
2024-10-12 00:00:00
|
2019-08-09 00:00:00
|
article
|
oxidedesign.com
|
Oxide Design
| null | null |
|
2,466,639 |
http://www.marketwatch.com/story/history-bodes-ill-for-stock-market-2011-04-12
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
35,388,680 |
https://www.dolthub.com/blog/2023-03-28-swiss-map/
|
SwissMap: A smaller, faster Golang Hash Table
|
Andy Arthur
|
# SwissMap: A smaller, faster Golang Hash Table
Today's blog is announcing SwissMap, a new Golang hash table based on SwissTable that is faster and uses less memory than Golang's built-in map. We'll cover the motivation, design and implementation of this new package and give you some reasons to try it. This blog is part of our deep-dive series on the Go programming language. Past iterations include posts about concurrency, "inheritance", and managing processes with Golang.
At DoltHub, we love Golang and have used it to build DoltDB, the first and only SQL database
you can branch, diff and merge. Through our experience building Dolt, we've gained some expertise in the language. We
found a lot of features we appreciate and a few more sharp edges that have bitten us. One of the hallmarks of the Go
language is its focus on simplicity. It strives to expose a minimal interface while hiding a lot of complexity in the
runtime environment. Golang's built-in `map`
is a great example of this: its read and write operations have dedicated
syntax and its implementation is embedded within the runtime. For most use cases, `map`
works great, but its opaque
implementation makes it largely non-extensible. Lacking alternatives, we decided to roll our own hash table.
## Motivation
Hash tables are used heavily throughout the Dolt codebase; however, they become particularly performance-critical at lower layers in the stack that deal with data persistence and retrieval. The abstraction
responsible for data persistence in Dolt is called a `ChunkStore`
. There are many `ChunkStore`
implementations, but they
share a common set of semantics: variable-length byte buffers called "chunks" are stored and fetched using a `[20]byte`
content-addressable hash. Dolt's table indexes are stored in Prolly Trees, a tree-based data structure composed of these variable-sized chunks. Higher nodes in a Prolly tree reference child nodes
by their hash. To dereference this hash address, a ChunkStore must use a "chunk index" to map hash addresses to physical
locations on disk. In contrast, traditional B-tree indexes use fixed-sized data pages and parent nodes reference children
directly by their physical location within an index file.
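
To make the addressing model concrete, here is a minimal sketch of a chunk store keyed by 20-byte content addresses. The type and method names are illustrative assumptions for this post, not Dolt's actual `ChunkStore` API.

```
// Addr is a 20-byte content address computed from a chunk's bytes.
type Addr [20]byte

// Chunk is a variable-length byte buffer addressed by the hash of its contents.
type Chunk struct {
	Addr Addr
	Data []byte
}

// chunkStore is an illustrative persistence interface: chunks are written and
// later fetched by content address rather than by physical file offset.
type chunkStore interface {
	// Get returns the chunk stored under addr, or ok=false if it is absent.
	Get(addr Addr) (c Chunk, ok bool)
	// Put persists the chunk under the hash of its contents.
	Put(c Chunk) error
}
```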
Large Prolly Tree indexes in Dolt can be 4 to 5 levels deep. Traversing each level requires using the chunk index to
resolve references between parent and child nodes. In order to compete with traditional B-tree indexes, the chunk index
must have very low latency. The original design for this chunk index was a set of static, sorted arrays. Querying the
index involved binary searching each array until the desired address was found. The upside of this design was its
compactness. Chunk addresses alone are 20 bytes and are accompanied by a `uint64`
file offset and a `uint32`
chunk length.
This lookup information is significantly more bloated than the 8 byte file offset that a traditional B-Tree index would
store. Storing lookups in static arrays minimized the memory footprint of a chunk index. The downside is that querying
the index has asymptotic complexity of `m log(n)` where `m` is the number of arrays and `n`
is their average size.
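
As a rough sketch of that original design (simplified, and not Dolt's actual code), the lookup below binary-searches each sorted array in turn, which is where the `m log(n)` cost comes from:

```
import "bytes"

// lookup pairs a 20-byte chunk address with its location on disk.
type lookup struct {
	addr   [20]byte
	offset uint64
	length uint32
}

// chunkIndex is a set of static arrays, each sorted by address.
type chunkIndex struct {
	tables [][]lookup
}

// find binary-searches each sorted array in turn: O(m log n) for m arrays
// of average size n.
func (idx *chunkIndex) find(addr [20]byte) (lookup, bool) {
	for _, table := range idx.tables {
		lo, hi := 0, len(table)
		for lo < hi {
			mid := (lo + hi) / 2
			if bytes.Compare(table[mid].addr[:], addr[:]) < 0 {
				lo = mid + 1
			} else {
				hi = mid
			}
		}
		if lo < len(table) && table[lo].addr == addr {
			return table[lo], true
		}
	}
	return lookup{}, false
}
```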
While designing our new ChunkStore implementation, the Chunk Journal,
we decided to replace the array-based chunk index with a hash table. A hash-table-based index would support constant time
hash address lookups, reducing ChunkStore latency. The tradeoff is that the hash table uses more memory. Exactly *how much* more memory depends on what type of hash table you're using. Our first implementation used Golang's built-in hash table `map`, which has a "maximum load factor" of 6.5/8. This meant that in the best-case scenario `map` uses 23% more memory than the array-based chunk index (a packed array is fully dense, so a table that is at most 81.25% full needs at least 1/0.8125 ≈ 1.23 times the space). However, the average case is much worse. So how could we get constant-time
chunk lookups without blowing our memory budget? Enter SwissMap.
## SwissTable Design
SwissMap is based on the "SwissTable" family of hash tables from Google's open-source C++ library Abseil.
These hash tables were developed as a faster, smaller replacement for `std::unordered_map`
from the C++ standard library. Compared to `std::unordered_map`
, they have a denser, more cache-friendly memory layout
and utilize SSE instructions to accelerate key-value lookups.
The design has proven so effective that it's now being adopted in other languages. Hashbrown,
the Rust port of SwissTable, was adopted into the Rust standard library in Rust 1.36. There is even an
ongoing effort within the Golang community to adopt the SwissTable design
as the runtime `map`
implementation. The SwissTable design was a perfect fit for our chunk index use-case: it was fast
and supported a higher maximum load factor of 14/16.
The primary design difference between the built-in `map`
and SwissMap is their hashing schemes. The built-in map uses an
"open-hashing" scheme where key-value pairs with colliding hashes are collected into a single "bucket". To look up a value
in the map, you first choose a bucket based on the hash of the key, and then iterate through each key-value pair in the
bucket until you find a matching key.
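
A heavily simplified sketch of that bucket walk (including the cheap 8-bit pre-check the next paragraph describes) might look like the following; it illustrates the idea only and is not the runtime's code:

```
const bucketSize = 8

// A toy bucket in the style of the runtime's: up to 8 entries, each with an
// extra 8-bit hash ("tophash") used as a cheap pre-check before comparing keys.
type bucket struct {
	tophash [bucketSize]uint8
	keys    [bucketSize]string
	values  [bucketSize]uint64
	used    [bucketSize]bool
}

// get walks the bucket's slots, rejecting non-matches with the 8-bit hash
// (false-positive rate 1/256) before doing the full key comparison.
func (b *bucket) get(key string, top uint8) (uint64, bool) {
	for i := 0; i < bucketSize; i++ {
		if !b.used[i] || b.tophash[i] != top {
			continue
		}
		if b.keys[i] == key {
			return b.values[i], true
		}
	}
	return 0, false
}
```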
A key optimization in the built-in `map`
is the use of "extra hash bits" that allow for fast equality checking while
iterating through slots of a bucket. Before directly comparing the query key with a candidate, the lookup algorithm first
compares an 8-bit hash of each key (independent from the bucket hash) to see if a positive match is possible. This fast
pre-check has a false-positive rate of 1/256 and greatly accelerates the searches through a hash table bucket. For more
design details on Golang's built-in map, check out Keith Randall's 2016 GopherCon talk
"Inside the Map Implementation".
The SwissTable uses a different hashing scheme called "closed-hashing". Rather than collect elements into buckets, each
key-value pair in the hash-table is assigned its own "slot". The location of this slot is determined by a probing
algorithm whose starting point is determined by the hash of the key. The simplest example is a linear probing search
which starts at slot `hash(key) mod size`
and stops when the desired key is found, or when an empty slot is reached. This
probing method is used both to find existing keys and to find insert locations for new keys. Like the built-in Golang map,
SwissTable uses "short hashes" to accelerate equality checks during probing. However, unlike the built-in map, its hash
metadata is stored separately from the key-value data.
The segmented memory layout of SwissTable is a key driver of its performance. Probe sequences through the table only access the metadata array until a short hash match is found. This access pattern maximizes cache-locality during probing. Once a metadata match is found, the corresponding keys will almost always match as well. Having a dedicated metadata array also means we can use SSE instructions to compare 16 short hashes in parallel! Using SSE instruction is not only faster, but is the reason SwissTable supports a maximum load factor of 14/16. The observation is that "negative" probes (searching for a key that is absent from the table) are only terminated when an empty slot is encountered. The fewer empty slots in a table, the longer the average probe sequence takes to find them. In order to maintain O(1) access time for our hash table, the average probe sequence must be bounded by a small, constant factor. Using SSE instructions effectively allows us to divide the length of average probe sequence by 16. Empirical measurements show that even at maximum load, the average probe sequence performs fewer than two 16-way comparisons! If you're interested in learning (a lot) more about the design of SwissTable, check out Matt Kulukundis' 2017 CppCon talk “Designing a Fast, Efficient, Cache-friendly Hash Table, Step by Step”.
## Porting SwissTable to Golang
With a design in hand, it was time to build it. The first step was writing the `find()`
algorithm. As Matt Kulukundis
notes in his talk, `find()`
is the basis for all the core methods in SwissTable: implementations for `Get()`
, `Has()`
,
`Put()`
and `Delete()`
all start by "finding" a particular slot. You can read the actual implementation
here, but for simplicity
we'll look at a pseudocode version:
```
func (m Map) find(key) (slot int, ok bool) {
h1, h2 := hashOf(key) // high and low hash bits
s := modulus(h1, len(m.keys)/16) // pick probe start
for {
// SSE probe for "short hash" matches
matches := matchH2(m.metadata[s:s+16], h2)
for _, idx := range matches {
if m.keys[idx] == key {
return idx, true // found |key|
}
}
// SSE probe for empty slots
matches = matchEmpty(m.metadata[s:s+16])
for _, idx := range matches {
return idx, false // found empty slot
}
s += 16
}
}
```
The probing loop continues searching until it reaches one of two exit conditions. Successful calls to `Get()`
, `Has()`
,
and `Delete()`
terminate at the first `return`
when both the short hash and key value match the query `key`
. `Put()`
calls and unsuccessful searches terminate at the second return when an empty slot is found. Within the metadata array,
empty slots are encoded by a special short hash value. The `matchEmpty`
method performs a 16-way SSE probe for this
value.
Golang support for SSE instructions, and for SIMD instructions in general, is minimal. To leverage these intrinsics, SwissMap uses the excellent Avo package to generate assembly functions with the relevant SSE instructions. You can find the code gen methods here.
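
For readers who want the semantics without the assembly, here is a portable, scalar sketch of what the two 16-way probes compute. The real package emits SSE instructions for this via Avo, and the `empty` sentinel value below is an assumption for illustration only:

```
const groupSize = 16

// empty is an assumed sentinel short-hash marking an unused slot.
const empty = 0b1000_0000

// matchH2 returns the indices (relative to the group) whose metadata byte
// equals the query's short hash. The SSE version does all 16 compares at once.
func matchH2(meta []uint8, h2 uint8) (matches []int) {
	for i := 0; i < groupSize; i++ {
		if meta[i] == h2 {
			matches = append(matches, i)
		}
	}
	return matches
}

// matchEmpty returns the indices of empty slots in the group.
func matchEmpty(meta []uint8) (matches []int) {
	for i := 0; i < groupSize; i++ {
		if meta[i] == empty {
			matches = append(matches, i)
		}
	}
	return matches
}
```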
The chunk index use case requires a specific hash table mapping hash keys to chunk lookup data. However, we wanted SwissMap
to be a generic data structure that could be reused in any performance-sensitive context. Using generics, we could
define a hash table that was just as flexible as the built-in `map`
:
```
package swiss
type Map[K comparable, V any] struct {
hash maphash.Hasher[K]
...
}
```
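
Using it then looks much like using the built-in `map`. The constructor and method names below follow the package's README, but treat the exact signatures as assumptions rather than a reference:

```
import "github.com/dolthub/swiss"

func example() {
	m := swiss.NewMap[string, int](42) // initial capacity hint

	m.Put("foo", 1)
	if v, ok := m.Get("foo"); ok {
		_ = v // 1
	}
	m.Delete("foo")
}
```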
SwissMap's hash function is `maphash`
, another DoltHub package that uses Golang's
runtime hasher capable of hashing any `comparable`
data type. On supported platforms, the runtime hasher will use
AES instructions to efficiently generate strong hashes. Utilizing
hardware optimizations like SSE and AES allows SwissMap to minimize lookup latency, even outperforming Golang's builtin
`map`
for larger sets of keys:
```
goos: darwin
goarch: amd64
pkg: github.com/dolthub/swiss
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkStringMaps
num_keys=16
num_keys=16 runtime_map-12 112244895 10.71 ns/op
num_keys=16 swiss.Map-12 65813416 16.50 ns/op
num_keys=128
num_keys=128 runtime_map-12 94492519 12.48 ns/op
num_keys=128 swiss.Map-12 62943102 16.09 ns/op
num_keys=1024
num_keys=1024 runtime_map-12 63513327 18.92 ns/op
num_keys=1024 swiss.Map-12 70340458 19.13 ns/op
num_keys=8192
num_keys=8192 runtime_map-12 45350466 24.77 ns/op
num_keys=8192 swiss.Map-12 58187996 21.29 ns/op
num_keys=131072
num_keys=131072 runtime_map-12 35635282 40.24 ns/op
num_keys=131072 swiss.Map-12 36062179 30.71 ns/op
PASS
```
Finally, let's look at SwissMap's memory consumption. Our original motivation for building SwissMap was to get constant
time lookup performance for our chunk index while minimizing the additional memory cost. SwissMap supports a higher
maximum load factor (87.5%) than the built-in map (81.25%), but this difference alone doesn't tell the whole story. Using
Golang's pprof profiler, we can measure the *actual* load factor of each map for a range of
key set sizes. Measurement code can be found here.
In the chart above we see markedly different memory consumption patterns between SwissMap and the built-in map. For comparison, we've included the memory consumption of array storing the same set of data. Memory consumption for the built-in map follows a stair-step function because it's always constructed with a power-of-two number of buckets. The reason for this comes from a classic bit-hacking optimization pattern.
Any hash table lookup (open or closed hashing) must pick a starting location for its probe sequence based on the hash of
the query key. Mapping a hash value to a bucket or slot is accomplished with remainder division. As it turns out, the remainder division operator `%` is rather expensive in CPU cycles, but if the divisor is a power of two, you can replace the
`%`
operation with a super-fast bit mask of the lowest `n`
bits. For this reason, many if not most hash tables are
constrained to power-of-two sizes. Often this creates negligible memory overhead, but when allocating hash tables with
millions of elements, the impact is significant! As shown in the chart above, Golang's built-in map uses 63% more memory,
on average, than SwissTable!
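
The power-of-two trick itself is tiny; for any power-of-two `n`, masking the low bits gives the same result as `%`:

```
// For a power-of-two table size n, h % n equals h & (n-1):
// the mask simply keeps the low log2(n) bits of the hash.
func bucketFor(h uint64, n uint64) uint64 {
	// assumes n is a power of two
	return h & (n - 1) // same result as h % n, without the division
}
```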
To get around the slowness of remainder division, and the memory bloat of power-of-two sizing, our implementation of SwissMap uses a different modulus mapping, first suggested by Daniel Lemire. This idea is deceptively simple:
```
func fastModN(x, n uint32) uint32 {
return uint32((uint64(x) * uint64(n)) >> 32)
}
```
This method uses only a few more operations than the classic bit-masking technique, and micro-benchmarks at just a
quarter of a nanosecond. Using this modulus method means we're limited by the range of `uint32`
, but because this
integer indexes buckets of 16 elements, SwissMap can hold up to `2 ^ 36`
elements. More than enough for most use-cases
and well-worth the memory savings!
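
A quick way to convince yourself that `fastModN` always lands in range: since `x` is a 32-bit value, the 64-bit product `x * n` is strictly less than `2^32 * n`, so shifting right by 32 always yields a value in `[0, n)`. A tiny check, reusing `fastModN` from the block above:

```
// fastModN maps x into [0, n): with n = 7, the 32-bit hash space is split
// into 7 nearly equal ranges.
func exampleFastModN() {
	for _, x := range []uint32{0, 1 << 31, ^uint32(0)} {
		_ = fastModN(x, 7) // yields 0, 3, 6 respectively
	}
}
```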
## Give SwissMap a Try!
Hopefully this was an informative deep-dive on hash table design and high-performance Golang. SwissMap proved to be an effective solution to our chunk index problem, but we hope it can also be a general purpose package for other performance sensitive use-cases. While it's not an ideal fit for every situation, we think it has a place when memory utilization for large hash tables is a concern. If you have any feedback on SwissMap feel free to cut an issue in the repo. Or if you'd like to talk to us directly, come join our Discord!
| true | true | true |
Initial release of SwissMap, a Golang port of Abseil's flat_hash_map.
|
2024-10-12 00:00:00
|
2023-03-28 00:00:00
|
article
|
dolthub.com
|
Dolthub
| null | null |
|
32,270,838 |
https://www.thestandard.com.hk/breaking-news/section/4/192858/Mirror-concert-accident-%27a-fortune-out-of-misfortune%27:-expert
|
Mirror concert accident 'a fortune out of misfortune': expert
|
The Standard
|
# Mirror concert accident 'a fortune out of misfortune': expert
Local | 29 Jul 2022 12:04 am

A mechanical engineering expert told The Standard that the accident that happened at Mirror's concert on Thursday night is "a fortune out of misfortune."
Lo Kok-keung, a chartered mechanical engineer and a fellow of the Institution of Mechanical Engineers, said: "Estimating the monitor weighs 90 kilograms and fell from 10 meters high, if it hits a dancer with its corner it will create 708 pounds of force, and we could not rule out it causing the dancers to die."
But he added that since the monitor first hit the ground and then crushed the dancers like a chopper, the force created was only about one-third of that estimate.
The accident happened at the Hong Kong Coliseum on Thursday at about 10.35pm, when a big TV screen suddenly fell and landed on at least two dancers. They were sent to Queen Elizabeth Hospital conscious.
| true | true | true |
A mechanical engineering expert told The Standard that, the accident happened at Mirror's concert on Thursday night is “a fortune out of misfortune.”Lo Kok-keung, a chartered...
|
2024-10-12 00:00:00
|
2022-07-29 00:00:00
|
article
|
thestandard.com.hk
|
The Standard
| null | null |
|
1,716,313 |
http://techcrunch.com/2010/09/22/att-iphone-verizon/
|
AT&T Not Concerned About iPhone Defections -- CEO Boasts That 80% Are Basically Trapped | TechCrunch
|
MG Siegler
|
At this point, my head is spinning. Earlier tonight, I wrote about how Verizon is still full-steam ahead on destroying the fabric of Android. Meanwhile, on the other side of the aisle, we have AT&T playing up the fact that they got a “D-” on a coverage test instead of an “F”. I seriously just can’t decide which carrier is worse.
Earlier today, a study by Credit Suisse was released stating that 23 percent of iPhone users currently on AT&T would switch to Verizon if that carrier offered the phone. That number is slightly off from the 34 percent that was previously reported, but is still pretty massive. In total, that represents about 1.4 million customers that would jump ship from AT&T to Verizon without hesitation. But speaking today at the Goldman Sachs media and technology conference, Communacopia (yes, awful name), AT&T CEO Randall Stephenson had something interesting to say about possible defections.
Stephenson noted that 80 percent of AT&T’s iPhone base is either in family plans or business relationships with the carrier and that these type of customers tend to be “very sticky.” So essentially what he’s saying is that those 80 percent of iPhone users probably won’t leave even if they want to. Wow, that’s a fresh approach.
The correct answer there would have been to say that AT&T will be doing all it can to improve its network and its customer service to ensure these people stay. And that they’re confident that they will. Or really, anything would have been better than an answer that basically amounts to “we have them trapped.”
Of course, this seems to be the company line these days. The same 80 percent figure was touted in a recent SEC filing.
But this may be my favorite part of Stephenson’s talk, from CNBC’s report:
Stephenson emphasized the “extended array” of smartphones Apple subscribers can pick from, which reads as AT&T saying it’s not too reliant on Apple.
Does he really believe that iPhone users are going to switch to some other phone that AT&T offers instead of switching to the iPhone on another network? I mean, seriously?
This is basically like saying, “well, we offer you crappy service on one of the most popular devices out there, so why don’t you try this less popular device and stick with us?”
That’s what we call a lose-lose situation. Brilliant.
| true | true | true |
At this point, my head is spinning. Earlier tonight, I wrote about how Verizon is still full-steam ahead on destroying the fabric of Android. Meanwhile, on the other side of the aisle, we have AT&T playing up the fact that they got a "D-" on a coverage test instead of an "F". I seriously just can't decide which carrier is worse. Earlier today, a study by Credit Suisse was released stating that 23 percent of iPhone users currently on AT&T would switch to Verizon if that carrier offered the phone. That number is slightly off from the 34 percent that was previously reported, but is still pretty massive. In total, that represents about 1.4 million customers that would jump ship from AT&T to Verizon without hesitation. But speaking today at the Goldman Sachs media and technology conference, Communacopia (yes, awful name), AT&T CEO Randall Stephenson had something interesting to say about possible defections.
|
2024-10-12 00:00:00
|
2010-09-22 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
26,145,423 |
https://arxiv.org/abs/2102.02503
|
Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
|
Tamkin, Alex; Brundage, Miles; Clark, Jack; Ganguli, Deep
|
# Computer Science > Computation and Language
[Submitted on 4 Feb 2021]
# Title: Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
Abstract: On October 14th, 2020, researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities convened to discuss open research questions surrounding GPT-3, the largest publicly-disclosed dense language model at the time. The meeting took place under Chatham House Rules. Discussants came from a variety of research backgrounds including computer science, linguistics, philosophy, political science, communications, cyber policy, and more. Broadly, the discussion centered around two main questions: 1) What are the technical capabilities and limitations of large language models? 2) What are the societal effects of widespread use of large language models? Here, we provide a detailed summary of the discussion organized by the two themes above.
| true | true | true |
On October 14th, 2020, researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities convened to discuss open research questions surrounding GPT-3, the largest publicly-disclosed dense language model at the time. The meeting took place under Chatham House Rules. Discussants came from a variety of research backgrounds including computer science, linguistics, philosophy, political science, communications, cyber policy, and more. Broadly, the discussion centered around two main questions: 1) What are the technical capabilities and limitations of large language models? 2) What are the societal effects of widespread use of large language models? Here, we provide a detailed summary of the discussion organized by the two themes above.
|
2024-10-12 00:00:00
|
2021-02-04 00:00:00
|
/static/browse/0.3.4/images/arxiv-logo-fb.png
|
website
|
arxiv.org
|
arXiv.org
| null | null |
4,920,514 |
http://markboulton.co.uk/journal/participation
|
Participation - Mark Boulton
| null |
# Participation
Last week, there was an argument on the internet.
As usual, it started on Twitter and a flurry of blog posts are cropping up this week to fill in the nuances that 140 characters will not allow. So, here’s mine…
[Aside: I did actually make a promise to myself that I wouldn’t get involved, but, I find that cranking out a quick blog post, gets my head in the space for writing generally.]
I started speaking at web conferences in 2006. After attending SXSW the year before, I proposed a panel discussion (with the lofty title: Traditional Design and New Technology) with some design friends of mine: Khoi Vinh, Jason Santa Maria and Toni Greaves and moderated by Liz Danzico. I was terrified. But, in the end, it was fun – there was some lively debate.
I wanted to do a panel at SXSW since seeing Dave Shea, Doug Bowman and Dan Cederholm sit on a CSS panel at SXSW in 2004. Not because I saw the adulation, but because I saw – for the first time – what it was like to contribute to this community. To be part of it. To give back: be it code, techniques, thoughts, debate or discussion. And I wanted a part of it. So, that’s what I did. I started blogging – I felt I had some things to say, about typography, grids, colour theory. All of the traditional graphic design stuff that mattered to me. Not because of some egotistical trip, but because I genuinely wanted to make things better. Trite, I know.
Fast forward a couple of years and I’m speaking at the inaugural New Adventures conference in Nottingham for my friend Simon Collison. On that day, every speaker up on stage was trying to give the best talk they could. Not because of the audience, not because of who they were, but because of Simon. It was personal.
The talk I gave at New Adventures took about two years to write. Yes, it took me that long to write a twenty-five-minute talk. Throw that into the equation, along with a high-risk personal favour for a good friend, plus my family and best friend in the audience, and you've got a recipe for bad nerves and vomit. And I did vomit.
But, I got up there. Cast aside the nerves and held my head up and spoke for twenty minutes on things I’ve been thinking about for years. It was received well. Afterwards, all I did was sit in the green room for about two hours and didn’t really speak to anyone.
## My natural preference
It may surprise you that most speakers I know are not extroverts. I don’t mean extroverts in the way you may think, either. I mean it in the Myers Briggs type: they are not the type of people who gain energy from other people, they gain that energy from themselves. I’m one of these people, too.
Being on stage is firmly out of my comfort zone. Firmly. I’ve had to learn to harness the nerves and put them to good use. A good friend of mine calls this ‘peak performance’ – the nerves help you bring your ‘A’ game.
My natural preference is to be on my own, working. Either thinking, sketching, writing, building, exercising… whatever. All through my life, I’ve enjoyed solitary sports and pastimes, from angling to cycling. Now, that’s not to say I’m a hermit. I’m not. I’m pretty sociable when I need to be, but it’s not my preference. So being on stage – sticking my head above the parapet – takes incredible effort, and then afterwards, I generally want to go and hide in a corner for a bit. It wipes me out.
So, why do I do it? Why does anyone do it in this community? If you’re a regular speaker, or your first time? Almost everyone I know does it because they want to give back. They have something they’d like to share in the hope it may help someone else in a similar position.
This brings me full circle to my opening sentence. Why, then, knowing all of this, is there a general feeling of discontent in a vocal minority that speakers who do this regularly are:
- In it for the ego
- Not doing any real work
- Not leaving room for new speakers
I’d like to address these points from my own experience.
### In it for the ego
Why would someone get up on stage and speak to hundreds of people? Sure, some may get a kick out of that. People applauding you feels nice. But, let’s be clear: that’s not egotistical. That’s being rewarded, and there’s nothing wrong with that.
### Not doing any real work
I wrote about defining ‘work’ last week. I see speaking as part of my work as a graphic designer. If you’ve studied graphic designers, art directors, ad copywriters and the like, you’ll know that a lot of them speak to their peers – either at conferences or through publications. Writing and talking about what we do with each other *is* work. Not only that, it’s fucking important work too. Without it, there would be no web standards, no open source, no progression.
### Not leaving room for new speakers
Experienced speakers leave room for everyone. Experienced speakers do not run conferences: conference organisers do. And conference organisers need to put bums on seats. Just like a big music festival, you need the draw, but also you need the confidence that a speaker will deliver to the audience. Every experienced speaker I know works damned hard to make sure they deliver the best they can, every single time. They’re professional. They don’t screw it up, or spring surprises. They deliver. And *that’s* why you may see their faces at one or two conferences.
A couple of months ago, I saw Heather Champ talk at Web Directions South in Sydney. Amongst many hilarious – and equally terrifying - stories of how she’s managed and curated communities over the years, she came out with the nugget:
“What you tolerate defines your community”
— Heather Champ
At this point, I’d like to ask you this:
*What will you tolerate in this community?*
Will you tolerate a conference circuit swamped by supposedly the same speakers and vote with your wallet? Or will you tolerate conference organisers being continually beaten up for genuinely trying to do the right thing? Will you tolerate speakers being abused for getting on stage and sharing their experiences?
Will you tolerate harassment, bullying and exclusion?
As I’ve said before, Twitter is like a verbal drive-by. It’s fast, efficient, impersonal and you don’t stick around for the consequences. Let’s stop it.
| true | true | true | null |
2024-10-12 00:00:00
|
2012-12-08 00:00:00
| null |
website
|
markboulton.co.uk
|
Mark Boulton
| null | null |
7,770,950 |
http://forum.dlang.org/thread/[email protected]
|
Dash: An Open Source Game Engine in D
|
Colden Cullen
|
**Dash: An Open Source Game Engine in D** (May 19, 2014)
Hi everyone, I’m super excited to be able to announce that the Dash game engine[1] is finally stable and ready for public use! I’m currently the Lead Engine Programmer at Circular Studios[2] (the group behind Dash). We had 14 people working on the team, 6 engine programmers and 8 game developers creating Spectral Robot Task Force, a turn-based strategy game built with Dash. Dash is an OpenGL engine written in the D language that runs on both Windows and Linux. We use a deferred-rendering model in the current pipeline, and a component model for game development and logic. Other major features at the moment include networking, skeletal-animation support, content and configuration loading via YAML, and UI support through Awesomium[3] (though we are in the process of moving over to using CEF[4] itself). Our vision for Dash is to have the programmer-facing model of XNA/Monogame combined with the designer-friendliness of Unity in a fully free and open source engine. We also hope that Dash can help to prove the power and maturity of D as a language, as well as push D to continue improving. We’re open to any feedback you may have, or better yet, we’d love to see pull requests for improvements. [1] https://github.com/Circular-Studios/Dash [2] http://circularstudios.com/ [3] http://awesomium.com/ [4] https://code.google.com/p/chromiumembedded/ |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Colden Cullen | ```
On Mon, 19 May 2014 19:50:35 +0000, Colden Cullen wrote:
> Hi everyone,
>
> I’m super excited to be able to announce that the Dash game engine[1] is
> finally stable and ready for public use! I’m currently the Lead Engine
> Programmer at Circular Studios[2] (the group behind Dash). We had 14
> people working on the team, 6 engine programmers and 8 game developers
> creating Spectral Robot Task Force, a turn-based strategy game built
> with Dash.
>
> Dash is an OpenGL engine written in the D language that runs on both Windows and Linux. We use a deferred-rendering model in the current pipeline, and a component model for game development and logic. Other major features at the moment include networking, skeletal-animation support, content and configuration loading via YAML, and UI support through Awesomium[3] (though we are in the process of moving over to using CEF[4] itself).
>
> Our vision for Dash is to have the programmer-facing model of XNA/Monogame combined with the designer-friendliness of Unity in a fully free and open source engine. We also hope that Dash can help to prove the power and maturity of D as a language, as well as push D to continue improving.
>
> We’re open to any feedback you may have, or better yet, we’d love to see pull requests for improvements.
>
> [1] https://github.com/Circular-Studios/Dash [2]
> http://circularstudios.com/
> [3] http://awesomium.com/
> [4] https://code.google.com/p/chromiumembedded/
Very exciting! Thank for the very liberal license; this is a great
contribution to the community.
I know you guys are probably crunching on the million things that stand
between alpha and release, but when you have time, a series of blog posts
or articles would be awesome. Topics such as your usage of mixins and
your experience with the GC would be great and speak to the advantages of
using D.
BTW, The "Setting up Your Environment page" link on the main repo page (the README) is broken.
Justin
``` |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Colden Cullen | On 5/19/2014 12:50 PM, Colden Cullen wrote: > I’m super excited to be able to announce that the Dash game engine[1] is finally > stable and ready for public use! http://www.reddit.com/r/programming/comments/25yw89/dash_an_opensource_game_engine_coded_in_d/ I recommend posting your message text on Reddit, as that will generate more interest than just a link. |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Justin Whear | On Monday, 19 May 2014 at 20:45:51 UTC, Justin Whear wrote: > Very exciting! Thank for the very liberal license; this is a great > contribution to the community. > I know you guys are probably crunching on the million things that stand > between alpha and release, but when you have time, a series of blog posts > or articles would be awesome. Topics such as your usage of mixins and > your experience with the GC would be great and speak to the advantages of > using D. > > BTW, The "Setting up Your Environment page" link on the main repo page > (the README) is broken. > > Justin Thanks! We're super excited. We wanted to make sure that anyone could do anything with it, hence the license. We're also all huge open source geeks, with no business people to tell us no :) We are super busy, but we've also been trying to blog as much as we can. Myself[1] and one of my teammates[2] have been blogging a little (although I should warn you, there is some cruft from a class we took requiring posts for other stuff). We do really want to get some more stuff on paper, though. If you've got any ideas for stuff you want to read, suggestions are absolutely welcome! Also, thanks for the heads up, I just fixed the link. [1] http://blog.coldencullen.com/ [2] http://blog.danieljost.com/ |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Walter Bright | On Monday, 19 May 2014 at 20:52:04 UTC, Walter Bright wrote: > On 5/19/2014 12:50 PM, Colden Cullen wrote: >> I’m super excited to be able to announce that the Dash game engine[1] is finally >> stable and ready for public use! > > http://www.reddit.com/r/programming/comments/25yw89/dash_an_opensource_game_engine_coded_in_d/ > > I recommend posting your message text on Reddit, as that will generate more interest than just a link. Good call, check it out here[1]. We also have an /r/gamedev post[2], where we've gotten some good D-related questions. [1] http://www.reddit.com/r/programming/comments/25yw89/dash_an_opensource_game_engine_coded_in_d/chm21bv [2] http://www.reddit.com/r/gamedev/comments/25yub3/introducing_dash_an_opensource_game_engine_in_d/ |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Colden Cullen | ```
On 5/19/2014 1:55 PM, Colden Cullen wrote:
> Good call, check it out here[1]. We also have an /r/gamedev post[2], where we've
> gotten some good D-related questions.
>
> [1]
> http://www.reddit.com/r/programming/comments/25yw89/dash_an_opensource_game_engine_coded_in_d/chm21bv
>
> [2]
> http://www.reddit.com/r/gamedev/comments/25yub3/introducing_dash_an_opensource_game_engine_in_d/
Wonderful!
``` |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Colden Cullen | On Monday, 19 May 2014 at 19:50:37 UTC, Colden Cullen wrote: > Hi everyone, > > I’m super excited to be able to announce that the Dash game engine[1] is finally stable and ready for public use! I’m currently the Lead Engine Programmer at Circular Studios[2] (the group behind Dash). We had 14 people working on the team, 6 engine programmers and 8 game developers creating Spectral Robot Task Force, a turn-based strategy game built with Dash. > > Dash is an OpenGL engine written in the D language that runs on both Windows and Linux. We use a deferred-rendering model in the current pipeline, and a component model for game development and logic. Other major features at the moment include networking, skeletal-animation support, content and configuration loading via YAML, and UI support through Awesomium[3] (though we are in the process of moving over to using CEF[4] itself). > > Our vision for Dash is to have the programmer-facing model of XNA/Monogame combined with the designer-friendliness of Unity in a fully free and open source engine. We also hope that Dash can help to prove the power and maturity of D as a language, as well as push D to continue improving. > > We’re open to any feedback you may have, or better yet, we’d love to see pull requests for improvements. > > [1] https://github.com/Circular-Studios/Dash > [2] http://circularstudios.com/ > [3] http://awesomium.com/ > [4] https://code.google.com/p/chromiumembedded/ Looks awesome. Don't have time now (finals) but will check it out later. (I'm developing my own gamedev related... stuff so I'm unlikely to be a user but looks like it might finally be something a new user can pick up right away and just start making a game in D) For now all criticism I can give is that http://dash.circularstudios.com/v1.0/docs is completely useless with NoScript. At least put a warning for NoScript users. |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Colden Cullen | ```
On Monday, 19 May 2014 at 19:50:37 UTC, Colden Cullen wrote:
> Hi everyone,
>
> I’m super excited to be able to announce that the Dash game engine[1] is finally stable and ready for public use! I’m currently the Lead Engine Programmer at Circular Studios[2] (the group behind Dash). We had 14 people working on the team, 6 engine programmers and 8 game developers creating Spectral Robot Task Force, a turn-based strategy game built with Dash.
>
> Dash is an OpenGL engine written in the D language that runs on both Windows and Linux. We use a deferred-rendering model in the current pipeline, and a component model for game development and logic. Other major features at the moment include networking, skeletal-animation support, content and configuration loading via YAML, and UI support through Awesomium[3] (though we are in the process of moving over to using CEF[4] itself).
>
> Our vision for Dash is to have the programmer-facing model of XNA/Monogame combined with the designer-friendliness of Unity in a fully free and open source engine. We also hope that Dash can help to prove the power and maturity of D as a language, as well as push D to continue improving.
>
> We’re open to any feedback you may have, or better yet, we’d love to see pull requests for improvements.
>
> [1] https://github.com/Circular-Studios/Dash
> [2] http://circularstudios.com/
> [3] http://awesomium.com/
> [4] https://code.google.com/p/chromiumembedded/
This is all awesome. I'll have to check this out.
I hate to be the guy who says "you missed a spot," but you did name one module in your source tree "core." You might want to rename that to avoid issues with core modules.
``` |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Kiith-Sa | On Monday, 19 May 2014 at 20:58:52 UTC, Kiith-Sa wrote: > For now all criticism I can give is that http://dash.circularstudios.com/v1.0/docs is completely useless with NoScript. At least put a warning for NoScript users. We'll be switching off of readme.io to homebrew docs[1] hosted on Github. [1] https://github.com/Circular-Studios/Dash-Docs |
**Re: Dash: An Open Source Game Engine in D** (May 19, 2014)
Posted in reply to Daniel Jost | ```
On Monday, 19 May 2014 at 21:05:01 UTC, Daniel Jost wrote:
> On Monday, 19 May 2014 at 20:58:52 UTC, Kiith-Sa wrote:
>> For now all criticism I can give is that http://dash.circularstudios.com/v1.0/docs is completely useless with NoScript. At least put a warning for NoScript users.
>
>
> We'll be switching off of readme.io to homebrew docs[1] hosted on Github.
>
> [1] https://github.com/Circular-Studios/Dash-Docs
This is super super not ready yet though, unfortunately. It will be our focus (or mine, at least) for then next few weeks.
``` |
| true | true | true |
D Programming Language Forum
|
2024-10-12 00:00:00
|
2014-05-19 00:00:00
|
https://www.gravatar.com/avatar/9e757591a85ae272031631963ecc215b?d=identicon&s=256
|
website
|
dlang.org
|
forum.dlang.org
| null | null |
9,550,637 |
http://ckmurray.blogspot.com/2015/05/back-scratching-do-what-is-best-for.html
|
Back-scratching: Do what's best for your mates and screw the rest
| null |
*tl;dr*
*I have a new experiment demonstrating the sheer power of back-scratching, even when it imposes huge costs on others.*
These days it seems that just about every political decision is about doing favours for the connected few at the expense of the many. It is part of an implicit quid pro quo; trading of favours now for favours in the future.
While trading favours can lead to amazing levels of productive human cooperation, it can also generate a considerable amount of unproductive cooperation when the trades benefit the few at a cost to the many. More than that, back-scratching can come with substantial efficiency losses; the costs to outsiders can far outweigh the gains to favoured insiders.
The revolving door between regulators and the regulated is one clear mechanism sustaining back-scratching. Ben Bernanke is just the latest long list of powerful regulators swinging through this door.
But studying back-scratching in the wild faces a major problem. Favours are almost impossible to objectively observe. Not only is there a powerful incentive to conceal favours, determining the ‘no favour’ counterfactual is almost impossible. Was the government contract given to the most efficient firm? Or was it a favour to the winner because an alternative bidder could have delivered a better outcome for the price? More often than not we just don’t know.
I have a new working paper out reporting an experiment on the economics of back-scratching. Studying back-scratching in a controlled experiment, while sacrificing realism, allows a close examination of the fundamental cooperative processes at play. The ‘big new thing’ I was able to do, in light of the long history of cooperation games in social psychology and economics, was to measure costs of back-scratching against an efficient counterfactual, and test which institutional designs encourage or curtail back-scratching.
To be brief, the basic experiment has 4 players in a group (the minimal number for an in-group and out-group to form), with one player able to choose which of the others receives a prize of $25 in a round (in experimental currency). The player who receives the prize allocates a fresh $25 the next round to one of the other three.
Obviously the best thing to do is form a back-scratching alliance with one other player and trade the $25 favour back and forth. Over 25 rounds an alliance pair would make $625, while the other two would make nothing.
What makes back-scratching costly is that each of the potential recipients is given a randomly shuffled 'productivity number' each round, either 1, 2 or 3, which determines a payoff for everyone in the group. Each player gets an amount equal to the receiver's productivity number in a round. Give the prize to the player who is 1, and the group gets $1 each. Give the prize to the player with 3, and the group gets $3 each. Think of this productivity number as reflecting the efficiency of firms competing for a government contract.
To sustain a back-scratching alliance means not choosing the most productive player for the prize in two-thirds of the rounds. It earns the alliance $725, the outside group $100, and comes with an efficiency loss of $100 from the repeated non-productive choices.
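
As a back-of-the-envelope check of those figures (assuming the prize recipient's productivity number is effectively random for an alliance, averaging 2): the alliance collects the full 25 × $25 = $625 in prizes, plus roughly 25 × $2 = $50 each in productivity payments, for about $725 in total, while the two outsiders get only their productivity payments of roughly $50 each, or $100 between them. Had the prize always gone to the most productive player, each of the four players would have earned about 25 × $3 = $75 in productivity payments instead of $50, and that $25-per-player shortfall is the $100 efficiency loss.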
It turns out that most players repeatedly formed alliances, even though they were young honest university students who didn’t know each other. In real money terms (rather than experimental currency) alliance pairs ‘stole’ $30 from the outsiders to increase their alliance earnings by $20.
Surprisingly alliance players were happy about their actions. They thought they had been very cooperative by doing their mate a favour, and didn’t feel guilty about the costs they imposed on others. They also rationalised their behaviour, saying that forming an alliance is a justifiable strategy, while also concealing it by lying when explicitly asked in a later survey if they had formed an alliance.
I tested two institutional changes in the experiment: staff rotation, a common anti-collusion policy, and a low-rent policy mimicking bureaucratic procedures that limit the size of the prize that can be allocated with discretion. Neither was particularly effective at stopping back-scratching. I also manipulated the strength of relationships between players.
For me the take-home lessons are:
- Loyalty is strong. Rotation policies are good, unless those people being rotated in have existing loyalties. This means there is a trade-off for regulators; staff with more industry experience are also likely to come with stronger prior alliances and hence be more prone to back-scratching. In politics it means voting in a different political party brings with it the alliances of that party.
- Bureaucracy can work. The array of procedures emerging in our large organisations could simply be the result of seeking internal fairness over favouritism. But back-scratching can still arise even with minimal amounts of discretion.
- Social norms are strong. In organisations or groups where some people are observed ‘doing the right thing for the group’, this quickly becomes the norm. Whereas where favouritism is observed, the group descends into counterproductive back-scratching.
- Be loyal, but not too loyal. If your alliance partner fails to come through with favours when expected it pays to look for someone else whose back needs scratching.
After spending three years researching this topic, of which this experiment is a small part, I have come to the conclusion that group formation through favouritism is probably the primary determinant of political outcomes. Countries themselves can even be seen as a loose alliance of insiders looking to do what’s best for themselves even when it comes at a cost to other countries.
And if you look deep enough there is an evolutionary foundation for this behaviour. As Joshua Green explains:
Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). How do we know this? Because universal cooperation is inconsistent with the principles of natural selection. I wish it were otherwise, but there's no escaping this conclusion.

This. Fundamentally. Is why corrupt back-scratching is so hard to eradicate, and why it will continue to be the main game in politics.
Excellent work here. Provokes a lot of thought. Having been put off completely by Kahnemann's silly System 1 System 2, I find this refreshing. Back scratching within bureaucracy which involves "protective" functions as well as "productive" functions is also described by this, I would think.
Look forward to reading the paper.
| true | true | true |
Image source: Hungarian sociologists’ study on corruption voted best in world tl;dr I have a new experiment demonstrating the sheer ...
|
2024-10-12 00:00:00
|
2015-05-13 00:00:00
| null |
blogspot.com
|
Drcameronmurray
| null | null |
|
19,285,619 |
https://edition.cnn.com/travel/article/duga-radar-chernobyl-ukraine/index.html
|
Huge Soviet ‘mind control’ radar hidden in forest | CNN
|
Pavlo Fedykovych
|
The peaceful untouched forest north of Ukraine’s capital, Kiev, is a perfect spot to enjoy the outdoors – save for one fact.
It contains the radiation-contaminated Chernobyl Exclusion Zone, established in 1986 after the world’s worst nuclear disaster sent a wave of radiation fallout across Europe.
Since 2011 it’s been a major draw for adventurous tourists, but the forests here conceal another legacy of the Cold War, with a far more sinister and mysterious reputation.
The Duga radar.
Though once a closely guarded secret, this immense structure can be seen for miles around, rearing up through the mist over the horizon – a surreal sight.
From a distance, it appears to be a gigantic wall. On close inspection, it’s an enormous, dilapidated structure made up of hundreds of huge antennas and turbines.
The Duga radar (which translates as “The Arc”) was once one of the most powerful military facilities in the Soviet Union’s communist empire.
It still stands a towering 150 meters (492 feet) high and stretches almost 700 meters in length. But, left to rot in the radioactive winds of Chernobyl, it’s now in a sad state of industrial decay.
Anyone exploring the undergrowth at its feet will stumble upon neglected vehicles, steel barrels, broken electronic devices and metallic rubbish, the remainders of the hasty evacuation shortly after the nuclear disaster.
## Long-range missile threat
For decades, the Duga has stood in the middle of nowhere with no one to witness its slow demise. Since 2013, visitors exploring the Chernobyl Exclusion Zone have been permitted access to the radar installation as part of a guided group.
Even those aware of its presence are still struck by the sheer scale of it, says Yaroslav Yemelianenko, director of Chernobyl Tour, which conducts trips to the Duga.
“Tourists are overwhelmed by the enormous size of the installation and its aesthetic high-tech beauty,” he tells CNN Travel. “No one expects that it is that big.
“They feel very sorry that it’s semi-ruined and is under threat of total destruction,” he adds.
Even decades after the collapse of the Soviet Union, the story behind the Duga still poses more questions than answers, its true purpose not fully understood.
## Doomed to failure
Construction of the Duga began in 1972 when Soviet scientists looking for ways to mitigate long-range missile threats came up with the idea of building a huge over-the-horizon radar that would bounce signals off the ionosphere to peer over the Earth's curvature.
Despite the gigantic scale of the project, it transpired the scientists lacked full understanding of how the ionosphere works – unwittingly dooming it to failure before it was even built.
Some of what we know today about the Duga – also known as Chernobyl-2 – comes from Volodymyr Musiyets, a former commander of the radar complex.
“The Chernobyl-2 object, as a part of the anti-missile and anti-space defense of the Soviet military, was created with a sole purpose,” he told the Ukrainian newspaper Fakty, “to detect the nuclear attack on the USSR in the first two-three minutes after the launch of the ballistic missiles.”
The Duga radar was only a signal receiver, the transmitting center was built some 60 kilometers away in a town called Lubech-1, now also abandoned.
These top-secret facilities were protected with extensive security measures.
## Wild speculations
To confuse their “enemies,” Soviet command often designated such installations with numbers or fake identities.
On Soviet maps, the Duga radar was marked as a children's camp (there's even a bizarre bus stop on the road to one facility decorated with a bear mascot from the 1980 Summer Olympics in Moscow).
Legend has it that Phil Donahue, one of the first US journalists to be granted access to Chernobyl after the disaster, asked his official guide about the surreal sight of the Duga on the horizon and was told it was an unfinished hotel.
When it was in operation, according to Musiyets, the Duga supposedly used short radio waves capable of traveling thousands of kilometers using a technique called “over-the-horizon” radiolocation to detect the exhaust flames of launching missiles.
In 1976 the world heard for the first time the eerie woodpecker-like repetitive pulse coming from the transmitters.
Conspiracy theories followed instantly, generating Western media headlines about mind and weather control.
## ‘Russian woodpecker’
Amid growing fears of nuclear war some claimed that the low-frequency “Russian signal” could change human behavior and destroy brain cells.
Such wild speculations were further fueled by the Soviet Union’s denial of the very existence of the radar – it was a children’s camp after all.
While it’s highly unlikely that Duga was used as a mind control weapon directed at Americans, its true purpose and the important details of its functioning are covered in mystery.
Was there a connection to the nearby Chernobyl Nuclear Power Plant? It’s speculated that the doomed facility was built in the particular area in order to provide the enormous radar with energy.
Supporters of this idea point out that the Duga radar cost the Soviet Union twice as much as the power plant, despite its questionable military capabilities.
A Sundance-awarded 2015 documentary “Russian Woodpecker” goes deep into this theory following Ukrainian artist Fedor Alexandrovich’s investigation into the causes of the Chernobyl tragedy, with the Duga radar playing a role at the core of the conspiracy.
## Soviet ghosts
The explosion at Chernobyl on April 26, 1986 was the beginning of the end for the Duga array. The complex was closed due to the radiation contamination and its workers evacuated – the silence only broken by the sound of crackling geiger counters tracking radiation.
Due to the Duga's top-secret status, all the documents about its operation were either destroyed or archived in Moscow, a state of things that continues to the present day. The antenna's vital components were transported to Moscow or spirited away by looters.
In the chaos that followed the collapse of the Soviet Union, the radar's fate was sealed by its location in the middle of the Chernobyl Exclusion Zone, closed off to the public for more than two decades.
The Chernobyl catastrophe impacted the lives of thousands of innocent people, covered the whole continent in radiation and led to death and decay.
Enduring fascination with the incident and the Cold War, perhaps some of it inspired by recent diplomatic strains between East and West, has meant no shortage of people wanting to explore such forsaken relics.
“Many people have heard about it,” says Yemelianenko. “Mostly they like [the radar] because their personal life story is in some ways connected with the history of the Cold War.
“Some people were engaged in these events … They would like to witness [Duga] with their own eyes,” he says, adding that most visitors are from the United States, aged from 30 to 60.
Yemelianenko, among a group of Ukrainian tourism professionals working to get the Exclusion Zone inscribed on UNESCO’s World Heritage list, adds that many visitors to the Exclusion Zone claim that seeing the Duga is the highlight of their trip.
So, while the sinister woodpecker sound may have departed the radio waves, the Duga continues to transmit its eerie presence across the abandoned landscape.
The Soviet Union may have gone forever but its ghosts still haunt Ukraine.
| true | true | true |
Deep in the radiated Chernobyl Exclusion Zone in the Ukraine stands the abandoned Duga radar, a mysterious piece of Soviet Cold War technology also known as the “Russian Woodpecker.”
|
2024-10-12 00:00:00
|
2019-03-01 00:00:00
|
article
|
cnn.com
|
CNN
| null | null |
|
1,076,139 |
http://www.gatesfoundation.org/annual-letter/2010/Pages/bill-gates-annual-letter.aspx?cmp=36
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,961,079 |
http://www.forbes.com/sites/benkepes/2015/01/28/amazon-changes-the-game-again-aws-introduces-workmail/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,549,970 |
http://codinginmysleep.com/the-dollar-printers-petition/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,234,815 |
http://news.bbc.co.uk/1/hi/technology/8598871.stm
|
The Day the Web Turned Day-Glo
| null |
Is Chatroulette a sign of internet punk?
**Bill Thompson is pleased to see the punk ethic alive and well online**
Anyone with a few minutes to spare online might enjoy visiting
Chatroulette
- the finest expression of punk mentality from the emerging internet generation that I've yet come across.
It's not hard to play, as there are only three rules. You have to be aged 16 or over. You're asked to "please stay clothed". And you can alert the management by clicking F2 "if you don't like what you see".
Click 'New Game' to start a game, give the service access to your camera and microphone and you begin a video conversation with a random stranger. That's it. Chatroulette uses Adobe Flash to turn on the camera and microphone on a visitor's computer and register their IP address with the site. It then connects that user with another, random IP address and opens up a connection between the two, so you can start to chat.

**Causing a stir**
Even though it's getting millions of users, Chatroulette is very scalable because, like the original Napster, the data doesn't actually go through the Chatroulette site itself. Instead it uses Flash's peer-to-peer streaming service to make a direct link between the two computers and only has to keep track of the IP addresses and "next" calls.
**The kids have arrived online and they want to shape it in their image**
It is also causing an enormous fuss, largely because it is unmediated, requires no registration or verification and is open to every exhibitionist, deviant and random stranger online. My son reckons he is getting a ratio of 14 naked men to one worthwhile conversation, which sounds about right for a service that is intended to do for video chat what Twitter has done for communication in 140 characters or less, and show us the real potential of the unfettered connectivity that the internet makes possible.
Of course it's a scandal, and of course it is potentially corrupting and dangerous, though the random nature of the connection and the lack of any way to choose who you talk to mean that the chances of coming across someone in the same country never mind the same city or town are vanishingly small. Yes, someone could use it to make contact by writing their email address or phone number on a card or calling it out as soon as a connection is made, but you'd have to be pretty stupid to think of this as a reliable way to make new friends or find victims.

**Punk personified**
The point about Chatroulette is that it has no point, that it strips away the wooden panelling from this finely modelled room we call the internet to reveal all the workings beneath and show that in the end it's just a space for making connections between people.

It reminds me of the day in 1977 when I went into the sixth form common room at Southwood Comprehensive School in Corby and my mate Dougie Gordon played me his newly-arrived copy of God Save The Queen and everything I thought I knew about politics, music and revolution coalesced around the Sex Pistols into a punk sensibility that has stayed with me ever since.

Chatroulette is a pure expression of that punk spirit, delivered through the tools available to today's teenagers rather than the electric guitar and seven-inch single of my childhood, and the anger with which it has been received by the establishment is a testament to its disruptive potential. The kids have arrived online - Chatroulette creator Andrey Ternovskiy is the same age as the Mosaic browser - and they want to shape it in their image. I hope they pull it off, though in another echo of punk history Ternovskiy is already being wooed by the majors to sign up and sell out, and the temptation to turn his rebellion into money must be intense.

Rather like Jimmy, the punk-precursor hero of The Who's Quadrophenia, he is under pressure to conform from his parents as his mother doesn't like the way Chatroulette can be used. Perhaps he will stay true to punk, like Joe Strummer of The Clash or Siouxsie Sioux. Perhaps he'll sell out like Johnny Rotten and we'll see Chatroulette used to advertise butter. But whatever may happen to his site the impact will be felt as other kids realise that they can pick up a keyboard and become punk programmers, just as my generation picked up a guitar and learned three chords. Chatroulette's launch was the day the net turned day-glo, and Poly Styrene and X Ray Spex would be so proud.
*Bill Thompson is an independent journalist and regular commentator on the BBC World Service programme Digital Planet. He is currently working with the BBC on its archive project.*
|
| true | true | true |
Bill Thompson is pleased to see the punk ethic alive and well online
|
2024-10-12 00:00:00
|
2010-04-01 00:00:00
| null | null | null |
BBC
| null | null |
4,679,231 |
http://techcrunch.com/2012/10/20/facebooks-first-server-cost-85month/
|
Facebook's First Server Cost $85/Month | TechCrunch
|
Rip Empson
|
At Startup School today in the Memorial Auditorium at Stanford University, Facebook co-founder and CEO Mark Zuckerberg spoke to Y Combinator founder Paul Graham about the early days of Facebook. He revealed that, early on, the team did not intend for the TheFacebook.com (as it was called) to become a business.
In fact, being at Harvard and all, Facebook had no cash to run the business and operated it for the first few months for $85/month, which was the cost of renting one server. Facebook’s first server.
But how did Facebook support its early operations and pay for these wildly expensive servers? Banner ads! Yes, banner ads. As Zuck mentioned in the interview, it was (co-founder) Eduardo Saverin’s job early on to find ads for Facebook to use, as AdSense had just been released, for example. TheFacebook.com had a network of banner ads, but as Zuck implies, they really weren’t making any money. Just enough to pay for those $85 servers.
As the story goes, it took about a year for Facebook to reach its first million users. For most, that would be considered speedy expansion, but Zuckerberg implied that (humorously enough) the early constraints on Facebook's scaling were having to pay for those first servers. Apparently, $85 didn't come so easy back then.
In terms of expansion, Zuckerberg told the audience the reason the co-founders chose Yale, Stanford and Columbia as the first schools at which they’d launch Facebook, was because they knew it would be tough to succeed at those schools. That was because those three schools already had some kind of simple student social network already established.
If they could succeed in finding quick penetration into Yale, Stanford and Columbia, Zuck said, with nascent competition already in place, then they knew the rest would come easier. So that’s potentially some great advice for the young entrepreneurs out there: Start with the biggest challenge. If it works, the rest will seem easy in comparison. (Of course, don’t expect to get a billion users.)
| true | true | true |
At Startup School today in the Memorial Auditorium at Stanford University, Facebook co-founder and CEO Mark Zuckerberg spoke to Y Combinator founder Paul Graham about the early days of Facebook. He revealed that, early on, the team did not intend for the TheFacebook.com (as it was called) to become a business. In fact, being at Harvard and all, Facebook had no cash to run the business and operated it for the first few months for $85/month, which was the cost of renting one server. Facebook's first server.
|
2024-10-12 00:00:00
|
2012-10-20 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
37,987,463 |
https://tribecap.co/venture-capital-restarts/
|
Venture capital restarts: Navigating the funding desert to unlock extreme value - Tribe Capital
|
Alex Chee
|
In the current venture capital landscape, a phenomenon known as the “funding desert” – where the amount of capital that startups are receiving, relative to what they need to sustain operations, is at historic lows – is reshaping how investors and startups interact.
We believe that this scarcity of funding is not merely a hurdle; **it’s a forcing function that increases the role that the Venture Capital Restart plays in the ecosystem**.
Unlike traditional venture investments, restarts often demand a higher level of conviction due to their inherent complexity and risk. However, when executed correctly—especially for companies that have achieved product-market fit—we believe that these restarts can be a wellspring of extreme value creation. Conversely, VCs that lack the ability to appropriately underwrite and support restarts can leave immense value on the table.
### What is a Venture Capital Restart?
A Venture Capital Restart is not a mere pivot; **it’s a comprehensive refocusing of a company’s business model, often accompanied by a reconfiguration of its capital structure, which includes recaps**. This could range from a complete recapitalization to more modest financial adjustments. However, there’s nothing modest about the shift in product focus. It’s usually a radical change that requires extreme conviction from both investors and founders. This shift often manifests in a completely different growth pattern compared to the company’s previous trajectory.
We are excited to share three case studies of restarts, successes and failures, as told by the CEOs who navigated those tumultuous times.
Finally, we share some ways that measuring product-market fit characteristics can aid in the process of understanding restarts. Our newly-launched sister company Termina opens up such quantitative methodology to the broader Venture Capital ecosystem.
### The Fundraising Desert moves restarts to the forefront
The process of overhauling a business model is fraught with risk, and not all restarts may yield the desired outcomes. As mentioned, we are in the midst of a historically extreme shortfall of capital in Venture. Our gauges, which we explore in depth in our previous writing, are updated below. Today we are 1 year into the desert and we anticipate at least another year before the supply of Venture Capital exceeds the demand. **And to get there, a continued capital restructuring has begun taking place, which we believe will likely accelerate and become an increasingly prominent toolkit for sophisticated operators and investors during this period.**
In an environment defined by funding scarcity, the Venture Capital Restart offers a compelling alternative for both investors and startups willing to embrace extreme conviction and calculated risk. So, as we navigate this funding desert, we believe that Venture Capital Restarts can be a powerful tool for extreme value creation—a high-risk, high-reward strategy that, when executed well, can redefine the growth narrative for startups and investors alike.
### When Docker restarted, Scott Johnston focused on calming the storm and financially incentivizing the go-forward team.
- Docker is among the most well-known brands in the world among software developers for their pioneering products around containers, a technology that makes applications portable across infrastructure.
- As of October 2023, Docker is over $150M ARR and profitable. But in late 2019, when Docker went through its restart, they had low-single-digit revenue and a complete executive team overhaul.
- Read more detail in our short report on Docker’s product-market fit turnaround.
In our discussion on this topic with Scott Johnston, CEO of Docker, he underscored the pivotal role of focusing on the team and culture during the transition.
Docker’s commitment to transparency was evident. They introduced measures like equity reloads, tenure benefits, and accelerated incentives to retain talent. They also championed an open roadmap, actively seeking feedback from their internal developers as well as the broader Docker developer community, and allowing them to influence the company’s product direction.
We focused on hyper-transparency. We started with daily all-hands meetings, then weekly, and eventually monthly. We were open about our attrition stats and our financial situation. We also had a trusted HR professional on the executive staff to help manage the transition.
Scott Johnston, CEO of Docker
However, the journey was fraught with challenges. Docker had to manage unforeseen restructuring costs and ensure they maintained an 18-month operational runway. The company also grappled with a 40% attrition rate in the first 12 months, prompting introspection about employee fit at every level of the organization.
Given the complexity and unknowns of a restructuring and divestiture, we had to be very careful with our financial planning. It was a lesson in ‘measure twice, cut once.’ We also had to adjust for the fact that the risk-reward calculus of Series D employees was different from that of Series A employees.
Scott Johnston, CEO of Docker
### When Shiprocket restarted, Saahil Goel focused on value creation: “In a restart, the valuation is an afterthought, what really matters is how large the company can get”.
- Shiprocket, an emerging N-of-1 company, serves tens of thousands of Indian ecommerce merchants every month, handling 200 million ecommerce transactions a year across their verticalized logistics offerings.
- The insight that led to their sustained growth came while their first Indian customer-to-customer (C2C) offering, “Kraftly”, was failing: the core problem in Indian ecommerce is logistics.
- Saahil Goel, Shiprocket’s CEO, could have started a new company with a fresh ownership structure. Instead, out of extreme conviction, he focused on building value first and never turned back.
Saahil Goel, CEO of Shiprocket, shared the journey of his company, emphasizing the resilience and commitment of their board and leadership. Before Shiprocket, their previous venture, Kraftly, was on the brink of running out of funds. Despite this, the team’s attachment to their vision kept them pushing forward. They had raised significant capital for Kraftly, but when funds dwindled, they had to shut it down.
The team then needed just $500k to clear liabilities. Their plan was to reduce the team to 25 members and pivot to building Shiprocket. They were confident in this move, having already worked on the Shiprocket side and having active contracts in place. However, investors presented them with a challenging proposition: either take $4 million or nothing. Despite only wanting half of that amount to avoid further dilution, they accepted the full sum.
Some advised the founders to start a new company altogether, but loyalty to their investors, who had supported them throughout, made them stick with Shiprocket. The founders felt a deep sense of responsibility and gratitude towards their backers.
Equity is a touchy subject. Some key team members left partly because of it. But we believe that if the company does well, everyone will benefit. We’re more focused on growing the company than worrying about individual equity stakes.
SAAHIL GOEL, CEO OF SHIPROCKET
The team dynamics were also highlighted. Akshay, one of the founders, considered leaving after shutting down Kraftly, the business he was brought on to lead. However, the excitement around a fresh 0 to 1 build with the core team convinced him to stay. The original idea for Shiprocket came from Gautam, another founder, who spent months teaching Saahil the ropes. The team’s complementary skills were a significant asset.
The team received an acquisition offer from one of India’s leading communications companies but declined, believing in their long-term vision. Saahil emphasized the importance of the right investors over mere valuation. He believes that if you have the right backers and a vision spanning decades, short-term valuations become less significant.
In a restart, the valuation is an afterthought, what really matters is how large the company can get. If you have the right people around the table, it pays back in gold. When you’re committed to a venture for the long haul, day-to-day valuations become less significant.
SAAHIL GOEL, CEO OF SHIPROCKET
### Karn Saroya, previously the CEO of Cover, shares the hard lessons he learned from a failed restart.
- Tribe Capital was an early backer of Cover, a direct-to-consumer (D2C) insurance company, as it moved from its initial managing general agent (MGA) structure to become an insurance carrier.
- Cover was growing rapidly but required a deep balance sheet to support that growth. When their fundraising efforts dragged on, Tribe recapped the company to pursue their strategy as a carrier.
- Ultimately, Cover was unable to access the capital needed to fully realize their strategy and Karn Saroya, Cover’s CEO through good and tough times, shares his account of key people and key moments.
We wrap up with a conversation with Karn Saroya from Cover. Karn had raised around $20 million before a recapitalization. Their fundraising efforts, which spanned about nine months, were challenging and reached a crescendo during the onset of COVID-19 in March 2020, which caused a crucible moment. Despite these challenges, Karn remained committed to the vision. He contemplated shutting down but persevered, especially after expanding operations in Texas and California. The hiring of industry experts further boosted his confidence. However, he admitted that a significant mistake was assuming that they could operate capital efficiently.
Despite losing fundraising momentum, I still had conviction. We had opened up in Texas and California and had hired strong industry experts. My biggest mistake was thinking we could do it capital-efficiently, which was not the case.
KARN SAROYA, FORMER CEO OF COVER
After the recap led by Tribe, there was a material carve-out for management. The majority of this was allocated to existing employees and some to new hires. The company’s headcount was reduced by almost half, transitioning to independent agency distribution. Karn emphasized the importance of not delaying tough decisions and going deeper when necessary. During the recap, the overall business environment was daunting. Communicating with smaller investors and angels about the “pay to play” model, especially during market volatility, was challenging.
Dealing with new investors was tough, but Karn later appreciated the feedback and learned to handle it better. He emphasized the dangers of scaling too quickly, advocating for a leaner approach.
Be very picky about doing a recap. It’s so hard to make it work even if you have some existing business. Starting fresh has a ton of benefits. Also, people who raised at high valuations often think ‘we made it,’ but the metrics and a real business sometimes just aren’t there.
KARN SAROYA, FORMER CEO OF COVER
### Diligencing Venture Capital Restarts
We are launching Termina to broaden access to the same **“operating system”** that Tribe Capital and others have used for quantitative diligence. While Termina can integrate into investment processes for a variety of opportunities, three ways it helps specifically with diligencing restarts are by (1) breaking mental anchors, (2) exploring reconfigurations of the business, and (3) mapping go-forward growth efficiency.
**Breaking mental anchors –** A common situation with restarts is **the ecosystem gets anchored in a company’s operations or value proposition pre-restart**. For example, even to this day there are many folks in the ecosystem who believe “Docker is dead,” despite their meteoric product launches since their restart. To this end, using Termina to diligence the company’s product-market fit is an effective way to take an objective view of a company irrespective of the biases or preconceptions that might exist. For this reason, understanding end user engagement, which would include developer engagement in Docker’s case, can provide a fresh look for parts of the business or brand that could be thriving as the legacy business model is sunsetting.

**Learning which configurations of a company have product-market fit, and which don’t –** At restart, a company may have the kernel of a successful product but elements of the legacy product or existing business that aren’t working. In the case of Shiprocket, their “Kartrocket” storefront product, launched after Kraftly, had sublinear LTVs indicating unhealthy growth and churn for a software company since software products tend to have linear LTVs. Their newly launched shipping aggregator, which led to the Shiprocket platform, had linear LTVs, which was uncharacteristically strong for transactional products with MSMEs, which tend to be sublinear (especially in India). **That is, the configuration of the business around shipping and logistics was displaying superior retention to another configuration around their prior “Kartrocket” product**, an insight that Saahil understood deeply.

**Growth efficiency and capital requirements benchmarking –** Companies going through restarts don’t get a lot of extra chances. A restart can pull irreversible levers with capital structure and team morale. This means that the restart needs to be effectively capitalized because a restart is threading a needle. In the case of Cover, with the benefit of hindsight, Cover needed a lot more capital to reach escape velocity than it had at the time.
| true | true | true |
In the current venture capital landscape, a phenomenon known as the "funding desert" is reshaping how investors and startups interact.
|
2024-10-12 00:00:00
|
2023-10-19 00:00:00
|
article
|
tribecap.co
|
Tribe Capital
| null | null |
|
20,639,681 |
https://itnext.io/a-journey-with-quarkus-ff73fc64cfe1
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,933,433 |
https://techblog.realtor.com/a-better-ecs/
|
A Better ECS - realtor.com Tech Blog
|
Author Brian Masney
|
As more application services migrate to the AWS cloud, a pattern quickly emerges in which EC2 resources are considerably underutilized. While a wide array of EC2 instance types and autoscaling options help to match the consumed infrastructure with current demand, many services still make little use of the available memory, CPU, or bandwidth. In order to make better use of available resources, AWS provides Elastic Container Service (ECS), which enables multiple services to run on a single set of EC2 instances.
Developers moving onto ECS will most likely encounter difficulties getting the instance autoscaling to operate as expected. This article describes how we were able to improve the instance autoscaling, save money by running our Dev and QA EC2 instances on spot instances, several other management improvements and best practices to manage the cluster.
# Instance autoscaling that works
Anyone who has run multiple applications inside a single ECS cluster has most likely encountered this error:
service XXX was unable to place a task because no container instance met all of its requirements
The desired instance count of the EC2 autoscaling group would be below the maximum instance count, but the ECS scheduler is not aware of this. ECS provides CloudWatch metrics about the overall CPU and memory reservation inside the cluster; however, ECS currently does not provide metrics about the number of pending tasks. Setting the scale up and scale down policy based on multiple CloudWatch metrics can be problematic since there can be conflicts if one metric says to scale up but the other metric says to scale down.
One solution to this problem is to use a Lambda function that publishes a custom CloudWatch metric called SchedulableContainers. The Lambda function needs to know the largest CPU and memory reservation that can be requested inside your cluster so that it can calculate how many of the largest containers can be started. The instance autoscaling is configured to only use this metric. In essence, this means that the cluster will always have available capacity for one additional instance of the largest task.
For large applications, the ECS instance and service autoscaling had to be tightly coupled in the past. We were initially getting around some of the ECS autoscaling issues by running our ECS clusters a little larger than they needed to be. We are now able to run some of our ECS clusters at 80-90% reservation capacity with no issues.
The Lambda function is included inline in the CloudFormation template.
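For illustration, a minimal sketch of such a Lambda function might look like the following. The environment variable names and the metric namespace are assumptions made for the example, not necessarily what the template uses:

```python
import os
import boto3

ecs = boto3.client("ecs")
cloudwatch = boto3.client("cloudwatch")

CLUSTER = os.environ["ECS_CLUSTER"]           # assumed env var: cluster name
LARGEST_CPU = int(os.environ["LARGEST_CPU"])  # CPU units of the largest task
LARGEST_MEM = int(os.environ["LARGEST_MEM"])  # memory (MiB) of the largest task


def handler(event, context):
    schedulable = 0
    paginator = ecs.get_paginator("list_container_instances")
    for page in paginator.paginate(cluster=CLUSTER, status="ACTIVE"):
        arns = page["containerInstanceArns"]
        if not arns:
            continue
        details = ecs.describe_container_instances(
            cluster=CLUSTER, containerInstances=arns
        )
        for instance in details["containerInstances"]:
            remaining = {
                r["name"]: r.get("integerValue", 0)
                for r in instance["remainingResources"]
            }
            # How many copies of the largest task still fit on this instance?
            schedulable += min(
                remaining.get("CPU", 0) // LARGEST_CPU,
                remaining.get("MEMORY", 0) // LARGEST_MEM,
            )

    # Publish the custom metric that the instance scaling alarms are based on.
    cloudwatch.put_metric_data(
        Namespace="ECS/Capacity",  # illustrative namespace
        MetricData=[{
            "MetricName": "SchedulableContainers",
            "Dimensions": [{"Name": "ClusterName", "Value": CLUSTER}],
            "Value": schedulable,
        }],
    )
    return schedulable
```

The scale-up alarm can then simply watch for this metric dropping below a threshold (for example, below 1), without conflicting with a second metric.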
# Scaling down the cluster without affecting end users
When an ECS cluster scales down, your applications will likely see intermittent 50X errors from the ALB when an instance is taken out of service. This is caused by AWS AutoScaling not being aware of the ECS containers running on the instance that is terminated, so the instance is shutting down while it is currently serving traffic. Ideally, the instance should stop receiving traffic prior to shutting down.
AWS AutoScaling supports lifecycle hooks to notify a Lambda function when an instance is about to be terminated. AWS Support recommends the use of a Lambda function to gracefully drain the ECS tasks before the instance is terminated. The version provided by AWS has several issues and a rewritten version is provided inline in the ECS cluster template with the following changes:
- The AWS code can post messages to the wrong SNS topic when retrying. It looks for the first SNS topic in the account that has a lambda function subscribed to it and posts the retry message to that topic.
- The AWS code does not do any kind of pagination against the ECS API when reading the list of EC2 instances. So if it couldn’t find the instance ID that was about to be terminated on the first page, then the instance was not set to DRAINING and the end users would see 50X messages when the operation timed out and autoscaling killed the instance.
- The retry logic did not put any kind of delay in place when retrying. The Lambda function would be invoked about 5-10 times a second, and each Lambda function invocation would probably make close to a dozen AWS API calls. A 5 second delay between each retry was introduced.
- There was a large amount of unused code and variables in the AWS implementation.
- Converted the code from Python 2 to 3.
- Previously, the old Lambda function was included as a separate 8.1 MB ZIP file that needed to be stored in S3 and managed separately from the rest of your ECS cluster. Python code in AWS Lambda no longer needs to bundle all of its dependencies. With all of the refactoring above, the new Python code is small enough that it is embedded directly in the CloudFormation template to reduce external dependencies. This will make it easy to make changes to this code on a branch and test it against a single ECS cluster.
**Update: The issues with the autodraining Lambda have been corrected upstream via the pull request https://github.com/aws-samples/ecs-cid-sample/pull/23/ on 2018-08-01.**
Other container schedulers, such as Kubernetes, will have the same issue and the same approach can be used to drain pods.
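For reference, a stripped-down sketch of that draining flow (not the exact code embedded in the template) could look roughly like this, assuming the cluster name and the retry SNS topic ARN are passed in as environment variables:

```python
import json
import os
import time
import boto3

ecs = boto3.client("ecs")
sns = boto3.client("sns")
autoscaling = boto3.client("autoscaling")

CLUSTER = os.environ["ECS_CLUSTER"]          # assumed env var
RETRY_TOPIC = os.environ["RETRY_TOPIC_ARN"]  # assumed env var (avoids guessing the topic)


def find_container_instance(instance_id):
    """Paginate through the whole cluster to find the ARN for this EC2 instance."""
    paginator = ecs.get_paginator("list_container_instances")
    for page in paginator.paginate(cluster=CLUSTER):
        if not page["containerInstanceArns"]:
            continue
        described = ecs.describe_container_instances(
            cluster=CLUSTER, containerInstances=page["containerInstanceArns"]
        )
        for ci in described["containerInstances"]:
            if ci["ec2InstanceId"] == instance_id:
                return ci["containerInstanceArn"], ci["runningTasksCount"]
    return None, 0


def handler(event, context):
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    instance_id = message["EC2InstanceId"]

    arn, running = find_container_instance(instance_id)
    if arn and running > 0:
        # Make sure the instance is draining, then wait briefly and retry via SNS.
        ecs.update_container_instances_state(
            cluster=CLUSTER, containerInstances=[arn], status="DRAINING"
        )
        time.sleep(5)
        sns.publish(TopicArn=RETRY_TOPIC, Message=json.dumps(message))
        return

    # No tasks left (or instance unknown): let AutoScaling terminate the instance.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=message["LifecycleHookName"],
        AutoScalingGroupName=message["AutoScalingGroupName"],
        LifecycleActionToken=message["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```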
# Spot instances in Dev and QA environments
**Update: See the AWS blog post New Amazon EC2 Spot pricing model: Simplified purchasing without bidding and fewer interruptions for changes to spot instances since this article was written. The AWS blog post Scale Amazon EC2 Instances across On-Demand, Spot and RIs in a Single Auto Scaling Group may also be helpful for some users.**
EC2 supports spot instances that allow you to bid on excess computing capacity that is available at AWS. This typically saves between 70-90% off of the posted on-demand price. However, AWS can terminate the spot instances at any time with only a two-minute termination notice given.
To reduce our AWS costs, we run our Dev and QA environments on spot instances when the spot bid price is low. Since the bid price may be too high for several hours or more, we needed a way to fall back to using on-demand instances when the bid price is too high. The Autospotting Lambda will automatically replace the expensive on-demand instances with spot instances of equal size or larger when the bid price is low. If one or more spot instances are terminated (such as due to a high bid price), then EC2 AutoScaling will start new on-demand instance(s). These on-demand instances will eventually be replaced with spot instances once the bid price goes back down. Autospotting also tries to use a diverse set of instance types to avoid issues all of the spot instances suddenly going away.
A script listens on each EC2 instance for the two-minute spot instance termination notification from the EC2 metadata service. When an instance is scheduled to be terminated, the container instance state is automatically set to DRAINING so that the existing containers can gracefully drain.
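As a rough illustration of that on-instance watcher (a sketch, not the script shipped with the template), the two-minute notice can be polled from the EC2 metadata service and the container instance ARN read from the local ECS agent's introspection API:

```python
import json
import time
import urllib.error
import urllib.request

import boto3

# IMDSv1 for brevity; a production script would fetch an IMDSv2 token first.
TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"
ECS_AGENT_METADATA = "http://localhost:51678/v1/metadata"


def termination_scheduled():
    """The endpoint returns 404 until a two-minute termination notice exists."""
    try:
        with urllib.request.urlopen(TERMINATION_URL, timeout=2):
            return True
    except urllib.error.URLError:
        return False


def drain_self():
    # The local ECS agent knows which cluster and container instance this is.
    with urllib.request.urlopen(ECS_AGENT_METADATA, timeout=2) as resp:
        agent = json.loads(resp.read())
    boto3.client("ecs").update_container_instances_state(
        cluster=agent["Cluster"],
        containerInstances=[agent["ContainerInstanceArn"]],
        status="DRAINING",
    )


if __name__ == "__main__":
    while True:
        if termination_scheduled():
            drain_self()
            break
        time.sleep(5)
```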
We have plans to run a small subset of our production webservers on spot instances with the help of Autospotting after more testing is completed.
# cfn-init and forcing new EC2 instances
You can use AWS::CloudFormation::Init to manage resources on the underlying EC2 instances. However, sometimes there are situations where a file is changed, and services may need to be restarted. For instance, maybe a service is no longer needed. Now you need to test the create and update code paths, which adds more administrative overhead. In keeping with the “*cattle, not pets*” philosophy of infrastructure, we put a version number in the autoscaling launch configuration user data script, and increment that number to force new EC2 instances.
    ECSLaunchConfiguration:
      Type: AWS::AutoScaling::LaunchConfiguration
      Properties:
        UserData:
          "Fn::Base64": !Sub |
            #!/bin/bash
            # Increment version number below to force new instances in the cluster.
            # Version: 1
With this change, we now only need to test the code path that creates new EC2 instances.
# Logging drivers
The ECS logging driver is configured so that the Splunk, CloudWatch logs, and json-file log drivers are available to containers. It is up to each application’s container definition(s) to configure the appropriate logging driver. For example, the Splunk logging driver can be configured on the ECS task definition like so:
    TaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        ContainerDefinitions:
          - Name: my-app-container
            LogConfiguration:
              LogDriver: splunk
              Options:
                splunk-token: my-apps-token
                splunk-url: https://splunk-url.local
                splunk-source: docker
                splunk-sourcetype: my-apps-env-name
                splunk-format: json
                splunk-verify-connection: false
# IAM roles
Task-based IAM roles are implemented so that the cluster doesn’t need to run with the permissions of all applications running inside it.
The ECS cluster itself needs some IAM roles configured for its proper operation and the provided CloudFormation template uses the AWS-managed IAM roles when available so that the clusters automatically get the required IAM permissions as new AWS features are made available in the future.
    ECSRole:
      Type: AWS::IAM::Role
      Properties:
        ManagedPolicyArns:
          - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
# CloudFormation exports
After your ECS cluster is setup, you will need to know some duplicate information such as VPC IDs, load balancer information, etc when setting up your ECS services. We use CloudFormation exports so that the service can look up all of this information from the ECS cluster CloudFormation stack. When setting up a new ECS service via CloudFormation, we only need to know 1) the AWS region, 2) the CloudFormation stack name that has our ECS cluster, and 3) which shared load balancer to attach to (internet-facing or internal). The ECS service can lookup the VPC that the cluster is in with the CloudFormation snippet ‘Fn::ImportValue’: “cluster-stack-name-VPC”. This reduces the number of parameters that our ECS services need to have.
# Tagging compliance
All taggable AWS resources at realtor.com must have the owner, product, component, and environment tags present. We use the equivalent of
`aws cloudformation create-stack --tags ...`
to provision our CloudFormation stacks so that all taggable AWS resources will get the proper tags. There are two exceptions in the ECS cluster template:
- The EC2 AutoScaling group will get the tags, however *PropagateAtLaunch: true* will not be set, so the EC2 instances that are started will not get the proper tags. These four tags are explicitly configured on the AutoScaling group so that the EC2 instances are tagged properly.
- The EBS volumes associated with the EC2 instances do not inherit the tags of the EC2 instance. On startup, each EC2 instance takes care of adding the appropriate tags to its EBS volumes (a sketch of this startup step follows below).
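A minimal sketch of that EBS-tagging startup step could look like the following, assuming the instance profile allows ec2:DescribeTags, ec2:DescribeVolumes and ec2:CreateTags and that the usual region configuration is present on the instance:

```python
import urllib.request

import boto3

REQUIRED_TAGS = {"owner", "product", "component", "environment"}


def instance_id():
    # IMDSv1 for brevity; a production script would fetch an IMDSv2 token first.
    url = "http://169.254.169.254/latest/meta-data/instance-id"
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode()


def tag_attached_volumes():
    ec2 = boto3.client("ec2")
    this_instance = instance_id()

    # Copy the required tags from the instance itself...
    instance_tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [this_instance]}]
    )["Tags"]
    tags = [
        {"Key": t["Key"], "Value": t["Value"]}
        for t in instance_tags
        if t["Key"] in REQUIRED_TAGS
    ]

    # ...onto every EBS volume attached to this instance.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [this_instance]}]
    )["Volumes"]
    volume_ids = [v["VolumeId"] for v in volumes]
    if tags and volume_ids:
        ec2.create_tags(Resources=volume_ids, Tags=tags)


if __name__ == "__main__":
    tag_attached_volumes()
```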
# Application Load Balancers (ALBs)
The ECS cluster template allows you to create an internet-facing and an internal load balancer to allow easily running multiple applications inside the same cluster. One or both of the load balancers can be disabled via CloudFormation parameters if desired. Be aware that the ALB currently has a limit of 100 listener rules per load balancer.
A dedicated S3 bucket is created for the cluster to store the ALB access logs.
# Start a task on each ECS instance
ECS currently does not have the ability to start a task on each instance inside the cluster. To work around this, each EC2 instance has the ability to start a task that will run only on the current instance.
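One way to implement that (a sketch under assumptions, not necessarily the template's exact mechanism) is to ask the local ECS agent which cluster and container instance it is running on, and then call StartTask against only that instance; the task definition name below is hypothetical:

```python
import json
import urllib.request

import boto3


def start_local_task(task_definition="my-per-instance-task"):
    # The ECS agent's introspection API reveals this instance's cluster and ARN.
    url = "http://localhost:51678/v1/metadata"
    with urllib.request.urlopen(url, timeout=2) as resp:
        agent = json.loads(resp.read())

    ecs = boto3.client("ecs")
    # start_task (unlike run_task) targets specific container instances.
    return ecs.start_task(
        cluster=agent["Cluster"],
        taskDefinition=task_definition,
        containerInstances=[agent["ContainerInstanceArn"]],
    )


if __name__ == "__main__":
    print(start_local_task())
```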
# CloudFormation Template
By following these best practices and techniques, ECS can significantly lower infrastructure costs and simplify scaling, deployment, and management concerns. We’ve made available on GitHub a fully functional CloudFormation template which implements all of these best practices.
The next article in this series describes how we do blue/green deployments, canary containers, and rollbacks using ECS.
Thanks guys!
This document is really well done.
| true | true | true |
As more application services migrate to the AWS cloud, a pattern quickly emerges in which EC2 resources are considerably underutilized. While a wide array of EC2 instance types and autoscaling options help to match the consumed infrastructure with current demand, many services still make little use of the available memory, CPU, or bandwidth. In order … Continue reading "A Better ECS"
|
2024-10-12 00:00:00
|
2018-04-24 00:00:00
|
article
|
realtor.com
|
realtor.com Tech Blog
| null | null |
|
25,621,945 |
https://bastian.rieck.me/blog/posts/2021/gratitude/
|
Developing and Maintaining Gratitude
| null |
# Developing and Maintaining Gratitude
## Tags: musings
As I reflect back on 2020, a year that was tough on human civilisation
as a whole, I am nevertheless grateful for the many positive experiences
of this year. This post is not a ‘humblebrag’ or a denial of the many
negative things of 20201, but rather a brief recipe and reminder to
my future self to (further) develop and maintain an attitude of
gratitude. This post is written in the form of questions that I used to
ask myself, as well as the current set of answers I came up with. It is
my hope that readers will find some wisdom here.
**Why should I be grateful?** Because, fundamentally, your life is not
about you. At least, it is not *just* about you. You are interacting[2]
with so many other people and doing a lot of things on a daily basis.
You take all of this for granted—until it is not available any more.
So why not make an effort to be cognisant of their positive impact in
your life?
**Can I choose to be grateful?** Yes, you can (see below for more
concrete tips). It is not built into most of us—and as the last year
mercilessly demonstrated, we often take things for granted, until they
are gone, and only
*then* do we start bemoaning their loss.
**But is focusing on the positive not a denial of the negative?** Our
brains are—at best—capable of producing medium-fidelity
representations of events in our lives, told from our perspective. While
some parts of our memory are more reliable than others, we are still
plagued by ‘bugs’ such as the misinformation effect.
Hence, why not make use of this and deliberately store the positive
things? In my experience, negative events, such as the loss of a loved
one or a severe illness, do not need additional reinforcement to be
remembered. But the small stuff, such as receiving a nice ‘Thank you’
note, will probably fall through the cracks. So, without denying that
bad things happen, why not pay attention to the positive ones?
**Will gratitude not stymie my efforts?** This is a tough one that
I admittedly wrestled with for a long time. If I am grateful for what
I already *have*, will I not stop wanting to be *more*? Setting aside
the question of whether it is useful to want ‘more’, I realised that
gratitude also brings a certain amount of clarity with it. For example,
I realised how grateful I was for the interactions with my colleagues
and students this year. This gave me a good idea of how large I want
my future research group to be—so in this sense, I now know that
I do not necessarily require a larger group, but this does not stifle my
application efforts or my research efforts in the slightest. On the
contrary: it makes me appreciate the time I can spend on these things
even more! I think that it is possible to be grateful for what you have,
and still aspire to improve your skills, your life situation, your
relationships, and so on.
**How to be grateful and remain so?** Living on the command-line and in
`vim` for many of my working and waking hours,
the easiest solution for me is to have a file in which I can journal
things I am grateful for. This could be anything, from having a nice cup
of coffee while reading a well-written paper in the garden to receiving
a nice e-mail about my research or blog, thus reminding me that this is
not a solipsist adventure[3]. Put everything in there whenever you are
moved to do so, and after a few months, you can go back and re-read the
list—I promise you that you will be positively surprised!
As a parting thought, I can also encourage you to be included in other
people’s gratitude files: do not hesitate to express your gratitude to
those that helped you, inspired you, and went out of their way to
support you in any way![4] With that, I wish you a blessed 2021. May all
of you find lots of things to be grateful about. Until next time!
1. And for all of us, there were plenty of these! ↩︎
2. Maybe not in person, but certainly through other media… ↩︎
3. One of my (many) blind spots is not being aware of the impact that even a short positive e-mail or tweet can have on people. Keeping track of these things is a powerful reminder. ↩︎
4. I am the first to admit that I do not do this often enough, hence this post as a reminder. ↩︎
| true | true | true | null |
2024-10-12 00:00:00
|
2021-01-03 00:00:00
| null | null |
rieck.me
|
bastian.rieck.me
| null | null |
22,262,846 |
https://www.reuters.com/article/us-space-exploration-boeing-idUSKBN20106A
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
31,557,478 |
https://www.marca.com/en/lifestyle/world-news/2022/05/29/6293b4df22601dc3408b45b6.html
|
Mona Lisa gets caked by man disguised as old woman at the Louvre
|
Marca
|
Images of the **Mona Lisa** painting stained with cake cream after a person stamped a cake on it went viral on Sunday, despite the fact that the cake actually collided with the glass that protects **Leonardo da Vinci's** work in the** Louvre Museum in Paris**.
According to witness testimony, the perpetrator was a man in a wheelchair who wore a wig. To the surprise of the other guests, he reportedly stood up suddenly, approached **La Gioconda** and threw the cake at it.
Those in charge of the museum's security rushed to eject the man from the room, while the rest of those present continued to photograph the situation nonstop.
The painting, which was created between 1503 and 1519 by **Leonardo da Vinci**, was unaffected because it is displayed behind safety glass, which is where the cake's remains ended up.
And, despite the astonishment of those who were in the museum's most inaccessible room at the time, which is always packed with tourists, the incident did not escalate. As seen in some of the videos shared on social media, Louvre security workers rushed to remove the attacker from the building and clean the glass.
## Not the first attack on the painting
Attempts to deface, steal, or use the 77 by 53 centimeter canvas to raise awareness for various causes have been made throughout history.
A man threw sulfuric acid at it in the 1950s, damaging the painting, and a Bolivian student hit it with a stone. In 1974, at an exhibition in Tokyo, a woman in a wheelchair sprayed red paint at the work to express her dissatisfaction with the lack of access ramps, though the paint never reached it.

A Russian tourist threw a cup of tea at it in the summer of 2009. The work was stolen over a century ago, in 1911, and went missing for nearly three years.
| true | true | true |
Images of the Mona Lisa painting stained with cake cream after a person stamped a cake on it went viral on Sunday, despite the fact that the cake actually collided with the glass t
|
2024-10-12 00:00:00
|
2022-05-29 00:00:00
| null |
article
|
marca.com
|
Marca
| null | null |
37,832,846 |
https://www.youtube.com/watch?v=4sLJapXEPd4
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,393,314 |
http://insidetechtalk.com/it-faces-playstation-expectations-with-atari-technology/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,267,629 |
https://puri.sm/posts/3d-gaming-on-the-librem-5/
|
3D Gaming on the Librem 5 – Purism
| null |
The Librem 5 is the first phone running a full-blown desktop Operating System–PureOS; the same operating system that runs Purism’s Librem Laptops, Mini, and Servers. Productivity tools are abundant, but how good is the Librem 5 when it comes to gaming?
The Vivante GC7000Lite GPU in the Librem 5 provides a lot of 3D rendering power while still protecting your freedom with free software drivers. Here’s a look at how some 3D games run on the Librem 5 today.
There are a number of games like SuperTuxKart and Neverball that are already fully touch compatible. First-person shooters tend to benefit from a keyboard and mouse, and the list of games that work with UI scaling is so long you would have to use extra storage to install them all.
The Librem 5 running PureOS for gaming is starting to look a lot like PC gaming thanks to the investment Purism has made in a convergent operating system that powers all our products.
Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the-people stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.
| true | true | true |
Purism makes premium phones, laptops, mini PCs and servers running free software on PureOS. Purism products respect people's privacy and freedom while protecting their security.
|
2024-10-12 00:00:00
|
2020-08-24 00:00:00
|
article
|
puri.sm
|
Purism SPC
| null | null |
|
36,621,686 |
http://seo2.onreact.com/google-search-is-bad
|
This is Why Google Search is Dead* and How to Search Instead - SEO2.blog
|
Tadeusz Szewczyk
|
# This is Why Google Search is Dead* and How to Search Instead
*Do you think Google sucks?* Yes. **Google** search is dying*.
Why that? Ads, on-site features and AI take over gradually.
In this post I reflect on how complex the Google search experience is nowadays.
To solve the problem *I will suggest alternative* ways to search and beyond.
Before we start take note though: I do not hate Google. I just criticize them.
That’s a big difference. Constructive criticism is actually helpful.
As an search engine optimizer for 20 years I know Google pretty well.
I hope my feedback helps improve Google or find alternatives for searchers.
## Why Google search does not work
There is one straightforward reason why I prefer not to use Google search.
When you search for something *you expect to see actual search results* don’t you?
The Google search experience is frustrating!
It’s not even search anymore. It’s
- Google Ads.
- Google owns services.
- AI Overviews.
Google does not show real organic search results “above the fold”! What fold?
That is in the visible part of the search engine results pages (SERPs).
It’s a metaphor from the newspaper era. They folded in half.
The top news were above the fold while less important ones below.
For most keywords Google displays just ads, features, overviews on top.
By 2024 Google often shows auto-generated answers created by AI.
Google is not a search engine anymore. It’s an answering machine!
Don’t believe me? Have you been looking for search results recently?
Just “search” anything on Google.com?
*Can you spot even one real unpaid or so called “organic” result?*
Take note that I use a clean browser here with no toolbars, only a bookmark bar.
I am not using a tablet or something with a small screen.
I’m not even using a standard sized 15,6′ laptop screen. It’s a 16′ notebook!
OK, so Google is not a real search engine anymore. It does not work as expected.
What is it then? It’s a dynamic portal worse than Yahoo has ever been.
Most “results” on top are either
- ads aka “sponsored”
- Google’s own properties
- or auto-generated by AI.
When you search for [hotels] we see Google’s proprietary hotel booking tool below the adverts e.g
## How to actually search the Web?
What can you use instead to search for real? There is more than one option.
### Startpage
The best alternative right now or **in 2024** is still Startpage. Why?
Essentially Startpage uses Google results but without the tracking, profiling and other privacy nightmares.
I covered **Startpage** in a separate article about private search.
Indeed the post was meant to be published on the Startpage blog itself.
I wrote it in German originally and translated it back to English as it was meant for their German blog.
So I did not get the job at the end of the day. Even though I had several interviews and was almost there.
Thus I’m not as biased as you might suspect otherwise.
### Ecosia
Another viable option is **Ecosia**. It's also green, as in environmentally friendly. How so?
**Ecosia** shows ads above the search results too but far fewer of them.
And you even support reforestation efforts all over the world with 80% of the ad revenue when you click their ads.
I like to plant trees to keep climate change at bay and thus I am even more likely to click ads.
In the case of the [hotels] search there weren’t many though.
I disabled all ad blockers and privacy tools to make sure I can see them:
Hooray! What do we see? We recognize two relevant search results for [hotels]!
Even without trying hard: Hotels.com and Expedia are on top.
Then we’ll notice a Wikipedia entry on the right side. It’s Hotels.com again.
These results are real results determined by site authority and incoming links.
The more people recommend those sites the higher they are in the results.
Until 2023 Ecosia used mainly Bing results and showed them in a cleaner interface.
At the end of 2023 they changed their data sources and added Google results.
As of now it’s not really clear whether they exclusively use Google results like Startpage though.
It seems they are combing several data sources including Bing and Google.
The Ecosia site itself says under the subheading “Get your website listed on Ecosia within Google markets”:
If you want your website to be listed in Ecosia’s results we recommend building your website suitable for SEO. You can find out more about Google’s recommendations for search engine optimization (SEO).
To me this means that to get shown in Ecosia’s organic results you have to optimize for Google.
### DuckDuckGo
Like Ecosia until 2023, privacy-first search engine DuckDuckGo uses mainly Bing search results and a clean user interface for search.
I used DDG for many years until I discovered Ecosia and later Startpage.
It’s still worth a try but the other two offer more so I gave up on DDG and moved on.
DDG is still a clean and usable alternative to Google I’d give a try if I were in your shoes.
I recommended Neeva in an earlier version of this post but sadly the search engine went defunct by now.
## Are you scrolling or searching?
I could go on explaining why the above Google “results” are bad and the Startpage or Ecosia results are better but IMHO it’s obvious.
I do not even have to use colors to highlight like others did before me.
Just compare it to the distant past of the “ten blue links“.
Back then Google had real results in the visible area.
Now they want you to forget about it!
Ask.com had the most ads back then (still fewer than Google shows now), and where is Ask today?
A more current report from BrightEdge shows how wide-spread the “ads-only above the fold” issue was in 2016.
My advice for all those who have no stake in Google: *Let’s move on to a real search engine.*
Why should you have to sift through a cluttered interface dozens of times a day just to see actual results?
Don’t waste your time scrolling. Start searching!
Ecosia still uses mostly Bing results so that the technology behind the results is also state of the art.
The environmental business model and the much cleaner interface are additional benefits.
In some cases – for so-called "long tail" searches of three or more words – Google can yield better results than Bing and with it Ecosia (when you manage to spot them).
Startpage is the best solution then – it offers Google results without the
- tracking
- filter bubble
- visual clutter.
It’s not as good for the planet as Ecosia though.
I haven’t used Startpage often enough to tell whether their results always match Google.
Yet Startpage, Ecosia and DuckDuckGo work well for most searches. They returned very helpful results.
Take note that all of the above use Google or Bing results internally and just repackage them.
Startpage, Ecosia or DuckDuckGo take the data and clean up the interfaces.
That’s probably the most important reason to try them beyond the privacy.
## Why does Google suck for search?
As I rank on top of Google for queries like [google sucks] many people come here to complain.
Why? They vent about the overall Google search quality and experience in my comment section.
I assure you that once you find the actual search results they are still OK!
Search results are just hidden below a huge pile of
- pay per click ads
- Google owned services
- SERP features
- AI answers
Google excels especially when searching for so-called "long tail" queries with three, four or more words.
Yet by 2024 you won't easily spot them. You will most likely just see AI-based summaries on top.
In most cases you won't click through to any other websites.
Do you want the “ten blue links” search back?
OK. So *why exactly does Google suck* for searching the Web in 2024?
Google search is sometimes worse than Bing and its derivatives.
Google has problems with “bigger” or more popular keywords!
Why? It’s because they show not the
- best
- most popular
- authoritative
pages but instead the
- latest
- newest
- most current ones.
Thus an article from 2024 may outrank a much better post from 2019 mainly because it’s 5 years younger.
*Often the best results are those that have aged for a few years like good wine*.
Yet Google prefers “fresh content”.
One way to deal with this is to constantly update your content.
I do that and that’s why I keep on ranking with some pages for a decade.
- When you are looking for the latest inside scoop on technology, politics or gossip – you might want to use Google (or Startpage)…
- When you are looking for answers to weird, sentence-long questions, Startpage or Google work well too.
- For timeless and evergreen searches Ecosia or DuckDuckGo are often better as they both use Bing results.
- When it comes to important searches (think health, money, law, science) you want to search Ecosia or DDG because they tend to show the most credible sources first.
- Also for one-word "huge" searches that everybody else is searching for, Bing-based search engines like Ecosia and DDG may offer the more reliable options.
To overcome the problem of hidden organic results below the fold you can use Startpage which is based on Google results.
Apart from that, some sites – think Reddit or Quora – have deals with Google to show their results on top of everybody else's.
In general Google also prefers big brands over specialized independent publishers. This brand bias often leads to shallow articles surfacing on queries where topical authorities or experts would be needed.
Last but not least, so-called AI Overviews overshadow all other results.
They just feature summaries created from regurgitated content.
These AI Overviews also tend to be error-prone.
How to get rid of all that clutter and nonsense?
## How to clean up Google search?
You can also use a tool called Simple Search by The Markup that simply gets rid of all the clutter above Google search results and just shows the actual results in an overlay.
You add it to your browser (e.g. Chrome or Firefox) and every time you search Google you just get to see the real search results, not those someone paid for to get shown on top.
*Most people do not even realize that what they see on top of Google are ads*.
In case you are one of them just make sure to view and click the proper results.
Google has become increasingly aware of disgruntled users who have been vocal about their criticism.
Thus they now also offer a cleaned-up, feature-free search result page again.
You have to click “Web” below the search text input on the result page to see that one!
It’s quite hidden though as you see on the partial screen shot above!
You have to search first and above the cluttered ad hoc portal you have to click “More” and then “Web”.
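
If the extra clicks annoy you, you can also jump to the "Web" view directly via the URL.
At the time of writing this view appears to correspond to a `udm=14` parameter in the search URL – treat that as an assumption, as Google may rename or drop the parameter at any time. A minimal sketch:

```python
from urllib.parse import quote_plus

def google_web_only(query: str) -> str:
    # Assumes the udm=14 parameter behind Google's "Web" filter keeps working.
    return f"https://www.google.com/search?q={quote_plus(query)}&udm=14"

print(google_web_only("hotels"))  # paste the printed URL into your browser
```

Most browsers also let you register such a URL template as a custom search engine so you get the cleaner view by default.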
There are also ways to remove AI Overviews from search results using other tools.
As of July 2024 only logged in users based in the US can see AI Overviews and only a relatively small percentage of search queries show them.
So it’s not too late yet. There is still some search functionality you can actually perform on Google.
It’s not just AI summaries and Google services. Search features still exist on Google!
Yet many so-called searches start and end on Google and never even result in a click to the open Web.
Experts call them “zero-click searches”. Those are the majority even without AI overviews.
## What Google alternatives do you use?
What search engine do you use and recommend for 2024? **Do you still use Google?**
Why? Why not? Did you try some alternatives? Please tell me all about it below!
I’m always on the look out for new Google alternatives. So make sure to share those!
## Is Google really dead?
P.S.: *Is Google really dead or dying?* Are you ignoring the numbers? No.
I don’t mean **the company behind Google called Alphabet** is out of business. They just struggle in 2024.
Also I don’t think the product they call search is going away.
Google search is by name here to stay.
Behind the scenes it’s not actual search though.
What I mean to say by “dead” or “dying” is that they are not what they were or were meant to be initially.
Google *search is just not search anymore!* Just like a blackboard is not black or a board nowadays.
Google destroys all these guys when it comes to local, and that’s where relevancy matters most.
I think that Google’s reach a point of too big to fail, some of the changes they’ve made to shopping this week shows they can do what they like. I’m unsure how the public regard search quality. Bing’s also showing good results too but does anyone still use Yahoo. I’d love to see Facebook partner with Bing (pull demographic data to drive search). That would hit Google where it hurts!
I don’t think is that bad. And they keep changing algorithm for better quality result.
Kent: You don’t see any search results up there thus the improved algorithms do not matter.
I disagree – I found this article through Google!
But seriously – Come on!! I think what you really mean to say is that Google is changing by increasing ad space.
Don’t worry – a challenger will be along any day to redress this imbalance (http://duckduckgo.com/ perhaps?)
Sol: Yes, DuckDuckGo is another good alternative to Google.
Btw. you can’t have found my post via Google because I banned Google on my blog and this post wasn’t even indexed.
Don’t start your comment with a lie because it doesn’t support the rest of your message.
whenever i make a search on google, he knows what i am looking for. With google, i always find what i was searching for.
ofc it’s full of publicity and crap, but when i seek for academic information in google, i always find what i was searching for, and it supports lots of languages others search engines wont support.
I know google is evil (i have read your articles and i agree), but i still think that google is the best search engine.
I want to change it for duckduckgo, I REALLY DO!! but i cant… google’s search engine is the best i know ):
Duck Duck go is a joke and it uses google anyhow, why not just use brave or something? So much for the thoery about google being replaced quickly lol. Definitely not by DDG but think they are selling auto ins now lol. It’s honestly not funny to have to poor through manuals to retrieve facts you know are readily available to google for work, school, or just the inquisitive mind.
Hey Kent!
Thank you for your feedback.
DDG uses Bing results (among others) though.
Startpage uses Google results.
I didn’t even recommend DDG in this post.
I suggested Ecosia and Startpage.
Sincerely, Tad
Tadeusz, i also found your site from Google search so i don’t think the other person is lying at all. In my search for “Why are Google search results so bad right now” on 04-26-2021, you, or more particularly this blog entry on your site was the #2 organic result.
I will say that something has happened to Google as a whole in the last month since they made the update to card based layouts. That says it seems like it’s really affecting their predictive systems and query and ad engine elevance and accuracy for the worst.
Thanks for the feedback Mike!
The comment was from 2012 (I updated this post continuously ever since) and I allowed Google search bots to crawl and index my blog in 2015 again.
Sincerely, Tad
Hey Tad, I am agreeing that Google gets worse and worse for providing quick or even accurate results, but you definitely owe Sol an apology, I am reading this article on Android. THROUGH GOOGLE SEARCH.
I was using Google to search why Google it so bad, as in three pages, I could not see a single result that was even close to relevant to the original question I was asking.
This article was the third or fourth link on the first page relating to my search, so before you outright call someone a liar, please research the situation for yourself so you can give an informed reply.
You may have blocked Google, but either.
1.) Whatever you did, did not work in the intended manner.
Or
2) Google, being a powerful organisation, found your article, and displayed it on their engine anyway.
There is absolutely no reason to jump down somebody’s throat and insult them without having bothered to verify your initial thought first.
And I don’t want to read something containing any version of “I didn’t know” ignorence is not an excuse for plain A-holery.
Thanks for reading and please stay well.
Never mind, I see it was addressed to after on and you actually opened up access by Google again, I apologize profusely.
No problem Carlos. Maybe I should delete comments that lost their original context.
I saw this blog through Google search as well. I typed “are Google search results getting worse” then scrolled a bit and found this blog.
Sorry I should’ve realized this was from years ago my bad I’m embarrassed
Google only let’s us see what Google wants us to see. Google is bundled with fake subsideraries that acutely belong to Google. Just like the countries, they are all in cahoots. Sell and buy sites , all grouped with Amazon!!! Even single Privite sellers. I say we form our own internet. Just Google that see what you get. You will get only what works best for Google. And only those that have a stake and big cut of bacon! Etc later. There has to be a way to do it like moris code kind of thing. Analog is fine anyhow.
Every single time I search for anything I’m interested in on Google I can’t find it. Back in the good ol days 1990’s you could type in Google sucks and find thousands of personal stories to read about it. It seems they’ve completely eradicated anything they want to. I’ve read articles about how if you want to find anything you need to access the dark web. It’s funny they call it dark web to put a criminal spin on searching for things im interested in that goes against the grain. The big internet corps dont want people to have freedom to choose. I’m going to have to learn Tor, Onions and fake IP addresses to access the dark web. They call America free but I’m thinking the dark web is probably a search engine in Nigeria. Go figure.
I think it’s great this is from 2012 and we are in 2021and we are still asking the same question. I never remember a time when Google ran out of results, I now have experienced it several search times.
What the frick lol
I think it needs a add on of a few thibgs that we now have controlled results in certain places, I am not 100% sure on how or what obviously otherwise I wouldn’t be here but yes this was also good to know how Google has changed and continues to change, not for thw best though clearly!
My name is Joeal Manimtim and I’m the only person in the world with this name and when I search Google for my name, google ranks other names spelled differently higher on serps. Wtf Google, learn how to spell. All other search engines get it right.
Google is terrible. I just searched for QLED TV less than 50 inches. The first 10 results were NOT what I was looking for.
This is what happens when you have a monopoly and you replace most of your engineers with SJW’s
Google is now garbage. I will try your search engine suggestion.
A large Purge of search results happened approximately 2 years ago. It’s hard to tell now the time frame. Google search results used to be near perfect. They don’t like you talking about them purging information. In a professional, yet critical manner I wrote my feelings on this in quite a few Google forums… only to have the Forum shut down for no further comment, and even at one point, I was banned.. Yup! Imagine that, banned from a Google product Forum. When I reached out to them, they undid the ban. My personal belief on the reasoning why search results were purged two years ago was due to pressure from Fortune 500 companies with critical articles, forums with people complaining, etc. And political interests who can easily be judged or criticized online with the same forums or articles. I believe there were many antitrust and class action lawsuits. So much for Freedom eh? I really don’t think this has much to do with Google Ads – the way they are displayed was already leaning towards this way prior to two years ago.. search relevancy is what suffered.
I used to search information about technical subjects, Health subjects, political subjects.. and what used to take me maybe five seconds to be satisfied, now takes me upwards of 15 to 20 minutes. Or, I just give up and move on without the information I requested.
The good news is that search results are getting slightly better over the past two years; but nowhere near 2018 or 2019 awesomeness. I’d say we are at Circa 2009 or 2010 level Google search results right now. Google, if you are listening.. this is not who you used to be.
I was referring to another poster, little has changed as I still search the google sucks results it’s good to see this blog still up, p.s. also found with google search but hey they obviously are doing whatever they want thanks
You realise you’ve posted this on google?
h3: I didn’t “post” it on Google. Google crawled and indexed an article I posted on my blog. I banned Google search bots for a few years on my blog (between 2012 and 2015) but allowed Google search to spider it again after that.
Also make sure to read the whole post and its latest update in the last section.
I wish more people realised that results from Google are not organic. Great post and I would also like to add another alternative into the mix – https://dropicon.com
Even within the commercial realm Google has become completely useless. A search for small appliance stores in the city I am currently in and get a response of 4 solid pages of hotels. Every day I seem to get more of this useless garbage. Not worth the bother.
Hello,
Google search is useless now. Banned words, only adds and first page always the same few big news sites. I used to be able to find interesting stuff, like forums or obscure sites on certain topics, now its only adds and maybe you were searching for this … when I was not.
The predictive algorithm ruined internet searches.
Alternatives are not so good also, I don’t care about adds as long I get correct searches.
I don’t believe Google is dumb because of ads, without it.. No other search engines would even be out there.
Oh and, Google is one of the most highly advanced programs out there.
Thank you for taking your time to read this. We all have different opinions I’m not trying to start an argument. I appreciate yours.
Algorithm mind warp. Check out Alphabet activities. Great Reset. Huxley and others are in the shade. The nightmare is only just beginning. Matrix and some. Sordid. Terrifying. TIME TO WALK!
Darpa FBI etc. etc. etc. fist in glove
Google used to be the best search engine out there. Especially for doing research…actually digging in and finding what you want/need…but now I have actually had them come back with unable to locate anyting under my search…what the… how is that even possible. Whatever Google is doing, it is dumbing down the search engine to the point is not even a viable research tool anymore. Sad and very frustrating. Grrrr.
In short,i started to hate google,WHY?All other search engines index and rank my main keywords on my site,only google doesnt,they hold me on block until i pay them,but thats not gonna happen.
Cheers all.
I tried Ecosia and it is very much like Google . A simple search of, “I made my own water garden” no longer shows any of the key words that I type in. All adds. I want to know from actual humans in my search of how they make their own things. Yes, they are on youtube but not on web. If only Google and the other search engines would resort back to the old days of searching.
Hey Vivian!
Thank you for the feedback! Indeed I tested your search query from the US and it shows three ads on top and three on the side but I still see three actual results below them.
It’s not ideal for sure.
I’m already in the process on updating this post for 2022. I will recommend another search engine that is completely ad free, it’s called Neeva.
Give it a go. There is one catch though: you have to pay for it. It’s like $2 a month or something.
Sincerely, Tad
What can I say that hasn’t already been said? Google is completely useless for me. I am looking for an alternative even if I have to pay. I am finding all Google products useless any more including things like Google Play. I spent hours in two different languages searching “white gold” pendants. I got every kind of gold and metal but not white gold. I thought quotation marks are supposed to return sites and ads that only include white gold? But apparently not. Usually I found what I want on the first page, but now I search ten pages down and never see what I am looking for. Really, what use does Google have when you can’t even find an advertisement with your key words!? I recently moved to a different country and I cannot get anything on Google Play that is related solely to my new country. They won’t let me get results until I have a new bank in the country I’m in which I need Google Play to switch over and I cant get an account for six months and there is no alternative to Google Play. Personally I am finished with Google. I will try your suggestion and I used to love my Chrome browser but it is crap too. I think I will try Firefox for now until Google screws it up. I used to be a major player on the dark web but maybe I will have to go back to the Onion. :s
I found your site via Startpage, Google doesn’t destroy anything expect the persons ability to get search results that are accurate relevant, devoid of adds and or Google relationships, that don’t facilitate confirmation bias.
I am astounded at the masses, misguide understanding of how to research the internet. The belief that Google is so venerable that the word search is replaced by the phrase ‘Google it’. Clearly they have not bother nor do they care to learn how search engines work. Searching a topic to get accurate and correct data should be your first priority not confirmation that what you think or currently believe is the truth. So I would suggest researching how to research.
I’ve been on the net since the 90s, when altavista was king. Google was awesome about 15 years ago, returning many pages for any weird search. Even files in ftp sites would appear. Gradually it got weaker. Now, I sometimes get 5 results only for a search that a few months ago only gave me 20 pages!
The world is changing for the worse, information is kept from us and google is in the thick of it all.
Google is CRAP!!!!
Then I will say it again
GOOGLE SEARCH SUCKS AND I MEAN REALLY SUCKS What ever happened to the good old phone book. Try to find something on Google and it LIES like a top level democrat. Must be run by robotic democrat piglets
I hate google!
Qgap is a way better search engine!
qgap.2539901.repl.co
There are no Ads!
If Neeva are charging they wont last long
I agree with you and disagree with all these Google panderers on here, prob paid by Google to stick up for them, or they work for Google, Google gets paid to remove things from their search engine, so you can find them, and find out the truth about some of these companies, like if I search bad reviews from something and it won’t show up, like I’m supposed to believe no one said anything bad about said company.
I gave up Google and all the other search engines. I now use Qwant, which is a European engine.
I only use MyPal browser because it is the only one that works with Windows XP, which I will never ‘update’ as I only use my computer seriously and have no use for idiotic anti-social networks.
in my opinion has been a long time that gg has shifted from being a ‘search engine’ (the best at the time) to a ‘recommendation engine’ (paid from advertisers)
it’s annoying that they pretend, despite the keywords i enter, to know what i am looking for, better than my father and my mother when i was nine y/o
so it has become a ‘reccomendation engine for kids’ (paid from advertisers)
recently is getting even worse, from ‘recommendation engine’ is shifting to ‘entertainment engine’ or in other words a ‘grab-attention engine’
like facebook, instagram, youtube and ticktok
when the query has not a commercial intent (aka ads aka $$$ from advertisers) then the second priority become that you don’t leave the google properties, youtube, maps.. everything that may be at least a bit relevant
i think that on one side this is manipulative and on the other is humiliating for the users that are treated like dumbs
Is this a joke?
I just tried “neeva”, looking for a place to find bathroom sinks and the results were REST AREA BATHROOMS ON THE INTERSTATE.
Even google knows what a sink is.
Google search sucks in no small part because of people who do SEO. So quite frankly part of the blame is yours. It’s your game they’re playing.
We actually need a proper search engine. Like the earlier google. Google is now PURE manipulation engine. You don’t get search results. All you get is suggestions. And most of the time you can’t find what you are searching for. In fact it’s so bad, they don’t even reply any complaints/feedbacks etc. They don’t care.. Same applies to many other search engines. SUGGESTIONS… This is horribly wrong. YouTube is also as bad.
So DuckDuckGo is still better. Type into GOOGLE, NEEVA, and DUCKDUCKGO, “How to make baby formula”. Both Google and NEEVA will give you an entire page of why Corporate Baby Formula is better and also scare campaigns to keep you from making your own. DuckDuckGo takes you on page 1 to several sites telling you the recipe and how to make it straight up. I am looking for HOW TO MAKE BABY FORMULA, not “why I shouldn’t try and make baby formula, here buy overpriced factory formula instead” so NEEVA and GOOGLE results are irrelevant, 100% useless. This is why we are getting away from Google by the way.
So DuckDuckGo is still better. Type into GOOGLE, NEEVA, and DUCKDUCKGO, “How to make baby formula”. Both Google and NEEVA will give you an entire page of why Corporate Baby Formula is better and also scare campaigns to keep you from making your own. DuckDuckGo takes you on page 1 to several sites telling you the recipe and how to make it straight up. I am looking for HOW TO MAKE BABY FORMULA, not “why I shouldn’t try and make baby formula, here buy overpriced factory formula instead” so NEEVA and GOOGLE results are irrelevant, 100% useless. This is why we are getting away from Google by the way. Ok, just test Lycos and also Startpage. Startpage gives the same Corporate sponsored stuff however, Lycos, still the old Lycos, and gives good relevant results same as DuckDuckGo.
In the past the google search engine could be refined using the keywords or operators (+ or – for example) which are now clearly just ignored.
DDG is doing exactly the same, giving less useless results but enough to go to google, get worse results and ending here by despair after making a “why google search sucks” search into google search.
I ll definitly try the others here; could the be very worse than the others ? Not sure
PS you can f… SEO if you had operators like “not including”, the old (-) operator
Absolutely EVERYTHING that crap company makes is total trash!
Another fun thing they’ve done in the last ten years or so (so since around the original posting of this topic) is intentionally change auto predict or text complete results to what they think they should be. This isn’t conspiracy stuff, this is things that people working there have proudly disclosed. And it shows. The same force that makes it difficult to see any criticism for Google through Google (especially compared to a decade ago) is the same force that influences how any number of queries come out. It’s getting harder and harder to find anything actually related to what you’re searching for.
At this point if I want to see honest opinions about anything (not even necessarily political though that one is obvious, could be as basic as travel suggestions, opinions, troubleshooting, etc) is to specifically look for forums or in anyway possible cut out most of the results that will naturally pop up on the page. And of course you need to scroll past all the paid content that I imagine captures +80% of web traffic.
It’s funny that they’ve become these deciders of morals, who feel that they should control the flow of discourse and ideology, but then we see what’s allowed to trend on platforms owned by them like Youtube and it’s obvious that the driver has always just been money. Google has essentially ruined what the internet was meant to be about. The web developers who set up the framework that Google spidered into being the biggest company in the world must hate to see the nightmarish garbage that came of it.
Nowadays, Google has NO privacy. They read everything you do and more. Sadly, it is also much harder to steer clear from Google, as many Google alternatives don’t have the money Google has and are typically either paid or freemium services, with exceptions from search engines. Even worse, schools usually use Chromebooks, a Google product, meaning that students give out their data to Google. On spreadprivacy.com, a DuckDuckGo website, I found an article listing Google alternatives.
Link: https://spreadprivacy.com/how-to-remove-google/
Tad, how difficult and how much do you think it would cost to build a search engine like google used to be but without the corporate greed. I looked at neeva that you mentioned and was started by two former google employees and the investors invested in dangerous CRAP like facebook.
I don’t know what to use now for a browser or search engine. Brave was ok for privacy I guess but their search results are worse than google.
Your recommendation of Neeva is no longer worth anything. I’m getting the exact same results with Neeva as I do with Duck Duck Go – in other words, irrelevant but slightly less irrelevant than Google.
The problem isn’t just the search engines, it’s the Search Engine Optimization industry. The whole point in SEO is to manipulate a site to appear more favorable to search engines, regardless of actual relevancy, in order to drive traffic to your site rather than someone else’s, even if someone else’s site might be objectively more relevant.
Asshole, just another dishonest person, no different than google search. Neeva, the recommended search engine requires a log on to use it. Does anyone fucking have any ethics in today’s society. Did the .com industry do away with any sense of ethics
Google Search was, for two decades, my favorite thing about the Internet, literally. If not for Google Search, I’d have [had] little to no use for the rest of the Internet.
My search results, for even the most obscure and nerdy queries, used to number in the hundreds. By refining those queries via operators/modifiers, I could, with consistency, find precisely what I was looking for… or discover something fun, interesting, novel, etc.
Enter the same queries nowadays and I’m presented with an animated blue troll fishing for garbage.
When I don’t get the troll, I get garbage instead.
When I don’t get garbage, I get nothin’ at all.
Then there are the times when it’s difficult to not feel as if one’s actively being steered away from information relevant to the query entered..
Many assert that Search has been in a state of degradation since the naughts, and I won’t argue, but for me it was 2017 when it transitioned from ‘going downhill’ to ‘crashing into the canyon floor after being pushed from the edge.’
Something insidious is afoot.
It’s concerning, to say the least.
I don’t believe Sol is lying about finding your article through Google’s Search Engine because that is how I found your article, except I believe my query was likely significantly different than Sol’s. This is what my query was: “fuck you google!!! your search is absolute shit now! no matter how many different ways I phrase my query I never find the information I seek. your algorithms have skewed the search engine into falsely believing that all users don’t know how to word their queries properly”, and the link to the above article was the fifth hit (result) on the first page. So, whatever means you are utilizing to prevent your blog from showing up on Google searches is not functioning as you believe!
Anybody who has spent some time on YouTube over the past few years has surely noticed content creators moaning, complaining, and begging over the changes which YouTube has made to its policies, rules, regulations, guidelines, and whatever other terminology they may utilize for such things. YouTube is making these changes, and implementing mechanisms for a specific reason, as are all major social media websites. I will touch upon that reason shortly.
These new rules and regulations are written with the highest degree of ambiguousness. This allows YouTube to interpret their own rules and regulations however they want, which then permits them the power to apply said rules and regulations against any content creator they so choose.
The content creators have been begging YouTube to just inform them of what is allowed and what is not allowed. They have witnessed YT penalize some channels for violating a specific rule, but then sees YT ignoring the same violation of the same rule when it comes to other channels. YouTube is playing favorites. Some of the content creators have begged YT to just make the rules clear, and comprehensible. They inform YT that they want to, and will, follow the rules, but they desperately need to know what those rules are first. YT refuses to write the rules in a clear and concise manner, free from ambiguity. They need that ambiguity!
YouTube is owned by Google, for anyone who was unaware, and I bring that to your attention because of the reason for these changes. Now, I could go deep on this one, for it is just one of numerous elements within a bigger agenda, an agenda which is means for reaching a specific endgame. It is a multifaceted agenda, and with the greater understanding you have of the whole, the more you can understand the individual parts. Yet, this is not the place for that. Suffice it to say that all of these changes, along with the degradation of Google Search results, have nothing to do with creating a safe place for their users as they have claimed on several occasions, nor is it about protecting the children, nor about eliminating all the bad people from a site. Regardless, they will never tell us the real reason, they will persistently use some BS excuse, behind which to hide coupled with justifying their actions. Protecting children has always been a favorite of politicians, and big corporations. If you don’t support plan X, then you must hate children . . . and you don’t hate children . . . do you?
This is about censorship! This is about the control of the “free flow” of information. This influence comes from outside of YouTube, outside of Google . . . compartmentalization. In order to control an entire corporation, one needs not to control every single worker within the corporate structure, but only one or two people at the apex, and through them, you can control everybody else, or more importantly, control the aims, and the means, of the corporation. We have had false realities created for us in which to dwell, and those who have created these false realities are nearing their endgame, and thus, they can not risk having some piece of truth go viral, and spread to too many people . . . for such an event may tear down some of the false realities they have created for us, and the destruction of just one element of any of these false realities would jeopardize their success, and further delay agendas, along with their ultimate endgame. All of this is nothing else except the control of information!!!
I also found the article in my google search results
I searched Google sucks and found this after searching for “box size to ship monster high dolls in” results were kitchen clocks at Target and other nonsense. the internet hasnt worked great for years when your looking for anything out of the ordinary it assumes your an idiot and are spelling everything wrong. this is what google wants you to see sheep, not actual useful information.
I typed on Google, “the internet sucks because more junk results than good ones,” and your article was second.
I used to be able to get tons of results, but now for a long time I get only a couple pages of results, with only a couple of relevant hits, before the results start repeating and the pages duplicate themselves. I literally can’t get beyond page 2 of results.
This happens with Chrome-Google and Edge-Bing.
The irony is that low quality articles like this are the reasons why Google sucks now.
I agree that SEO has killed the accuracy of google, but (with an adblocker)I still find it better than most of the competition. I get awful results from DDG, Brave, etc and find myself constantly going back to Google for results even though it’s no longer my default search engine.
Problem with Google now are all the effin ads in the results. What a mess. Google should be blocked from the Internet.
Hi Tad,
Great reading some of your posts, good job!
This one is unfortunately been undermined as I have just been on to the site Neeva and this is the pop-up message!
Neeva logo
⚠️ Notice from Neeva
Neeva.com is shutting down June 2, 2023.
What a pity, we need a non biased search engine.
Regards from South Africa.
Janek (Szymanowski)
I quit using google (as much as possible) years ago. It is just a waste of time.
And… Neeva is no longer… but hey they partnered with another confidence building name -> snowflake
Google sucks, but so do the rest…
Greed and deception, ruins everything.
Google has turned to utter and complete trash. It won’t even let you use the minus sign anymore to exclude. It seems to be completely taken over by some dumb robot or something. There are no alternatives that even come close to how good it was.
more irrelevant useless chattering prattle.
“Before we start take note though: I do not hate Google”
… I hate Google. It’s OK to hate Google. They are greedy, grasping, profiteering SOBs who couldnt give a monkey’s f@rt who you or I are and whose only interest is lining their pockets at you expense, and to take over the internet, because they are a little bit like Hitler in that way.
For many searches I tend to use at least a couple different search engines. Google over the years has muddied its results with ads and honestly Bing isn’t much better in that regard. DuckDuckGo is just Ok with results sometimes relevant but occasionally way off target. I am more concern today with privacy then I have been in the past.
Just starting today Google search results no longer include “All” in modern browsers such as Edge, Chrome, and Firefox. Weird thing is that old versions of Firefox, my fav version 52 ESR, don’t have this problem. BTW I couldn’t comment in any modern browser but I could in that one.
I got so mad several years ago when I noticed that Google had removed thousands of search results for things that I knew existed on the Web. Recently they claimed that their motive was to avoid “unnecessary” search results. Forget where I saw that but it just made me so angry.
Remember that Scify show Incorporation? A great show which they cancelled of course. That’s sort of where we live in now
Google sent me here as a “last twelve months” result. Screw google. And screw you for posting paragraphs of incoherent MiXeD cApS and COLORED TEXT.
Jim: Thank you for the feedback. I updated, rewrote and extended this post on August, the 9th. Thus it’s well within the 12 months.
Also the caps are very coherent: I simply use US English Title Case for headings.
The text-marker effects are not widely used on the Web but I prefer them instead of mere italics for better readability.
MurderChicken: Thank you for the feedback. I don’t use SSL so “modern browsers” based on Chrome may block commenting for privacy reasons. I will switch to SSL as soon as possible.
The new button like menu below the search bar replaced the filters IMHO but you can still look up “All” by using a drop down menu on the right using desktop/laptop screens.
Did you mean Incorporated? I did not watch it yet. Thank you for the suggestion.
John: Thank you for your insights! Which search engines beyond DDG are you using? Most alternatives use Bing results IMHO.
Switched to Yandex this week, and it’s brilliant. No annoying ads, incredibly fast search results that are accurate and exactly what I’m looking for, and without paid search from massive corporations taking over the first 2 pages.
I dumped almost everything Google a few weeks ago as I’m so sick of their monopoly. Plus, their search results are now mainly AI-populated garbage or mega-corporations trying to sell me something.
Haven’t missed them at all.
So, Yandex might primarily be owned by the Russian government (I don’t care), but Google is handing over all our personal data to the U.S. government, so it’s pretty much all the same to me :)
Nice website, by the way.
All of the search engines give pretty much the same useless results.
Most often when you Google for a query, the first 20 are relevant. Turn to page 3 and you’ll start to see unrelevant results or even junks!!
A new show just came out so I was wondering how many episodes and googled this
Julius Caesar: The Making of a Dictator how many Episodes are there?
and this is what was 1st@ the top, I don’t need googles opinion I have my own and will judge for myself.
.“This six-part drama tries too hard to be emotional and moving, which is a nuisance when our expectations were undemanding to begin with. All we ask from this type of formulaic domestic thriller is some heavy petting, a lurid subplot or two, and a murderous denouement.”6 days ago
Google censors and is heavily left-wing. That is all I need to know to avoid them and their crappy products.
Yes, I hate Google but am also quite capable of constructive criticism. The two are not mutually exclusive you know :)
I know this is a old article and it is from back when Google was nowhere near as bad as it is today. But today, it is entirely corporate and anything remotely political will get you directed to hundreds of pages of CNN and MSNBC of the European equivalents of those. You will be steered like a herd of sheep towards what the elites want people to read and what to believe.
The search that I use now for most things not map related [Google Maps is very very good still], is Yandex. It is a Russian based search engine, true, but it does not censor non-Russian content. I don’t care about that as I am not Russian. Searches on western political issues gets actual results and it is night and day from what Google provides.
I searched on Google “why does Google suck” and it led me here, where I saw you replying to one of your comments with “”Btw. you can’t have found my post via Google because I banned Google on my blog and this post wasn’t even indexed. Don’t start your comment with a lie because it doesn’t support the rest of your message.”
Wtf? Why do you call people liars for supporting your site and telling you how they found it?
You need to calm down and think more about how people are finding you and interacting with you
Hey James!
Thank you for the feedback. Back when that comment was posted I de-indexed myself from Google search (2012 – 2015).
I had to return since then, as some of my ideas had been misrepresented by other sites.
So yes, you can find this on Google now.
It’s basically and ironically the only post that ranks on Google from this site after it has been penalized back in 2011 for linking out too much.
Martin Tier: The original article is old but I constantly keep it updated and follow (Google) search evolution. You can check out my article on the Google algorithm in 2023. It’s linked on the HP.
It’s true that over the recent years Google has increasingly focused on branded authority sites like mass media. I may add this to the article above itself.
It’s not necessarily a bad thing but I’m also wary of mass produced mainstream opinions so I understand your viewpoint.
Helen: Google is left-wing compared to alt-right and Fox “News”. Otherwise they are quite mainstream. See also the comment below yours by Martin Tier.
Michelle Topham: Thank you for the feedback! Indeed I was testing Yandex a few times but they would always mix English and Russian results for me so I gave it up. I will check it out again!
All I wanted to do was find out if the electronic lighter I received for Christmas was working properly. I tried Google with a question about that specific product and got absolutely nothing about the lighter. I got results for butane and propane lighters and nothing for purely electronic arc lighters. Very aggravating. Several years ago I used Google to find a hotel and Booking.com showed a room that was available. (Booking.com was top listing) Arrived to check in and was informed that this place was not only unaware of my reservation but was not even a stay by night hotel. Not only does Google suck but Booking.com is right there with them.
Duck is crap. Boolean operators need not apply. Effing hate it, right up there with google. Going to try some of these other search engines. I am 96 years old and don’t give a shit what you think. Great article here.
Google is just evil.
Way too much control over the internet and their search is only any good for technical articles.
Using google feels like a left-wing walled garden.
They are definitely censoring things and seem to hate a free and open internet.
As far as I am concerned they are an enemy of humanity but most of humanity seems to be to stupid to see it and continue to feed the beast by using their crap products.
I was using DuckDuckGo for many years but lately it also feels just like googles walled garden. Cleanest results page of all the search engines I’ve used but I also feel like “Where has half of the internet gone ??”
Haven’t used Google once since they decimated traffic to half the sites I own, and to tens of thousands of other indie sites last September as well.
Currently use Bing, DuckDuckGo, Yandex and Start Page depending on what device I’m on. Results are CRAZY better compared to the AI-stuffed, Reddit and Forbes hogging crap Google is now serving.
And btw, remember, EVERY search you do on Google helps them continue to destroy the Internet. Don’t ever use them. They really are evil at this point.
I also don’t like Google because it warns users about child abuse material, even on all kinds of false positives. It also filters innocent searches too. I don’t know exactly how to explain it. It doesn’t show as much websites that still work just fine that are from the 1990s, 2000s, 2010s, etc. I wish I could build my own search engine.
2024: Looks like google is in starting to trim the results Below the fold!
Why no mention of yandex?
Hey J!
Yandex always shows me Russian results on top even though I’m in Germany using a browser in English.
Hey Chuck!
Yes, it might appear that way, yet IMHO it’s rather all the stuff above the results that makes them appear tiny.
Hey Jordan!
I didn’t notice many issues with safe search. Are you sure you have switched it on or off depending on your preferences?
[…] what’s behind the decline of Google Search. Some believe it is simply an excessive amount of Ads, some say it is because of changes in Google’s algorithm, and some because Google gives […]
Im a Scotsman and will never trust something that lies like an Englishman and thats this utterpish machine google the guy behind it is a leech and as shifty as they come he needs a spell in the jail and a few billion removed that he stoll from poor Uk people that have came over in boats toour Country and be had hook line and sinker because this clown kids them on and makes the think the streets are paved with gold when they are paved with nothing but hate for Indians that isolate and turn the communities into slumdog holes smell that stink and pish google causes in the UK those poor Indians are lying in the shyte because google give backhanders to Tories and sinks and hides its money in offshore accounts and plays with other peoples lives yes we keep saying it the Man that runs google needs to investigated and jailed for 7 years with all his assets removed UK have more than enough proof to jail this fake liar that steals like a russian with a foot in the door of ukrain yup hes worse than a nazi hes a vile little bastaard that prays on the poor confirm you are a Human yeah you little rat would a punch in the mouth suit you
[…] for different search engines since Google’s results now show more ads and its own services7. These alternatives are becoming more popular because users are not happy with Google’s […]
I hate google and I hate reddit/steven huffman even more.
Here we are in August 2024. Google search results have become positively weird and useless.. It’s actually so awful, that it’s comical. I was looking for information on Intel and if folks thought the company would survive and someday prosper again. I got just three pages of results, and several of which were about Israel surviving, other non-related entities prospering, and such nonsense. Their search algorithm is obviously deliberately degraded, so it returns minimal results, censored results, keyword results, and crapoid nonsense that is driven by maybe some nasty personalization trickery. The idea appears to be to make you keep trying, so that garbage-adverts can be shown – except I run ad-blockers, of course. So all I see now is weird skanky dreck and drivel. But I remember when search at Google used to actually work, and a sensible query would return hundreds and even thousands of websites. Now, it just returns some limited garbage-results, and only a few pages of even that. There have to be THOUSANDS of webpages and articles on Intel, and yet they are all now hidden by Google. This is just sad. We need a new internet search tool. Seriously.
I tried Start Page, searched for my own website by Name, and I got 3 GIANT sized word ads FIRST. My site was listed “below the fold” on a full size computer monitor. So this recommended search still sucks.
Yes, I’ve come to truly hate Google and all other “search engines” who incessantly shove utterly useless crap my way when used. These companies have literally destroyed millions of business with their asinine business models and incessant demands for webmasters.
I’m ready to quit the Internet altogether (for real), I’ve been online since 1983 before the “internet” even became a reality. I was a computer systems analyst for Uncle Sam and since then, the Internet has become a real cesspool of crap and profound ignorance with hucksters and scam artists everywhere.
I went to work for myself online in 1996, but I’m done with it now. It’s now impossible to stay in business with a website unless you constantly bow down to the Google gods and their incessant demands (and pay out through ads and other forms of online marketing).
Non-social media is a complete joke, but those who can stomach the level of stupid there can still direct some traffic to their own websites. My son does this for his website, but I can’t bring myself to spend anytime there at all.
Nope, the Internet is literally ruined these days. If you try hard, you can wade through thousands of useless links, ads, promotions and more crap and find something relevant and useful to your inquiry, but for a online business owner, it’s become a nightmare.
I’m going to close up my business this year after 28 years online and retire early instead. I closed my blog with thousands of posts because the same thing happened there too – the search gods promoted useless ads and search results that eventually killed of all the blog traffic, and social media did the rest.
I refuse to play the game Google demands and every other search engine. Yes – I quit and I admit it, but I’m not wrong about how “search” is fcking useless now. If a nuclear weapon was to drop on Google, I’d sit down and rejoice.
Hi Tad
I read your site and the start of the comments section after Googling exactly this:
‘How to stop Google keeping me ot of one of my email accounts. Damn you Google.’
Your site, this URL, was on the first page of results. Maybe the spelling mistake had something to do with that because, when I corrected it and ran it again, your site didn’t show in the results !!!
Although I trained as a programmer, the internet is a mystery to me. I just want to get into one of my email accounts but Google won’t let me until I reveal my mobile number so they can confirm it’s me. What nonsense. They don’t have my mobile number so it could be anyone and you’ll understand why I keep my data to myself. No social media, no posting nonsense, no sharing private stuff. But that’s because I’m old school.
There were green screen monitors when I started out !!!
Kids today don’t understand the risks. I do fraud investigations and track scammers.
Can you help me get into my email ?
I probably didn’t help by changing the language to United Kingdom from United States. Google knows roughly where I am and keeps telling me at the bottom of every search so why can’t it set the language correctly ???
I also pasted in my password which was the second mistake but I’m using the same IP address and IMEI as the last time I signed in also on a private page, so I reckon they just want my mobile number but they’re not having it. If I have to wait seven days them so be it. I’ll just phone instead.
There’s no such thing as Artificial Intelligence. When people ask me what I do I say I’m part of an intergalactic mission looking for intelligent life on earth. That shuts them up.
AI will cause more problems than it solves, just re-hashing past searches. If anything it’s gonna make more jobs for people having to check results are accurate, but only if people don’t rely on it in the first place and then wipe us out using the nonsense it generates.
I use Google for finding answers to questions like ‘Why is my neighbour’s heat pump making my house shake ?’ and then I open all the questions below, open all the URLs with interesting answers in separate tabs and then construct an argument based on verifiable facts.
It’s just so frustrating that you can’t talk to Google. If you know how to get past Google’s Security Checks I’d appreciate a nod, but I really posted to tell you that this site of yours came up in a Google Search when you say it can’t do that.
Regards, Penny
Hey Penny!
Thank you for thoughtful comment. Indeed you do make many valid points.
I rank on Google for often obscure misspellings and keyphrases because people like you add comments here and sometimes spell things wrong.
Btw. I was already wondering whether I should give up on comments completely and delete all the old ones (as many are 15+ years old).
Yet your addition restored my faith in humanity, or at least commenting as an online practice.
Sincerely, Tad
| true | true | true |
Do you think Google sucks? Yes. Google search is dying*. Why that? Ads, on-site features and AI take over gradually. In this post I reflect on how complex the Google search experience is nowadays. To solve the problem I will...
|
2024-10-12 00:00:00
|
2012-09-10 00:00:00
|
http://seo2.onreact.com/wp-content/uploads/2020/08/dead-800.jpg
|
article
|
onreact.com
|
SEO2.blog
| null | null |
28,468,977 |
https://misc.l3m.in/txt/github.txt
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
40,257,067 |
https://github.com/Profluent-AI/OpenCRISPR
|
GitHub - Profluent-AI/OpenCRISPR: AI-generated gene editing systems
|
Profluent-AI
|
This repository contains releases for OpenCRISPR, a set of free and open gene editing systems designed by Profluent Bio.
Release | Date | Description |
---|---|---|
OpenCRISPR-1 | 2024-04-22 | AI-designed, RNA-programmable gene editor with NGG PAM preference. Described in Ruffolo, Nayfach, Gallagher, and Bhatnagar et al., 2024. |
**What is OpenCRISPR-1?** OpenCRISPR-1 is an AI-created gene editor, consisting of a Cas9-like protein and guide RNA, fully developed using Profluent’s large language models (LLMs). The OpenCRISPR-1 protein maintains the prototypical architecture of a Type II Cas9 nuclease but is hundreds of mutations away from SpCas9 or any other known natural CRISPR-associated protein. You can view OpenCRISPR-1 as a drop-in replacement for many protocols that need a cas9-like protein with an NGG PAM and you can even use it with canonical SpCas9 gRNAs. OpenCRISPR-1 can be fused in a deactivated or nickase format for next generation gene editing techniques like base, prime, or epigenome editing. Find out more in our preprint.
**Why are you releasing OpenCRISPR free of charge – what’s the catch?** There is no catch. OpenCRISPR is free for commercial use to any users who take a license. In a world where gene editing technologies can be difficult to access for both researchers and patients for various reasons, we felt the need to put our company mission into action and release some of the byproducts of our prolific protein design engine to enable more discoveries in the gene editing industry. For partners where further customization and expanded features for OpenCRISPR or another system might be desired, we offer a high-touch collaboration model.
**Are you really not asking for anything?** In addition to abiding by our terms of use, we kindly ask that you allow us to acknowledge you as a user and to let us know when any products using OpenCRISPR advance to the clinic or commercial stages.
**Have you filed IP on OpenCRISPR?** Yes.
**If OpenCRISPR is truly open source, then why do I need to sign a license agreement?** The sequence is freely available via the pre-print. We considered many factors to make accessing OpenCRISPR as frictionless and lightweight as possible; chief among these was ensuring its ethical and safe use. For this reason, if OpenCRISPR users wish to use the molecule for commercial therapeutic uses, we require them to execute a simple license agreement that includes obligations to use the tool for ethical purposes only, in addition to other terms of use.
**What does the license include?** The current release includes the protein sequence of OpenCRISPR-1 along with a compatible AI-generated gRNA, though it is also compatible with canonical Cas9 gRNAs.
**Will there be additional OpenCRISPR releases in the future?** Stay tuned…
**Do you provide protocols?** Please see our pre-print in bioRxiv for a general protocol in addition to a readme protocol that accompanies the sequence release. Other general protocols for editing enzymes should also be compatible.
**Is there a way to share my experience using OpenCRISPR with Profluent?** We expressly welcome any feedback on OpenCRISPR and especially sharing of any observations as you’re using the system. If you find that certain attributes could be changed or improved for your particular needs, please reach out!
**OpenCRISPR is interesting, but I have more needs; what does Profluent offer?** We are open to collaboratively iterating and customizing an AI-designed solution that is a perfect match for your specific therapeutic application. This ranges from customized gene editors and antibodies to broader enzymes. Please email `[email protected]`.
OpenCRISPR is free and public for your research and commercial usage. To ensure the ethical and safe commercial use, we have a simple license agreement that includes obligations to use the tool for ethical purposes only, in addition to other terms. Please complete this form to gain access to relevant documents and next steps.
If you use OpenCRISPR in your research, please cite the following preprint:
```
@article{ruffolo2024design,
title={Design of highly functional genome editors by modeling the universe of CRISPR-Cas sequences},
author={Ruffolo, Jeffrey A and Nayfach, Stephen and Gallagher, Joseph and Bhatnagar, Aadyot and Beazer, Joel and Hussain, Riffat and Russ, Jordan and Yip, Jennifer and Hill, Emily and Pacesa, Martin and others},
journal={bioRxiv},
pages={2024--04},
year={2024},
publisher={Cold Spring Harbor Laboratory}
}
```
| true | true | true |
AI-generated gene editing systems. Contribute to Profluent-AI/OpenCRISPR development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2024-04-16 00:00:00
|
https://repository-images.githubusercontent.com/787641587/b5397068-e6e4-4a06-8740-029007d018e8
|
object
|
github.com
|
GitHub
| null | null |
12,309,763 |
http://www.bloomberg.com/news/articles/2016-08-17/the-wretched-endless-cycle-of-bitcoin-hacks
|
Bloomberg
| null |
To continue, please click the box below to let us know you're not a robot.
Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy.
For inquiries related to this message please contact our support team and provide the reference ID below.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
4,334,291 |
http://blog.parsely.com/post/28628164745/whining-on-social-media-wont-get-you-olympics-coverage
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,700,076 |
https://www.theverge.com/2018/3/28/17172178/tesla-model-x-crash-autopilot-fire-investigation
|
Tesla defends Autopilot after fatal Model X crash
|
Sean O'Kane
|
The National Transportation Safety Board is investigating a fatal crash involving a Tesla Model X that occurred last Friday morning in Mountain View, California. The agency is looking into whether Tesla’s semi-autonomous Autopilot feature had anything to do with the crash and is investigating a fire that resulted from the car’s battery system. The severity of the accident is unprecedented, according to Tesla. “We have never seen this level of damage to a Model X in any other crash,” the company wrote in a blog post on Tuesday.
The driver of the car, Wei Huang, was headed southbound on California’s Route 101 when his Model X crashed headfirst into the safety barrier section of a divider that separates the carpool lane from the off-ramp to the left. The front end of his SUV was ripped apart, the vehicle caught fire, and two other cars crashed into the rear end. Huang was removed from the vehicle by rescuers and brought to Stanford Hospital, where he died from injuries sustained in the crash, according to *Mercury News*.
Tesla says it has “never seen this level of damage to a Model X” before
The batteries that power Tesla’s vehicles are specifically designed to prevent “thermal runaway,” which is when heat building up in the battery pack causes chemical reactions that, in turn, cause even more heat. “Tesla battery packs are designed so that in the rare circumstance a fire occurs, it spreads slowly so that occupants have plenty of time to get out of the car,” the company writes. “According to witnesses, that appears to be what happened here as we understand there were no occupants still in the Model X by the time the fire could have presented a risk.”
It’s unclear how long the fire lasted or how accurately Tesla’s second-hand account describes what happened. A spokesperson for the NTSB said in an email that the “field investigation, focused on the post crash fire and steps necessary to safely remove and transport the vehicle from the scene, continues.”
Tesla says it believes this crash was so serious because the safety barrier, which is supposed to mitigate the force of an impact with the concrete divider behind it, was either damaged or had been reduced in size. The company obtained images taken a day before the crash from the dash cam footage of a person who claims to have witnessed the accident. They show that the barrier was a fraction of the size it appears to be in older Google Street View photos. In 2017, the Model X became the first SUV to get a five-star safety rating across the board from the National Highway Traffic Safety Administration.
Tesla also claims that its owners have driven by this same particular barrier while using Autopilot “roughly 85,000 times since Autopilot was first rolled out in 2015” without any accidents. (Tesla states in its privacy policy that it collects data remotely about the use of Autopilot, among other things.) The NTSB said earlier this week that it is “[u]nclear if automated control system was active at time of crash.” Tesla says it is “working closely” with investigators to recover data from the vehicle, which could help explain what happened, and that it proactively reached out to the NTSB.
The NTSB previously investigated Tesla’s Autopilot feature after a driver died from a collision with a tractor-trailer in 2016. The agency found that Autopilot operated mostly as intended, but it “gave far more leeway to the driver to divert his attention to something other than driving,” which contributed to the crash. The NTSB has also recently looked into a January 2018 accident where the driver of a Model S claims to have been using Autopilot when the car crashed into a fire truck.
Semi-autonomous driving systems are currently experiencing enhanced scrutiny after a vehicle in Uber’s test fleet of autonomous cars killed a pedestrian in Arizona earlier this month. A number of companies, most notably Toyota and Nvidia, have suspended testing efforts in response.
| true | true | true |
The NTSB is investigating the accident, which the company says did unprecedented damage
|
2024-10-12 00:00:00
|
2018-03-28 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
11,889,998 |
https://www.youtube.com/watch?v=FXCCYB3Mdlg
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,337,183 |
http://www.planetary.org/blogs/emily-lakdawalla/2015/10050900-finding-new-language.html
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
8,464,259 |
http://kfi-apps.com/plugins/kfcocoapodsplugin/
|
KFI Apps: SimPholders, Kiwip and more
|
KF Interactive Team
|
## SimPholders
SimPholders is a utility for fast access to your iPhone Simulator apps. It saves you time during iOS development when you have to deal with Simulators' folder structure.
SimPholders Nano is a utility for fast access to your iPhone Simulator apps. It's never been easier to get rid of CoreData stores and preferences.
| true | true | true |
If there is a tool missing on our way to new apps, we just write it right away.
|
2024-10-12 00:00:00
| null |
http://www.kf-interactive.com/site/assets/files/1027/img_6350.jpg
|
website
|
kf-interactive.com
|
KF Interactive GmbH
| null | null |
14,119,670 |
https://github.com/Reon90/tung
|
GitHub - Reon90/tung: A javascript library for rendering html
|
Reon
|
A javascript library for rendering html. Tung helps to separate html and javascript development. To start working with tung, you only need to know two methods: setView and setState.
```
npm install tung
```
**Of course, you need to convert html to js.**
```
npm install babel-tung
```
There are configs for Webpack and Gulp.
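As a rough illustration of the Webpack side, here is a minimal sketch. It is not the official config: it assumes babel-tung registers as a Babel plugin named `babel-tung` and that templates are kept in `.tpl` files (as in the example further down) — check the babel-tung docs for the exact plugin name and setup.

```
// webpack.config.js — a minimal sketch, not the config shipped with tung.
// Assumption (not confirmed by this README): babel-tung is a Babel plugin
// called 'babel-tung' that compiles .tpl templates into render functions.
module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  resolve: {
    // Lets `import page from './tpl/page'` resolve to page.tpl, as in the example.
    extensions: ['.js', '.tpl']
  },
  module: {
    rules: [
      {
        test: /\.tpl$/,            // template files like page.tpl, btn.tpl, card.tpl
        use: {
          loader: 'babel-loader',  // real loader; the plugin below is the assumption
          options: { plugins: ['babel-tung'] }
        }
      },
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: 'babel-loader'
      }
    ]
  }
};
```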
• based on snabbdom, a fast and simple virtual DOM library;
• pure html: block defines context, variables, components;
• pure javascript: no jsx, defines only state for rendering html;
• stateful and stateless components;
• you don't like jsx;
• you have html developers in your team;
• you like React patterns, but you are looking for something different;
```
<!-- page.tpl -->
<div>
<div class="users">
<Card block="users" />
</div>
<Btn block="btn" />
</div>
<!-- btn.tpl -->
<span class="btn">{this.text}</span>
<!-- card.tpl -->
<div class="item item--admin">
<img src={this.img} width="50" height="50" />
<span class="item__content">{this.name}<span block="isAdmin"> • admin</span></span>
<Btn block="btn"/>
<Btn block="delete"/>
</div>
```
```
import {Tung} from 'tung';
import card from './card';
import page from './tpl/page';
import btn from './tpl/components/btn';
class Users extends Tung {
constructor(container) {
super(container);
this.setView(page, btn, card); // IMPORTANT
fetch('https://api.github.com/users/octocat/following')
.then(response => response.json())
.then(users => this.ready(users))
.catch(console.error);
}
ready(users) {
users[Symbol.iterator] = this.getUsers;
this.usersIterator = users[Symbol.iterator]();
this.setState({ // IMPORTANT
users: [this.buildUser(this.usersIterator.next().value)],
btn: {
text: 'Load more',
on: { click: this.handleEvent }
}
});
}
buildUser(user) {
return {
name: user.login,
img: user.avatar_url,
url: user.html_url,
isAdmin: user.site_admin,
id: user.id,
onDeleteProfile: [this.onDeleteProfile, this]
};
}
handleEvent(e) {
let user = this.usersIterator.next();
if (user.done) {
delete this.state.btn;
}
this.state.users.push(this.buildUser(user.value));
this.setState(this.state); // IMPORTANT
}
onDeleteProfile(e) {
let index = this.state.users.findIndex(user => user.id === e.target.data.id);
this.state.users.splice(index, 1);
this.setState(this.state); // IMPORTANT
}
* getUsers() {
for (let i = 0; i < this.length; i++) {
if (this.length === i + 1) {
return this[i];
} else {
yield this[i];
}
}
}
}
new Users(root);
```
name | argument | description |
---|---|---|
setView | function | Defines which views and components the instance will use |
setState | object | Renders html from the given state object |
name | argument | description |
---|---|---|
init | null | Called when component was created |
destroy | null | Called when component was removed |
name | type | description |
---|---|---|
refs | object | Stores stateful children components |
els | object | Stores DOM elements |
name | type | description |
---|---|---|
block | attribute | Name of context |
Component | tag | Name of component |
this | object | Access to context |
name | type | description |
---|---|---|
on | object | `{ on: { click: this.onClick } }` |
attrs | object | `{ attrs: { placeholder: 'Text' } }` |
props | object | `{ props: { data: { id: 12345 } } }` |
class | object | `{ class: { toggle: true } }` |
style | object | `{ style: { display: 'none' } }` |
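To make the tables above more concrete, here is a short hedged sketch in the style of the Users example. The `Editor` class, the `save` block and the `page`/`btn` templates are hypothetical names used only for illustration; the state keys (`text`, `on`, `attrs`, `props`, `class`, `style`) are the ones documented above, and the sketch assumes the template contains `<Btn block="save" />`.

```
import {Tung} from 'tung';
import page from './tpl/page';           // hypothetical template with a `save` block
import btn from './tpl/components/btn';

class Editor extends Tung {
  constructor(container) {
    super(container);
    this.setView(page, btn);
    // Every key below mirrors the data-attributes table above; the `save`
    // block name and the onSave handler are illustration only, not tung API.
    this.setState({
      save: {
        text: 'Save',                          // rendered via {this.text} in the template
        on: { click: this.onSave },            // DOM event listeners
        attrs: { title: 'Save your changes' }, // plain HTML attributes
        props: { data: { id: 12345 } },        // properties set on the DOM element
        class: { 'is-hidden': false },         // toggled CSS classes
        style: { marginTop: '10px' }           // inline styles
      }
    });
  }

  onSave(e) {
    console.log('clicked', e.target);
  }
}

new Editor(document.querySelector('#root'));
```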
**Feel free to offer new features 🤔**
| true | true | true |
A javascript library for rendering html. Contribute to Reon90/tung development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2017-04-08 00:00:00
|
https://opengraph.githubassets.com/d5056c6e2119c81ae6f2eeaa2c6ef49f683b02623a25d24ea4f4299f6259815a/Reon90/tung
|
object
|
github.com
|
GitHub
| null | null |
3,057,978 |
http://www.bloomberg.com/news/2011-09-30/japan-s-industrial-output-rises-less-than-expected-weighed-by-strong-yen.html
|
Bloomberg
| null |
To continue, please click the box below to let us know you're not a robot.
Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy.
For inquiries related to this message please contact our support team and provide the reference ID below.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
2,505,763 |
http://thenextweb.com/facebook/2011/05/02/wikileaks-founder-facebook-is-the-most-appalling-spy-machine-that-has-ever-been-invented/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,145,587 |
https://play.google.com/store/apps/details?id=com.smartvoicemail&hl=en
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,700,338 |
https://www.youtube.com/watch?v=uc67ARuFPB4
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,825,643 |
https://www.youtube.com/watch?v=TW1ie0pIO_E
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,348,071 |
http://dieter.plaetinck.be/graphite-ng_a-next-gen-graphite-server-in-go.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |