| Column | Type | Range / distinct values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string (lengths) | 1 to 1.84k |
| title | string (lengths) | 1 to 9.99k |
| author | string (lengths) | 1 to 10k |
| markdown | string (lengths) | 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string (lengths) | 1 to 10k |
| filedate | string (classes) | 2 values |
| date | string (lengths) | 9 to 19 |
| image | string (lengths) | 1 to 10k |
| pagetype | string (classes) | 365 values |
| hostname | string (lengths) | 4 to 84 |
| sitename | string (lengths) | 1 to 1.6k |
| tags | string (classes) | 0 values |
| categories | string (classes) | 0 values |
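The columns above describe one record per crawled page. As a quick illustration of how the boolean flags and text fields fit together, here is a minimal Python sketch that filters the records down to successfully parsed articles. The file name `rows.jsonl` and the JSON Lines export format are assumptions made purely for illustration; the column names are the ones listed in the schema.

```python
# Minimal sketch: iterate over rows of this dataset and keep only the ones
# that were downloaded, parsed, and carry markdown content.
# "rows.jsonl" (one JSON record per line) is an assumed export format.
import json

def parsed_articles(path="rows.jsonl"):
    """Yield rows whose 'downloaded' and 'parsed' flags are set and whose 'markdown' field is non-empty."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            if row.get("downloaded") and row.get("parsed") and row.get("markdown"):
                yield row

if __name__ == "__main__":
    for row in parsed_articles():
        print(row["id"], row["url"], (row.get("title") or "")[:60])
```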
19,310,845
http://doc.cat-v.org/bell_labs/utah2000/
/sys/doc/ Documentation archive
null
# Systems Software Research is Irrelevant (aka utah2000 or utah2k) - By **Rob Pike** (Lucent Bell Laboratories, Murray Hill) *“This talk is a polemic that distills the pessimistic side of my feelings about systems research these days. I won’t talk much about the optimistic side, since lots of others can do that for me; everyone’s excited about the computer industry. I may therefore present a picture somewhat darker than reality. However, I think the situation is genuinely bad and requires action.”*
true
true
true
null
2024-10-12 00:00:00
2000-01-01 00:00:00
null
null
null
null
null
null
270,814
http://www.pcworld.com/article/149586/2008/08/.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,114,621
https://medium.com/@roshanjossey/hacktoberfest-is-back-contribute-to-open-source-and-get-a-cool-t-shirt-11e50e89afd6
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,247,575
https://medium.com/dair-ai/deep-learning-based-emotion-recognition-with-pytorch-and-tensorflow-61e831f72234
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
988,736
http://christophermpark.blogspot.com/2009/12/encapsulated-businesses-for-indie.html
Encapsulated Businesses For Indie Development
Christopher M Park
Justin Vincent recently wrote a blog post about what he calls the Venture Matrix, which is basically a way to create a huge company out of a lot of little companies-- or, alternatively/additionally to make it easier to launch successful startup companies. It's a long article, and it starts out with a lot of semi-tangential discussion about centralized necessities, but if you look down at the heading The Birth of Companies and on, there are some really novel ideas there; or rather, he's cleverly applied some best practices from good programming to real world business, and that in itself is quite novel. The article is well worth a read, and it flows well with a lot of things I have been thinking about recently. I've always worked for small businesses, but often I've had very large businesses as key clients, and so I've had an opportunity to see it from both ends. I much prefer being in a small company because of the autonomy and room for creativity that provides (and the increased possibility of returns if the business does very well), but there's a key aspect often missing from small companies: security. That in itself can be a good thing, because it can prompt the business to be proactive where larger companies might be lethargic. But... The Inevitable Challenge Of Traditional Small Businesses On the other hand, lack of funding can lead to lack of certain expertises among available staff. In the past I've been in a company where we had excellent programming staff and support staff, but sales staff that couldn't sell the product, and a very novice marketing staff. That business actually did okay based on word of mouth alone, because clients liked our product and told other clients, but how much better might things have gone if we'd had really excellent sales and marketing staff to match our excellent product? Or, to cite another example, I once did some contract work for a small company that had developed the world's smallest (at the time) cell phone motherboard. It was about half the size of anything else on the market, back when cell phones were much larger; the possibility of such a motherboard was staggering, because it would have allowed for phones the size of today's devices about 3 years before those actually hit stores. Unfortunately, the company was completely unsuccessful in selling their work because they didn't have the needed expertise on the business development end of things. I love Justin Vincent's talk about encapsulation in his article, because that's exactly what is needed for companies like these. A lot of startups are product-focused, and they know how to make a product that is really unique and useful inside some niche. But unless they just happen to be lucky enough to have staff who are really ace at sales, marketing, and business development, that great idea is likely to languish in obscurity or at least not reach its full potential. AI War, for example, has done extremely well for an indie game and is still seeing a significantly growing playerbase every month, but marketing has always been a huge challenge for us. And none of the success we have seen would have been possible without all our digital distribution partners, and our upcoming retail partners. It's still quite a challenge to launch a product even in the digital marketplace, and I had to essentially work two jobs for years and luck into the right partnerships in order to make it happen. 
My Ideal Approach For Indie Development I have been thinking for the last few months about how the ideal setup for indie development would work. In other words, how to maximize output from indie developers, maximize revenue for them and their partners, and in general serve the gaming consumers the best by getting them the widest possible variety of truly quality titles at reasonable prices. Here's the ecosystem I imagine: Indie Development Company: This obviously already exists, but currently the indie developer has to do most everything -- game development, support, funding themselves, marketing, business development, and so on. In my imagined environment, the indie development company would mainly be responsible for game development and technical support/patches (basically what AAA game development companies are responsible for). Digital and Retail Distributors: These companies also clearly already exist, and I think that they by and large serve their function admirably. The challenge for most indie developers, however, is actually getting the notice of these distributors. The challenge for these distributors is that they get so many totally crap submissions that it is hard to evaluate what is good and what is not. This is very similar to the book publishing industry, incidentally, which leads us to... Indie Game Agents: These currently do not exist, not in the sense of the book publishing world. Certain well-connected individuals can act as agents, and in fact there are regional sales agents for retail, but they don't act as quite the career partner and liaison that you see with book author agents. More to the point, most indie developers don't even know these agents exist, and/or don't have any way to contact them. It basically requires having the right network connections, and a track record of existing success in the digital market. If you're unheard-of, then forget about it at present. So, the theoretical agents would be: 1) Available for submissions from all indies (acting as talent scouts), as with book publishing agents. 2) In exchange for an ongoing commission (10% to 15% of what the indie developer earns) but no upfront fees, will handle coordination of marketing, sales, business development, and so forth in both the digital and retail spaces. They would basically be the indie developer's business guide, advocate to business associates, and so on. Again, like book publishing agents. 3) These agents would be known and trusted by the distributors based on their past successes in finding diamonds in the rough that went on to be successful, and so if your game were to catch the eye of an agent that would be a sure-in to digital distribution at the least. These agents would be the valuable filters to the distribution pipeline, and the net effect in the short term would be to get more quality titles through. Fact is, a lot of indie developers themselves are not persistent enough or good enough at describing their work to catch the interest of distributors who have so much else to do. Right now, to get even digital distribution deals, you have to catch someone's eye at just the right time. Advertising/Marketing Agencies: These certainly exist, but these are vendors, not partners, in almost every case. They perform a service for a fee for anyone who can pay the fee, and then their work is done. 
They have no stake in the success of the product (they don't even have to like the product, let alone fully understand it), and they are inaccessible to anyone who can't afford the upfront costs. In the ideal model for the indie market, these agencies would be full partners on an individual game's public image, coordinated by the game agent. In other words, they would handle the details of advertising and marketing, as overseen by the game agent (who would also be responsible for selecting a good advertising/marketing agency), and they would not see any return on their work except a percentage of sales revenue. This could be anywhere from 5% to 10% of what the game makes in total. This is akin to having a talented in-house advertising/marketing staff, since they have a vested interest in making the products they represent quantifiably more successful. Investment Partners: The last big problem for indies is funding. Creating a game takes quite a long while to do, and this requires money for staff salaries, contractor pay, health insurance, licensing of development tools, possibly office space (for those that aren't virtual offices), and so on. Really polished indie games are likely to cost anywhere from $200,000.00 USD to $400,000.00 USD (or more) if the staff isn't being paid a pittance upfront in hopes of later return. The idea of the investment partners is that they would be contacted by the game agent, or directly by the indie developer in a few cases, and they would provide upfront funding for projects that can demonstrate their worth. That doesn't necessarily mean a detailed and rigid design document, given that part of the strength of an indie developer is their ability to innovate and use iterative design techniques. Rather, it's a matter of proving out the team and their ability to execute and refine solid ideas. That could be based on past products (spare-time, second-job type affairs), or based on a prototype. Some of the products an investment partner picks up will flop, as is the case with any such market, but the idea is that the investment partner is structured in such a way that even if 9 out of 10 indie projects they take on do not make a positive return, they as a company still see a positive return based on the 1 out of 10 that is a much larger success. With the advertising/marketing partners and game agents in place, the likelihood of success only goes up. Additionally, for the savvy investment partner that invests only in indie developers with a history of past success (whether that means commercially-speaking, or in terms of potentially marketable ideas brought to fruition, or whatever the criteria is), the return is likely to be much higher. I could see several tiers of investment partners forming, with higher-percentage-taking investors that might prefer higher-risk investments, and lower-percentage-taking investors that only the "premium" indie developers with mega-hits under their belts could have access to. Arcen would likely be slotted with an investment partner somewhere in the middle of that spectrum, for instance. What Would This Sort Of Indie Development Approach Really Change? It would provide a way for indie developers to start small and stay small, which has many advantages as noted in Justin Vincent's article. I am dismayed that so many indie developers basically create a pet project as a way to then launch into the larger industry. Indie development shouldn't just be about launching yourself into AAA development. 
I started indie, and I want to finish indie, if at all possible. What does indie really mean, anyway? It does mean smaller projects (not multi-million dollars), but also projects with a lot more creativity -- and therefore risk. It means having freedom to experiment, and less structure to the start of the development process. There is less known up front, and more discovered as you go. With expert developers, you'll get products that would never come about any other way. If you take Jonathan Blow, give him some money and some time, then stay out of his way, you can trust he'll come out with something amazing that he'd never create if he was working at someplace like EA. Is it possible to be an expert indie developer? You bet. This subject probably deserves a post all of its own, but for now let it suffice to say that there are examples all around the industry. Even just purely from an ROI standpoint for investors and partners, I think that's something worthwhile to cultivate in a more structured way from the business side of things (let alone all of the larger creative benefits to the industry as a whole). The goal is to give the indie development business more structure and security, have encapsulated partners who are better at handling the things that your average indie developer is poor at, and then just let the indie developers do their work. The investors and partners will of course want to keep an eye on the developers, especially the not-yet-fully-proven ones, to make sure they don't go off on a crazy tangent -- but otherwise they need to just trust that the developers are doing what they need to do to make an innovative product that people will want to buy. If a developer doesn't deliver, which occasionally one won't (these are humans we're talking about), then that developer will likely have an extremely hard time of ever getting significant funding again. So the onus is still on them to deliver a very solid product. In the end, you wind up with a team of companies that are all focusing on what they are best at, working together to create, deliver, and market products that are superior to what one of the companies could ever have hoped to achieve alone. In terms of percentages, each might have to take a lower amount than they might otherwise prefer (this especially applies to the indie developers themselves), but given the increase this is likely to have on sales volume, the actual amount of revenue received by each party would generally be higher in the end. All of the businesses I just described in my imagined indie ecosystem should be able to be profitable, even quite profitable, if they do their respective jobs well. And that's still with a $20 or less price point for the games, which is good for consumers. Also good for consumers: if indie developers have the time and money they need to be secure and just focus on creating amazing products, you'll see better products. That's a win for everyone. ## 4 comments: Only problem is that you need to find a way to stop the VCs from gaming the system with the recoupment rules like they do in the record industry (where recoupment is calculated based on the royalty rate payable to the artist rather than the net income generated by the record and where advances can be recouped against any other future albums). Otherwise dodgy accounting abounds and your indie developer will end up like the average record label artist and never see a single cent of their royalties. 
Well, no matter what your business is, you have to have confidence in your business partners, that is for sure. Having a good lawyer and accountant is pretty much a must. I really like the article, especially the part about the agents. This is something we really need, and something that shouldn't be that difficult to figure out, as, just as you pointed out, we do have a proven model for that in book publishing. On the other hand, about investors... As I see it, the problem here is that most investors would want some rights to have a say in what you do with their money, and then you're not that independent anymore. In fact, I'm afraid that this model would be almost the same as the traditional publisher / developer relationship was a few years ago (before publishers started to actually buy out most developers). Well, the tricky thing about investors is finding the right ones. I've worked with a number of small company investors in the past (indirectly, at least), and the right ones can be a real boon. The wrong ones, which you're likely to find as often as not I'm sure, can of course be nightmarish. I think that in my scenario it would be important for investment groups to basically become b2b brands based on their reputation in the industry, so that you'd know which ones you might want to work with (if they want to work with YOU, which is then always the next challenge as an indie). There's never a perfect solution, but I feel like an ecosystem like I described would get as close as possible to the ideal for companies that are not inherently self-funded in some manner. Post a Comment
true
true
true
Justin Vincent recently wrote a blog post about what he calls the Venture Matrix , which is basically a way to create a huge company out of ...
2024-10-12 00:00:00
2009-12-10 00:00:00
null
null
blogspot.com
christophermpark.blogspot.com
null
null
259,259
http://www.readwriteweb.com/archives/peering_into_microsofts_cloud.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,888,495
https://www.washingtonpost.com/opinions/2023/05/10/kinsey-institute-indiana-legislature-state-funds/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,315,465
http://www.jpl.nasa.gov/news/news.cfm?release=2011-372
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
374,534
http://online.wsj.com/article/SB122747680752551447.html?mod=djemITP
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,441,690
http://www.developingandrails.com/2015/01/the-ruby-and-ruby-on-rails-scene-in.html
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
14,080,968
https://www.usatoday.com/story/news/2017/04/08/hacker-triggers-all-156-emergency-sirens-dallas/100212412/
Hacker sets off all 156 emergency sirens in Dallas
Doug Stanglin
# Hacker sets off all 156 emergency sirens in Dallas Dallas city officials said Saturday that a hacker is to blame for setting off all the city's 156 emergency outdoor sirens, which wailed for an hour and a half overnight. Rocky Vaz, director of the city's Office of Emergency Management, said engineers determined an unidentified hacker somewhere in the Dallas area was responsible, but the hacker has not been tracked down. The hacker tricked the system into sending repeat signals, activating each siren 60 times during the night, Vaz said. The sirens started sounding at 11:42 p.m. Friday and continued until 1:17 a.m. Saturday. The blaring sirens, used primarily to warn of tornadoes and other severe weather, prompted anxious residents to call 911, clogging up that system. At one point, 911 calls were backed up for six minutes instead of the normal wait time of 10 seconds. Dallas spokeswoman Sana Syed said the 911 system was bombarded with over 4,400 calls between 11:30 p.m. and 3:00 a.m., double the number for a normal, eight-hour overnight period. Mayor Mike Rawlings said in a statement that the hack was an attack on Dallas' emergency notification system, and that the city will "find and prosecute whomever is responsible." "This is yet another serious example of the need for us to upgrade and better safeguard our city's technology infrastructure," he posted on Facebook. "It's a costly proposition, which is why every dollar of taxpayer money must be spent with critical needs such as this in mind. Making the necessary improvements is imperative for the safety of our citizens." Vaz said he expects the emergency siren system to be back in operation by Monday afternoon. Locating the hacker, however, "is going to be a very long process if we do find out who actually did it," Vaz said. The Federal Communications Commission is helping in the investigation, he added.
true
true
true
Sirens wailed for 90 minutes, prompting callers to overwhelm 911 lines.
2024-10-12 00:00:00
2017-04-08 00:00:00
https://www.usatoday.com…t=pjpg&auto=webp
article
usatoday.com
USA TODAY
null
null
6,635,051
http://www.insidehighered.com/news/2013/10/29/more-data-show-students-unprepared-work-what-do-about-it
More data show students unprepared for work, but what to do about it?
Allie Grasgreen
As more students have struggled to find a place in a depressed job market and questions about the employment value of a college degree have intensified, so too has concern that new graduates are not equipped to function in the work place and are not meeting employers’ expectations. A new survey reaffirms that quandary, but the group that commissioned it hopes the findings actually teach students something. “We’re going to go directly to students and help them understand what this gap is,” said Dan Rosensweig, president of the learning company and textbook rental giant Chegg, which runs a service connecting graduating high school students with colleges and scholarships. “We appreciate the fact that this dialogue is going on right now. We thought, however, that somebody really needed to frame what the issues really are and what is addressable, and help figure out the best way to address it.” In the report, "Bridge That Gap: Analyzing the Student Skill Index," only half of college students said they felt very or completely prepared for a job in their field of study. But even fewer employers – 39 percent of those surveyed – said the same about the recent graduates they’d interviewed in the past two years. Even wider gaps emerge when the survey zeroes in on about a dozen specific skills. Students and employers consistently disagreed on how prepared new graduates were to employ a dozen different “business basics.” Those include “creating a budget or financial goal” and “writing to communicate ideas or explain information clearly” (each shows a 22 percentage-point gap), and “organization” (25 percentage points). In the widest gap, at 27 percentage points, 77 percent of students but only half of hiring managers reported preparation for “prioritizing work.” Students fared the best at “making a decision without having all the facts.” About 47 percent of students said they were prepared to do that, and 37 percent of hiring managers said the same of recent graduates. Chegg surveyed about 2,000 18- to 24-year-olds enrolled in two- and four-year colleges, and 1,000 hiring managers. Rosensweig believes that higher education’s slow response to technological advancements and employers’ neither hiring nor training new graduates have contributed to a disconnect. “We think liberal arts is still incredibly important and incredibly valuable and the survey shows that. But it also shows we need to modernize some in how we’re teaching the curriculum,” he said. “We think that businesses, working with schools, can build that curriculum.” The information revealed in surveys such as Chegg’s has prompted colleges of all different types to come up with new and better ways to prepare students for careers and life after college. But just because people are only observing these differences now doesn’t mean they’re new, said Andy Chan, vice president for personal and career development at Wake Forest University. Researchers have only been tracking this sort of data for about five years. “I’ve heard this general complaint among employers that the students aren’t good enough – aren’t qualified enough – for a long time,” Chan said. The difference now is that the job market is “much tighter than it has ever been,” and at the same time students are either unwilling or unable to accept true entry-level positions that they view as dead-end jobs. Chan and others have argued that colleges aren’t doing enough to prepare students for the work force.
In most cases, career services is an isolated, overbooked office that can go underutilized or flat-out ignored, Chan said in a report he co-authored this year. Instead, colleges should be embedding career development into the fabric of undergraduate education. Not only would this better prepare students for life after college, it would help to justify the value of a liberal arts degree. Some colleges are adding programs in innovation and entrepreneurship. Summer business programs are growing in popularity. And other professional schools are doing more to provide co-operative curriculum development such as internships. The skills gap has also created an opening for new models like the Fullbridge Program and Dev Bootcamp that teach additional skills and traits such as business analysis, research, forward-thinking and persistence. For students who can afford to pay for them (Fullbridge costs between $5,000 and $10,000), these courses can provide a leg up in the interview process. The Association of American Colleges and Universities, meanwhile, started the LEAP (Liberal Education and America’s Promise) Employer-Educator Compact, an initiative seeking to ensure students get the experiences and knowledge base they need to succeed in the work place. On Monday in Boston, AAC&U and LEAP co-sponsored one of several regional forums for educators, employers and policymakers to “chart a plan of action” for creating more successful college-to-career pathways. While the prolonged economic recession has caused hard times to fall on graduates of all types of institutions, liberal arts education has faced particular scrutiny from the public, media and politicians. But Chan notes that some research has found those students are actually better-skilled in what the Chegg report deems “office street smarts.” A survey out of Michigan State University’s Collegiate Employment Research Institute found that the people interviewing liberal arts students for jobs believe recent graduates have the work place competencies they need but cannot articulate or demonstrate those abilities, and that they lack several key technical and professional skills. While arts and sciences students ranked higher than their peers in skills including working in a diverse environment, communication and innovation, they lagged behind in areas such as utilizing software, analyzing, and evaluating and interpreting data. The Chegg survey found that science, technology, engineering and mathematics students were “slightly better prepared” than their peers. Those students fared better among employers in skills including preparedness to explain information and preparedness to solve problems through experimentation. The findings contain a lesson for colleges, students and employers, Rosensweig says. Colleges need to make sure their curriculums align with the way companies work today, with fast-paced technology and social media changing data collection and communication. Employers should articulate to colleges what they’re looking for in employees, and help make sure that what they’re teaching is useful. And students shouldn’t just take what’s handed to them in the classroom; they should do all they can to supplement their education with additional skill-building. “Because of the global economy, unemployment and high tuition, part of the responsibility of all of us should be to make sure students are qualified to get career-based jobs,” Rosensweig said.
true
true
true
New survey shows students think they're more prepared for the work force than employers believe they are. The question, its authors ask, is what will anyone do about it?
2024-10-12 00:00:00
2013-10-29 00:00:00
https://www.insidehigher…-no-wordmark.png
article
insidehighered.com
Inside Higher Ed | Higher Education News, Events and Jobs
null
null
2,683,281
http://www.entrepreneur.com/article/219847
From Paycheck to Pay Dirt: Blazing Your Own Trail As a Business Owner | Entrepreneur
Gwen Moran
# From Paycheck to Pay Dirt: Blazing Your Own Trail As a Business Owner First-time business owners starting a venture in unfamiliar waters face a special set of challenges. Here are three who tackled them with success. By Gwen Moran Edited by Frances Dodds Opinions expressed by Entrepreneur contributors are their own. Eric Beverding had been a club auto-racing fan for years, thanks to his wife, Dacia Rivers, a lifelong race car enthusiast. Beverding was working in production at a boutique political advertising agency in Austin, Texas, when he and Rivers finally decided to place their bets on their hobby. Although they knew they wanted to start a club sport track, they also knew they had to be sure that the area would support such a business. "My wife's family is in automotive retail, and they had customers telling them over and over that they were tired of driving and having to spend the night or spend a thousand dollars to go away for a weekend to a track to have fun in their cars," Beverding says. He looked at the radius of influence for the few club-based tracks in the area and found that the nearest one was still far enough away that it wouldn't affect his location. He was right. He and Rivers opened Harris Hill Road (H2R) in June 2008 in nearby San Marcos, Texas. It takes guts to leave a steady job with a big paycheck to launch an entirely new venture. Babson College entrepreneurship professor Dennis Ceru cautions that there are some solid best practices that should be followed before jumping ship from steady employment into uncertain waters. The rest of this article is locked. Join Entrepreneur+ today for access. Already have an account? Sign In
true
true
true
First-time business owners starting a venture in unfamiliar waters face a special set of challenges. Here are three who tackled them with success.
2024-10-12 00:00:00
2011-06-21 00:00:00
https://assets.entrepren…t=pjeg&auto=webp
article
entrepreneur.com
Entrepreneur
null
null
2,563,376
http://www.hackplanet.in/2010/03/list-of-all-ip-addresses-you-need-not.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,671,519
https://mail.gnome.org/archives/gimp-developer-list/2017-February/msg00034.html
[Gimp-developer] GEGL-0.3.12 and babl-0.1.24
null
# [Gimp-developer] GEGL-0.3.12 and babl-0.1.24

*From*: Øyvind Kolås <pippin gimp org>
*To*: gimp-developer <gimp-developer-list gnome org>, gegl-developer-list <gegl-developer-list gnome org>
*Subject*: [Gimp-developer] GEGL-0.3.12 and babl-0.1.24
*Date*: Mon, 13 Feb 2017 21:36:52 +0100

GEGL provides a node/graph based API and framework for cached, interactive, non-destructive image processing. GEGL's data flow image processing graphs are used by GIMP and other software like gnome-photos, imgflo and iconographer.

Highlights of changes in this release:

Core:
- less locale dependent serializations/parameters
- fix local raw file detection of ARW and CR2 files
- gegl_memset_pattern performance improvement
- clean up the way we drop references and free memory
- static caching of some frequently used babl formats/types
- mipmap preview render code fixes for the following subset of operations: point operations (filter and composer subclasses), integer translate, crop

Operations:
- new ops: edge-neon, image-gradient, slic, wavelet-blur, waterpixels, watershed
- moved from workshop to common: color-warp, component-extract
- text: remove now unneeded work-around, ability to control vertical positioning, permit <1.0 font-sizes, handle text-color alpha, other improvements
- lens-distortion: default to transparent background
- crop: bounding box computation simplifications
- noise-rgb: add gamma and distribution properties
- dither: renamed from color-reduction and improved ui/property controls
- high-pass: do inversion, over and contrast in non-linear RGB
- noise-rgb: new linear and gaussian properties
- transform: added a clip-to-input property
- raw-load: improvements to handling of Sony's ARW files
- exposure: replaced offset with black-level
- moved from common to workshop: bilateral-filter-fast
- new workshop ops: bayer-matrix, linear-sinusoid, shadows-highlights, integral-image, segment-kmeans
- removed ops: gaussian-blur-old

To build gegl-0.3.12 you will also need babl-0.1.24, which has recently been released with various new performance short-cuts and profiling cache improvements.

This release of GEGL was brought to you through contributions from: Piotr Drąg, Marco Ciampa, Sergey "Shnatsel" Davidoff, Ell, Michael Hennig, Anders Jonsson, Christian Kirbach, Øyvind Kolås, Thomas Manni, Jordi Mas, Michael Natterer, Jon Nordby, Peter O'Regan, Jehan Pagès, Sebastian Rasmussen, Debarshi Ray, Dimitris Spingos (Δημήτρης Σπίγγος), Martin Srebotnjak, Elle Stone and Miroslav Talasek.

Where to get GEGL: the latest versions of GEGL and its hard dependencies babl and glib can be fetched from:

http://download.gimp.org/pub/babl/0.1/babl-0.1.24.tar.bz2
http://download.gimp.org/pub/gegl/0.3/gegl-0.3.12.tar.bz2

SHA256 sums of the released tarballs:

472bf1acdde5bf076e6d86f3004eea4e9b007b1377ab305ebddec99994f29d0b babl-0.1.24.tar.bz2
62eb08d7dd6ac046953a0bf606a71f9d14c9016ffef4ef7273b07b598f14bec7 gegl-0.3.12.tar.bz2

More information about GEGL can be found at the GEGL website, http://gegl.org/ or by joining #gegl and #gimp on the GIMPnet IRC network.

Have fun coding and image processing
Øyvind Kolås -- http://pippin.gimp.org/
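Since the announcement publishes SHA256 sums for the tarballs, here is a small Python sketch for checking a downloaded tarball against them before building. The file names and checksums are taken from the announcement above; everything else (running it from the directory holding the tarballs) is an assumption for illustration.

```python
# Verify the downloaded release tarballs against the SHA256 sums published
# in the announcement above. Assumes the tarballs sit in the current
# working directory; adjust the paths as needed.
import hashlib

EXPECTED = {
    "babl-0.1.24.tar.bz2": "472bf1acdde5bf076e6d86f3004eea4e9b007b1377ab305ebddec99994f29d0b",
    "gegl-0.3.12.tar.bz2": "62eb08d7dd6ac046953a0bf606a71f9d14c9016ffef4ef7273b07b598f14bec7",
}

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so large tarballs don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    status = "OK" if sha256_of(name) == expected else "MISMATCH"
    print(f"{name}: {status}")
```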
true
true
true
null
2024-10-12 00:00:00
2017-02-13 00:00:00
null
null
null
null
null
null
1,774,770
http://www.aonergy.com
aonergy.com - Domain Name For Sale | Dan.com
null
# Aonergy.com is for sale ### Aonergy.com #### We get these questions a lot Your burning questions about domain sales, answered. ##### How does your domain ownership transfer process work? No matter what kind of domain you want to buy or lease, we make the transfer simple and safe. It works like this: **Step 1: You buy or lease the domain name** You will find the available purchasing options set by the seller for the domain name **Aonergy.com** on the right side of this page. **Step 2: We facilitate the transfer from the seller to you** Our transfer specialists will send you tailored transfer instructions and assist you with the process to obtain the domain name. On average, within 24 hours the domain name is all yours. **Step 3: Now that the domain is officially in your hands, we pay the seller.** And we’re done! Unless you require our assistance. Our transfer team is available for free post-transfer assistance.
true
true
true
I found a great domain name for sale on Dan.com. Check it out!
2024-10-12 00:00:00
2024-01-01 00:00:00
https://cdn2.dan.com/ass…8a371dbacfa5.png
product
dan.com
Dan.com
null
null
17,822,786
https://www.reuters.com/article/us-china-politics/chinas-xi-says-internet-must-be-clean-and-righteous-idUSKCN1L71CG
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,337,083
https://cointelegraph.com/news/8-major-bitcoin-debit-cards-how-private-and-anonymous-are-they
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,404,692
https://svgwave.in/
Svg Wave - A free & beautiful gradient SVG wave Generator.
null
null
true
true
false
SVG Wave is a minimal gradient SVG wave generator with a lot of customization. It lets you generate and export PNGs and SVGs of beautiful waves. SVG Wave also lets you layer multiple waves. Create SVGs for your website designs.
2024-10-12 00:00:00
null
https://svgwave.in/image…0ea67a8-3-01.svg
website
svgwave.in
A free & beautiful gradient SVG wave Generator.
null
null
18,296,943
https://nwn.blogs.com/nwn/2018/10/social-vr-comparison-chart-ryan-schultz.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,091,962
http://exploited.cz/flash/
Chrome click-to-play bypass
null
The FlashControl extension can be easily bypassed using the `.click()` trick. The FlashBlock extension (an older version?) can also be bypassed using the same trick. Chrome's click-to-play setting can be bypassed as well; see the related Chrome issue. As per the issue's last comment, Chrome is removing *left-click-to-play* in version 41. Screenshot from Chrome 46.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
6,267,522
http://lisavandamme.com/we-dont-talk-about-college/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,587
http://www.readwriteweb.com/archives/web_20_expo_all_things_widgets.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,254,926
https://www.berationable.com/rationable-blog/2019/6/11/what-is-evidence-part-1-randomised-controlled-trials
What is Evidence? Part 1: Randomised controlled trials — Rationable
Abhijit Chanda
# What is Evidence? Part 1: Randomised controlled trials Randomised Controlled Trials (RCT) are one of the most important methods of testing medicine, diets, exercises, and much more. We use it on animals as well as humans to figure out if the thing we are testing works. Basically, most things that apply to clinical settings in regards to health. Their results can be an excellent form of evidence if the study is conducted on a large number of people and over a relatively long period. Most importantly, it’s great to know if you’re debating a claim with someone and they start throwing clinical studies at you. This post will tell you how to investigate what they’re claiming and figure out for yourself if they’re full of crap…or not. I’ve mentioned RCTs in several of my posts already. I’ve usually given a brief description of what one is, but since I can’t go into detail, I thought I’d go into a bit more detail here. Of course, I’m oversimplifying the process, but it’s to keep everything simple and easy to understand. I’ll link other sources where you can go in depth. # First, a little story One can take a pretty long ride back through history to find some form of a clinical trial being tried out millennia ago. I think it was even mentioned in the Bible! But let’s not pretend that gives it any more credibility, okay? The earliest record of a real-life RCT was back in the 1700s. The primary way of getting around the world was by ship. Unfortunately, on long voyages, many of the sailors were getting very unwell and even dying. Their teeth would fall out, old wounds would reopen, and their joints would swell up painfully. The disease was called scurvy. James Lind, a young surgeon, looked into the matter. He had a hypothesis that acids could help the condition. He wasn’t the first to think so since it was known that citrus fruits helped the sailors with such symptoms. But it hadn’t become widespread for some reason. So, he decided to go along on a voyage. After about two months at sea, the sailors started to show symptoms of scurvy. He divided the twelve sailors into six groups of two each. They all were given the same diet, but with one difference. One group was given cider, the second got twenty-five drops of “elixir of vitriol”. Don’t be fooled by that fancy name. It was diluted sulphuric acid. The third for vinegar, the fourth got seawater, the fifth got two oranges and one lemon, and the final group got a spicy paste and a drink made with barley water. The test ran for six days before the citrus fruits ran out. They were quite expensive at the time. Fortunately, the two sailors, in the group with the oranges and a lemon, had recovered almost to full strength by then. The only other ones that had seen some benefit was the first group with the cider, but only by a bit. No one else got better. He published his work called *The Treatise of the Scurvy*, but it initially got utterly ignored. However, over time, as more officials of the navy saw his idea worked, lime juice started becoming more of a staple on the navy ships of the time and scurvy rates dropped among sailors. # Starting an RCT It all starts with an idea - a hypothesis. This is probably the easiest part and the place where most science begins. You see the world around you, you recognise trends and wonder if there’s a connection. Just like James Lind did. There had been observations and associations had been made that citrus fruits helped the symptoms of scurvy. He wanted to figure it out for sure. 
Was it just a coincidence, a correlation, or causation? Is it just the lemon, anything sour, or something else? So, say there is a new medicine. A lab somewhere has had some promising findings in basic research on different chemicals they’re looking at to fight a particular disease. They’ve just put the bacteria causing the illness into a Petri-dish and added different chemicals to see what happens. And voila! It kills the bacteria! But that doesn't mean all that much in the big scheme of things. Is the chemical they used poisonous to other living things? Or is it well tolerated? Does it have side effects? How will it get to the disease in a living subject? Just a few good questions are asked, and you have the premise for a good experiment. The first stage is for this medicine to go through animal trials. We do RCTs here too. But let’s make things exciting and presume the rats did well on the meds and it’s time to see if it works in humans also. Note: Just because a trial succeeded in mice or rats, it doesn’t mean they’ll have the same effect on humans. # Randomised We take a sample of the population now. We select a bunch of people with this disease, the more we can include, the better. Why? Because everyone is different. If we want to see if this medicine works on most people, we need to take as many people as we can to see the trends more clearly. Now, these people should ideally be selected across races, ages and genders. Of course, if the researcher wants to focus on a particular subset of the population, they need to mention it clearly in the paper. Now, this bunch of people get randomly assigned to two or more groups. For the sake of simplicity, let’s stick with two. # Controlled One group, called the control group, gets a placebo of some sort that looks almost identical to the actual medicine. Think of them as the baseline reading. The real medicine is given to the other group. Placebos can just be sugar pills, flavoured syrups, fake injections, sham surgeries or any other intervention that only resembles the treatment being tested but isn’t meant to work. The point is to make the placebo identical to the treatment being tested except it excludes the active ingredients. Depending on the context of the study, the control group could also be given nothing. Another group could also be added in other settings where a new drug is being compared to an already well-established medicine with well-documented results. # Blinded and Double Blinded Whenever a test is run, it’s essential that it be blinded; otherwise, the biases of the candidates and the scientists could skew the results. Let’s first sort out the terms. A blinded trial is when the candidates in the experiment don’t know if they’re getting the real medicine or a placebo. But the scientists and analysts do. In a double-blinded trial, neither the candidates nor those conducting the test know who got the medicine, and who the placebo. But the analysts know. A triple blinded trial is where no one involved in the experiment knows who gets which medicine. If you think about it, if the patients have a preconceived notion that a medicine will work, they may feel better for a bit or claim to feel better even if they aren’t to show the drug in a better light. If the scientists conducting the study know which group gets the medicine and they have a bias towards the medication, they may record more positive results and fewer negative ones. This may not even be a conscious effort, but it can happen subconsciously. 
That’s how easy our brains are to fool. And that defeats the purpose of an RCT which is to find the results without any influences from biases. The example I wrote about in my article on homoeopathy on a study by Jacque Benveniste illustrates this quite well. # Analysis So now that the test has been conducted, the data is analysed to check the results. A trial could be unblinded at this stage or the next one. Unblinding is what it sounds like. Basically, we lift the veil and reveal which patient has been given which treatment. If the results of improvement are similar to placebo, we say the medicine is no more effective than a placebo. Check out the homeopathy article for that example. Beyond that, the effect can be measured. If the results are positive with minimal side effects, the chances are good that there will be further trials. And make no mistake, there will be many, many more trials – ranging in demographics, duration and number of patients – that have to be conducted before a drug can hit the pharmacy shelves. Each effect and side effect has to be recorded meticulously and repeatedly so the doctor can compare the benefits to the risks to give their patients the best prescription possible. # Problems with RCTs Of course, it’s not a perfect system, but it’s quite indispensable as a part of the process in modern medicine. The problems arise when the blinding isn’t thorough so biases creep in, or if a paper doesn’t get published if the results aren’t in favour of the medicine. Negative results are being suppressed in many cases. There can also be problems in conflicts of interest, which basically means the people funding the study want the scientists to publish only a positive result regardless of the real findings. Another problem could be the manipulation of data to get a particular p-value - a measure of statistical significance. This term is best explained by Steven Novella, an academic clinical neurologist at Yale University School of Medicine, host of The Skeptic’s Guide to the Universe, and author of a book by the same name: The primary method for determining significance is the P-value – a measure of the probability that the results obtained would deviate as much as they do or more from a null result if the null hypothesis were true. This is not the same as the probability that the hypothesis is false, but it is often treated that way. Also, studies often assign a cutoff for “significance” (usually a p-value of 0.05) and if the p-value is equal to or less than the cutoff, the results are significant, if not then the study is negative. He goes on to explain p-hacking: This is the practice of tweaking the choices a researcher makes in terms of how to gather and analyse data in order to push the results over the magic line of significance. Many researchers admit to behaviour that amounts to p-hacking. Further, when published results are analysed, they tend to suspiciously cluster around the cutoff for statistical significance. # How to make the most of RCTs RCTs are an indispensable tool to discover the real effects of medicines, diets and lots more. This is a basic form of preliminary evidence. Many of the flaws can even be overcome if other scientists review the work (peer review) and analyse the results to check for mistakes or shoddy calculations. Other researchers could try replicating the experiment to see if it works or not. # But what does this mean for you? Finding evidence for claims through RCTs can be a massive help in finding the facts. 
But it’s not easy for most laypeople to understand RCTs and what they imply. There’s a lot of jargon thrown in usually, and the statistical references can fly over your head if you’re not well versed with the subject. Worse yet, many clinical studies are stuck behind a paywall and meant for academic purposes only. Then you’ll have to rely on other sources, which I’ll mention a bit later. An excellent site for finding whole RCTs for free is Pubmed. I’m sure you’ve heard of it. Pubmed has become an excellent repository for a lot of studies across a wide range of fields - over 29 million actually. But just because it’s on Pubmed doesn’t mean its a well-designed study. That’s why you need to be able to evaluate them for yourself to figure out if a study is legit or not. Here are a few things you can do to start off with. Consider this a noob’s guide to figuring out RCTs. Ask the following questions of the RCT and figure out the answers: **How large was the study?**The higher the number of participants, the more reliable and universally applicable a study is.**How long was the study conducted?**Similar to the earlier point, the longer a study is done, the more information we have of long term effects of the experiment. For example, if it’s a diet, we can measure metabolic impacts over the long term and how sustainable the diet is.**How many people left?**Seeing if any of the participants left the experiment can also have hidden clues. It’s not unusual for people to drop out from an experiment, but do the researchers report why they left? If they do, it could say a lot about unforeseen problems or side-effects of the thing being tested. If not, something fishy could be going on.**Who participated?**The population tested could also have ramifications on how relevant the study is to you. Was it conducted only on seasoned athletes or diabetics or males aged over 65 with a history of prostate cancer? Are any of these demographics similar to yours? If not, it’s possible the study isn’t relevant for other groups. There is a chance there is a broader relevance, but that has to be clearly stated in the report.**How were the groups split up?**Understanding the methodology of the study can also prove vital. This will take a bit of training though, but it’s worth delving into. I’ll mention books you can read to help you here in the Further Reading section.**What were the results and how were they interpreted?**This is where the evidence becomes more explicit. Most of this is usually written in much clearer language and is relatively more easy to understand. But look out for terms like, “further research is required”. This is the researchers clearly stating their study is still preliminary and more needs to be done to really understand how everything is working. ## Pro tip #1 If you can only find studies trying to evaluate the safety of a particular chemical, there is a good chance that the product is alternative medicine. This is because many alternative and complementary meds (CAMs) try to go through the US Food & Drug Administration’s guideline loophole that implies you can sell unproven treatments in the form of supplements provided you prove they are safe for human consumption. Clever, right? ## Pro tip #2 Many people will possibly quote studies when you’re in a discussion regarding a specific claim. Use what you’ve learned from this article to investigate further. 
If you find the research was done on mice or earthworms or something, you can safely respond with a “this is interesting, but since it’s not on humans, nobody can use this as proof that your claim is valid.'' In other words, “you’re full of shit”. ## Noob tip If you can’t figure out what’s going on, or if you’re just starting out and need guidance, try and look for the references to a study or a claim through reliable sources like science magazines, WHO, US and UK government websites, NASA, or even Wikipedia. I’m going to be writing in more detail about Wikipedia in the next article in this series, but until then, just know that it’s a significant first step to figuring out any topic. Plus, all the citations are listed so you can go deeper into a subject by checking those out. Some great websites and blogs you should also search on are Science Based Medicine, Quackwatch, Skeptic, Neurologica and Snopes. # Conclusion If you are trying to figure out whether a claim is valid or not, RCTs are definitely useful. But it’s just one tool in a toolbox of instruments that can help you dig deeper into a topic to understand the facts, which we will continue to discuss here on Rationable. Finding the facts is not an easy task. I have been teaching myself these tools for over a decade now. I’ve picked them up from scientists and other science communicators and sceptics who use them regularly. I started at a point where only the abstract of a study was all I could understand. Now, I’m no scientist, but a lot of other pieces have fallen into place. This is one of the main reasons Rationable exists. I wanted to show you that scientific thinking and evidence is something laypersons can learn and use too. You don’t have to be a scientist to test people’s claims and understand science. I’ve taught myself this process and it has been an incredibly empowering experience, not to mention awe-inspiring. As I said, finding the truth is hard. And that’s why so many people don’t do it. So many of us believe a Whatsapp forward from a close relative or a friend or sibling we respect. They become our trusted sources because we trust them. But are they objectively reliable sources of information? No! For that matter, neither am I. I am just as biased as anyone else. I get my news from secondary and tertiary sources. Each and every one of us is biased, and we naturally want to agree with the information that fits those biases and our belief systems. We need to understand this, accept it and actively work against those instincts to find evidence that could contradict them. At the end of the day, we need to follow the evidence, not our biases. That’s why I want you to fact-check everything I say. The claims I make are not made from my expertise but rather from all the sources I get them from. And those too are secondary sources most of the time. But I link you back to them in the article and in recommended reading at the end of each post so that you can go back and check them out and check where they got their facts from. Randomised Controlled Trials, though are an effort to minimise our biases. Making them blinded and subjecting them to peer review makes them even more reliable. James Lind fashioned his experiment based purely on common sense, and it worked. Now, that has become a relatively crude experiment as we have continued to refine the process to make the results progressively more objective, free from bias and ethical. I’m quite sure the process will continue to be refined to minimise the problems it faces now. 
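Before wrapping up, here is one way to see the p-hacking problem from the earlier section for yourself. This is only a toy simulation I put together for illustration - the group sizes, the peek-every-ten-participants schedule and the use of a t-test are my own assumptions, not taken from any real trial - but it shows how peeking at the data and stopping as soon as p drops below 0.05 produces far more false positives than the advertised 5%.

```python
# Toy illustration of "optional stopping", one common form of p-hacking.
# Both groups are drawn from the SAME distribution, so every "significant"
# result is a false positive. Checking the p-value after every batch of
# participants and stopping as soon as p < 0.05 inflates the false-positive
# rate well beyond the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def one_study(peek, max_n=200, batch=10):
    """Simulate one two-arm study; return True if it ends up 'significant'."""
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0.0, 1.0, batch))
        b.extend(rng.normal(0.0, 1.0, batch))
        if peek and ttest_ind(a, b).pvalue < 0.05:
            return True  # stopped early and declared a (false) positive
    return ttest_ind(a, b).pvalue < 0.05  # single test at the planned sample size

n_studies = 1000
with_peeking = sum(one_study(peek=True) for _ in range(n_studies)) / n_studies
without_peeking = sum(one_study(peek=False) for _ in range(n_studies)) / n_studies
print(f"false-positive rate with peeking:    {with_peeking:.1%}")    # typically well above 5%
print(f"false-positive rate without peeking: {without_peeking:.1%}")  # close to the promised 5%
```

The exact numbers will vary from run to run, but the gap between the two rates is the point.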
Have I got something wrong? Did I miss a detail? Let me know in the comments. If you enjoy this content, please bookmark it, subscribe to the RSS feed and share it with your friends. Your support keeps Rationable going. # References & Further Reading If you want to buy the books, I’d appreciate it if you could use the links I’ve added below. I’ll get a small commission which supports Rationable and everything that it does.
true
true
true
Want to figure out for yourself if a claim’s full of crap…or not? This is the first step.
2024-10-12 00:00:00
2019-06-11 00:00:00
http://static1.squarespace.com/static/5cdbf3470490793aafa79dfe/5cdc0a30c1c75e54743f5c4c/5cff963f5eb40900018b74cc/1598810615479/shutterstock_1016542360.jpg?format=1500w
article
berationable.com
Rationable
null
null
26,397,805
https://history.stackexchange.com/questions/63102/why-would-silk-underwear-disqualify-you-from-the-united-states-military-draft
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
918,750
http://itknowledgeexchange.techtarget.com/storage-soup/reports-resurface-of-emccisco-joint-venture/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,622,096
https://xn--gckvb8fzb.com/lets-take-wikipedia-offline/
Let’s Take Wikipedia Offline!
null
# Let's Take Wikipedia Offline!

Well, not literally. But what if you could look up Wikipedia in an instant, without even requiring a working internet connection? Call it doomsday prepping or just a precaution, but let's see if we can have our own copy of Wikipedia running offline, without having too much trouble along the way.

### Update 2024-09-12

The setup described here does not work anymore with the latest version of Quickwit.

As some of you might know, up until the global situation went sideways I travelled quite a lot and hence had to deal with all sorts of things that people working stationary usually don't experience. One of these things is bad internet. Depending on where in the world I was, I either had superb internet speeds and stable connections (<3 Seoul) or, well, not so much. Back at the time I took care to have documentation stored offline on my machine, so that I wasn't dependent on connectivity to be able to work. However, I oftentimes ended up in situations in which I needed to look up *some documentation on real life*, for which I use Wikipedia most of the time. Unfortunately, crawling Wikipedia's HTML for offline use is not feasible - and it's not even necessary, as Wikipedia offers database dumps for everyone to download for free. Unfortunately, these database dumps aren't exactly *browsable* the way they're offered by Wikipedia. Luckily, there are ready-to-use apps (like Kiwix, Minipedia, XOWA, and many more) that *try* to offer an offline version of Wikipedia, either based on these dumps or through other means, but they're all quite cumbersome to use and in parts have some pretty terrible prerequisites, like for example Java. I was looking for a more lightweight approach that integrated well into my workflow – which is terminal-based – and doesn't end up eating more storage than the actual dump itself, which at the time of writing is 81GB in total (uncompressed). A year ago I tried this experiment once and used a tool called `dumpster-dive` to load the Wikipedia dump into a MongoDB and access it using uveira, my own command line tool for that. While that solution was pretty good, I ended up with a 250GB database back at the time, which had to be stored somewhere. At some point, it just became too impractical to deal with. So today I thought it might be a great day to try a different approach.

## Downloading Wikipedia

First, let's download the latest XML dump of the English Wikipedia. We're going to use `wget -c` here, so that we can continue a partial download, just in case our internet connection drops.

```
wget -c \
  'https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2'
```

Now that we have downloaded the archive, we can decompress it. Although the converter that we're going to use can handle bzip2, I wanted to be able to compare the size of the uncompressed file to the *database* that we'll be using later on.

```
bzip2 -d ./enwiki-latest-pages-articles.xml.bz2
```

This took about half an hour on my workstation. Next, let's install the converter. It is a script that comes with the `gensim` Python package, so you'll need that on your system. Ideally you should be working in a `virtualenv`.

```
pip install gensim
```

Update: You might need to downgrade `scipy` to the last version supporting `triu`: `pip install "scipy<1.13"`

The gensim package includes a script named `segment_wiki`, which allows us to easily transform the XML to JSON.
``` python \ -m gensim.scripts.segment_wiki \ -i \ -f enwiki-latest-pages-articles.xml \ -o enwiki-latest-pages-articles.json ``` On my machine I got roughly 50,000 articles per minute. I kept the default for `-w` (the number of workers), which is 31. At the time of writing, Wikipedia consists of 6,424,367 articles, meaning it should take around two hours to convert all articles to JSON. However, it ended up taking less time in total, as a bit of the content was skipped. I didn’t bother to check in-depth, but 5,792,326 out of 6,424,367 didn’t sound too bad at all. ``` 2021-12-19 11:49:36,808 - segment_wiki - INFO - running /home/mrus/.virtualenvs/local3.9/lib/python3.9/site-packages/gensim/scripts/segment_wiki.py -i -f enwiki-latest-pages-articles.xml -o enwiki-latest-pages-articles.json 2021-12-19 11:51:34,063 - segment_wiki - INFO - processed #100000 articles (at 'Maquiladora' now) 2021-12-19 11:53:26,704 - segment_wiki - INFO - processed #200000 articles (at 'Kiso Mountains' now) 2021-12-19 11:55:09,739 - segment_wiki - INFO - processed #300000 articles (at 'Yuncheng University' now) 2021-12-19 11:56:38,493 - segment_wiki - INFO - processed #400000 articles (at 'Georgia College & State University' now) 2021-12-19 11:58:02,725 - segment_wiki - INFO - processed #500000 articles (at 'Stephen Crabb' now) 2021-12-19 11:59:26,795 - segment_wiki - INFO - processed #600000 articles (at 'Ann Jillian' now) 2021-12-19 12:00:43,973 - segment_wiki - INFO - processed #700000 articles (at 'Patti Deutsch' now) 2021-12-19 12:02:04,450 - segment_wiki - INFO - processed #800000 articles (at 'George J. Hatfield' now) 2021-12-19 12:03:19,111 - segment_wiki - INFO - processed #900000 articles (at 'Baya Rahouli' now) 2021-12-19 12:04:34,628 - segment_wiki - INFO - processed #1000000 articles (at 'National Institute of Technology Agartala' now) 2021-12-19 12:05:47,267 - segment_wiki - INFO - processed #1100000 articles (at 'The Cost (The Wire)' now) 2021-12-19 12:06:53,764 - segment_wiki - INFO - processed #1200000 articles (at '1026 Ingrid' now) 2021-12-19 12:08:05,158 - segment_wiki - INFO - processed #1300000 articles (at '85th Street (Manhattan)' now) 2021-12-19 12:09:19,355 - segment_wiki - INFO - processed #1400000 articles (at '1914 Penn State Nittany Lions football team' now) 2021-12-19 12:10:31,633 - segment_wiki - INFO - processed #1500000 articles (at 'Dave Hanner' now) 2021-12-19 12:11:44,928 - segment_wiki - INFO - processed #1600000 articles (at 'Swampwater' now) 2021-12-19 12:12:49,631 - segment_wiki - INFO - processed #1700000 articles (at 'Sri Lanka Army Medical Corps' now) 2021-12-19 12:14:02,378 - segment_wiki - INFO - processed #1800000 articles (at 'Yannima Tommy Watson' now) 2021-12-19 12:15:17,821 - segment_wiki - INFO - processed #1900000 articles (at 'What Rhymes with Cars and Girls' now) 2021-12-19 12:16:39,472 - segment_wiki - INFO - processed #2000000 articles (at 'Dark Is the Night for All' now) 2021-12-19 12:17:56,125 - segment_wiki - INFO - processed #2100000 articles (at 'Russ Young' now) 2021-12-19 12:19:03,726 - segment_wiki - INFO - processed #2200000 articles (at 'Tyczyn, Łódź Voivodeship' now) 2021-12-19 12:20:17,750 - segment_wiki - INFO - processed #2300000 articles (at 'Sahara (House of Lords album)' now) 2021-12-19 12:21:27,071 - segment_wiki - INFO - processed #2400000 articles (at 'Limburg-Styrum-Gemen' now) 2021-12-19 12:22:30,234 - segment_wiki - INFO - processed #2500000 articles (at 'Bogoriella' now) 2021-12-19 12:23:51,045 - segment_wiki - INFO - processed 
#2600000 articles (at 'Laurel and Michigan Avenues Row' now) 2021-12-19 12:25:05,416 - segment_wiki - INFO - processed #2700000 articles (at 'Kessleria' now) 2021-12-19 12:26:20,186 - segment_wiki - INFO - processed #2800000 articles (at 'EuroLeague Awards' now) 2021-12-19 12:27:34,021 - segment_wiki - INFO - processed #2900000 articles (at 'A.K.O.O. Clothing' now) 2021-12-19 12:28:58,561 - segment_wiki - INFO - processed #3000000 articles (at 'Česukai' now) 2021-12-19 12:30:24,397 - segment_wiki - INFO - processed #3100000 articles (at 'Program 973' now) 2021-12-19 12:31:43,178 - segment_wiki - INFO - processed #3200000 articles (at 'Dingden railway station' now) 2021-12-19 12:33:04,632 - segment_wiki - INFO - processed #3300000 articles (at 'Nagareboshi' now) 2021-12-19 12:34:23,664 - segment_wiki - INFO - processed #3400000 articles (at 'Anton Lang (biologist)' now) 2021-12-19 12:35:45,825 - segment_wiki - INFO - processed #3500000 articles (at 'Opera (Super Junior song)' now) 2021-12-19 12:36:58,564 - segment_wiki - INFO - processed #3600000 articles (at 'Mycena sublucens' now) 2021-12-19 12:38:22,892 - segment_wiki - INFO - processed #3700000 articles (at 'Man Controlling Trade' now) 2021-12-19 12:39:47,713 - segment_wiki - INFO - processed #3800000 articles (at 'Marwan Issa' now) 2021-12-19 12:41:07,354 - segment_wiki - INFO - processed #3900000 articles (at 'Anita Willets Burnham Log House' now) 2021-12-19 12:42:22,563 - segment_wiki - INFO - processed #4000000 articles (at 'Robert Bresson bibliography' now) 2021-12-19 12:43:44,812 - segment_wiki - INFO - processed #4100000 articles (at 'Ainsworth House' now) 2021-12-19 12:45:07,713 - segment_wiki - INFO - processed #4200000 articles (at 'Gohar Rasheed' now) 2021-12-19 12:46:31,470 - segment_wiki - INFO - processed #4300000 articles (at 'C1orf131' now) 2021-12-19 12:47:53,227 - segment_wiki - INFO - processed #4400000 articles (at 'Commatica cyanorrhoa' now) 2021-12-19 12:49:17,669 - segment_wiki - INFO - processed #4500000 articles (at 'Personal horizon' now) 2021-12-19 12:50:54,496 - segment_wiki - INFO - processed #4600000 articles (at 'Berteling Building' now) 2021-12-19 12:52:17,359 - segment_wiki - INFO - processed #4700000 articles (at 'Nyamakala' now) 2021-12-19 12:53:41,792 - segment_wiki - INFO - processed #4800000 articles (at "2017 European Judo Championships – Men's 81 kg" now) 2021-12-19 12:55:16,781 - segment_wiki - INFO - processed #4900000 articles (at 'Matt Walwyn' now) 2021-12-19 12:56:45,081 - segment_wiki - INFO - processed #5000000 articles (at 'Ralph Richardson (geologist)' now) 2021-12-19 12:58:23,606 - segment_wiki - INFO - processed #5100000 articles (at 'Here Tonight (Brett Young song)' now) 2021-12-19 13:00:06,069 - segment_wiki - INFO - processed #5200000 articles (at 'Jacob Merrill Manning' now) 2021-12-19 13:01:33,116 - segment_wiki - INFO - processed #5300000 articles (at 'Bog of Beasts' now) 2021-12-19 13:03:05,347 - segment_wiki - INFO - processed #5400000 articles (at 'Nixon Jew count' now) 2021-12-19 13:04:37,976 - segment_wiki - INFO - processed #5500000 articles (at 'Rod Smith (American football coach)' now) 2021-12-19 13:06:11,536 - segment_wiki - INFO - processed #5600000 articles (at 'Stant' now) 2021-12-19 13:07:43,201 - segment_wiki - INFO - processed #5700000 articles (at 'Mitsui Outlet Park Tainan' now) 2021-12-19 13:09:14,378 - segment_wiki - INFO - finished processing 5792326 articles with 28506397 sections (skipped 9832469 redirects, 624207 stubs, 5419438 ignored namespaces) 2021-12-19 
13:09:14,399 - segment_wiki - INFO - finished running /home/mrus/.virtualenvs/local3.9/lib/python3.9/site-packages/gensim/scripts/segment_wiki.py ``` The XML dump that I used was 81GB in size (uncompressed) and I ended up with a JSON file that was only around 31GB. Apart from skipped content, a significant portion of these savings is probably attributable to the change in format. ``` -rw-r--r-- 1 mrus mrus 31G Dec 19 13:09 ./enwiki-latest-pages-articles.json -rw-r--r-- 1 mrus mrus 81G Dec 2 01:53 ./enwiki-latest-pages-articles.xml ``` ## Making 31GB of JSON usable Unfortunately, we won't be able to efficiently query a 31GB JSON file just like that. What we need is a tool that can ingest such large amounts of data and make them searchable. The `dumpster-dive` solution used MongoDB for this purpose, which I found not to be an ideal way to solve this problem. And since we don't actually need to work with the data, a database offers little benefit for us. Instead, a search engine makes a lot more sense. A while ago I stumbled upon quickwit and found it an interesting project. At that time I had no use case that would allow me to test it – but this experiment seems like a great playground to give it a go! Installation is fairly easy, even though it's not available via `cargo install`. Simply clone the git repo and run `cargo build --release --features release-feature-vendored-set`. You'll end up with the quickwit binary inside the `target/release/` directory. By default, quickwit will phone home, but you can disable that using an environment variable: `export DISABLE_QUICKWIT_TELEMETRY=1` Now, let's create the required configuration. At the time of writing, the official quickwit documentation was out of date, at least unless we'd be using the 0.1.0 release, which was over half a year old. Hence the configuration as well as the commands that I'll be showing here won't match the documentation. However, if you've compiled quickwit from git master like I did (`144074d18e9b40615dacfd6c3908bcecb6b7ea3b`) everything should work just fine. ``` { "version": 0, "index_id": "wikipedia", "index_uri": "file://YOUR_PATH_HERE/wikipedia", "search_settings": { "default_search_fields": ["title", "section_texts"] }, "doc_mapping": { "store_source": true, "field_mappings": [ { "name": "title", "type": "text", "record": "position" }, { "name": "section_titles", "type": "array<text>" }, { "name": "section_texts", "type": "array<text>" }, { "name": "interlinks", "type": "array<text>", "indexed": false, "stored": false } ] } } ``` **Note:** You have to manually replace `YOUR_PATH_HERE` in `index_uri` with the actual path to your *metastore* folder! Next, let's `cd` into the directory that we've previously set in the `config.json` (`YOUR_PATH_HERE`) and create the index using that exact same configuration (which I'm assuming is located in the same folder). ``` quickwit index create \ --metastore-uri file://$(pwd)/wikipedia \ --index-config-uri $(pwd)/config.json ``` After that we have to import the actual JSON data into the newly created index. This will take some time, depending on your machine's performance. ``` quickwit index ingest \ --index-id wikipedia \ --metastore-uri file://$(pwd)/wikipedia \ --data-dir-path $(pwd)/wikipedia-data \ --input-path enwiki-latest-pages-articles.json ``` After around 10 minutes quickwit exited successfully with this output: ``` Indexed 5792326 documents in 10.42min.
Now, you can query the index with the following command: quickwit index search --index-id wikipedia --metastore-uri file://$(pwd)/wikipedia --query "my query" ``` I noticed that the number it reported (`5792326`) was the same as the one previously reported by the `segment_wiki.py` script, so I'm optimistically assuming that all data was imported successfully. What surprised me was that, unlike with the `dumpster-dive` setup I mentioned before, quickwit's index didn't grow the data but instead shrank it even further, down to only 21GB. At this size, having all of Wikipedia's text articles available offline suddenly isn't a PITA anymore. Let's try querying some data to see if it works. ``` quickwit index search \ --index-id wikipedia \ --metastore-uri file://$(pwd)/wikipedia \ --query 'title:apollo AND 11' \ | jq '.hits[].title[]' "Apollo" "Apollo 11" "Apollo 8" "Apollo program" "Apollo 13" "Apollo 7" "Apollo 9" "Apollo 1" "Apollo 10" "Apollo 12" "Apollo 14" "Apollo 15" "Apollo 16" "Apollo 17" "List of Apollo astronauts" "Apollo, Pennsylvania" "Apollo 13 (film)" "Apollo Lunar Module" "Apollo Guidance Computer" "Apollo 4" ``` Looks like quickwit found what we were searching for. But since the article is literally named *Apollo 11*, we should be able to perform what (according to quickwit's documentation) seems to be an *exact search* to get the *Apollo 11* article we're interested in. ``` quickwit index search \ --index-id wikipedia \ --metastore-uri file://$(pwd)/wikipedia \ --query 'title:"Apollo 11"' \ | jq '.hits[].title[]' "Apollo 11" "Apollo 11 (disambiguation)" "Apollo 11 in popular culture" "Apollo 11 missing tapes" "Apollo 11 goodwill messages" "British television Apollo 11 coverage" "Apollo 11 (1996 film)" "Apollo 11 lunar sample display" "Apollo 11 Cave" "Moonshot: The Flight Of Apollo 11" "Apollo 11 50th Anniversary commemorative coins" "Apollo 11 anniversaries" "Apollo 11 (2019 film)" ``` While it returns more than one match, my tests have shown that it's safe to simply pick the first result when using exact matching, as it will return the *most exact* match first. Considering that we're going through a very large set of data, the query speed is top-notch, at around 16004µs for a title query. Querying the actual content isn't much slower either, at only around 27158µs. Now, quickwit is designed to run as a standalone service and hence also offers an HTTP endpoint for querying. However, since I'm not looking up stuff on Wikipedia all the time, I don't need it running continuously and prefer its CLI interface for finding articles when I need them. I'm sure that running it as a service might increase the performance, though. In order to simplify things, I wrote a helper function in my .zshrc. You can basically copy-paste it and would only need to adjust the `WIKIPEDIA_*` exports. However, you have to have `jq`, `fzf`, `pandoc` and `glow` installed for this to work. I might extend this tool and eventually make it a standalone script as soon as it gets too big. Depending on how well this solution performs over time, I might also try to build something similar for Dash docsets.
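The `.zshrc` helper itself isn't reproduced above. To give an idea of what such a thin wrapper around the CLI can look like, here is a minimal Python sketch – not the author's original zsh function – which assumes that the `quickwit` binary is on your `PATH`, that the metastore lives under a `WIKIPEDIA_DIR` environment variable (the directory used as `YOUR_PATH_HERE` above), and that the JSON output has the `hits`/`title`/`section_texts` shape implied by the `jq` filters and the index config above:

```python
#!/usr/bin/env python3
"""Offline Wikipedia lookup: a thin wrapper around the quickwit CLI (sketch)."""
import json
import os
import subprocess
import sys

# Directory that contains the "wikipedia" metastore created earlier (assumption).
WIKIPEDIA_DIR = os.environ.get("WIKIPEDIA_DIR", os.path.expanduser("~/wikipedia"))


def search(query: str) -> list:
    """Run a quickwit search against the local index and return the parsed hits."""
    result = subprocess.run(
        [
            "quickwit", "index", "search",
            "--index-id", "wikipedia",
            "--metastore-uri", f"file://{WIKIPEDIA_DIR}/wikipedia",
            "--query", query,
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout).get("hits", [])


if __name__ == "__main__":
    term = " ".join(sys.argv[1:]) or "Apollo 11"
    hits = search(f'title:"{term}"')
    if not hits:
        sys.exit(f"No article found for {term!r}")
    # Exact title matches come back first, so the first hit is the best candidate.
    article = hits[0]
    print(" ".join(article.get("title", [])))
    for section in article.get("section_texts", []):
        print(section)
```

From there, piping the result through `fzf` for interactive selection and rendering it with `pandoc` or `glow`, as the original helper apparently does, is mostly plumbing.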
true
true
true
Well, not literally. But what if you could look up Wikipedia in an instant, without even requiring a working internet connection? Call it doomsday prepping or just a precaution, but let’s see if we can have our own copy of Wikipedia running offline, without having too much trouble along the way.
2024-10-12 00:00:00
2021-12-19 00:00:00
https://xn--gckvb8fzb.co…ges/DSC08207.jpg
article
xn--gckvb8fzb.com
マリウス
null
null
2,047,568
http://hbswk.hbs.edu/archive/5289.html
Why Your Employees Are Losing Motivation
David Sirota; Louis A Mischkind; Michael Irwin Meltzer
*Harvard Management Update.* Most companies have it all wrong. They don't have to motivate their employees. They have to stop demotivating them. The great majority of employees are quite enthusiastic when they start a new job. But in about 85 percent of companies, our research finds, employees' morale sharply declines after their first six months, and continues to deteriorate for years afterward. That finding is based on surveys of about 1.2 million employees at 52 primarily Fortune 1000 companies from 2001 through 2004, conducted by Sirota Survey Intelligence (Purchase, New York). The fault lies squarely at the feet of management: both the policies and procedures companies employ in managing their workforces and in the relationships that individual managers establish with their direct reports. Our research shows how individual managers' behaviors and styles are contributing to the problem (see sidebar "How Management Demotivates") and what they can do to turn this around. Three key goals of people at work To maintain the enthusiasm employees bring to their jobs initially, management must understand the three sets of goals that the great majority of workers seek from their work, and then satisfy those goals: - Equity: To be respected and to be treated fairly in areas such as pay, benefits, and job security. - Achievement: To be proud of one's job, accomplishments, and employer. - Camaraderie: To have good, productive relationships with fellow employees. To maintain an enthusiastic workforce, management must meet all three goals. Indeed, employees who work for companies where just one of these factors is missing are three times less enthusiastic than workers at companies where all elements are present. One goal cannot be substituted for another. Improved recognition cannot replace better pay, money cannot substitute for taking pride in a job well done, and pride alone will not pay the mortgage. What individual managers can do Satisfying the three goals depends both on organizational policies and on the everyday practices of individual managers. If the company has a solid approach to talent management, a bad manager can undermine it in his unit. On the flip side, smart and empathetic managers can overcome a great deal of corporate mismanagement while creating enthusiasm and commitment within their units. While individual managers can't control all leadership decisions, they can still have a profound influence on employee motivation. The most important thing is to provide employees with a sense of security, one in which they do not fear that their jobs will be in jeopardy if their performance is not perfect and one in which layoffs are considered an extreme last resort, not just another option for dealing with hard times. But security is just the beginning. When handled properly, each of the following eight practices will play a key role in supporting your employees' goals for achievement, equity, and camaraderie, and will enable them to retain the enthusiasm they brought to their roles in the first place. Achievement related **1. Instill an inspiring purpose.** A critical condition for employee enthusiasm is a clear, credible, and inspiring organizational purpose: in effect, a "reason for being" that translates for workers into a "reason for being there" that goes above and beyond money. Every manager should be able to expressly state a strong purpose for his unit. What follows is one purpose statement we especially admire. It was developed by a three-person benefits group in a midsize firm.
Benefits are about people. It's not whether you have the forms filled in or whether the checks are written. It's whether the people are cared for when they're sick, helped when they're in trouble. This statement is particularly impressive because it was composed in a small company devoid of high-powered executive attention and professional wordsmiths. It was created in the type of department normally known for its fixation on bureaucratic rules and procedures. It is a statement truly from the heart, with the focus in the right place: on the ends (people) rather than the means (completing forms). Stating a mission is a powerful tool. But equally important is the manager's ability to explain and communicate to subordinates the reason behind the mission. Can the manager of stockroom workers do better than telling her staff that their mission is to keep the room stocked? Can she communicate the importance of the job, the people who are relying on the stockroom being properly maintained, both inside and outside the company? The importance for even goods that might be considered prosaic to be where they need to be when they need to be there? That manager will go a long way toward providing a sense of purpose. **2. Provide recognition.** Managers should be certain that all employee contributions, both large and small, are recognized. The motto of many managers seems to be, "Why would I need to thank someone for doing something he's paid to do?" Workers repeatedly tell us, and with great feeling, how much they appreciate a compliment. They also report how distressed they are when managers don't take the time to thank them for a job well done yet are quick to criticize them for making mistakes. Receiving recognition for achievements is one of the most fundamental human needs. Rather than making employees complacent, recognition reinforces their accomplishments, helping ensure there will be more of them. A pat on the back, simply saying "good going," a dinner for two, a note about their good work to senior executives, some schedule flexibility, a paid day off, or even a flower on a desk with a thank-you note are a few of the hundreds of ways managers can show their appreciation for good work. It works wonders if this is sincere, sensitively done, and undergirded by fair and competitive pay, and not considered a substitute for it. **3. Be an expediter for your employees.** Incorporating a command-and-control style is a sure-fire path to demotivation. Instead, redefine your primary role as serving as your employees' expediter: It is your job to facilitate getting their jobs done. Your reports are, in this sense, your "customers." Your role as an expediter involves a range of activities, including serving as a linchpin to other business units and managerial levels to represent their best interests and ensure your people get what they need to succeed. How do you know, beyond what's obvious, what is most important to your employees for getting their jobs done? Ask them! "Lunch and schmooze" sessions with employees are particularly helpful for doing this. And if, for whatever reason, you can't immediately address a particular need or request, be open about it and then let your workers know how you're progressing at resolving their problems. This is a great way to build trust. **4.
Coach your employees for improvement.** A major reason so many managers do not assist subordinates in improving their performance is, simply, that they don't know how to do this without irritating or discouraging them. A few basic principles will improve this substantially. First and foremost, employees whose overall performance is satisfactory should be made aware of that. It is easier for employees to accept, and welcome, feedback for improvement if they know management is basically pleased with what they do and is helping them do it even better. Space limitations prevent a full treatment of the subject of giving meaningful feedback, of which recognition is a central part, but these key points should be the basis of any feedback plan: - Performance feedback is not the same as an annual appraisal. Give actual performance feedback as close in time to the occurrence as possible. Use the formal annual appraisal to summarize the year, not surprise the worker with past wrongs. - Recognize that workers want to know when they have done poorly. Don't succumb to the fear of giving appropriate criticism; your workers need to know when they are not performing well. At the same time, don't forget to give positive feedback. It is, after all, your goal to create a team that warrants praise. - Comments concerning desired improvements should be specific, factual, unemotional, and directed at performance rather than at employees personally. Avoid making overall evaluative remarks (such as, "That work was shoddy") or comments about employees' personalities or motives (such as, "You've been careless"). Instead, provide specific, concrete details about what you feel needs to be improved and how. - Keep the feedback relevant to the employee's role. Don't let your comments wander to anything not directly tied to the tasks at hand. - Listen to employees for their views of problems. Employees' experience and observations often are helpful in determining how performance issues can be best dealt with, including how you can be most helpful. - Remember the reason you're giving feedbackyou want to improve performance, not prove your superiority. So keep it real, and focus on what is actually doable without demanding the impossible. - Follow up and reinforce. Praise improvement or engage in course correctionwhile praising the effortas quickly as possible. - Don't offer feedback about something you know nothing about. Get someone who knows the situation to look at it. Equity related **5. Communicate fully. **One of the most counterproductive rules in business is to distribute information on the basis of "need to know." It is usually a way of severely, unnecessarily, and destructively restricting the flow of information in an organization. A command-and-control style is a sure-fire path to demotivation. | Good communication requires managers to be attuned to what employees want and need to know; the best way to do this is to ask them! Most managers must discipline themselves to communicate regularly. Often it's not a natural instinct. Schedule regular employee meetings that have no purpose other than two-way communication. Meetings among management should conclude with a specific plan for communicating the results of the meetings to employees. And tell it like it is. Many employees are quite skeptical about management's motives and can quickly see through "spin." Get continual feedback on how well you and the company are communicating. 
One of the biggest communication problems is the assumption that a message has been understood. Follow-up often finds that messages are unclear or misunderstood. Companies and managers that communicate in the ways we describe reap large gains in employee morale. Full and open communication not only helps employees do their jobs but also is a powerful sign of respect. **6. Face up to poor performance.** Identify and deal decisively with the 5 percent of your employees who don't want to work. Most people want to work and be proud of what they do (the achievement need). But there are employees who are, in effect, "allergic" to work: they'll do just about anything to avoid it. They are unmotivated, and a disciplinary approach, including dismissal, is about the only way they can be managed. It will raise the morale and performance of other team members to see an obstacle to their performance removed. Camaraderie related **7. Promote teamwork.** Most work requires a team effort in order to be done effectively. Research shows repeatedly that the quality of a group's efforts in areas such as problem solving is usually superior to that of individuals working on their own. In addition, most workers get a motivation boost from working in teams. Whenever possible, managers should organize employees into self-managed teams, with the teams having authority over matters such as quality control, scheduling, and many work methods. Such teams require less management and normally result in a healthy reduction in management layers and costs. Creating teams has as much to do with camaraderie as core competences. A manager needs to carefully assess who works best with whom. At the same time, it is important to create the opportunity for cross-learning and diversity of ideas, methods, and approaches. Be clear with the new team about its role, how it will operate, and your expectations for its output. Related to all three factors **8. Listen and involve.** Employees are a rich source of information about how to do a job and how to do it better. This principle has been demonstrated time and again with all kinds of employees, from hourly workers doing the most routine tasks to high-ranking professionals. Managers who operate with a participative style reap enormous rewards in efficiency and work quality. Participative managers continually announce their interest in employees' ideas. They do not wait for these suggestions to materialize through formal upward communication or suggestion programs. They find opportunities to have direct conversations with individuals and groups about what can be done to improve effectiveness. They create an atmosphere where "the past is not good enough" and recognize employees for their innovativeness. Participative managers, once they have defined task boundaries, give employees freedom to operate and make changes on their own commensurate with their knowledge and experience. Indeed, there may be no single motivational tactic more powerful than freeing competent people to do their jobs as they see fit. ### How Management Demotivates by David Sirota, Louis A. Mischkind, and Michael Irwin Meltzer There are several ways that management unwittingly demotivates employees and diminishes, if not outright destroys, their enthusiasm. Many companies treat employees as disposable. At the first sign of business difficulty, employees, who are usually routinely referred to as "our greatest asset", become expendable.
Employees generally receive inadequate recognition and reward: About half of the workers in our surveys report receiving little or no credit, and almost two-thirds say management is much more likely to criticize them for poor performance than praise them for good work. Management inadvertently makes it difficult for employees to do their jobs. Excessive levels of required approvals, endless paperwork, insufficient training, failure to communicate, infrequent delegation of authority, and a lack of a credible vision contribute to employees' frustration. Reprinted with permission from "Stop Demotivating Your Employees!" **Harvard Management Update,** Vol. 11, No. 1, January 2006.
true
true
true
4/10/2006 Business literature is packed with advice about worker motivation—but sometimes managers are the problem, not the inspiration. Here are seven practices to fire up the troops. From Harvard Management Update. by David Sirota, Louis A. Mischkind, and Michael Irwin Meltzer Most companies have it all wro...
2024-10-12 00:00:00
2006-10-04 00:00:00
https://hbswk.hbs.edu/Pu…al/hbswk_260.jpg
Article
hbswk.hbs.edu
HBS Working Knowledge
null
null
36,328,890
https://openbrainproject.org/brainsurvey/
Brain Survey | The Open Brain Project
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
openbrainproject.org
The Open Brain Project
null
null
8,297,903
http://blog.camenergydatalab.com/2014/08/some-insights-about-domestic.html
Some insights about domestic electricity prices in the IEA countries
JustGlowing
### Prices in 2013 In the first figure below we compare domestic electricity prices among the countries monitored by the IEA. The plot also shows which fraction of the price is represented by taxes: In 2013, average domestic electricity prices, including taxes, in Denmark and Germany were the highest in the IEA. We also note that in Denmark the fraction of taxes paid is higher than the actual electricity price, whereas in Germany the actual electricity price and the taxes are almost the same. Interestingly, the USA has the lowest price and the lowest taxation. ### Relationship between taxes and full prices In this figure we highlight the correlation between taxes and full prices: Here we can see that there is a positive correlation (correlation=0.82) between the prices with taxes and the prices without taxes. This indicates that, according to this data, when the full price increases, the taxes also increase. Hovering the pointer over the points, we can discover that Germany and Denmark have the highest taxes, while the USA, the UK and Japan have the lowest. Also, we note that Ireland has expensive electricity and low taxes, while Norway shows the reverse trend. ### Evolution of the prices from 2010 to 2013 Here we compare the trend of the prices among the five countries with the highest prices in 2013: From this chart we can observe that only in 2013 did the cost of electricity for domestic consumers become very similar in Germany and Denmark, and that the Danish prices were substantially higher in the past. We can also see that prices in Italy and Ireland have a very similar increasing trend, while prices in Austria dropped in 2012 but rose again in 2013. **The prices are shown as pence per kWh.*
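If you want to reproduce the relationship described above from the raw IEA figures, a minimal pandas sketch could look like this; the file name and the column names (`price_with_tax`, `price_without_tax`) are hypothetical placeholders rather than anything published with this post:

```python
# Sketch: correlation between full domestic prices and pre-tax prices.
# "iea_prices_2013.csv" and its column names are illustrative placeholders.
import pandas as pd

df = pd.read_csv("iea_prices_2013.csv")  # one row per country

# Pearson correlation between price incl. taxes and price excl. taxes;
# a value around 0.8 would match the positive relationship described above.
corr = df["price_with_tax"].corr(df["price_without_tax"])
print(f"correlation: {corr:.2f}")

# Implied tax share per country, handy for spotting outliers such as
# countries with expensive electricity but a comparatively low tax fraction.
df["tax_share"] = 1 - df["price_without_tax"] / df["price_with_tax"]
print(df.sort_values("tax_share", ascending=False).head())
```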
true
true
true
In this post we will provide three interactive visualizations of the latest data released by the International Energy Agency (IEA) about t...
2024-10-12 00:00:00
2014-08-22 00:00:00
null
null
camenergydatalab.com
blog.camenergydatalab.com
null
null
20,185,049
https://techcrunch.com/2019/06/12/we-wont-be-listening-to-music-in-a-decade-according-to-vinod-khosla/
We won't be listening to music in a decade according to Vinod Khosla | TechCrunch
Darrell Etherington
Depending on who you ask, the advantage of technology based on artificial or machine intelligence could be a topsy-turvy funhouse mirror world — even in some very fundamental ways. “I actually think 10 years from now, you won’t be listening to music,” is a thing venture capitalist Vinod Khosla said onstage today during a fireside chat at Creative Destruction Lab’s second annual Super Session event. Instead, he believes we’ll be listening to custom song equivalents that are automatically designed specifically for each individual, and tailored to their brain, their listening preferences and their particular needs. Khosla noted that AI-created music is already making big strides — and it’s true that it’s come a long way in the past couple of years, as noted recently by journalist Stuart Dredge writing on Medium. As Dredge points out, one recent trend is the rise of mood or activity-based playlists on Spotify and channels on YouTube. There are plenty of these types of things where the artist, album and song name are not at all important, or even really surfaced. Not to mention that there’s a big financial incentive for an entity like Spotify to prefer machine-made alternatives, as it could help alleviate or eliminate the licensing costs that severely limit their ability to make margin on their primary business of serving up music to customers. AI-generated chart toppers and general mood music is one thing, but a custom soundtrack specific to every individual is another. It definitely sidesteps the question of what happens to the communal aspect of music when everyone’s music-replacing auditory experience is unique to the person. Guess we’ll find out in 10 years.
true
true
true
Depending on who you ask, the advantage of technology based on artificial or machine intelligence could be a topsy-turvy funhouse mirror world -- even in
2024-10-12 00:00:00
2019-06-12 00:00:00
https://techcrunch.com/w…?resize=1200,800
article
techcrunch.com
TechCrunch
null
null
24,989,782
https://www.vice.com/en/article/bjwxzd/the-first-thing-sold-online-was-a-sting-cd
The Story of the Very First Thing Securely Sold on the Internet
Rob Arcand
Something about the internet was always well-suited for commerce. Despite its original development as a military and academic research technology, generations of ambitious young people saw the internet's life-changing potential through a lens of industry and entrepreneurship, using this nascent electronic connection to build vast empires. Companies like Amazon, eBay, Yahoo, and AOL became billion-dollar businesses through the sweeping commercialization of the network, and, though the dot-com bubble eventually burst just as quickly as it had arrived, its impact can still be felt today through the business models of nearly all of the world's most popular (and profitable) websites. But the internet wasn't always a sprawling shopping mall, and as hard as it can be to separate the current landscape from today's colossal tech companies, the idea to use it to buy and sell physical goods wasn't baked into the infrastructure from the start. That idea—that the web could be used as a medium for the exchange of commercial products—came, in part, from a 21-year-old college student named Dan Kohn, who on August 11, 1994, changed history forever with the world's first secure credit card transaction for a physical good. The Swarthmore student sold a CD copy of Sting's 1993 album *Ten Summoner's Tales* to a friend in Philadelphia, who, for $12.48 plus shipping, received a copy of the disc by mail a few weeks later. "It was just such a mind-blowing experience," Kohn says, a few days before the 25th anniversary of the transaction. At the time of the event, Kohn says, the number of websites was "in the dozens," and few people thought that the internet had any use for financial exchange. The idea came to him while studying abroad at the London School of Economics, where he was able to remotely connect to his Swarthmore computer to check his email and read updates on the early internet forum Usenet. "As I was using it every day, thinking about it, and also just reading a lot about business ideas and startups, it just sort of dawned on me: Why aren't people doing commerce on this network?" Of course, Kohn wasn't the only one thinking of using the internet to sell something. As far back as 1971, students at Stanford and MIT were using the pre-WWW ARPANET to sell pot across the country, as noted by Shopify in a video on the history of e-commerce. While the sale was organized via a digital connection, the exchange itself took place in person, and it was radically different from the kind of e-commerce transaction associated with online shopping today. Similarly, a 74-year-old British woman, Jane Snowball, used an early internet connection to purchase groceries in 1984, but because the cash was exchanged in person, it was still considerably different from the kind of encrypted exchange used by Kohn. Twenty-five years since that first transaction, the landscape of the internet is naturally a different place. For one, online shopping is now one of the world's largest industries, fundamentally reshaping the landscape of brick-and-mortar retail. CD sales, too, are far from the most popular form of music consumption, which remains indisputably dominated by ad-supported and subscription-based streaming platforms. Through it all, Kohn remains committed to the internet freedom that allowed him to start NetMarket, attributing much of his success to the open-source movement. "We were standing on the shoulders of giants of Unix, the internet, cryptography, PGP, Tim Berners-Lee, and the web, all of these folks," he says.
“But we were also an important signpost saying, ‘Okay, this is where things are going. This is in a lot of ways going to be the future of commerce.’” ** VICE: It’s been a few years now since you started NetMarket.** **Dan Kohn:**I actually had the idea to start it when I was studying abroad at the London School of Economics, so I would’ve been 20. After a huge amount of hassle, I was able to get a VAX minicomputer to connect across to the Swarthmore computers where I had accounts. Today, all that seems kind of trivial and not that big of a deal, but at the time, it sort of blew me away that I was able to remotely connect. I remember it taking me a week or two to actually get it working. The computers at Swarthmore were named after spices, and at the London School of Economics, it was 10:08 a.m., [so] when I connected it to the Swarthmore computer, the little prompt came up that said “Oregano 5:08AM.” It was just such a mind-blowing experience. As I was using it every day, thinking about it, and also just reading a lot about business ideas and startups, it just sort of dawned on me: Why aren’t people doing commerce on this network? ** Was it important to you guys to be the first to solve this challenge? I know there’s been some debates about other sites like The Internet Shopping Network selling something around the same time.**I would say, at the time, we were very focused on building a business. So sure, in retrospect there’s some excitement or fun about having been first, but at the time, we were just heads-down trying to do something. We were all living in this house in New Hampshire and just working all the time. It just seemed obvious as we were launching it that security was going to be a huge deal, and so we were really focused on coming up with an answer there, telling a story, and being able to explain how this was going to be a hurdle that was going to be surmounted. But certainly we got a ton of attention for it; I love that * New York Times* headline—[“Attention Shoppers: The Internet Is Open”]—because that is very much the story that we were trying to tell. ** Were you worried initially that people might steal your intellectual property, being the first ones to do this?**We just didn’t think of it that way. It’s similar to writing software and saying, “Oh well, am I worried someone else will steal that idea.” And I genuinely believe that software ideas shouldn’t be protectable in that way. We saw ourselves as innovating and trying to address a market and expand as quickly as possible. ** The person who purchased the CD, Phil Brandenberger, he was a friend of yours at the time?**He was. We had gone to college with him. I actually found out that he’s in New York now, I need to get lunch with him. But the initial configuration, especially for the first couple weeks, was kind of challenging to set up and so we pinged him on it and said, “Hey, can you configure this and set it up and everything?” ** So it was kind of a transaction between friends and not really a public-facing company in the same kind of way.**I wouldn’t quite agree with that. He was already a customer of our site, so it was a natural thing to ask him to do. ** What was your relationship like with suppliers like at that stage? You were selling on behalf of this music company, right?**No, we didn’t deal directly with the music company; we dealt with this company in New Hampshire called Noteworthy Music that has since gone out of business. 
The reason we were familiar with them is that they were huge with college students, and so when I was in college, I used to look through their print catalogue… and then pick out the CDs I wanted and then mail in the order form, or maybe, if I could find one, use a fax machine for the order form. So it was kind of a natural thing for us to go to them and say, “Hey, we want to put your music online.” And so they shared the catalog with us, and then when the order came in, we connected into their servers and submitted it. ** Were their other companies you were doing this with outside of music?**We started with them and then the second one we did was flowers. We had planned to go into books or travel or other kinds of stuff, but then wound up taking an acquisition offer with this company Cendant, which is the biggest membership services company. ** Where do you see cloud computing and e-commerce moving in the future given your current work?**One of the fun things about it, having been a pioneer in the e-commerce space, is just that so many e-commerce giants today are leveraging the software that my foundation hosts and helps support. Our servers, they were all physically in the office with us underneath our desks. There is just a direct line of system administration skills, how the software runs and all this other kind of stuff, to how things work today, but it is kind of fun to get to work with companies like eBay and Mastercard and Salesforce and the *New York Times*and others that are now looking to operate at scale, where they want to dynamically expand out from dozens to hundreds of servers when a flash demand comes in, then shrink down again when the demand passes. ** I guess it’s kind of important to you guys to own your own servers and move the open-source movement forward in a way that isn’t limited to software but also is connected to the hardware as well.**No, in some ways it’s not. When we started NetMarket, we had to own all of our computers, there was no other option. Literally, the cloud didn’t mean anything, but when I was a venture capitalist in 2000, companies would come to us and say, “Oh, well we need to raise a few million dollars of funding because we have to buy all these physical servers.” It’s just a sea change in terms of how much easier it is from an idea or conception of an idea to actually being in production on the web. It’s almost magical when you look at the free software out there and tools like GitHub and the fact that you can find contractors all around the world who are eager to convert your designs into code. Now the opposite side of that is that it’s so much harder to get noticed today and build up a brand. At the time, we were among the first few dozen, maybe first few hundred websites. I remember at the time, there was a manual directory that somebody was creating. Of every website. And we were one of about three of them in New Hampshire. ** This was even before search engines, probably, right?**Exactly. ** Yeah, I guess AOL was just taking off then, that was right around the time they had their first search.**This earlier technology Gopher, and it had a search engine built into it, hilariously called Veronica, which I’m embarrassed to say that I still remember stands for Very Easy Rodent-Oriented Network-wide Internet Computer Archive, which is like the worst fake acronym. 
The reason they came up with Veronica was that the previous technology FTP, which was used for file transfers, had an internet-wide archive called Archie, like archive with the ‘V’ removed. And so when they created a search for Gopher, they wanted to call it Veronica. But in any event, AOL was out there, it was growing incredibly fast and we had tons of people come to us and say, “Why are you wasting your time on this internet thing? AOL is where everyone is going.” They want the walled garden, etc. And so there was a lot of other directions at the time and it definitely was the case that the internet at the time felt like a frontier town or the Wild West, where today it’s Manhattan.
true
true
true
25 years ago, a college kid named Dan Kohn had the novel idea to let people buy things online. The first thing he ever sold was a Sting solo album.
2024-10-12 00:00:00
2019-08-14 00:00:00
https://www.vice.com/wp-…_updated_HF.jpeg
article
vice.com
VICE
null
null
26,554,451
https://github.com/wang0618/PyWebIO
GitHub - pywebio/PyWebIO: Write interactive web app in script way.
Pywebio
*Write interactive web app in script way.* [Document] | [Demos] | [Playground] | [Why PyWebIO?] PyWebIO provides a series of imperative functions to obtain user input and output on the browser, turning the browser into a "rich text terminal", and can be used to build simple web applications or browser-based GUI applications without the need to have knowledge of HTML and JS. PyWebIO can also be easily integrated into existing Web services. PyWebIO is very suitable for quickly building applications that do not require complex UI. Features: - Use synchronization instead of a callback-based method to get input - Non-declarative layout, simple and efficient - Less intrusive: old script code can be transformed into a Web application only by modifying the input and output operation - Support integration into existing web services, currently supports Flask, Django, Tornado, aiohttp, FastAPI framework - Support for `asyncio` and coroutine - Support data visualization with third-party libraries, e.g., `plotly` ,`bokeh` ,`pyecharts` . Stable version: `pip3 install -U pywebio` Development version: `pip3 install -U https://github.com/pywebio/PyWebIO/archive/dev-release.zip` **Prerequisites**: PyWebIO requires Python 3.5.2 or newer **Hello, world** Here is a simple PyWebIO script to calculate the BMI: ``` from pywebio.input import input, FLOAT from pywebio.output import put_text def bmi(): height = input("Your Height(cm):", type=FLOAT) weight = input("Your Weight(kg):", type=FLOAT) BMI = weight / (height / 100) ** 2 top_status = [(14.9, 'Severely underweight'), (18.4, 'Underweight'), (22.9, 'Normal'), (27.5, 'Overweight'), (40.0, 'Moderately obese'), (float('inf'), 'Severely obese')] for top, status in top_status: if BMI <= top: put_text('Your BMI: %.1f, category: %s' % (BMI, status)) break if __name__ == '__main__': bmi() ``` This is just a very simple script if you ignore PyWebIO, but using the input and output functions provided by PyWebIO, you can interact with the code in the browser [demo]: **Serve as web service** The above BMI program will exit immediately after the calculation, you can use `pywebio.start_server()` to publish the `bmi()` function as a web application: ``` from pywebio import start_server from pywebio.input import input, FLOAT from pywebio.output import put_text def bmi(): # bmi() keep the same ... if __name__ == '__main__': start_server(bmi, port=80) ``` **Integration with web framework** To integrate a PyWebIO application into Tornado, all you need is to add a `RequestHandler` to the existing Tornado application: ``` import tornado.ioloop import tornado.web from pywebio.platform.tornado import webio_handler class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") if __name__ == "__main__": application = tornado.web.Application([ (r"/", MainHandler), (r"/bmi", webio_handler(bmi)), # bmi is the same function as above ]) application.listen(port=80, address='localhost') tornado.ioloop.IOLoop.current().start() ``` Now, you can open `http://localhost/bmi` for BMI calculation. For integration with other web frameworks, please refer to document. - Basic demo : PyWebIO basic input and output demos and some small applications written using PyWebIO. - Data visualization demo : Data visualization with the third-party libraries, e.g., `plotly` ,`bokeh` ,`pyecharts` . - Document pywebio.readthedocs.io - PyWebIO Playground: Edit, Run, Share PyWebIO Code Online
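One of the bullet points above claims that old script code can be turned into a web application just by swapping the input and output calls. Here is a minimal sketch of that idea, using only functions already shown in this README (`input`, `put_text`, `start_server`); the greeting example itself is illustrative and not part of the upstream project:

```python
# A plain terminal script: built-in input()/print() only.
def greet_cli():
    name = input("Your name: ")
    print("Hello, %s!" % name)

# The same logic as a PyWebIO app: only the input/output calls change.
from pywebio import start_server
from pywebio.input import input as web_input
from pywebio.output import put_text

def greet_web():
    name = web_input("Your name:")   # renders a form field in the browser
    put_text("Hello, %s!" % name)    # writes back to the browser "terminal"

if __name__ == '__main__':
    # Serve the function as a web app, just like the BMI example above.
    start_server(greet_web, port=8080)
```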
true
true
true
Write interactive web app in script way. Contribute to pywebio/PyWebIO development by creating an account on GitHub.
2024-10-12 00:00:00
2020-02-29 00:00:00
https://opengraph.githubassets.com/9dd7fdcac85e0c5d6641ac63fe7f78d24904a23ebe4d77c8ba5879ff3c5f0cef/pywebio/PyWebIO
object
github.com
GitHub
null
null
30,250,065
https://en.wikipedia.org/wiki/Sinicization_of_Tibet
Sinicization of Tibet - Wikipedia
null
# Sinicization of Tibet The **sinicization of Tibet** includes the programs and laws of the Chinese government and the Chinese Communist Party (CCP) to force cultural assimilation in Tibetan areas of China, including the Tibet Autonomous Region and the surrounding Tibetan-designated autonomous areas. The efforts are undertaken by China in order to remake Tibetan culture into mainstream Chinese culture. The changes, which have been evident since the annexation of Tibet by the People's Republic of China in 1950–51, have been facilitated by a range of economic, social, cultural, religious and political reforms which have been implemented in Tibet by the Chinese government. Critics cite the government-sponsored migration of large numbers of Han Chinese into the Tibet Autonomous Region, deemed Chinese settlements, as a major component of sinicization. Some academics have described it as a form of Han settler colonialism.[1][2][3] According to the Central Tibetan Administration, the government of Tibet in exile, China's policy has allegedly resulted in the disappearance of elements of Tibetan culture; this policy has been called a "cultural genocide".[4][5][6] The government in exile says that the policies intend to make Tibet an integral part of China and control desire for Tibetan self-determination. The 14th Dalai Lama and the Central Tibet Administration have characterized sinicization programs as genocide or cultural cleansing.[7][8] The Chinese government claims that its policies have benefited Tibet, and it also claims that the cultural and social changes which have occurred in Tibet are consequences of modernization. According to the Chinese government, Tibet's economy has expanded; improved services and infrastructure have improved the quality of life of Tibetans, and the Tibetan language and culture have been protected. ## History ### Early developments After the fall of the Qing dynasty and before 1950, the region which roughly corresponds to the modern-day Tibet Autonomous Region (TAR) was a *de facto* independent state although unrecognized by other states. It printed its own currency and postage, and maintained international relations although it did not exchange ambassadors with other nations. Tibet claimed three provinces (Amdo, Kham and Ü-Tsang), but only controlled western Kham and Ü-Tsang.[citation needed] Since 1950, China made eastern Kham part of Sichuan and western Kham part of the new Tibet Autonomous Region.
[9] During the early-20th-century Republic of China era which followed the Qing dynasty, the Chinese Muslim general and governor of Qinghai Ma Bufang implemented policies of sinicization and Islamification in Tibetan areas, according to accusations which have been made by Tibetans.[10] Forced conversion and heavy taxes were reported under his rule.[11] After Mao Zedong won the Chinese Civil War in 1949, his goal was the unification of the "five nationalities" as the People's Republic of China under the rule of the Chinese Communist Party.[12] The Tibetan government in Lhasa sent Ngapoi Ngawang Jigme (known as Ngabo in English sources) to Chamdo in Kham, a strategic town near the border, with orders to hold his position while reinforcements came from Lhasa to fight against the Chinese.[13] On 16 October 1950, news arrived that the People's Liberation Army was advancing towards Chamdo and had taken the town of Riwoche (which could block the route to Lhasa).[14] Ngabo and his men retreated to a monastery, where the People's Liberation Army surrounded and captured them.[15] Ngabo wrote to Lhasa suggesting a peaceful surrender instead of war.[16] According to the Chinese negotiator, "It is up to you to choose whether Tibet would be liberated peacefully or by force. It is only a matter of sending a telegram to the PLA group to recommence their march to Lhasa."[17] Ngabo accepted Mao's Seventeen-Point Agreement, which stipulated that in return for Tibet becoming part of the People's Republic of China, it would be granted autonomy.[18] Lacking support from the rest of the world, in August 1951 the Dalai Lama sent a telegram to Mao accepting the agreement.[19] The delegates signed the agreement under duress, and the Tibetan government's future was sealed.[20] Although the annexation of Tibet by the People's Republic of China is referred to as the Peaceful Liberation of Tibet in Chinese Communist Party historiography, the 14th Dalai Lama and the Central Tibetan Administration consider it a colonization[21] and the Tibetan Youth Congress believes that it was an invasion.[22] The Chinese government points to improvements in health and the economy as justifications for their assertion of power in what it calls a historically-Chinese region. According to the Dalai Lama, China has encouraged Han Chinese immigration into the region.[21] Before the signing of the Seventeen Point Agreement on 23 May 1951, Tibet's economy was dominated by subsistence agriculture, but the stationing of 35,000 Chinese troops during the 1950s strained the region's food supplies. When the Dalai Lama visited Mao Zedong in Beijing in 1954, Mao told him that he would move 40,000 Chinese farmers to Tibet.[23][24][25] As a part of the Great Leap Forward in the 1960s, Chinese authorities coerced Tibetan farmers to cultivate maize instead of barley (the region's traditional crop). The harvest failed, and thousands of Tibetans starved.[26][27] ### Cultural Revolution The Cultural Revolution, which was implemented by students and laborers who were members of the Chinese Communist Party, was initiated by Mao and carried out by the Gang of Four from 1966 to 1976 in order to preserve Maoism as China's leading ideology. It was an intra-CCP struggle to eliminate political opposition to Mao.[28][29] The Cultural Revolution affected all of China, and Tibet suffered as a result. Red Guards attacked civilians, who were accused of being traitors to communism. More than six thousand monasteries were looted and destroyed.
Monks and nuns were forced to leave their monasteries and "live a normal life", and those who resisted were imprisoned. Prisoners were forced to perform hard labor, tortured and executed. Although the Potala Palace was threatened, Premier Zhou Enlai intervened and restrained the Tibetan Red Guards.[30] ### Recent developments China's National Strategic Project to Develop the West, introduced during the 1980s after the Cultural Revolution, encourages the migration of Chinese people from other regions of China into Tibet with bonuses and favorable living conditions. People volunteer to be sent there as teachers, doctors and administrators to assist Tibet's development.[31] Citing an unqualified labor force and less-developed infrastructure, the Chinese government has encouraged migrants to stimulate competition and change Tibet from a traditional to a market economy with economic reforms set forth by Deng Xiaoping.[32] Tibetans are the majority ethnic group in the Tibet Autonomous Region, making up about 93 percent of the population in 2008.[33][6][34] The 2008 attacks by Tibetans on Han- and Hui-owned property were reportedly due to the large Han Hui influx into Tibet.[35][36][37] According to George Fitzherbert, "To engage with China's arguments concerning Tibet is to be subjected to the kind of intellectual entrapment, familiar in the Palestinian conflict, whereby the dispute is corralled into questions which the plaintiff had never sought to dispute. Tibetans complain of being robbed of their dignity in their homeland by having their genuinely loved leader incessantly denounced, and of being swamped by Chinese immigration to the point of becoming a minority in their own country. But China insistently condemns such complaints as separatism, an offence in China under the crime of 'undermining national unity', and pulls the debate back to one about Tibet's historical status. Foreigners raise questions about human rights and the environment, but China again denounces this as a foreign intervention in the internal affairs of a sovereign nation, and pulls the debate back to Tibet's historical status."[38][39] The Chinese government has attempted to develop Tibet as part of its China Western Development policy and has invested 310 billion yuan (about 45.6 billion U.S. dollars) in Tibet since 2001. In 2009 it invested over $7 billion into the region, 31 percent more than the previous year.[40] The Qinghai-Tibet Railway was completed in 2006 at a cost of $3.68 billion, leading to increased tourism from the rest of China.[41] The Shanghai government contributed $8.6 million to the construction of the Tibet Shanghai Experimental School, where 1,500 Tibetan students receive a primarily-Chinese education.[42] Some young Tibetans feel that they are Tibetan and Chinese, and are fluent in Tibetan and Mandarin Chinese.[43] In August 2020, General Secretary of the Chinese Communist Party Xi Jinping gave a speech in which he stated that it is "necessary to actively guide Tibetan Buddhism to adapt to the socialist society and promote the Sinicization of Tibetan Buddhism."[44] In August 2021, the Associated Press reported that Wang Yang stated in front of the Potala Palace that efforts are needed to ensure that Tibetans share the "cultural symbols and images of the Chinese nation."[45] ## Religion The Chinese government claims it will control how the 15th Dalai Lama will be chosen, contrary to centuries of tradition.
Chinese government officials repeatedly warn "that he must reincarnate, and on their terms".[46] When the Dalai Lama confirmed a Tibetan boy in 1995 as the reincarnation of the Panchen Lama, the second-ranking leader of the Gelugpa sect, the Chinese government took away the boy and his parents and installed its own child lama. The whereabouts of the Dalai Lama's choice, Gedhun Choekyi Nyima, are still unknown. The Chinese government has claimed he has a "stable" job and a "normal" life.[47] In 2020, US Secretary of State Mike Pompeo said in a statement that "Tibetan Buddhists, like members of all faith communities, must be able to select, educate and venerate their religious leaders according to their traditions and without government interference [...] We call on the PRC government to immediately make public the Panchen Lama's whereabouts and to uphold its own constitution and international commitments to promote religious freedom for all persons."[48] The head of the Kagyu sect, the Karmapa Ogyen Trinley Dorje, was also groomed by Chinese leaders, but at age 14 he fled to India in 1999.[49] Within Tibet, schools issue warnings to parents that students should not attend classes at monasteries, a long-standing tradition, or engage in any religious activity. Punishments for doing so are severe, including loss of government welfare and subsidies.[50] The practice of removing prayer flags, symbols of Tibetan culture and religious belief, has increased since 2010 as the persecution of religion has escalated. In June 2020, Chinese authorities started a "behavioral reform" program, beginning in Golog (in Chinese, Guoluo) in Qinghai and in Tengchen (Dingqing) county in Chamdo, in the Tibet Autonomous Region, ordering the destruction of prayer flags.[51] The 2019 Tibetan Centre for Human Rights and Democracy annual report found that Chinese police forces and surveillance teams moved into monasteries and villages to monitor Tibetan residents for signs of opposition to China's rule; "facial-recognition software and careful monitoring of digital spaces [were] deployed to suppress political protests against the increased clampdowns on civil and political rights."[52][53] According to the United States Commission on International Religious Freedom, during the summer of 2019 the Chinese authorities demolished thousands of residences at the Yachen Gar Tibetan Buddhist center in Sichuan Province, displacing as many as 6,000 monks and nuns. In April 2019, Chinese authorities closed the Larung Gar Buddhist Academy to new enrollment. Authorities also intensified a crackdown on possessing or displaying photos of the Dalai Lama, continued to monitor religious festivals, and, in some areas, banned students from attending festivals during their school holidays. In protest of repressive government policies, at least 156 Tibetans have self-immolated since February 2009.[54] ## Education, employment and language [edit]The Chinese Constitution guarantees autonomy in ethnic regions and says local governments should use the languages in common use. Beginning in the early 2000s, there was a process of Tibetanization of Tibetan education in Qinghai's Tibetan regions.
Through grassroots initiatives by Tibetan educators, Tibetan became somewhat available as the main language of instruction in primary, secondary and tertiary education in Qinghai.[55] The Tibetan language in Qinghai nonetheless remains marginalized in education and government employment, with only a small number of public-service positions requiring a Tibetan degree or Tibetan language skills.[56][better source needed] In 1987, the Tibet Autonomous Region published more explicit regulations calling for Tibetan to be the main language in schools, government offices and shops. Those regulations were eliminated in 2002, and state language policies and practices "jeopardize the continuing viability" of Tibetan civilization.[57] In Tibetan areas, official affairs are conducted primarily in Chinese. It is common to see banners promoting the use of Chinese. Monasteries and schools often held classes on the written language for ordinary people, and monks gave lessons while traveling, but officials ordered monasteries and schools to end the classes.[58] The Chinese Communist Party issued orders in December 2018 forbidding informal classes taught by Tibetan monks or other unapproved groups,[59] and in May 2019 ordered schools in Golog (in Chinese, Guoluo), Qinghai, to stop teaching all subjects in Tibetan except the Tibetan language itself in first-grade classes.[60] Private Tibetan schools have been forced to close.[61] Tibetan entrepreneur and education advocate Tashi Wangchuk was detained for two years and then indicted in 2017 by court officials after speaking to The New York Times for a documentary video[62] and two articles on Tibetan education and culture.[63][64] Tibetan *neidi* (inland) boarding schools, in operation since 1985, have been increasing enrollment rapidly. Tibetan children are removed from their families and from Tibetan religious and cultural influences, and placed in Tibetan-only boarding schools across China, well outside the Tibet Autonomous Region. Parents refusing boarding schools have reportedly been threatened with fines.[61] Chinese government policy requires only Tibetan government job candidates to disavow any allegiance to the Dalai Lama and support government ethnic policies, as announced in October 2019 on the TAR government's online education platform: "Support the (Communist) Party's leadership, resolutely implement the [Chinese Communist] Party's line, line of approach, policies, and the guiding ideology of Tibet work in the new era; align ideologically, politically, and in action with the Party Central Committee; oppose any splittist (division of Tibet from PRC) tendencies; expose and criticize the Dalai Lama; safeguard the unity of the motherland and ethnic unity and take a firm stand on political issues, taking a clear and distinct stand."[65] In April 2020, classroom instruction was switched from Tibetan to Mandarin Chinese in Ngaba, Sichuan.[66] Many schools in Tibet still have around five hours of instruction in Tibetan a week in addition to Mandarin.
The growth of a bilingual professional class among Tibetans has lessened the historical animosity between them and Han Chinese.[67] ## Resettlement of nomadic herders [edit]In 2003, the Chinese government launched an initiative requiring nomads[68] to relocate to urban housing in newly constructed villages.[69] At the end of 2015, in "what amounts to one of the most ambitious attempts made at social engineering," the Chinese government was in "the final stages of a 15-year-old campaign to settle the millions of pastoralists who once roamed China's vast borderlands," and it claimed it would have moved the remaining 1.2 million nomadic herders into towns that provide access to schools, electricity and modern health care. This policy, based on the government's view that grazing harms grasslands, has been questioned by ecologists in China and abroad who say its scientific foundations are doubtful. Anthropological studies of government-built relocation centers have documented chronic unemployment, alcoholism and the fraying of millenniums-old traditions. Human rights advocates say the many protests by herders are met with harsh crackdowns by security forces.[70][71][72] In a 2011 report, the United Nations Special Rapporteur on the Right to Food criticized China's nomad resettlement policies as overly coercive and said they led to "increased poverty, environmental degradation and social breakdown".[73] In 2017, Tibetan nomads who had previously been forced from traditional grazing lands in a state-directed resettlement scheme in Qinghai were told to go back under a new policy announced in 2016, so that authorities could use their current homes for development as tourist centers and government employee housing. "After two years of living in the new towns, residents are now being forced to move back to their original grasslands without their animals, which are the main source of livelihood in Tibetan nomadic communities".[74][75] ## Growth of Lhasa's population [edit]Government-sponsored Chinese settlements in Tibet have changed the demographics of Tibet's population, especially the demographics of Lhasa's population. In 1949, there were between 300 and 400 Han-Chinese residents in Lhasa.[76] In 1950, the city covered less than three square kilometers and had around 30,000 inhabitants; the Potala Palace and the village of Zhöl below it were both considered separate from the city.[77][78] In 1953, according to the first population census, Lhasa had about 30,000 residents (including 4,000 beggars, but not including 15,000 monks).[79] In 1992, Lhasa's permanent population was estimated to number a little under 140,000, including 96,431 Tibetans, 40,387 Han-Chinese, and 2,998 Chinese Muslims and others. Added to that figure were 60,000–80,000 temporary residents, primarily Tibetan pilgrims and traders.[80] ## Debate about the intention of the PRC [edit]In 1989, the high-profile French criminal lawyer Robert Badinter was interviewed alongside the Dalai Lama during an episode of *Apostrophes* (a well-known French television program) devoted to human rights.
Referring to the disappearance of Tibetan culture, Badinter used the phrase "cultural genocide".[81] In 1993, the Dalai Lama used the same phrase to describe the destruction of Tibetan culture.[82] During the 2008 Tibetan unrest, he accused the Chinese of committing cultural genocide during their crackdown.[83] In 2008, Robert Barnett, director of the Program for Tibetan Studies at Columbia University, stated that it was time for accusations of cultural genocide to be dropped: "I think we have to get over any suggestion that the Chinese are ill-intentioned or are trying to wipe out Tibet."[84] Barnett voiced his doubts in a review in the *New York Review of Books*: "Why, if Tibetan culture within Tibet is being 'fast erased from existence', [do] so many Tibetans within Tibet still appear to have a more vigorous cultural life, with over a hundred literary magazines in Tibetan, than their exile counterparts?"[85] ## See also [edit]- Annexation of Tibet by the People's Republic of China - Chinese settlements in Tibet - History of Tibet (1950–present) - Human rights in Tibet - Protests and uprisings in Tibet since 1950 - Chen Quanguo - 70,000 Character Petition - Labour camps in Tibet - Tibetan independence movement - Tibetan sovereignty debate - Choekyi Gyaltsen, 10th Panchen Lama - 11th Panchen Lama controversy - Religion in Tibet#Freedom of religion - Antireligious campaigns of the Chinese Communist Party - Freedom of religion in China#Buddhism - Racism in China - Chinese imperialism - Chinese nationalism - Han chauvinism - Han nationalism - Secession in China - Sinocentrism - Sinosphere ## References [edit]### Citations [edit]**^**McGranahan, Carole (17 December 2019). "Chinese Settler Colonialism: Empire and Life in the Tibetan Borderlands". In Gros, Stéphane (ed.).*Frontier Tibet: Patterns of Change in the Sino-Tibetan Borderlands*. Amsterdam University Press. pp. 517–540. doi:10.2307/j.ctvt1sgw7.22. ISBN 978-90-485-4490-5. JSTOR j.ctvt1sgw7.22.**^**Ramanujan, Shaurir (9 December 2022). "Reclaiming the Land of the Snows: Analyzing Chinese Settler Colonialism in Tibet".*The Columbia Journal of Asia*.**1**(2): 29–36. doi:10.52214/cja.v1i2.10012. ISSN 2832-8558.**^**Wang, Ju-Han Zoe; Roche, Gerald (16 March 2021). "Urbanizing Minority Minzu in the PRC: Insights from the Literature on Settler Colonialism".*Modern China*.**48**(3): 593–616. doi:10.1177/0097700421995135. ISSN 0097-7004. S2CID 233620981. Archived from the original on 15 June 2024. Retrieved 7 October 2023.**^**Burbu, Dawa (2001)*China's Tibet Policy*, Routledge, ISBN 978-0-7007-0474-3, pp 100–124**^**Davidson, Lawrence (2012).*Cultural Genocide*. Rutgers University Press. pp. 89–111. ISBN 978-0-8135-5243-9. JSTOR j.ctt5hj5jx.- ^ **a**Samdup, Tseten (1993) Chinese population – Threat to Tibetan identity Archived 5 February 2009 at the Wayback Machine**b** **^**"Dalai Lama: 'Cultural genocide' behind self-immolations".*BBC News*. 7 November 2011. Archived from the original on 3 November 2019. Retrieved 12 September 2020.**^**T. G. 
Arya, Central Tibetan Administration,*China's 'ethnic unity' bill aimed at complete sinicization of the Tibetan plateau through ethnic cleansing: CTA Information Secretary*, (15 January 2020), https://tibet.net/chinas-ethnic-unity-bill-aimed-at-complete-sinicization-of-the-tibetan-plateau-through-ethnic-cleansing-cta-information-secretary/ Archived 10 October 2021 at the Wayback Machine ["China has waged unceasing campaigns at both central and local government level to aggressively consolidate its military occupation of Tibet in the last more than six decades. But this new state-sponsored regulation is seen as a desperately contemplated measure to curb the undiminishing defiance of the Tibetan people and their call for the protection of their identity, for freedom, human rights and for the honourable return of His Holiness the Dalai Lama to Tibet." "Central Tibetan Administration's Information Secretary Mr T.G. Arya condemned the new ethnic identity law, calling it a measure of ethnic cleansing aimed at complete sinicization of the Tibetan plateau. The Secretary also criticised the legislation as a gross violation of the international law and the Chinese constitution." " "What China could not achieve through the sixty years of occupation and repression, now they are trying to achieve it through repressive law. The law aims to achieve complete sinicization of the Tibetan plateau through ethnic cleansing. China finds Tibetan language, religion and culture as the main barrier to achieving complete control over the land," Secretary TG Arya told the Tibet News Bureau.]**^**Burbu, Dawa (2001)*China's Tibet Policy*, Routledge, ISBN 978-0-7007-0474-3, pp 86–99**^**Woser (10 March 2011). "Three Provinces of the Snowland, Losar Tashi Delek!".*Phayul*. Archived from the original on 4 October 2012. Retrieved 24 March 2011.**^**Blo brtan rdo rje, Charles Kevin Stuart (2008).*Life and Marriage in Skya Rgya, a Tibetan Village*. YBK Publishers, Inc. p. xiv. ISBN 978-0-9800508-4-4. Archived from the original on 15 June 2024. Retrieved 28 June 2010.**^**Schaik 2011, p. 208**^**Schaik 2011, p. 209**^**Schaik 2011, p. 211**^**Schaik 2011, p. 212**^**Schaik 2011, p. 213**^**Schaik 2011, p. 214**^**Schaik 2011, p. 215**^**Schaik 2011, p. 218**^**Laird, Thomas (2006).*The story of Tibet: Conversations with the Dalai Lama*. London: Atlantic Books. p. 307. ISBN 978-1-84354-144-8. Archived from the original on 15 June 2024. Retrieved 29 July 2015.- ^ **a**"Tibet profile - Overview".**b***BBC News*. 13 November 2014. Archived from the original on 12 December 2012. Retrieved 19 September 2015. **^**"50 years of Colonization".*Tibetan Youth Congress*. Archived from the original on 10 September 2015. Retrieved 19 September 2015.**^**(in German) Forster-Latsch, H. and Renz S., P. L. in Geschichte und Politik Tibets/*Tibet unter chinesischer Herrschaft*Archived 13 February 2011 at the Wayback Machine.**^**(in German) Horst Südkamp (1998),*Breviarium der tibetischen Geschichte*, p. 191.**^**(in German) Golzio, Karl-Heinz and Bandini, Pietro (2002),*Die vierzehn Wiedergeburten des Dalai Lama*, Scherz Verlag / Otto Wilhelm Barth, Bern / München, ISBN 3-502-61002-9.**^**Shakya, Tsering (1999)*The Dragon in the Land of Snows*, Columbia University Press, ISBN 978-0-7126-6533-9**^**Stein, Rolf (1972)*Tibetan Civilization*, Stanford University Press, ISBN 0-8047-0806-1**^**MacFarquhar, Roderick & Michael Schoenhals (2006)*Mao's Last Revolution*, Harvard University Press, ISBN 978-0-674-02332-1, p. 
102**^**Siling, Luo (3 October 2016). "The Cultural Revolution in Tibet: A Photographic Record".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Southerland, Dan (9 August 2016). "After 50 years, Tibetans Recall the Cultural Revolution". Radio Free Asia. Archived from the original on 10 July 2019. Retrieved 31 January 2019.**^**Peter Hessler (February 1999). "Tibet Through Chinese Eyes". The Atlantic. Archived from the original on 7 March 2012. Retrieved 29 February 2012.**^**Tanzen Lhundup, Ma Rong (25–26 August 2006). "Temporary Labor Migration in Urban Lhasa in 2005". China Tibetology Network. Archived from the original on 19 August 2011. Retrieved 29 February 2012.**^**"Cultural shift".*BBC News*. Archived from the original on 25 July 2013. Retrieved 18 April 2013.**^**Pinteric, Uros (2003): http://www.sidip.org/SIDIP_files/pintericu_tibet.pdf[*permanent dead link*]*International Status of Tibet*, Association for Innovative Political Science, University of Ljubljana, Slovenia.**^**Wong, Edward (5 June 2009). "Report Says Valid Grievances at Root of Tibet Unrest".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Wong, Edward (24 July 2010). "China's Money and Migrants Pour into Tibet".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**"Beijing renews tirade".*Sunday Pioneer*. 8 March 2011. Archived from the original on 11 March 2011. Retrieved 24 March 2011.**^**Fitzherbert, George (20 June 2008). "Land of the Clouds". The Times Literary supplement. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Corell, Anastasia (13 December 2013). "Tibet's Tense New Reality". The Atlantic. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Edward Wong (24 July 2010). "'China's Money and Migrants Pour into Tibet'".*The New York Times*. Archived from the original on 1 October 2013. Retrieved 29 February 2012.**^**Xinhua News Agency (24 August 2005). New height of world's railway born in Tibet. Retrieved 25 August 2005.**^**Damian Grammaticas (15 July 2010). "Is development killing Tibet's way of life?". BBC. Archived from the original on 20 November 2011. Retrieved 29 February 2012.**^**Hannü. (2008).*Dialogues Tibetan dialogues Han*. [Erscheinungsort nicht ermittelbar]: Hannü. ISBN 978-988-97999-3-9. OCLC 917425693.**^**"Tibetan Buddhism must be tailored to fit Chinese society, says Xi Jinping".*Apple Daily*. 30 August 2020. Retrieved 30 August 2020.[*permanent dead link*]**^**"China pushes adoption of language, cultural symbols in Tibet".*Associated Press*. 19 August 2021. Archived from the original on 24 April 2022. Retrieved 22 August 2021.**^**Buckley, Chris (11 March 2015). "China's Tensions With Dalai Lama Spill into the Afterlife".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Braine, Theresa (18 May 2020). "China claims boy seized 25 years ago after Dalai Lama chose him as Tibetan spiritual leader is 31 and has a job". Barron's. Archived from the original on 5 July 2020. Retrieved 5 July 2020.**^**"Pompeo Demands China Reveal Panchen Lama 'Immediately'". Aljazeera. 19 May 2020. Archived from the original on 5 July 2020. Retrieved 5 July 2020.**^**Wong, Edward (6 June 2009). "China Creates Specter of Dueling Dalai Lamas".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Halder, Bill (16 October 2019). 
"China Weaponizes Education to Control Tibet". Ozy. Archived from the original on 22 October 2019. Retrieved 22 October 2019.**^**Lhuboom (17 June 2020). "China Orders Prayer Flags Taken Down in Tibet in an Assault on Culture, Faith". Radio Free Asia. Archived from the original on 5 July 2020. Retrieved 4 July 2020.**^**"China Expands Its Clampdown in Tibet: Report". Radio Free Asia. 16 June 2020. Archived from the original on 1 September 2020. Retrieved 5 July 2020.**^**"Human Rights Situation in Tibet 2019 Annual Report" (PDF).*tchrd.org*. Tibetan Centre for Human Rights and Democracy. Archived (PDF) from the original on 5 July 2020. Retrieved 5 July 2020.**^**"IRF Annual Report" (PDF).*www.uscirf.gov*. 2020. Archived (PDF) from the original on 3 August 2020. Retrieved 30 August 2020.**^**Zenz, Adrian (2010). "Beyond Assimilation: The Tibetanisation of Tibetan Education in Qinghai".*Inner Asia*.**12**(2): 293–315. doi:10.1163/000000010794983478. ISSN 1464-8172. JSTOR 23615125.**^**Zenz, Adrian (2014).*Tibetanness under Threat? Neo-Integrationism, Minority Education and Career Strategies in Qinghai, P.R. China*. Global Oriental. ISBN 978-90-04-25796-2.**^**Tatlow, Didi Kirsten (14 December 2012). "An Online Plea to China's Leader to Save Tibet's Culture By".*New York Times*. Archived from the original on 15 June 2024. Retrieved 22 October 2019.**^**Wong, Edward (28 November 2015). "Tibetans Fight to Salvage Fading Culture in China".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Gelek, Lobsang (30 January 2019). "Tibetan Monasteries in Nangchen Banned From Teaching Language to Young Tibetans". Radio Free Asia. Archived from the original on 27 May 2019. Retrieved 27 May 2019.**^**"Prefecture in Qinghai to Drastically Cut Tibetan Language Education". Radio Free Asia. 16 May 2019. Archived from the original on 26 May 2019. Retrieved 27 May 2019.- ^ **a**"Why China takes young Tibetans from their families".**b***The Economist*. 13 June 2024. ISSN 0013-0613. Archived from the original on 13 June 2024. Retrieved 15 June 2024. **^**Kessel, Jonah M. (28 November 2015). "Tashi Wangchuk: A Tibetan's Journey for Justice".*New York Times*. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Wong, Edward (18 January 2017). "Rights Groups Ask China to Free Tibetan Education Advocate".*New York Times*. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Buckley, Chris (4 January 2018). "Tibetan Businessman Battles Separatism Charges in Chinese Court".*New York Times*. Archived from the original on 9 February 2019. Retrieved 31 January 2019.**^**Patranobis, Sutirtho (19 October 2019). "Tibetan graduates need to 'expose and criticise Dalai Lama' for Chinese government jobs". Hindustan Times. Archived from the original on 20 October 2019. Retrieved 22 October 2019.**^**Lobe Socktsang; Richard Finney. (9 April 2020). "Classroom Instruction Switch From Tibetan to Chinese in Ngaba Sparks Worry, Anger". Translated by Dorjee Damdul. Archived from the original on 12 April 2020. Retrieved 12 April 2020.**^**"Ethnic minorities expert on China's treatment of Uygurs, and Han chauvinism".*South China Morning Post*. 23 September 2024. Retrieved 25 September 2024.**^**Crowder, Nicole (11 August 2015). "Tibet's little-known nomadic culture, high on the 'Roof of the World'".*Washington Post*. Archived from the original on 15 June 2024. Retrieved 31 January 2019.**^**Lowry, Rachel (3 September 2015). 
"Inside the Quiet Lives of China's Disappearing Tibetan Nomads". Time. Archived from the original on 25 February 2019. Retrieved 31 January 2019.**^**Jacobs, Andrew (11 July 2015). "China Fences in Its Nomads, and an Ancient Life Withers".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**""They Say We Should Be Grateful" Mass Rehousing and Relocation Programs in Tibetan Areas of China".*hrw.org*. Human Rights Watch. 27 June 2013. Archived from the original on 10 September 2019. Retrieved 31 January 2019.**^**Hatton, Celia (27 June 2013). "China resettles two million Tibetans, says Human Rights Watch". BBC. Archived from the original on 9 December 2018. Retrieved 31 January 2019.**^**Jacobs, Andrew (10 June 2011). "Ethnic Protests in China Have Lengthy Roots".*New York Times*. Archived from the original on 31 January 2019. Retrieved 30 January 2019.**^**Tenzin, Kunsang (15 June 2017). "Tibetan Nomads Forced From Resettlement Towns to Make Way For Development". Radio Free Asia. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Tenzin, Kunsang (6 October 2017). "Tibetan Nomads Forced to Beg After Being Evicted From Their Homes". Radio Free Asia. Archived from the original on 31 January 2019. Retrieved 31 January 2019.**^**Roland Barraux, Histoire des Dalaï Lamas – Quatorze reflets sur le Lac des Visions, Albin Michel, 1993, reprinted in 2002, Albin Michel, ISBN 2-226-13317-8.**^**Liu Jiangqiang, Preserving Lhasa's history (part one), in*Chinadialogue*, 13 October 2006.**^**Emily T. Yeh, Living Together in Lhasa. Ethnic Relations, Coercive Amity, and Subaltern Cosmopolitanism Archived 22 February 2012 at the Wayback Machine: "Lhasa's 1950s population is also frequently estimated at around thirty thousand. At that time the city was a densely packed warren of alleyways branching off from the Barkor path, only three square kilometers in area. The Potala Palace and the village of Zhöl below it were considered separate from the city."**^**Thomas H. Hahn, Urban Planning in Lhasa. The traditional urban fabric, contemporary practices and future visions Archived 31 March 2012 at the Wayback Machine, Presentation Given at the College of Architecture, Fanzhu University, 21 October 2008.**^**Fjeld, Heidi (2003).*Commoners and Nobles: Hereditary Divisions in Tibet*. Copenhagen: NIAS. p. 18. ISBN 978-87-7694-524-4. OCLC 758384977.**^***Les droits de l'homme Apostrophes*, A2 – 21 April 1989 – 01h25m56s, Web site of the INA: http://www.ina.fr/archivespourtous/index.php?vue=notice&from=fulltext&full=Salonique&num_notice=5&total_notices=8 Archived 28 November 2008 at the Wayback Machine**^**"Archived copy". Archived from the original on 24 January 2009. Retrieved 30 January 2009.`{{cite web}}` : CS1 maint: archived copy as title (link) 10 March Archive**^**"'Eighty killed' in Tibetan unrest". BBC. 16 March 2008. Archived from the original on 20 March 2008. Retrieved 2 March 2017.**^**Robert Barnett, Seven Questions: What Tibetans Want Archived 19 December 2017 at the Wayback Machine,*Foreign Policy*, March 2008.**^**Robert Barnett, Thunder from Tibet, a review of Pico Iyer's book,*The Open Road: The Global Journey of the Fourteenth Dalai Lama*, Knopf, 275 p. Archived 11 September 2015 at the Wayback Machine, in*The New York Review of Books*, vol. 55, number 9. 29 May 2008. ### Sources [edit]- Schaik, Sam (2011). *Tibet: A History*. New Haven: Yale University Press Publications. ISBN 978-0-300-15404-7. 
## Further reading [edit]- Fischer, Andrew M. *Urban Fault Lines in Shangri-La: Population and economic foundations of inter-ethnic conflict in the Tibetan areas of Western China*Crisis States Working Paper No.42, 2004. London: Crisis States Research Centre (CSRC). - Kuzmin, S.L. *Hidden Tibet: History of Independence and Occupation*. Dharamsala, LTWA, 2011. - Tibet - Cultural assimilation - Internal migration - Ethnic cleansing in Asia - Racism in China - Political repression in China - Human rights of ethnic minorities in China - Linguistic discrimination - Language policy in Tibet - Human rights abuses in China - Chinese imperialism - Settler colonialism in Asia - Cultural genocide - Genocide of indigenous peoples in Asia
true
true
true
null
2024-10-12 00:00:00
2008-08-07 00:00:00
https://upload.wikimedia…otala_Square.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
19,012,152
https://www.bbc.co.uk/programmes/articles/32ytpvFgnLxtZNynYJYWbP6/the-original-emoji-why-the-scream-is-still-an-icon-for-today
BBC Arts - The original emoji: Why The Scream is still an icon for today
null
# The original emoji: Why The Scream is still an icon for today 11 April 2019 #### Norwegian artist Edvard Munch was the tortured genius behind one of the best-known images in history. TOM CHURCHILL looks at the enduring appeal of The Scream, now on display at the British Museum, and how it reflects the anxieties of our age. Most of art’s iconic masterpieces are renowned for their beauty. Think Leonardo’s smiling Mona Lisa, Vermeer’s luminous Girl with a Pearl Earring and Botticelli’s nude goddess, Venus. But there’s one glaring exception in the list of all-time greats: Edvard Munch's The Scream. With its pale, hairless figure holding its head in its hands, mouth agape in a tortured howl, it was perhaps an unlikely candidate to become one of the most recognisable and reproduced images of all time. Yet this visceral, doom-laden work – a reflection of the Norwegian artist’s troubled state of mind at the end of the 19th Century – has grown to permeate every aspect of popular culture, from film and TV to memes and tattoos. You’ll find adaptations and parodies of it on student bedroom walls, on protesters’ placards and in political cartoons. It’s the first painting to have spawned its own emoji – the ‘face screaming in fear’. It has become the ultimate image of existential crisis, the original Nordic Noir. “One evening I was walking along a path; the city was on one side and the fjord below,” Munch wrote, describing his inspiration for the painting. “I felt tired and ill. I stopped and looked out over the fjord — the sun was setting, and the clouds turning blood red. “I sensed a scream passing through nature; it seemed to me that I heard the scream. I painted this picture, painted the clouds as actual blood. The colour shrieked. This became The Scream.” An 1895 lithograph print of the work, one of several versions Munch created, is the main draw of a new exhibition, Edvard Munch: Love and Angst, at the British Museum in April. It’s the largest show of Munch’s prints in the UK for 45 years, and will offer a revealing look into his turbulent psyche. Born in the village of Ådalsbruk in 1863 and brought up in Kristiania (renamed Oslo in 1924), Munch’s life was shaped by a strict upbringing in an oppressively religious household, marked by tragedy and emotional stress. His mother and older sister both died before Munch turned 14, his father died 12 years later and another sister was committed to an asylum, suffering from bipolar disorder. Munch himself also struggled with his mental health throughout his life. “For as long as I can remember I have suffered from a deep feeling of anxiety which I have tried to express in my art,” Munch wrote. “Without this anxiety and illness I would have been like a ship without a rudder.” He studied at the Royal School of Art and Design in Kristiania before travelling to Paris and Berlin, embracing a bohemian lifestyle, cultivating a network of fellow artists and thinkers, and developing a style that broke with artistic tradition. Munch became increasingly preoccupied with the tensions caused by urbanisation, advances in science and the moral dilemmas of a world on the brink of great change. In 1893 he painted what would be the first of four versions of The Scream, which is today housed at the National Gallery of Norway in Oslo. The painting was stolen in 1994 but recovered undamaged shortly afterwards in a sting operation. 
The city’s Munch Museum houses a pastel version from the same year, along with a second painted version from 1910 – which was also stolen, in 2004, and also later recovered. A second pastel version, dating from 1895, is the only one of the four in private hands, and sold for $120 million at auction in 2012 – a record at the time. Finally, a lithograph stone was produced in 1895 – and it is a rare black-and-white print from this that the British Museum will display. Theories abound as to the influences behind key elements of the work. The red sky has been linked to the effects of the Krakatoa volcano in 1883 which led to spectacular colouring in the skies above Europe for many months; as well as to the phenomenon of Mother of Pearl clouds. The central figure has been linked to a Peruvian mummy Munch may have seen at the 1889 Exposition Universelle in Paris, and also to a giant Edison light bulb displayed at the same event. Author Kelly Grovier suggests: "Given Munch's anxieties about modern culture, it is easy to see how the newly patented symbol of science, the light bulb, may have merged in the artist's mind with the mien of the evocative mummy, an unsettling relic of a civilization long since extinguished." But it is the ambiguous, unknowable nature of this strange figure which is the key to The Scream's universal appeal, argues art critic Jonathan Jones. He writes in The Guardian: "By removing all individuality from this being, Munch allows anyone to inhabit it. He draws a glove puppet for the soul." And if Munch's work is indeed an expression of his anxiety at a turning point in history, in a world increasingly cut loose from old traditions, there are clear parallels in the world of today. This is surely why The Scream retains its power despite its ubiquity: it's a mirror of our own contemporary fears. Inside, aren't we all screaming too? **Edvard Munch: Love and Angst is at the British Museum, London, from 11 April – 21 July 2019.** *A version of this article was published in January 2019.* Tattoo artist Lucky Soul Tattoo posted an image of a Scream tattoo on Instagram, one of many examples of Munch-inspired body art.
true
true
true
Edvard Munch's painting has become one of the most ubiquitous images on the planet.
2024-10-12 00:00:00
2019-01-23 00:00:00
https://ichef.bbci.co.uk…675/p06yw4tq.jpg
website
bbc.co.uk
BBC
null
null
35,684,709
https://blog.ploeh.dk/2023/04/24/are-pull-requests-bad-because-they-originate-from-open-source-development/
Are pull requests bad because they originate from open-source development?
Mark Seemann
# Are pull requests bad because they originate from open-source development? by Mark Seemann *I don't think so, and at least find the argument flawed.* Increasingly I come across a quote that goes like this: Pull requests were invented for open source projects where you want to gatekeep changes from people you don't know and don't trust to change the code safely. If you're wondering where that 'quote' comes from, then read on. I'm not trying to stand up a straw man, but I had to do a bit of digging in order to find the source of what almost seems like a meme. ### Quote investigation # The quote is usually attributed to Dave Farley, who is a software luminary that I respect tremendously. Even with the attribution, the source is typically missing, but after asking around, Mitja Bezenšek pointed me in the right direction. The source is most likely a video, from which I've transcribed a longer passage: "Pull requests were invented to gatekeep access to open-source projects. In open source, it's very common that not everyone is given free access to changing the code, so contributors will issue a pull request so that a trusted person can then approve the change. "I think this is really bad way to organise a development team. "If you can't trust your team mates to make changes carefully, then your version control system is not going to fix that for you." I've made an effort to transcribe as faithfully as possible, but if you really want to be sure what Dave Farley said, watch the video. The quote comes twelve minutes in. ### My biases # I agree that the argument sounds compelling, but I find it flawed. Before I proceed to put forward my arguments I want to make my own biases clear. Arguing against someone like Dave Farley is not something I take lightly. As far as I can tell, he's worked on systems more impressive than any I can showcase. I also think he has more industry experience than I have. That doesn't necessarily make him right, but on the other hand, why should you side with me, with my less impressive résumé? My objective is not to attack Dave Farley, or any other person for that matter. My agenda is the argument itself. I do, however, find it intellectually honest to cite sources, with the associated risk that my argument may look like a personal attack. To steelman my opponent, then, I'll try to put my own biases on display. To the degree I'm aware of them. I prefer pull requests over pair and ensemble programming. I've tried all three, and I do admit that real-time collaboration has obvious advantages, but I find pairing or ensemble programming exhausting. Since I read *Quiet* a decade ago, I've been alert to the introspective side of my personality. Although I agree with Brian Marick that one should be wary of understanding personality traits as destiny, I mostly prefer solo activities. Increasingly, since I became self-employed, I've arranged my life to maximise the time I can work from home. The exercise regimen I've chosen for myself is independent of other people: I run, and lift weights at home. You may have noticed that I like writing. I like reading as well. And, hardly surprising, I prefer writing code in splendid isolation. Even so, I find it perfectly possible to have meaningful relationships with other people. After all, I've been married to the same woman for decades, my (mostly) grown kids haven't fled from home, and I have friends that I've known for decades. 
In a toot that I can no longer find, Brian Marick asked (and I paraphrase from memory): *If you've tried a technique and didn't like it, what would it take to make you like it?* As a self-professed introvert, social interaction *does* tire me, but I still enjoy hanging out with friends or family. What makes those interactions different? Well, often, there's good food and wine involved. Perhaps ensemble programming would work better for me with a bottle of Champagne. Other forces influence my preferences as well. I like the flexibility provided by asynchrony, and similarly dislike having to be somewhere at a specific time. Having to be somewhere also involves transporting myself there, which I also don't appreciate. In short, I prefer pull requests over pairing and ensemble programming. All of that, however, is just my subjective opinion, and that's not an argument. ### Counter-examples # The above tirade about my biases is *not* a refutation of Dave Farley's argument. Rather, I wanted to put my own blind spots on display. If you suspect me of motivated reasoning, that just might be the case. All that said, I want to challenge the argument. First, it includes an appeal to *trust*, which is a line of reasoning with which I don't agree. You can't trust your colleagues, just like you can't trust yourself. A code review serves more purposes than keeping malicious actors out of the code base. It also helps catch mistakes, security issues, or misunderstandings. It can also improve shared understanding of common goals and standards. Yes, this is *also* possible with other means, such as pair or ensemble programming, but from that, it doesn't follow that code reviews *can't* do that. They can. I've lived that dream. If you take away the appeal to trust, though, there isn't much left of the argument. What remains is essentially: *Pull requests were invented to solve a particular problem in open-source development. Internal software development is not open source. Pull requests are bad for internal software development.* That an invention was done in one context, however, doesn't preclude it from being useful in another. Git was invented to address an open-source problem. Should we stop using Git for internal software development? Solar panels were originally developed for satellites and space probes. Does that mean that we shouldn't use them on Earth? GPS was invented for use by the US military. Does that make civilian use wrong? ### Are pull requests bad? # I find the original *argument* logically flawed, but if I insist on logic, I'm also obliged to admit that my possible-world counter-examples don't prove that pull requests are good. Dave Farley's claim may still turn out to be true. Not because of the argument he gives, but perhaps for other reasons. I think I understand where the dislike of pull requests come from. As they are often practised, pull requests can sit for days with no-one looking at them. This creates unnecessary delays. If this is the only way you know of working with pull requests, no wonder you don't like them. I advocate a more agile workflow for pull requests. I consider that congruent with my view on agile development. ### Conclusion # Pull requests are often misused, but they don't have to be. On the other hand, that's just my experience and subjective preference. Dave Farley has argued that pull requests are a bad way to organise a development team. I've argued that the argument is logically flawed. The question remains unsettled. 
I've attempted to refute one particular argument, and even if you accept my counter-examples, pull requests may still be bad. Or good. ## Comments Another important angle, for me, is that pull requests are not merely code review. They can also be a way of enforcing a variety of automated checks, e.g. running tests or linting. This enforces quality too - so I'd argue for using pull requests even if you don't do peer review (I do on my hobby projects at least, for the exact reasons you mentioned in On trust in software development - I don't trust myself to be perfect.) Casper, thank you for writing. Indeed, other readers have made similar observations on other channels (Twitter, Mastodon). That, too, can be a benefit. In order to once more steel-man 'the other side', they'd probably say that you can run automated checks in your Continuous Delivery pipeline, and halt it if automated checks fail. When done this way, it's useful to be able to also run the same tests on your dev box. I consider that a good practice anyway.
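As a concrete illustration of the point about automated checks, here is a minimal sketch of a local helper that mirrors the kind of checks a pull-request gate might run. It assumes a Python project with pytest and flake8 installed; the tool names and commands are illustrative assumptions, not a description of the setups discussed above.

```python
#!/usr/bin/env python3
"""Run locally the same checks a pull-request pipeline might enforce.

Assumes a Python project with pytest and flake8 on the PATH; substitute
whatever test runner and linter your actual pipeline uses.
"""
import subprocess
import sys

# Each entry is one gate: a human-readable name and the command to run.
CHECKS = [
    ("unit tests", ["pytest", "--quiet"]),
    ("lint", ["flake8", "."]),
]


def main() -> int:
    for name, command in CHECKS:
        print(f"Running {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"{name} failed; fix it before opening the pull request.")
            return result.returncode
    print("All checks passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Running such a script before pushing keeps the feedback loop short, while the pull-request pipeline remains the enforcing gate.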
true
true
true
I don't think so, and at least find the argument flawed.
2024-10-12 00:00:00
2023-04-24 00:00:00
null
null
null
Ploeh
null
null
6,865,918
http://yourstory.com/events-listing/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,898,096
http://ryanspoon.com/blog/2010/11/12/espn-ipad/
ESPN's iPad Experience: Choose Your Own Adventure — Ryan Spoon
Ryan Spoon
I write a lot about the importance of getting product, messaging and promotion "**in the river**" (in other words: making sure that messages are delivered inside the core experience and to the respective audience - good example by Facebook here). Here is a great example by ESPN. When you visit ESPN via the iPad, it presents you with three options:

1. Visit the iPad-optimized site (ideal for 3G usage)
2. Visit ESPN.com's full site
3. Download the new ESPN Scorecenter App for the iPad

The first two options were always present for iPad users, but now that they have an iPad app (that is pretty good, by the way), ESPN has decided to promote it to all iPad owners. This is a more effective marketing campaign than running site-wide banners on ESPN.com.
true
true
true
I write a lot about the importance of getting product, messaging and promotion " in the river " (in other words: making sure that messages are delivered inside the core experience and to the respective audience - good example by Facebook here ). Here is a great example by ESPN. When yo
2024-10-12 00:00:00
2010-11-12 00:00:00
null
article
ryanspoon.com
Ryan Spoon
null
null
16,349,650
https://www.iafrikan.com/2018/02/10/gabon-is-using-technology-to-fight-elephant-poaching/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
36,864,963
https://www.snipcss.com
SnipCSS - Free Chrome Extension to Extract CSS from any web component
Mark Rieck
SnipCSS works on ANY website. For hundreds of examples of what it can do, just check out our template directory: Call to Action, Cards, Content Block, Footer, Forms, Gallery, GDPR Accept Cookies, Grid, Hero Section, Navigation Menu, Popup Modals, Pricing Table, SaaS Sections, Sidebar Links, Sidebar Sections, Tables, Vertical Listing, Video Thumbnails, Headers. When you use SnipCSS, you only get the code needed for the section you select, including all of the CSS rules that apply to it. SnipCSS has the ability to replace any copyrighted text, images or SVG with AI-generated data or MIT-licensed icons. Machine code and scripts are copyrightable, but SnipCSS only copies HTML markup and CSS rules, which provides a unique opportunity for creative reuse. Even so, it remains the user's responsibility to use SnipCSS conscientiously and ethically. Other features include:

- Replace all CSS classes and ids used in the HTML/CSS snippet with unique ones
- Use Stable Diffusion to replace any existing copyrighted images in the design, or use templates to inject data or your own images
- Remove many HTML properties and CSS declarations that are unnecessary when recreating a section of a website (the source and generated CSS differ because of this processing)
- Replace colors or proprietary fonts
- Collect all media queries by running at multiple resolutions
- Select specific children or multiple elements

There is no reason to copy a large portion of any individual website because snippets can be scoped; that's why SnipCSS is the best tool to make small reusable components. No one wants extra junk in their code, just clean CSS, which the Pro Version provides. All features of templates in the directory are unlocked with a Pro Membership. Unlock the real power of SnipCSS with the Pro Version. SnipCSS is helping thousands of developers all over the world quickly build web UI. The Snip Kiwi pecks away at a page to collect styles, and then you'll have a new web component. Many users don't believe it will work until they try it. Free Extension
true
true
true
SnipCSS allows you to extract only the associated html/css/images with a specific element on any webpage.
2024-10-12 00:00:00
2023-01-01 00:00:00
https://www.snipcss.com/…pcss_ogimage.png
website
snipcss.com
SnipCSS
null
null
7,582,212
https://github.com/kashev/nextWAVE
GitHub - kashev/nextWAVE: A project for HackIllinois 2014
Kashev
``` nextWAVE A project for HackIllinois 2014 Dario Aranguiz :: [email protected] Kashev Dalmia :: [email protected] Brady Salz :: [email protected] Ahmed Suhyl :: [email protected] ``` **nextWAVE** is a smart microwave built for the hardware hackathon section of HackIllinois. nextWAVE solves the 'problem' of not knowing how long to microwave your food. We built an Android app that allows you to scan barcodes, and look up cook times in a Firebase Database. The app can be launched using an NFC tag. Then, the app can turn on the microwave via wifi using a Spark Core Microcontroller. While your food is cooking, the cook time is displayed on a Pebble Smartwatch App. Finally, when the food is done cooking, it will open itself and play "Funky Town". You can view all the source code here and see a video of the working microwave here. Team Brady Rocks built nextWAVE in 36 hours. **Dario Aranguiz**- Android App, DB Interaction**Kashev Dalmia**- Pebble App, Website, Android Layout**Brady Salz**- Hardware, Hardware, Hardware, Embedded Software**Ahmed Suhyl**- Embedded Software, Test, Hardware Special thanks to **Isaac DuPree** for designing the logo. "This is so dumb you fat kids used sellotape to hold this radioactive machine together you are going to get cancer what do they teach you at that school how to eat your lunch? Use your god given brane for once #realtalk #obliterated " - Miles Anderson "As the "techie culture" crawls ever so further up its own anus I crave the day when the ISIS will celebrate victory over the smoldering ruins of our failed civilization." - kortexsirvasil "not really more innovative than a normal microwave." - Dragam - Working Microwave - Manual Switching - Relay Controlled Switching - Add NFC Launching Pads - Decorate - Working Spark Core Code - Switch a pin on & off - Control a Screen - Working Android App - Send Commands to the Spark - Send Remaining Cook Time to the Pebble - Scan Barcodes - Communicate with FireBase Backend - Facebook Login - Pebble App - Get and Display Remaining Cook Time - Website - Looks Nice - Link to Github Code - Food Stream
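For readers curious how the "turn on the microwave via wifi" step could look in code, below is a minimal sketch of the kind of HTTP call a client might make to a Spark Core cloud function. The endpoint shape is patterned on the Spark/Particle cloud's function-call API, but the device ID, access token, function name, and payload field names are hypothetical placeholders, not the project's actual code.

```python
"""Hypothetical sketch: ask a Spark Core to start the microwave.

All identifiers below are placeholders; check the Spark/Particle cloud
documentation for the exact endpoint and payload your firmware expects.
"""
import requests

DEVICE_ID = "0123456789abcdef"      # placeholder device ID
ACCESS_TOKEN = "your-access-token"  # placeholder OAuth token
FUNCTION = "startCooking"           # assumed cloud function exposed by the firmware


def start_microwave(seconds: int) -> bool:
    url = f"https://api.spark.io/v1/devices/{DEVICE_ID}/{FUNCTION}"
    response = requests.post(
        url,
        # Field names follow the older Spark docs and may need adjusting.
        data={"access_token": ACCESS_TOKEN, "args": str(seconds)},
        timeout=10,
    )
    response.raise_for_status()
    # The cloud echoes back the integer returned by the firmware function;
    # by convention here, a non-negative value means the command was accepted.
    return response.json().get("return_value", -1) >= 0


if __name__ == "__main__":
    if start_microwave(90):
        print("Microwave started for 90 seconds")
```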
true
true
true
A project for HackIllinois 2014. Contribute to kashev/nextWAVE development by creating an account on GitHub.
2024-10-12 00:00:00
2014-04-12 00:00:00
https://opengraph.githubassets.com/3f4016c0fdbdb78005f57f4f2517cbb5dd3d71376c0404f6178a91ca7cb587a0/kashev/nextWAVE
object
github.com
GitHub
null
null
19,984,388
https://medium.com/syncedreview/neurips-2019-will-host-minecraft-reinforcement-learning-competition-146e8bc8da1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,283,100
https://www.bloomberg.com/news/articles/2019-10-15/pictures-raise-specter-of-fake-evidence-in-737-max-crash-probe
Bloomberg
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
5,951,748
http://roe.myedu.com/student-journey/moocs-pre-reqs-and-car-crashes/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,614,070
http://runway.blogs.nytimes.com/2014/11/12/mark-zuckerberg-adopts-obamas-approach-to-dressing/?ref=technology
Mark Zuckerberg Adopts Obama's Approach to Dressing
Vanessa Friedman
But wait, you say upon reading this headline. Doesn’t Mark Zuckerberg, the Facebook chief executive, wear a gray T-shirt every day, while President Obama wears a dark gray or blue suit? Well, yes, but conceptually, it turns out the two are on the same page. Last week, Mr. Zuckerberg held his first town-hall-style meeting at Facebook headquarters. Among the challenging questions about product changes and how the company decides what to work on (or not), one enterprising questioner asked about the ever-present gray T. Here was Mr. Zuckerberg’s answer: “It’s a very simple question in a way, but it really speaks to how we think about the community here. I really want to clear my life to make it so that I have to make as few decisions as possible about anything except how to best serve this community. And there’s actually a bunch of psychology theory that even making small decisions around what you wear, or what you eat for breakfast, or things like that, they kind of make you tired and consume your energy. And my view is, I’m in this really lucky position where I get to wake up every day and help serve more than a billion people. And I feel like I’m not doing my job if I spend any of my energy on things that are silly or frivolous about my life, and that way, I can dedicate all my energy to building the best products and services and helping us reach our goal and achieve our mission.” Yada, yada, yada about connecting the whole world, and then: “So even though it kind of sounds silly — that that’s my reason for wearing a gray T-shirt every day — it also is true.” He does it to keep his mind clear! I’m not going to quibble about the “frivolous” comment here, or note that Mr. Zuckerberg doesn’t address the question of why he likes gray T-shirts above all, as opposed to, say, striped T-shirts or dress shirts, or point out that the rather lengthy discourse above demonstrates quite a lot of thought about clothes, as opposed to a little. But I would like to highlight its similarities to the following statement made by President Obama to Michael Lewis in a 2012 Vanity Fair profile: “You’ll see I wear only gray or blue suits,” he said. “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.” I feel a trend in executive dressing rationales coming on.
true
true
true
Wearing the same T-shirts or suits every day may leave time for the really big decisions.
2024-10-12 00:00:00
2014-11-12 00:00:00
https://static01.nyt.com…eenByNine600.jpg
article
nytimes.com
On the Runway Blog
null
null
30,710,908
https://github.com/kdeldycke/awesome-falsehood
GitHub - kdeldycke/awesome-falsehood: 😱 Falsehoods Programmers Believe in
Kdeldycke
**Yᴏᴜʀ Pʀᴏᴅᴜᴄᴛ ʜᴇʀᴇ!** Add a link to your company or project here: purchase a GitHub sponsorship. *The logic of the world is prior to all truth and falsehood.* — Ludwig Wittgenstein[1] A curated list of falsehoods programmers believe in. A *falsehood* is an ** idea that you initially believed was true**, but in reality, it is **proven to be false**. E.g. of an *idea*: valid email address exactly has one `@` character. So, you will use this rule to implement your email-field validation logic. Right? Wrong! The *reality* is: emails can have multiple `@` chars. Therefore your implementation should allow this. The initial *idea* is a falsehood you believed in. The *falsehood* articles listed below will have a comprehensive list of those false-beliefs that you should be aware of, to help you become a better programmer. - Meta - Arts - Business - Cryptocurrency - Dates and Time - Education - Emails - Geography - Human Identity - Internationalization - Management - Multimedia - Networks - Phone Numbers - Postal Addresses - Science - Society - Software Engineering - Transportation - Typography - Video Games - Web - Falsehoods Programmers Believe - A brief list of common falsehoods. A great overview and quick introduction into the world of falsehoods. - Falsehoods about Programming - A humbling and fun list on programming and programmers themselves. - Falsehoods about Falsehoods Lists - Meta commentary on how these falsehoods shouldn't be handled. - Falsehoods about Music - False assumption that might be made in codifying music. - Falsehoods about Art - Common misconceptions about art. - Falsehoods about Online Shopping - Covers prices, currencies and inventory. - Falsehoods about Prices - Covers currencies, amounts and localization. - Falsehoods about IBANs - International Bank Account Numbers are not international. - Falsehoods about Economics - Economics are not simple or rational. - Decimal Point Error in Etsy's Accounting System - The importance of types in accounting software: missing the decimal point ends up with 100x over-charges. - Twenty five thousand dollars of funny money - Same error as above at Google Ads, or the danger of separating your pennies from your dollars, where $250 internal coupons turned into $25,000. My advice: get rid of integers and floats for monetary values. Use decimals. Or fallback to strings and parse them, don't validate. - Characters `<` and`>` in company names lead to XSS attacks - Because UK allows companies to be registered with special characters, a hacker leveraged them to register`\"><SCRIPT SRC=MJT.XSS.HT></SCRIPT> LTD` , but also`; DROP TABLE "COMPANIES";-- LTD` ,`BETTS & TWINE LTD` and`SAFDASD & SFSAF \' SFDAASF\" LTD` . - Minutiae of company names - How the rules of the State of Delaware and the IRS does not intersects. - CLDR currency definitions - Currency validity date ranges overlap due to revolts, invasions, new constitutions, and slow planned adoption. `tax` - A PHP 5.4+ tax management library. - Falsehoods about Bitcoin - A list of mistaken perspectives on Bitcoin. - Falsehoods about Ethereum - Misconceptions and common pitfalls in contract programming. - Falsehoods about Time - Seminal article on dates and time. - More Falsehoods about Time - Part. 2 of the article above. - Falsehoods about Time and Time Zones - Another takes on time-related falsehoods, with an emphasis on time zones. - Critique of Falsehoods about Time - Takes on the first article above and provides an explanation of each falsehood, with more context and external resources. 
- Falsehoods about Unix Time - Mind the leap second! - Falsehoods about Time Zones - Has some nice points regarding the edge-cases of DST transitions. - Your Calendrical Fallacy Is Thinking… - List covering intercalation and cultural influence, made by a community of iOS and macOS developers. - Time Zone Database - Code and data that represent the history of local time for many representative locations around the globe. - The Long, Painful History of Time - Most of the idiosyncrasies in timekeeping can find an explanation in history. - You Advocate a Calendar Reform - Your idea will not work. This article tells you why. - So You Want to Abolish Time Zones - Abolishing timezones may sound like a good idea, but there are quite a few complications that make it not quite so. - The Problem with Time & Timezones - A video about why you should never, ever deal with timezones if you can help it. - $26,000 Overcollection by Labor Department - The consequence of wrong calendar accounting. - RFC-3339 vs ISO-8601 - An giant list of formats from the two standards, how they overlaps, and live examples. - ISO-8601, `YYYY` ,`yyyy` , and why your year may be wrong - String formatting of date is hard. - UTC is Enough for everyone, right? - There are edge cases about dates and time (specifically UTC) that you probably haven't thought of. - Storing UTC is not a silver bullet - “Just store dates in UTC” is not always the right approach. - How to choose between UT1, TAI and UTC - Depends on your priorities between SI seconds, earth rotation sync, leap seconds avoidance. - Why is subtracting these two times (in 1927) giving a strange result? - Infamous Stack Overflow answer about both complicated historical timezones, and how historical dates can be re-interpreted by newer versions of software. - Critical and Significant Dates - From Y2K to the overflow of 32-bit seconds from Unix epoch, a list of special date to watch for depending on the system. - “I'm going to a commune in Vermont and will deal with no unit of time shorter than a season.” - Is the note left on his terminal by a quitting engineer in the 70s, after too much effort toiling away on sub-second timing concerns. Source: The Soul of a New Machine. - Falsehoods CS Students (Still) Believe Upon Graduating - A list of things (not only) computer science students tend to erroneously and at times surprisingly believe even though they (probably) should know better. - Postdoc myths - “Lots of things are said, written and believed about postdoctoral researchers that are simply not true.” - Falsehoods about Email - On addresses, content and delivery. - I Knew How to Validate an Email Address Until I Read the RFC - Provides intricate examples that are unsuspected valid email addresses according the RFC-822. - So you think you can validate email addresses (FOSDEM 2018) - Presentation of edge-case email addresses and why you should not use regex to parse them. - Your E-Mail Validation Logic is Wrong - A summary of the various, surprising things that are allowed in an email address. `libvldmail` - A library that implements RFC-based checks for e-mail addresses. - Falsehoods about Geography - Takes on places, their names and locations. - Falsehoods about Maps - Covers coordinates, projection and GIS. - I Hate Coordinate Systems - A guide for geospatial practitioners on diagnosing and fixing common issues with coordinate systems. 
- Top 5 most insane kanji place names in Japan - “There's one special group of kanji that's hard even for Japanese people to read: place names.” - Falsehoods about Names - The article that started it all. - Falsehoods about Names – With Examples - A revisited version of the article above, this time with detailed explanations. - Falsehoods about Biometrics - Fingerprints are not unique. - Falsehoods about Families - You can't really define a family with strict rules. - Falsehoods about Gender: #1 & #2 - Gender is part of human identity and has its own subtleties. - Falsehoods about Me - Issues at the intersection of names and gender and internationalization. - Gay Marriage: The Database Engineering Perspective - How to store a marriage in a database while addressing most of the falsehoods about gender, naming and relationships. - Personal Names Around the World - How do people's names differ around the world, and what are the implications for the Web? - XKCD #327: Exploits of a Mom - A funny take on how implementing a falsehood might lead to security holes. - Hello, I'm Mr. Null. My Name Makes Me Invisible to Computers - A real-life example of how an implemented falsehood has a negative impact on someone's life. - HL7 v3 RIM - A flexible data model for representing human names. - Apple iOS `NSPersonNameComponentsFormatter` - Localized representations of the components of a person's name. On character encoding, string formatting, Unicode and internationalization. - Falsehoods about Language - Translating software from English is not as straightforward as it seems. - Falsehoods about Plain Text - Plain text can't cut it, which makes Unicode even more incredible for its ability to just work well. - Falsehoods about text - A subset of the falsehoods from above, illustrated with some examples. - Internationalis(z)ing Code - A video about things you need to keep in mind when internationalizing your code. - Minimum to Know About Unicode and Character Sets - A good introduction to Unicode, its historical context and origins, followed by an overview of its inner workings. - Awesome Unicode - A curated list of delightful Unicode tidbits, packages and resources. - Dark corners of Unicode - Unicode is extensive, here be dragons. - Let's Stop Ascribing Meaning to Code Points - Dives deeper into Unicode and dispels myths about code points. - Unicode misconceptions - A collection of falsehoods on case, encodings, string length, and more. - Breaking Our `Latin-1` Assumptions - Most programmers spend so much time with `Latin-1` that they forget about other scripts' quirks. - Ode to a shipping label - Character encoding is hard, more so when each broken layer of data input adds its own spice. - Localization Failure: Temperature is Hard - You cannot localize temperature differences as-is. - i18n Testing Data - A compilation of real-world international and diverse name data for unit testing and QA. - Big List of Naughty Strings - A huge corpus of strings which have a high probability of causing issues when used as user-input data. A must-have set of practical edge cases to test your software against. - Falsehoods about Job Applicants - Assumptions about job applicants and their job histories aren't necessarily true. - Falsehoods about Video - Covers it all: video decoding and playback, files, image scaling, color spaces and conversion, displays and subtitles. - Horrible edge cases to consider when dealing with music - Music catalog data is full of crazy stuff.
- MusicBrainz database schema - An open-source project and database that seems to have solved the complexity of music catalog management. - DDEX - The industry standard for music metadata, including archiving, sound recording, sales and usage reporting, royalties and license deals. - Apple Music Style Guide - Quality assurance guidelines to format music, art, and metadata to increase discoverability. - Falsehoods about Networks - Covers TCP, DHCP, DNS, VLANs and IPv4/v6. - Fallacies of Distributed Computing - Assumptions that programmers new to distributed applications invariably make. - There's more than one way to write an IP address - Some parts of the address are optional, mind the decimal and octal notations, and don't forget IPv6 either. - IDN is crazy - International characters in domain names mean support of homographs and heterographs. `hostname-validate` - An attempt to validate hostnames in Python. - Falsehoods about Phone Numbers - Covers phone numbers, their representation and meaning. `libphonenumber` - Google's common Java, C++ and JavaScript library for parsing, formatting, and validating international phone numbers. Also available for C#, Objective-C, Python, Ruby and PHP. - Falsehoods about Addresses - Covers streets, postal codes, buildings, cities and countries. - Falsehoods about Residence - It's not only about the address itself, but the relationship between a person and their residence. - Letter Delivered Despite No Name, No Address - The ultimate falsehood about postal addresses: you do not need one. - UK Address Oddities - Quirks extracted from a list of most residential property sales in England and Wales since 1995. - What is the Most Minimal UK Address Possible? - The trick is to rely on postcodes, which in the UK are pretty specific and “often identify one or a few specific buildings, unlike countries where a postcode represents an entire neighbourhood”. - The Bear with Its Own ZIP Code - Smokey Bear has his own ZIP Code (`20252`) because he gets so much mail. - Why doesn't Costa Rica use real addresses? - Costa Ricans use an idiosyncratic system of addresses that relies on landmarks, history and quite a bit of guesswork. - Regex and Postal Addresses - Why regular expressions and street addresses do not mix. - Parsing the Infamous Japanese Postal CSV - “I saw many horrors, but I've never seen this particular formatting choice anywhere else.” - USPS Postal Addressing Standards - Describes both standardized address formats and content. `libaddressinput` - Google's common C++ and Java library for parsing, formatting, and validating international postal addresses. `addressing` - A PHP 5.4+ addressing library, powered by Google's dataset. `postal-address` - Python module to parse, normalize and render postal addresses. `address` - Go library to validate and format addresses using Google's dataset. - Falsehoods about Systems of Measurement - On working with systems of measurement and converting between them. - Falsehoods about Political Appointments - Designing election systems has its own tricks. - Falsehoods about Women In Tech - Myths about women in STEM (Science, Technology, Engineering, Math) industries. - Falsehoods about Versions - Attributing an identity to a software release might be harder than thought. - Falsehoods about Build Systems - Building software is hard. Building software that builds software is harder.
- Falsehoods about Undefined Behavior - Invoking undefined behavior can cause *anything* to happen, for a much broader definition of "anything" than one might think. - Falsehoods about CSVs - While RFC 4180 exists, it is far from definitive and goes largely ignored. - Falsehoods about Package Managers - Covers packages and their managers. - Falsehoods about Testing - An attempt to establish a list of falsehoods about testing. - Falsehoods about Search - Why search (including analysis, tokenization, highlighting) is deceptively complex. - What every software engineer should know about search - A better sourced article on the difficulty of implementing search engines. - Falsehoods about Pagination - Why your pagination algorithm is giving someone (possibly you) a headache. - Falsehoods about garbage collection - Misconceptions about the predictability and performance of garbage collection. - Myths about File Paths - The diversity of file systems and OSes makes file paths a little harder than we might think. - The weird world of Windows file paths - “On any Unix-derived system, a path is an admirably simple thing: if it starts with a `/`, it's a path. Not so on Windows.” - Myths about CPU Caches - Misconceptions about caches often lead to false assertions, especially when it comes to concurrency and race conditions. - Myths about `/dev/urandom` - There are a few things about `/dev/urandom` and `/dev/random` that are repeated again and again. Still they are false. - Facts about State Machines - State machines are often misunderstood and under-applied. - Hi! My name is… - This talk could have been named *falsehoods about usernames (and other identifiers)*. - Popular misconceptions about `mtime` - Part of a post on why a file's `mtime` comparison could be considered harmful. - Rules for Autocomplete - Not falsehoods *per se*, but still a great list of good practices to implement autocompletion. - Floating Point Math - “Your language isn't broken, it's doing floating point math. (…) This is why, more often than not, `0.1 + 0.2 != 0.3`.” - The yaml document from hell - YAML is full of obscure complexity like accidental numbers and non-string keys. - I am endlessly fascinated with content tagging systems - There are edge cases even in tagging systems, which are supposed to be bare-bones. - Falsehoods about Quantum Technology - Common misconceptions about quantum technology and computers. - Falsehoods about Event-Driven Systems - Misconceptions about event-driven systems and message passing. - Falsehoods about Cars - Even something as common as defining a car is full of pitfalls. - Falsehoods about Airline Seat Maps - Airline seat maps are far more complex than just neat rows and columns of seats. - The Maddening Mess of Airport Codes - Having multiple international and national agencies trying to reconcile history, practicality and logistics makes codes follow arcane rules. - My name causes an issue with any booking! - Old airline reservation systems consider the `MR` suffix as `Mister` and drop it. - Falsehoods about Fonts - Assumptions about typography on the web and in desktop applications. - Truths programmers should know about case - A complete reverse of the falsehoods format, on the topic of case (as in uppercase and lowercase text). - The Door Problem - All the things you have not considered implementing for your doors in games. - Falsehoods about HTML - “Web is beautiful. Web is ugly. Web is astonishing.
A part of this appeal is HTML, with its historical quirks.” - Falsehoods about REST APIs - Pitfalls to be mindful of when creating and documenting APIs. - URLs: It's complicated… - There are a lot of components in a URL, and each has its own logic. - The Hidden Complexity of Downloading Favicons, Told in 15+ Edge Cases - Downloading that little icon you see in your browser tabs should be a simple exercise. It turns out to be a lot more complicated than you'd think. Be vigilant that you are not shaving a yak. Your contributions are always welcome! Please take a look at the contribution guidelines first. This list has gathered some popularity on social media over the past few years. See it being discussed and mentioned elsewhere. The header image is based on a modified photo taken in February 2010 by Iza Bella, distributed under a Creative Commons BY-SA 2.0 UK license. [1]: *Notebooks, 1914-1916* (Liveright, 2022) - source: page 14e. [↑]
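To make the introduction's email falsehood concrete, here is a minimal sketch in Python. It is illustrative only: the addresses are hypothetical examples of shapes the RFCs permit, and the regex stands in for a typical naive validator, not for any of the libraries listed above.

```python
import re

# The falsehood from the introduction, as code: "a valid address has exactly
# one '@', with a dotted domain after it".
NAIVE_EMAIL = re.compile(r"^[^@]+@[^@]+\.[^@]+$")

# Shapes the RFCs allow but the naive rule rejects (illustrative examples):
surprising_but_valid = [
    '"weird@local.part"@example.com',   # a quoted local part may contain '@'
    'postmaster@localhost',             # the domain part does not need a dot
    'user@[IPv6:2001:db8::1]',          # an address literal instead of a domain name
]

for address in surprising_but_valid:
    assert NAIVE_EMAIL.match(address) is None, address
    print(f"rejected by the naive rule, yet RFC-shaped: {address}")
```

The advice that recurs in the linked articles is to keep validation loose and confirm deliverability by actually sending a message to the address.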
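Similarly, the Business section's advice to prefer decimals over binary floats for monetary values can be shown in a few lines (a minimal sketch; the amounts are made up):

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly, which is
# how "three items at 0.10" drifts away from 0.30.
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False

# Decimals keep the arithmetic in base 10, as accountants expect.
price = Decimal("0.10")
print(price * 3)                      # 0.30
print(price * 3 == Decimal("0.30"))   # True

# Parsing user input straight into Decimal avoids the float detour entirely.
subtotal = sum(Decimal(s) for s in ["19.99", "0.01"])
print(subtotal)                       # 20.00
```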
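Finally, one of the DST edge cases alluded to in the Dates and Time section, sketched with Python's standard `zoneinfo` module (the 2024 European changeover date is used purely as an example):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")

# Noon on the day before the spring DST changeover (March 31, 2024 in the EU).
before = datetime(2024, 3, 30, 12, 0, tzinfo=paris)
after = before + timedelta(days=1)    # "one day later", in wall-clock terms

print(before.isoformat())             # 2024-03-30T12:00:00+01:00
print(after.isoformat())              # 2024-03-31T12:00:00+02:00

# The wall clock moved by 24 hours, but only 23 hours of real time elapsed.
utc = ZoneInfo("UTC")
print(after.astimezone(utc) - before.astimezone(utc))   # 23:00:00
```

A calendar day is not always 24 hours long, which is one reason “just store everything in UTC” is not a silver bullet on its own.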
true
true
true
😱 Falsehoods Programmers Believe in. Contribute to kdeldycke/awesome-falsehood development by creating an account on GitHub.
2024-10-12 00:00:00
2016-09-08 00:00:00
https://repository-images.githubusercontent.com/67687706/6ba96300-3b10-11ea-8631-f631a513dc8a
object
github.com
GitHub
null
null
13,687,888
https://torrentfreak.com/pirate-site-with-no-traffic-attracts-49m-mainly-bogus-dmca-notices-170219/
Pirate Site With No Traffic Attracts 49m Mainly Bogus DMCA Notices * TorrentFreak
Andy Maxwell
As reported in these pages on many occasions, Google’s Transparency Report is a goldmine for anyone prepared to invest time trawling its archives. The report is a complete record of every DMCA notice Google receives for its ‘search’ function and currently lists more than two billion URL takedowns spread over a million websites. Of course, most of those websites will remain faceless since there’s far too many to research. That said, the really big ‘offenders’ are conveniently placed at the top of the list by Google. **The most-reported sites, according to Google** As we can see, the 4shared file-hosting site is at the top of the list. That isn’t a big surprise since the site has been going for years, attracts massive traffic, and stores countless million files. There are a number of other familiar names too, but what is the site in second place? MP3Toys.xyz has a seriously impressive 49.5m takedown requests logged against it. We’ve never even heard of it. Checking the site out, MP3Toys is clearly a pirate platform that allows users to download and stream unlicensed MP3s from thousands of artists. There are hundreds of these kinds of sites around, probably pulling content from YouTube and other web sources. But here’s the problem. According to Google, MP3Toys.xyz (which also uses a .tech extension) has only been appearing in its databases since Jun 30, 2016. During this short time, Google has received requests to remove 49.5 million URLs from its indexes. That’s about 1.6 million URLs for each of the 31 weeks MP3Toys has been online. No site in history has ever achieved these numbers, it’s completely unprecedented. So MP3Toys must be huge, right? Not exactly. According to Alexa, the site’s .xyz domain is ranked the 25 millionth most popular online, while its .tech domain is currently ranked 321,614 after being introduced in January 2017. In loose terms, this site has no significant traffic yet will soon be the most-infringing site on the whole Internet. How can this be? Well, it’s all down to an anti-piracy company making things up and MP3Toys going along with the charade. As seen in the image below, along with outfits such as the BPI and BREIN, anti-piracy outfit APDIF do Brasil has an unusual fascination with MP3Toys. In fact, it’s sent the vast majority of the notices received by Google. However, while some of the notices are undoubtedly correct, it appears a huge number are absolutely bogus. Instead of scanning the site and sending an accurate takedown notice to Google, APDIF tries to guess the URLs where MP3Toys stores its content. A sample list is shown below. The problem here is that in real terms, none of these URLs exist until they’re requested. However, APDIF’s guesses are entertained by the site, which creates a random page of music for every search. The content on these auto-generated pages cycles, but it never relates to the searches being put in. As shown below, even TorrentFreak’s Greatest Hits Volume 77 is a winner (Test it yourself here) So in summary, APDIF makes up its own URLs, MP3Toys randomly generates a page of music that has nothing to do with the URL input, APDIF logs it as an infringement of its clients’ rights, and sends a complaint to Google. Then, putting the icing on an already confused cake, Google ‘removes’ every URL from its search results, even though it appears they were never in them in the first place. And that’s how a site with virtually no traffic received more DMCA complaints than The Pirate Bay. Unbelievable.
true
true
true
It's likely you've never heard of mp3toys.xyz since the site has very little traffic. However, thanks to a bungling anti-piracy outfit, the site is now the second most complained about 'pirate' site on the Internet, with Google receiving more than 49 million notices in just over six months.
2024-10-12 00:00:00
2017-02-19 00:00:00
null
article
torrentfreak.com
Torrentfreak
null
null
13,234,895
http://www.lrb.co.uk/v39/n01/ian-penman/wham-bang-teatime
Ian Penman · Wham Bang, Teatime: Bowie
Ian Penman
In 1975 David Bowie was in Los Angeles pretending to star in a film that wasn’t being made, adapted from a memoir he would never complete, to be called ‘The Return of the Thin White Duke’. This dubious pseudonymous character was first aired in an interview with *Rolling Stone*’s bumptious but canny young reporter Cameron Crowe; it soon became notorious. Crowe’s scene-setting picture of Bowie at home featured black candles and doodled ballpoint stars meant to ward off evil influences. Bowie revealed an enthusiasm for Aleister Crowley’s system of ceremonial magick that seemed to go beyond the standard, kitschy rock star flirtation with the ‘dark side’ into a genuine research project. He talked about drugs: ‘short flirtations with smack and things’, but given the choice he preferred a Grand Prix of the fastest, whitest drugs available. He brushed aside compatriots/competitors like Elton John and called Mick Jagger the ‘sort of harmless bourgeois kind of evil one can accept with a shrug’. If pushed, this apprentice warlock could also recite Derek and Clive’s ‘The Worst Job I Ever Had’ by heart and generally came on like a twisted forcefield of ego, will and fantastic put-on. It’s impossible to imagine someone like Bowie giving the media anything like this kind of insane access today – but then, of course, there is no one like Bowie today. In 2016 it might take five months of negotiation to get an interview with the superstar of your choice and then you’d probably have to present your questions in advance and be babysat by three or four PR flaks and a spooky zombie-faced entourage for the whole blessed 15 minutes. In 1975, Bowie just turned up grinning, already babbling, at Crowe’s door. When Crowe got him to sit still long enough he couldn’t stop talking, which may or may not have had something to do with the industrial amounts of pharmaceutical cocaine he was daily ingesting. He had become almost an abstraction in the dry California air: surrounded by stubbly country-rock cowboys and wailing witchy women he was a sheet of virgin foolscap. Where did he fall from, this Englishman with his barking seal laugh and outrageous quotes about Himmler and semen storage and articulate ghosts? *Station to Station,* which Bowie released in January 1976, came complete with glancing references to the esoteric teachings of Kabbalah (‘Here we are: one magical movement, from Kether to Malkuth’) and Crowley’s obscure (but unobscurely sexual) poetry collection of 1898, *White Stains*. (‘Kether to Malkuth’ represents the A to Z, so to speak, of the Kabbalistic Tree of Life, which Bowie can be seen drawing in outtakes from a photoshoot included on the 1999 CD reissue of *Station to Station*.) The album was just under forty minutes long, with three songs per side and a subtle, striking, modernist sleeve – white space, black sound baffles and a red tickertape of words along the top. Red, white and black, a colour chart of his alleged diet at the time: cocaine and milk, raw red peppers and the printed word. In later years, Bowie hinted that *Station to Station* was some kind of codified ceremony drawn in sound (‘It’s the nearest album to a magical treatise that I’ve written’). Truth be told, you could go as mad as a pampered rock star trying to sieve and parse all the album’s different references and clues. 
It’s quite a witty record, in its skeletal way: after all, what could be less mid-1970s USA than a Thin White Duke with his Gauloises and artsy 19th-century references, cold white spotlights and muggy Berlin atmospheres? It’s about as far from Kiss and Led Zeppelin and the Eagles as you could possibly get. *Station to Station* is both decadent and symmetrical: one side ends with ‘Word on a Wing’, the other with ‘Wild Is the Wind’. These two tracks, those unfurled wings, were completely different from anything he had previously recorded, from what any rock star has *ever* recorded (except perhaps Scott Walker, a Bowie household god from way back): spare, hauntingly personal and as close to simple and emotionally direct as he would ever let himself get. In his age of grand illusion he returned to stately melodies and simple words. ‘Word on a Wing’ is the song as protection, a counter-spell for a white-knuckle time, almost classic soul/gospel with its ‘you’ alternating between earthbound lover and timeless Christ. ‘Wild Is the Wind’, originally sung by Johnny Mathis for a 1957 Hollywood potboiler (and given perhaps its definitive performance by Nina Simone), is like a sung doppelgänger, and even here (with someone else’s lyrics) there were hints of matters ethereal: ‘For we’re like creatures/of the wind.’ No matter what your feeling about David Bowie and his work – whether you like all of it, or only love bits of it, or are determined to collect every last thing – it was impossible for his death last January not to feel like some kind of marker. The 20th century was the age of the frontline news photo, the water-cooler TV moment, the must-have LP, but all those heat-of-the-moment things have been demoted or disappeared in our new century’s digital realignment. In our current post-everything age, Bowie’s death was another reminder of how times have changed: an oldtime star who once enacted his alter ego Ziggy Stardust’s demise as an old-fashioned diva-esque theatrical goodbye-ee, and who more or less staged his own death online, with admirable restraint, impeccable good manners, and a profoundly surprising, legacy-salvaging last work, *Blackstar*. His career began in the early-to-mid 1960s when rock music itself had barely got up a head of steam, BBC2 had just become the UK’s third TV channel, and there was very little ‘media’ to register the underground tremors of rock. By the time he died, the music and the culture it gave birth to had boomed, then bust. There is still music and obscene amounts of money to be made – perhaps more than ever. But it sometimes all feels like little more than a Potemkin masquerade, mass nostalgia for a time when rock really mattered. It’s impossible to imagine something like Bowie’s masterpiece *Low* (1977) coming out now, an album split down the middle like an old *Mad* centrepiece, one half fidgety pop songs (the whitest blues ever recorded), the other just pure tone. What is there left to know about David Bowie? What is there left to unearth? I’m really only half a Bowie fan and I already had a whole separate shelf for Bowie books, even before the posthumous publication tsunami. One thing you can’t help but notice about the new books is that the dominant tone has changed. Even at their most celebratory, they are far more wistful: this is pop culture eschatology. The authors seem haunted by the past, with little or no sense of what a post-Bowie or post-rock future might hold. 
There’s a feeling that nothing will ever be as surprising or shocking again; that rock as ‘alternative’ culture is done, and only remains to be archived and periodically dusted. The implication of at least two of the new titles is that we’re living in times shaped by some kind of Bowie/Glam legacy. I don’t quite see it myself, partly because of a long-ingrained distrust of words like ‘epoch’, ‘era’, ‘age’ and ‘legacy’, which make me feel as if things are being divided up too soon, too neatly, and for convenience’s sake. Maybe even for the sake of some convenient branding. Just look at these four new books: three of them have an identical Aladdin Sane flash on the cover, and two have the same bloody title, which strikes me as pretty good evidence we’ve reached peak something or other. There are more and more books like this these days: rock histories and encyclopedias, stuffed with information, compendiums of every last detail from this or that year, era, genre, artist – time pinned down, with absolutely no anxiety of influence. And while it would be churlish to deny there is often a huge amount of valuable stuff in them, I do think we need to question how seriously we want to take certain lives and kinds of art – and how we take them seriously without self-referencing the life out of them, without deadening the very things that constitute their once bright, now frazzled eros and ethos. One of the big differences between Bowie’s heyday in the 1970s and now is that today you can choose from a huge selection of books on any given cult figure (Nick Drake, Gram Parsons, Syd Barrett etc). In the days when such figures were active you had to be satisfied with an occasional music-press annual, or the lyrics printed in your girlfriend’s *Jackie*, or, if you were really lucky, a title like *The Sociology of Riff* (no photos or illustrations). Maybe one of the reasons the 1970s were such an incredibly creative time is that we weren’t all reading biographies and blogs and tweets about (or even by) our heroes, who in turn weren’t thinking about the best way to ‘grow their brand’ exponentially through a social media arc. All that unmediated space waiting to be filled! Bowie in Los Angeles was a kind of mantic probe for young kids discovering the joy of sex, sexuality, art, artiness. He was going into areas that no one had really explored before. He left spaces for his followers: not just the hierarchy of stardom and fandom but a strange, astute, uncanny folding of one into the other. From album to album there was a strange, light, almost mocking dialectic: he taught us to be critics of our own enthusiasms. He was ‘post’ and ‘meta’ and playfully ‘iconic’, before such terms had any real popular currency. July 1972: blue guitar, red boots, jumpsuit made of cushion covers from a bad mescaline trip, orange hair, a Klieg-light nimbus around his ghost-train head. A big crooked grin like he’s having the best possible time, like he has just sold the waiting world a truly irresponsible dare, his arm curling around the guitarist Mick Ronson. ‘But he thinks he’d blow our minds!’ And he did. One reason for that blown fuse was that Bowie had already worked out that the best way to put across a serious point was to stage it as an almost luridly OTT showbiz scene. You have to remember that *Top of The Pops* was it. There was no pop media at large, only three channels: everyone in the country was eating their tea and watching the same flicker of sound and vision. 
And here was this flirtatious pop-art revelation, all under the disbelieving eyes of everyone’s parents: a cosy family teatime – then wham bang! Did you see that! What on earth was going on there? Then he was gone. There was no rewinding to playback and OMG on Twitter and sharing it on YouTube the next day. The first girlfriend I ever had, circa mid-1970s, worked a Saturday job at the make-up counter in our local Boots; I worked at H. Samuel, the high-street jewellers. These were very Glam jobs, which is to say not at all glamorous, but rather make-do glam, British glam. With Glam you could always see the joins, the stitching, the wig glue. Whoever it was that said the Sweet looked like ‘brickies in make-up’ got it exactly right: a profusion of perms, verdant chest hair and vertical-lift-off sideburns. But Bowie was something else. Bowie did look good, even when what he was wearing was as silly as what everyone else was wearing. Beyond good in fact, otherworldly: his make-up was not merely heavy-handed appliqué, it was a masquerade with echoes of times gone by, and maybe some unimaginable future. How much Bowie was really Glam at all is surely up for debate. His aspirations were always different: he was always headed in some unreadably different direction. Even before Glam, if it was different he’d tried it. Before he had a proper audience, he rifled the cultural event horizon like a pack of Tarot cards: he was beatnik, mod, mime, novelty record maker, hippy, arts lab founder, would-be ambisexual proselytiser. (He even dreamed of staging a proto-Ziggy sci-fi musical, but was stymied by, among other things, how to conjure up a ‘black hole’ convincingly on stage.) One of the reasons I’m glad Simon Reynolds gives so much space to this earlier period (in his outsize but periodically acute history of Glam) is that it furnishes real clues to later Bowie, the Bowie of superstar myth, the master manipulator, one move ahead of everyone else. In his 1975 *Rolling Stone* interview Bowie remembered being a ‘trendy mod … a sort of throwback to the Beat period in my early thinking’. But it’s easily forgotten that amid all the Genet references, Mingus namedrops and mime moves, the likes of Bolan, Bowie and Bryan Ferry were very much in love with, and shaped by, mainstream British showbiz and Saturday night TV. Recall: Lulu singing ‘The Man Who Sold the World’. Recall: Bryan Ferry duetting with Cilla (‘and special guests Gerald Harper and Tony Blackburn!’). Recall: a newly shorn Bowie guest-starring on Bolan’s camp-for-kids teatime show, *Marc*. This was what made them who they were: they could play in both keys, MOR versions of avant-garde modernity. People still get into knots about the ‘mystery’ of Bowie’s serial life-swapping in the 1970s, but he’d been pulling the same trick for years on the perimeter of Tin Pan Alley before he applied it to rock. A bit of sci-fi, a bit of up-in-the-air sexuality, a bit of scarves-in-the-air sing-along, a bit of an ‘Oh no he isn’t!’ panto vibe, and a lot of power chords. Surely one of the main reasons we project other, more fancy motivations onto the blank screen of Bowie’s waiting face is precisely because of its breathtaking and deeply odd beauty. If he’d looked more like John Bonham we might not be having this conversation. 
Bowie had a striking talent for shaping luck or forcing serendipity, and for finding the right other half in the right place at the right time: early manager Kenneth Pitt, producer Tony Visconti, personal assistant Corinne Schwab, and even (or especially) his first wife, Angela Barnett. (You could write a whole piece about Angie’s polarising effect on people and the question of how much she contributed to his breakthrough in the 1970s. As someone who had to read every last page of her bafflingly awful 1993 memoir, *Backstage Passes*, I find it hard to dredge up any feeling of protective support, but it’s hard to believe she had as little to do with her husband’s canny self-reinvention as some fans would have us believe.) Another key to his success in the 1970s was that his rock superstar personae, including Ziggy Stardust and Aladdin Sane, were both otherworldly and inclusive, up on a pedestal and down in the everyday dirt. ‘You’re not alone!’ Bowie sings: ‘Give me your hands!’ Here was someone who looked like a star but managed to communicate something like a fan’s own baffled awe. It was as if he was saying: ‘Just look at all this stuff we suddenly have to play with in rock and roll! We’re the first, too! Isn’t it a gas?’ It’s very hard with a lot of his songs from the early 1970s to tell how far Bowie’s tongue is lodged in his cheek; maybe he didn’t know himself. A lot of his more ‘surreal’ lyrics of this time are more *Goon Show* than Velvet Underground. ‘Hazy cosmic jive’ indeed. A song like ‘Space Oddity’ teeters right on the edge of ridiculous overload: one minute it’s sensitive, free-festival acoustic guitar, the next vibrating handclaps and space FX. Let’s throw in every trend of the moment: space travel! stylophones! existentialist gloom! He was part hippy, part beady-eyed nitroglycerin queen, part Penguin Modern Classic reader, part theatre door Johnny; or, as the American critic Robert Christgau once put it, ‘a middlebrow fascinated by the power of a highbrow-lowbrow form’. Bowie between two poles: laughter-madness-performance v. cool calculation, media plotting. On one hand, the nerveless psychological chess player, three moves ahead of everyone else; on the other, a man haunted by the whole troubled (and unresolved) matter of his older half-brother, Terry Burns. It was Terry who had introduced the young David Jones to Buddhism, Beat poetry, Mingus, and who later slipped or snapped into genuine psychosis, to lead a cruel jacitation of a life until his eventual suicide in 1985. Bowie himself played with and performed a version of madness far nearer a late 19th-century Euro-romantic (and very painterly) notion of things. An idea of ‘Europe’ haunts his work, and may even be what steered him back from the precipice of genuine black-out or white-out in the late 1970s. It can hardly be a coincidence that after leaving the coast of California on a jet plane he ended up behind a wall, in shadowy Berlin. In the sun, he plummets. In the shadows, he blooms. And with a double! Bowie had brought along Jim Osterberg, aka Iggy Pop, the none-more-messed-up singer of the apocalyptic heavy rock band the Stooges, whose career rejuvenation Bowie had made a point of honour. Aversion therapy: you can’t say no to beauty and the beast. The sleeves for both *Heroes* and *The Idiot* took their cue from the work of the German painter Erich Heckel, a source Bowie would return to later. 
This was not the genuine disturbance and real ‘madness’ of Syd Barrett, say, which would have been all too unromantic, mere daily drudge, a matter of counting the same sequence of numbers over and over again until death. A scientific researcher with no emotional investment in Bowie could conduct an empirical study of his interviews down the years and record two main traits. One: where the choice was either a fascinating lie or a disappointing truth, Bowie would always go for the former. Two: he had an almost pathological need to be liked. It is possible that for Bowie, being liked was even more important than being taken seriously. He was always a skilled and guilt-free liar. The very first time he appeared on screen, I believe, was on a 1964 BBC teatime ‘novelty’ news item, where he posed as an earnestly disgruntled spokesman for the Society for the Prevention of Cruelty to Long-Haired Men. (He actually had more of a straggly, shiny bob than a long, hippyish hairstyle.) Apparently the Spokesman thing was entirely off the cuff. The 17-year-old Bowie could do something like that as naturally as breathing: he opened his toothy English mouth and out came the tallest of tales. Everyone was happy: aspirant pop star, TV crew, teatime audience, maybe even a few light-hearted longhairs. I don’t think it’s unfair to say that as an artist, writer, actor, singer, Bowie was good, often very good, but could never join the ranks of the greats in any of those areas. What he was great at was being David Bowie – the only one we ever got, or will ever have. The huge surge of affection when he died came about partly because people felt that no matter how far away he travelled, he remained in some indefinable sense close to them, one of our own – a working-class kid who never forgot where he was from. I’m not entirely sure about that. Part of me thinks that Bowie saw through ‘class’ at a very early stage, and realised that a large percentage of it was just another kind of performance. After the bacofoil pyre of Ziggy was he ever really ‘English’ again? Wasn’t he rather our brightest exile? The amount of non-professional time this quintessential London boy spent in the UK after the early 1970s is minuscule. There are snapshots of him in the American desert, Arizona, Los Angeles, Berlin, Tokyo, New York; at a loose end in Switzerland; asleep on a European train idling between stations; gone to ground in Berlin (in the kind of district he would never have considered a base in the UK); finally at rest, an art-collecting, thought-collecting Englishman in New York. Naturally, he made obligatory visits to London clubs when something exciting was happening, but most of the images of him in the UK that flash to mind are rather unfortunate: the disputed ‘wave/heil’ at Victoria Station from the back of a Mercedes in 1976; or down on his knees solemnly reciting the Lord’s Prayer at the Freddie Mercury Tribute Concert in 1992; or as a smart-alec cameo in Ricky Gervais’s smug TV self-com, *Extras*. For long periods of time he was all over the place, homeless and disjointed and – perhaps – happy that way. One of my favourite photos, by Geoff MacCormack, shows Bowie asleep on a train, between stations, Vladivostok to Moscow, in 1973. It may be one of the only portraits we have where he is entirely at rest – mouth shut, eyes closed, no public gaze to contend with, no hunger in other people’s eyes, no need to scan the room or prepare an opening gag. 
The last cigarette of the day smoked, the last sentence in the latest book underlined, make-up sluiced away, dreaming like the rest of us of something ridiculous and sublime. Bowie sometimes seems to have regarded sound itself as a subset of space, a succession of planes, crypts or topoi where he could hide or advertise, measure or medicate, celebrate or mourn himself. A sense of simultaneous projection/encryption is at the heart of his best work: the idea of a ‘split’ personality embraced as healthy, even helpful; whiteness and blackness, authentic and plastic, real and ersatz, dark and light – mere categories, useful dance steps, not unbreakable truths. This doubling effect is to the fore in ‘Fame’, the final track on *Young Americans* (1975), which is both genuinely tin-tack arched-back funky as can be, but also so sealed-off, hygienic and airless that it erases any trace or taint of genuine metabolic heat. You can imagine that druggy riff going on for a thousand years, until the end of time. (I clicked on Google for the song’s lyrics and one site I saw just had the word ‘fame’ repeated 19 times, which is kind of on the money.) It’s such a hard, clipped, merciless sound; but the words, too: ‘reject’, ‘flame’, ‘insane’, ‘bully’, ‘chilly’, ‘hollow’. Compare this with the flowery songs he was writing only a handful of years before. I always half-imagined he was singing ‘Fame’ from the back of the limo he mentions – a ‘dream car twenty foot long’ (as he put it in the song that ‘Fame’ could be said to be twinned with, *Station to Station*’s ‘Golden Years’). Or, if not singing, repeating the words in his head as a nursery rhyme to keep his overstimulated skull from cracking open. A fissure that would let all the false, neon light into a cosy darkness. He finally had everything he’d worked for, for so many years, and what did he do? Ran for the shadows, where he got out his copy of Crowley’s *Magick in Theory and Practice* to work some not-joking spells in order to facilitate … what exactly? What deeper or stranger form of elsewhere or otherwise could he possibly have desired? Deep in my dreams is an unmade movie: something like Fassbinder’s *Despair* crossed with Laurence Harvey in *A Dandy in Aspic*, starring Scott Walker and Bowie as Cold War agents trailing one another in a wilderness/wonderland of dark two-way mirrors. After all, Bowie was a man whose way of keeping his most vital secrets close to his chest was to gab away at ten to the dozen at the drop of a fedora, giving every last interviewer (interrogator, partner, label boss) the appearance of off-guard disclosure. The spy operating on remote control is a close relative of the undead, a nine-to-five vampire (nine at night until five in the morning), and it’s easy to see Bowie’s music-making of the mid-1970s as a form of spycraft: assuming identities, exploiting new gadgetry, eavesdropping on himself. Trans-Europe Express, with the emphasis very much on trans. But there is also an unmistakable mood of mourning and melancholy. *Ashes to Ashes* (1980) was genuinely some kind of ceremony or rite in which a certain past was buried, in which he bid adieu (or hoped he did) to certain demons: ‘want an axe/to break the ice’. Listen to the strange, scary voices in the background, the truly mournful play-out, the far from comforting ending. 
The song is a confession written in sound: ‘I never did anything out of the blue … Want to come down right now.’ The paradox is that this Number One hit with its ‘iconic’ video may have reflected more of an all-time low than the fabled *Low*. One of the photos of Bowie I saw online in the weeks after his death was taken in January 1997 at his 50th birthday celebrations, with Billy Corgan, Lou Reed and Robert Smith. This was the period when he finally found inner calm and displayed outer happiness, relaxed into himself. But take a look at this photo: doesn’t he look a bit wizened, a bit unreal, like a doll of himself or a plastic action figure of a late-era rock star? He’s sporting one of those fussy chin/lip beard doodads that were popular in the 1990s, just like any other middle-aged duffer trying to look young and trendy. He’s wearing something that looks like a cheaply ‘avant-garde’ high-street wedding suit with a novelty waistcoat. When he was at death’s door he looked wonderful, whereas now he’s clean and sober he looks etiolated, smudgy and grey; he looks, in fact, downright ill. Bowie met the Somalian supermodel Iman in 1990 and they married in 1992. She seems to have dislodged something in him he’d hidden for too long, and he sobered up (though he continued to smoke like a troubled priest in a 1940s noir film). By swapping some of his alien ‘otherness’ for ordinary sociable life he found ordinary-folks happiness, ease in his skin, an end to constant ambient anxiety. Time calmed down, away from the on-off loop of addiction, which was great for life but maybe not quite as beneficial when it came to work. During these years, it felt like Bowie was rarely exciting by his own standards. Too much of his middle-period work comes across as a triumph of airless busyness over any kind of considered or memorable shape – dozens of weightless surfaces and battling signifiers in search of a missing hook or core. Listening to middle/late albums like *Earthling* (1997) or *Heathen* (2002), parts of which are perfectly pleasant or serviceable modern AOR, I kept thinking: is this the kind of thing Bowie himself would sit and listen to at home and get excited by? I really don’t think so; though there are moments when it’s all too obvious what he *has* been listening to. A track like *Heathen*’s opener, ‘Sunday’, sounds like a naked and embarrassing attempt to imitate the recently rejuvenated Scott Walker. For the first time in his career he was chasing the zeitgeist, rather than looking airily back at its approaching shape. It can sound as if he has rushed into the studio with no more than a vague idea of doing something along the lines of whatever was currently ‘happening’ elsewhere on the fringes of pop/rock. (Is there a more depressing moment in his entire canon than the opening track of *Earthling* with its boilerplate drum and bass?) Maybe you can do that when you’re young and on fire (and when the material you’re half-inching is gold star), but he carried on trying to work like his younger self even when it no longer worked with golden ease. Bowie obsessives may be able to point to a few great songs scattered here and there, but for most of us Bowie’s middle period was a time when a lot of time could go by without him really impinging on your consciousness the way he used to do. It’s strange how, at a time when he was probably happier than he’d ever been, on works like *Outside* (1995), *Earthling* and *Heathen*, he continued to try and uphold the Bowie brand of weird, cold and untouchable. 
Look at the sleeve of *Heathen*, where all the stops have been pulled to make him look ‘weird’. The sleeves of *Hours* (1999) and *Reality* (2003) scream ‘workshopped by creative team’: they’re all over the place, too busy, with two or three clashing tropes. Compare such desperate-seeming incoherence with his records from the mid-1970s, when his album portraits were both subtle and subtly disorienting. It’s very hard indeed if not impossible to intend or design something uncanny. The uncanny *results*, it isn’t something you have much control over. I wasn’t exactly scrupulous about following every new release during those years. The exception was 1995’s *Outside* – or, to give it its full title, *1. Outside. The Nathan Adler Diaries: A Hyper-Cycle*; or, to give it its full *full* title: *The Diary of Nathan Adler or The Art-Ritual Murder of Baby Grace Blue: A Non-Linear Gothic Drama Hyper-Cycle* – which I had to listen to because I was interviewing Bowie for one of dozens of ‘comeback’-themed magazine stories. Although I was a good professional and played it to death, it disappeared like smoke and I found I could remember almost nothing about it a few weeks later except a vague sense of something ‘Bowie-ish’ having taken place – as if a surprisingly un-scary ghost had strolled through my front room. I could remember one song, or one line from one song – ‘Hallo Spaceboy!’ – and only because it teetered so shamelessly on the edge of self-parody. (In mid-period Bowie the more autopilot the track the more he tended to amp up his wideboy London brogue.) I found myself wondering, as I do with some postmodern artists (David Salle, for example): is this good because it’s a good David Bowie, or is it good by any standards? Or is he essentially, now, just imitating his own brand, because that’s what you do after a while these days? Much as I found *Outside* entirely uninvolving and sometimes close to ludicrous, the minute I met him, the minute I was alone in a room with DAVID BOWIE, I melted. All this time, what we were consuming wasn’t this or that great/OK/below-par/dreadful music, but a cherished idea of Bowie, his chat, strangeness and charm. My half of that interview for *Esquire* mostly consisted of me grilling him about *Young Americans*, *Station to Station*, *Low*, and him plugging the ‘ideas’ behind *Outside* (outsider art, blood theology, chaos theory, ritual scarification, you name it). This was very much characteristic of the time. The days of interviews full of LA covens and drug confessions were over and he began to sound like any other old rocker with product to flog. For *Never Let Me Down* (1987) he ‘explained’ how he wrote such and such a song because of his concern with making a ‘statement’ about the homeless in the US. Ditto this one about Chernobyl. Ditto the one about (what else?) Margaret Thatcher. Those song ‘explanations’ are exactly the kind of thing you might have expected from someone not quite as smart as Bowie trying to do a Bowie: Bowie lite. Ironically, they gave the game away by being way too heavy. He could seem almost like a *Pete and Dud* parody of an up-himself rockstar. ‘Left field’ became a kind of default position – but it was more like a fond parody than the real future shocks that works like *Low* and Iggy’s *The Idiot* represented when they were released to a baffled world. 
Using Burroughs’s cut-up method was a liberating conceit to begin with, but after twenty or thirty years’ duty it began churning out a kind of homogeneous ‘Bowie-osity’ that turned the texts of his songs into an opaque shield, a way not to reveal anything very much at all. When Robert Christgau claimed that one of below-par-Bowie’s main traits was ‘the way he simulates meaning’ he got it just right. The more stress Bowie put on his songs’ ‘meaning’ the less they sounded meaningful to the listener. It began to feel as though the music was there to facilitate the headlines and interviews and cover shots, rather than the other way round: the whole point was to keep the brand ticking over, the name alive. Legacy. The archive. Especially after he converted himself into a ‘celebrity bond’ in 1997. Bowie bonds were ‘asset backed securities of current and future revenues’. In other words, you were investing in the value of his song archive. By March 2004 they had been downgraded from A3 status to a notch above junk bond, then they were liquidated in 2007 ‘as planned’, at which point all the rights reverted to Bowie. In a way, things like the Bowie Bonds, or his much derided global *Glass Spider* tour of 1987, were far more in keeping with the future of rock – rock as marketplace and archive rip and nostalgia drip – than the old-fashioned idea of shock songs he tried to revive on records like *Outside*, all those putatively more ‘risky’, ‘eclectic’ raids into drum and bass, Nine Inch Nails and outsider art. The real business of creativity was gigantic tours, archive retrieval, branding, legacy. While working on this piece to the background sound of ambient TV, all of a sudden I heard him call my name: I guess he had to crop up on a Vintage TV 1980s Special at some point. The featured clip was a live one from that notorious *Glass Spid*er tour and he was doing one of my favourite songs off one of my favourite albums (‘Breaking Glass’ from *Low*) and – how can I put this? – it was a bit crap. His look was all bright primary colours and sharp angles, hair the colour of two-day-old crème fraiche. Every last trace of the song’s original strangeness had been scoured away, replaced by brassy professionalism. It had lost its heart to the starship trouper rocking everything up to 11 – at one moment he even threw in some old mime moves. His honking sax player had a big feather in his big, sad, I’m Such a Character hat. ‘You’re such a wonderful person,’ Bowie sings, then the girly chorus goes, ‘BUT YA GOT PROBLUH!’ ‘I’ll never touch you!’ – once, twice, again, over and over and over again, with something like nightmarish irony, almost as if he knew what damage he was doing to this lovely ghostly song, and how appalled many of us would be. Watching Bowie selling himself to the world was a depressing sight, whereas I always found *Low* itself delirious fun; I never, ever, got the idea that it was depressive or depressing or ‘plastic soul’ or ‘alien disko’ or any of the other labels applied by a baffled rock press, a large leather-jacketed percentage of whom, let it not be forgotten, got this work entirely wrong at the time. Even though it now seems impossible not to hear that Bowie is opening up his heart for possibly the very first time, it was somehow judged not authentically rock and roll enough. Even a song like ‘Always Crashing in the Same Car’, which conveys a feeling of hopeless stasis, is, in sonic terms, sheerly lovely. 
It somehow manages to make the feeling of going round in gluey circles sound like some new kind of homeopathic druggy bliss. In order to fall to earth, you have to be way up in the blackness and stars to begin with. What is the alleged Bowie ‘legacy’ if not a permission to dream, to fantasise, to get things wrong, to change horses in mid-air? The failure to communicate this is maybe why I find a lot of the new Bowie books admirable enough on their own terms but ultimately a bit disappointing – they seem all too sensible, linear, stuck safely inside certain conventions. Paul Morley is best known for cheerleading the infectious ‘New Pop’ of the 1980s, and as a pithy postmodern interviewer. I worked (and played, and plotted) with him at the *NME* in the 1980s and still regard him with something beyond mere affection, but I don’t think even he would claim to be suited to writing any kind of ‘critical biography’. His 480-page *The Age of Bowie* (flap: ‘a startling biographical critique of David Bowie’s legacy’) was allegedly written in ten weeks, and fair play to him for managing to write a whole book in about the third of the time it might take (cough) some writers to squeeze out a tiny review, but – well, it has to be said, it shows. I’m completely mystified by writers who won’t let themselves be edited, and Morley is the Anti-Edit Man; I often wonder if he has an actual clause in his contract that forbids anyone erasing a single line. *The Age of Bowie* is fatally holed before it gets properly underway by a woefully self-indulgent, overlong introduction that completely skews the biographical timeline. The book is two-thirds over and we’re barely out of the *Low*/*Heroes* period: it’s like one of those painted demo slogans that start out in big bold letters then have to scrunch everything together by the end. The rest of Bowie’s life goes by in a flash, and the ‘critical’ part of this critical biography turns to hagiographical incense. Morley rushes through a jittery, excitable version of conventional biography, as if trying to break some kind of world land-speed record – but I’m not sure he has a single new fact or surprising interpretation. I went back to an earlier biography from my Bowie shelf – Christopher Sandford’s *Bowie: Loving the Alien* (1996) – to compare a page or two, and found I couldn’t put it down. Sandford is especially good on Bowie in the 1980s and 1990s. He deals with all the stuff that fancier or more fanciful authors wouldn’t touch with a bargepole: record company politics (the behind the scenes fiasco of *Black Tie White Noise* is a highlight), sales figures, advances, the drudgery of touring, backstage tantrums, fleeting romances, house moves and so on. The portrait of Bowie that emerges is fascinating: it makes you wonder about all the versions of him we’ve been given, and have taken at face value down the years. If Sandford shows him as flawed – not the perfect five-moves-ahead manipulator of Morley’s schema – it surely also shows a more faceted, human Bowie: another Bowie to Otherness Bowie, a mortal, fallible man. For convinced superfans like Morley, Bowie was an advertising/design genius as well as a great pop musician; but a quick flick through later middle-period sleeves and ‘looks’ produces wince after wince. Music may have been the least of Bowie’s preoccupations during much of that period. 
He had a happy marriage, a seat on the editorial board of *Modern Painters*, various acting jobs and spots of journalism (as well as, apparently, endless tinkering with some kind of long-mooted novel/screenplay). Plus, of course, there was a never-ending stream of rock and lifestyle mag interviews – though it’s not clear how much they were a grudging duty insisted on by EMI after the huge sum of money it had invested in him. And then there was Bowie’s marriage to Iman, all arranged around a glossy *Hello!* magazine mega-spread. This event was subsequently talked up (rather unconvincingly, it has to be said, as if he were trying primarily to convince himself) by Brian Eno, who claimed his friend was a far-sighted celebrant of some strange new ritual/cultural paradigm. I don’t know that *The Age of Bowie* even begins to come to terms with the manifold contradictions of Bowie’s ‘legacy’. My own feeling is that by the time we get to the end of the book we have learned more about Morley than we have about Bowie. Still, at least it possesses a nutty, Maileresque kind of *grande ambition*, whereas Rob Sheffield’s *On Bowie* reads more like a series of affably intimate blog entries. (You can get an idea of Sheffield’s jauntily faux-naïf tone from the titles of two of his previous studies*: Talking to Girls about Duran Duran: One Young Man’s Quest for True Love and a Cooler Haircut*; and *Turn around Bright Eyes: A Karaoke Journey of Starting Over, Falling in Love, and Finding Your Voice*. I don’t know about you, but I began to lose the will to live somewhere during the second subtitle, and the word ‘journey’ thus employed is banned in our house.) Sheffield lost my sympathy as soon as he suggested that Nicolas Roeg didn’t know what he was doing in *The Man Who Fell to Earth*, which he regards as some kind of hopeless artsy mess, redeemed only by Bowie’s presence. (Rather than, say, a precisely plotted allegory about certain subjects that had an obvious pull for Bowie: loss of innocence; self-willed fall through excess and addiction; media overload; English artfulness and reserve in the landscape of American spectacle and get-it-now emptiness.) Both Sheffield and Morley manage some sharp, rapt writing on Bowie’s LA-to-Berlin crack-up period, but given the richness of the material, anyone who couldn’t get a few good lines out of it should be drummed forthwith out of the rock writers’ guild. Sheffield’s *On Bowie* (white cover with red and blue Aladdin Sane flash; 197 pp.) also bears a spooky resemblance to Simon Critchley’s *On Bowie* (white cover with red and blue Aladdin Sane flash; 207 pp.), ‘a version of which was first published in 2014’. Critchley’s elegant text is far more to my taste than the many encyclopedic volumes in vogue at the moment. He makes use of heavyweight theory names, but doesn’t belabour the reader with academic lingo. Tying together early and late Bowie he arrives at a ‘Lazarus’ figure occupying a figural ‘space between the living and the dead, the realm of purgatorial ghosts and spectres’. It probably helps that he originally spun this steely web without any pressure to conform to the post-death consensus. (Bizarrely, a recent review in the *Observer* called Critchley’s work ‘hastily written’, while Morley’s ten-week blitz was termed an ‘aide memoire’. Go figure.) Simon Reynolds, on the other hand, represents the geography teacher tendency of rock crit, all muscular spadework and measured appreciation. 
He is enviably industrious, his books are scrupulously researched, and he gives the impression of having heard every B-side ever recorded. At its best, his approach can have a cumulatively enlightening effect, but he can also come across like a jovial cultural studies lecturer dutifully ticking off bullet points. Back in the day, Reynolds always seemed to be trying to uncover some new trend or invent an exciting new micro-genre. Now that the future looks a lot less certain, he specialises in turning over the rich dark humus of the recent past, and *Shock and Awe* (a baffling and maybe even tasteless title) is a pre-punk book to go with his post-punk opus *Rip It Up and Start Again* (2005). In terms of covering every last inch of ground it can’t be faulted. What’s less evident is an invigorating theoretical framework or overview – something that might shatter snoozy old paradigms. I got to the end of the book without being any wiser about how we’re living today with the backwash of Glam (unless, of course, you want to delve into certain foul nooks to do with historic sex allegations). A full and convincing explanation of the nature of Glam’s ‘legacy’ never quite arrives; instead, there’s a scattershot treatment in the final section’s inevitable list/diary/round-up, the weakest part of the book by a considerable way. When Reynolds manages to combine high and low culture, like two tipsy strangers at a wild party who would never have met otherwise, it can be great – a sort of local version of Benjamin’s yearned-for flash of temporal insight: history with a lit fuse. But over the course of 650 pages it begins to feel like trying to see every single ‘iconic’ work in a major gallery in one exasperating go. (An excerpt from Reynolds’s index: ‘Hilton, Paris; Himmler, Heinrich; Hitler, Adolf; Hobsbawm, Eric; Hockney, David; Holder, Noddy’.) One problem with this kind of polymorphously clued-in work is that the author has to pretend to a functioning expertise in a dozen different disciplines (sociology, aesthetics, fashion, musicology) and thereby opens himself up to the jibes of actual experts in those areas. For instance, Reynolds asserts that ‘Magic and self-aggrandisement go together,’ and that ‘Aleister Crowley’s dictum “Do what thou wilt shall be the whole of the law” enshrines this egocentric world view of the disobedient child.’ Well, no, it really doesn’t; it does more or less the opposite. ‘Do what thou wilt’ doesn’t mean ‘do whatever you please and bugger the consequences,’ it means find the one thing you are meant to do and devote yourself to it. I’m not sure how seriously Reynolds wants us to take some of this stuff. You have to wonder if all the Nietzsche-boy references, say, aren’t a bit heavy for Marc Bolan’s frail wee shoulders. I’ve always thought of Glam as a kind of Op-Rock: the lines that made it up are pretty broad and not that special on their own, but taken together up close they can really go to your head. How seriously should we take a Bolan masterwork with lyrics that run: ‘Did you ever see a woman coming out of New York City/With a frog in her hand?’ Isn’t this really just the Op-Rock equivalent of ‘How Much Is That Doggy in the Window?’? Was Bolan really Reynolds’s troubled Wildean dandy, or just an empty chancer who did a few lines and babbled at any interviewer from the pop trades, giving them zingy fibs and fabulations? I recently caught Bolan on afternoon TV singing ‘Get It On’ and his appeal struck me as uncomplicated and elemental. 
Musically he was from the school of Chuck Berry, but instead of a pervy black granddad, Bolan was an impossibly cute girl-boy in pink satin, Mickey Mouse T-shirt and grandma shoes. He really did look androgynous: willowy and hard at the same time, softly inviting and unthreatening but with enough of a hint of lippy carnality to keep a young female fan base interested. And he seemed especially sexy alongside the rest of the programme’s line-up: Noddy Holder in a big flat cap, Paul Simon’s comb-over, a very sweaty Marvin Gaye in a bobble hat, and ELO’s Jeff Lynne with that scenery-eating perm. When it comes to David Bowie I think Reynolds gets the tone exactly right, and pulls together a really superior potted Bowie biog. In this telling, the years that matter aren’t just Berlin and Los Angeles but also 1964-70, the apprentice years leading up to his Glam breakthrough. I was delighted to see Reynolds being properly respectful to a big influence on Bowie that other critics shy away from: Anthony Newley, a Light Entertainment renaissance man – actor, singer, songwriter – who trod the thin line between shameless MOR schmaltz and nervy conceptual daring. This is a great lost period of pop history, with its dozens of little papers and magazines, fan clubs, pop agony aunts and countless queer cross-currents: oscillation in and out of the closet (and the closet sometimes even a site of uncanny power); the dissolving together of timeless theatricality and pop temporality; deals done in gay drinking clubs, golden youths picked up and polished then abandoned by Machiavellian gay managers. (One key difference between Mark Feld and David Jones was that the latter was maybe happier to go that extra inch.) We might have expected *Blackstar* to be fatally earnest, with everything stripped away leaving only the most bitter daily bread, the mortality blues – a kind of Bowie unplugged. But he chose to go another way, out on a limb for one last bout of serious fun, as scattershot cut-up ‘pretentious’ as ever. Glints of this, splints of that, ambiguous headlines, crowd-sourced jazz noise, theological dust-up: it’s all still there, just as it was in the early bird days. The video for *Blackstar*’s title song looks like it’s set in the world of Ernst Jünger’s a-chronistic sci-fi novel, *Eumeswil*: ancient Egypt or Rome, with Bowie as 23rd-century schizoid shaman. One last play with time and identity: a combination of Far East worship and fear, bandaged eyes (ironic fate for a master of image) and politics as primal myth. This was always the lure with Bowie: to see how his mind worked, to find out what it had been turning over lately, see what he’d pressed into the circuit and what came out the other end. Which is why the 1970s worked so well. Who could have predicted *Young Americans* and *Station to Station*, or *Low* and *Heroes*? It was like the zeitgeist had its loving arms around him, pulling him forcefully by the well-tailored sleeve, pushing him further, so that at a certain point he almost couldn’t tell any longer if the stuff he was taking in and processing was benign or demonic, darkness or light, and no longer cared. He simply accepted the dare. *Blackstar* is Bowie returning to a well-worn spot where he wonders about belief, religion, the cross around his neck. Heaven or hell, who knows, and maybe the key to the sacred is when you accept that you’ll never know and accept that you need both. He left possibly the loveliest image of all right to the end. 
It’s part of the promo session for *Blackstar* taken by a longtime friend and photographer, Jimmy King, and released on Bowie’s 69th birthday. He’s out in the open, suited and booted to rip through his final curtain. I may not be the biggest Bowie fan in the world, but I still open this pic on my desktop when I have the mean reds and need a spiritual kick up the arse. One final uplifting performance of self! He looks like a little old man, a greying leprechaun, but also younger than springtime. He’s on the finishing line of life, but his body is folding into an arc of pleasure, as if it’s about to leap off into one more flimsy unknown. All the little black stars are tunnelling within, but here he is in meeting-the-accountant suit and soles, and meeting the monster laughing fit to burst. Pure joy creasing the familiar face. Maybe he’s even laughing at one last silly Narcissus reflection: ‘Just look at me, I’ve finally become that god-damned laughing gnome after all.’ A wistful kind of collective love enveloped him at the end, when the world was won over by the dignity of his not selling his failing health to the global media. Finally, he had everyone on side before he had even said a word. Finally, he let the words, music and images speak for him, with no scare quotes or cloudy art-speak verbiage or winning London Boy wink. On *The Next Day* (2013) and *Blackstar* Bowie seems more present, as though illness had had a calming or restorative effect alongside the natural grief. Surely one of the first (and potentially worst) strikes of illness for any performer is that it strips away image (false or otherwise), a large part of a star’s capital no matter their age or how wryly they may now regard the other self sent out to do battle on the stages and screens of the world. But it’s also possible that sloughing off the brittle patina of image can bring unexpected revelations, a nakedness that restores context. Before you know it you may hear yourself say: ‘Maybe I can do anything I like now, after all.’ The cliché is that illness shows the mottled wolf skull beneath the pampered skin – but it can also be a welcome corridor, returning you to places you’d left behind. Suddenly, in the antiseptic hospital room one afternoon, you remember them all: so many unstarry things. The way shadows caressed a wall in a vacant lot in Berlin, one rainy November day in … 1976, was it? A scrum of garish fans surrounding you on Sunset Boulevard. Postwar London, whose bombsites seemed to harbour all the time in the world. Make-up counters, listening booths, bakelite curves, saloon bar mirrors, diamanté in a jewellery box that played *Swan Lake* when the lid clicked up. The strange snake hiss of early TV. A new world inventing itself in the middle of the 20th century, when images were things that genuinely shocked, carriers of forbidden knowledge. Something torn from a Hollywood gossip mag or a single image in a clunky library book on Surrealism could literally change your life. Penguin Modern Classic paperbacks; Genet and his cruisey down-is-up theology; Andy and his abyssal Wow. The surprising new meanings ‘love’ could develop far away from home. Backstage’s suffocating air. The way she walked; the way she talked.
true
true
true
The scene-setting picture of Bowie at home featured black candles and doodled ballpoint stars meant to ward off evil...
2024-10-12 00:00:00
2017-01-05 00:00:00
https://www.lrb.co.uk/st…/LRB-3901-01.jpg
article
lrb.co.uk
London Review of Books
null
null
36,480,720
https://asia.nikkei.com/Business/Tech/Semiconductors/ASML-says-decoupling-chip-supply-chain-is-practically-impossible
ASML says decoupling chip supply chain is practically impossible
Staff Writer
VELDHOVEN, Netherlands -- Decoupling the global semiconductor supply chain would be "extremely difficult and expensive" if not impossible, a senior executive at ASML, the world's most valuable chip equipment maker, told Nikkei Asia. Christophe Fouquet, ASML's executive vice president and chief business officer, said in an exclusive interview that any single country would struggle to build its own fully self-reliant chip industry.
true
true
true
Top equipment maker sources globally while keeping most production in Netherlands
2024-10-12 00:00:00
2023-06-22 00:00:00
https://www.ft.com/__ori…=auto&height=630
website
nikkei.com
Nikkei Asia
null
null
24,838,411
https://www.theguardian.com/money/2020/oct/20/monzo-launches-180-a-year-premium-account-despite-downturn
Monzo launches £180-a-year premium account despite downturn
Kalyeena Makortoff
The digital bank Monzo has launched a premium account, complete with a metal payment card, as it tries to convince more customers to pay for its services. Customers can pay £180 a year, or £15 a month for a minimum of six months, for the privilege of Monzo Premium. Its key features include travel and phone insurance, £600 of fee-free withdrawals abroad each month, a sleek white steel card and 1.5% interest on deposits up to £2,000. The travel insurance will cover cancellation costs up to £5,000 as well as flight delays of more than four hours, which Monzo’s chief product officer, Mike Hudack, said could come in handy as UK travel restrictions change in response to the Covid crisis. It marks a renewed push into fee-paying services for Monzo, which has 4.6 million customers and is particularly popular among millennials. In July it relaunched its paid-for Plus account, which it originally offered and then withdrew last year. It has also announced charges for customers who do not use its service for everyday banking but make cash withdrawals of more than £250 a month. Hudack said the premium account had been in development before Covid and he was confident that customers would be willing to pay, despite the economic impact of Covid which has sent the UK unemployment rate to 4.5%. “Everything we do needs to make money and be sustainable as a product that’s really key,” Hudack said. “You can’t get greedy, you can’t take too much, but if you build something that is really powerful that people love and they want to keep, you make money from that.” John Cronin, a financial analyst at the stockbroker Goodbody, predicted Monzo would struggle to squeeze more money out of its users. “While customers will undoubtedly be pleased with Monzo’s latest offering, it is unlikely to address the significant challenges the company faces in the context of monetising its young customer base,” he said. Monzo’s streamlined Plus account went on sale in July and gained 50,000 paying users – charged £5 a month – within the first four weeks. That month Monzo revealed that its annual losses had ballooned to £114m for the year to February, up from a £50m loss a year earlier. It said the financial strain of the Covid-19 crisis had put the company’s future at risk and, along with stricter regulations meant to combat financial crime, it could result in lower customer numbers, higher costs and lower revenue.
true
true
true
Digital bank says it is confident its mainly younger customers will pay for extra services
2024-10-12 00:00:00
2020-10-20 00:00:00
https://i.guim.co.uk/img…02daffae2f2ff9d9
article
theguardian.com
The Guardian
null
null
23,053,341
https://blog.datagran.io/posts/less-growth-hacking-more-growth-science
Less Growth Hacking, more Growth Science.
Carlos Mendez CEO; Co-Founder
According to Wikipedia, the goal of growth hacking strategies is generally to acquire as many users or customers as possible while spending as little money as possible. A growth hacking team is made up of marketers, developers, engineers and product managers who specifically focus on building and engaging the user base of a business. The typical growth hacker often focuses on finding smarter, low-cost alternatives to traditional marketing, e.g. using social media, viral marketing or targeted advertising instead of buying advertising through more traditional media such as radio, newspaper, and television. For the last decade, growth hacking teams have been focused on finding data points or opportunities to "hack" the system and acquire clients at a lower cost via product iterations or UX, and to increase the LTV of users. All of these findings are great, but with the advent of Artificial Intelligence, growth hacking teams are struggling to integrate Data Science into their day-to-day process. The main reason for the struggle is that companies are hiring data scientists, data architects and data engineers who live in silos within the organization, in part because this is a department that serves the entire organization, not just Marketing. But wait, we had the same problem before we came up with growth hacking teams, right? I mean, engineers and product managers were also part of different departments in the past, and startups were the ones who figured out that companies should bring this talent together into growth hacking teams. Now, with Artificial Intelligence, it is not that simple because, unfortunately, the talent is scarce. By 2025 there will be a shortage of 2.5 million data scientists, according to LinkedIn and Upwork. Usually the Data Science team, if it exists at all, is small, and that tends to cause inefficiencies. For example, imagine departments like operations, sales and marketing funneling all of their requirements, at once, to a single small department. These departments become, in effect, "labs" within the organization. As growth hacking teams come to understand the importance of Artificial Intelligence in their operation, to reduce churn, predict user behavior, or increase LTV for example, some profiles inside the team have started to migrate into data analyst roles. So now, the problem is different. Tools in the market for AI are still too specialized and siloed, the same way organizations are. Existing tools are focused on data scientists and data architects, the same way that in the '80s airplanes needed to fly with a flight engineer. Functions of the flight engineer included inspecting the aircraft and overseeing fueling operations before flight. During the flight, the flight engineer monitored the performance of the engines and cabin pressurization, air conditioning, and other systems. So, like flying in the '80s, growth hacking teams need to solve the problem the way the airline industry did 20 years ago. First, we need developers, product and marketing people to get trained in data and Machine Learning. They need to understand how it works and what it can do for a business, the same way pilots had to be trained in avionics and engineering. Second, we need the technology to catch up: we need no-code tools that can be operated by non-technical professionals, the same way pilots can now fly planes without a flight engineer.
For example, having access to software that can, by default, set a Spark configuration to run a linear regression and then automatically send the output to a business application. If migrating from a marketing profile to a growth hacking professional was hard, upgrading yourself to a growth science professional will be 2x harder. A growth science team leader would need to have strong management and marketing skills, a computer science background and data analytics capabilities. The team will now have to operate and execute a bit differently: first aggregating and cleaning large amounts of data, then analyzing and visualizing it, building theories and hypotheses on how AI and ML can solve specific problems at scale, and preparing business applications to run the outputs of the models. It is no longer just about working with Excel files or Google Sheets and building rule-based systems. There's a new business level, and professionals need to level up, fast. Although we have started to see the shift, very few professionals are talking about it. Let's start the conversation.
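To make the Spark example above concrete, here is a minimal, hypothetical sketch of that kind of pipeline written in PySpark: load user-level data, fit a linear regression, and write the predictions somewhere a business application could pick them up. The file name, feature columns and LTV target are assumptions for illustration only.

```python
# Hypothetical sketch: fit a linear regression with Spark ML and hand the
# predictions to a downstream "business application" (here, a CSV it could ingest).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = (
    SparkSession.builder
    .appName("growth-science-sketch")
    .config("spark.sql.shuffle.partitions", "8")  # the kind of Spark setting a no-code tool would set by default
    .getOrCreate()
)

# Assumed input: one row per user with engagement metrics and an observed LTV column.
users = spark.read.csv("users.csv", header=True, inferSchema=True)

feature_cols = ["sessions", "purchases", "days_active"]  # hypothetical feature columns
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
train = assembler.transform(users)

# Fit the regression and score the same users.
model = LinearRegression(featuresCol="features", labelCol="ltv").fit(train)
scored = model.transform(train).select("user_id", "prediction")

# "Send the output to a business application": write a CSV a downstream tool could load.
scored.write.mode("overwrite").csv("ltv_predictions", header=True)
```

A no-code tool would hide all of this behind a visual workflow; the sketch only shows the kind of work such a tool performs under the hood.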
true
true
true
According to Wikipedia, the goal of growth hacking strategies is generally to acquire as many users or customers as possible while spending.
2024-10-12 00:00:00
null
https://cdn.prod.website…e5e99_hacker.png
website
null
null
null
null
7,742,812
http://rohit.io/dikhao-quickly-find-all-related-aws-resources.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
875,509
http://illusioncontest.neuralcorrelate.com/2009/the-illusion-of-sex/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,022,899
https://www.facebook.com/ads/preferences/?entry_product=ad_settings_screen
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
13,401,197
http://nautil.us/issue/44/luck/the-deceptions-of-luck
The Deceptions of Luck
David J Hand
Would you say you are a lucky person? Have unexpected things turned up which made your life better? I don't mean something as extreme as a major lottery win, but perhaps getting a job because a stronger candidate dropped out with the flu, or catching the train despite being late because it was delayed? Or would you say you are unlucky? You missed the key job interview because you caught the flu, or missed that train because it was cancelled? Or perhaps you don't believe in luck, thinking that people make their own good—or bad—fortune, and that success in life is down to hard work and persistence. Of course, even if you believe that, it can't be a complete explanation—no matter how hard you worked, you could not make that cancelled train appear. There are always things beyond your control. Luck is obviously closely related to the concept of chance, but it's not quite the same. Chance describes an aspect of the physical universe: It's what happens *out there*. The coin coming up heads rather than tails, the die falling to show a six, and even a particular one of the 45,057,474 possible tickets in the United Kingdom National Lottery being drawn. In contrast, luck attaches a value to the outcome of chance. Luck is chance viewed through the spectacles of good or bad fortune. It's really good news, at least for you, if you win the lottery, and it's really bad news if you're one of the passengers on the plane when it crashes. Chance, then, is the *objective reality* of random outcomes in the real world, while luck is a consequence of the subjective value you place on those random outcomes. Luck, we might say, is chance with a human face. Understanding this gives us a clearer view of reality, and a clearer view of reality means we can choose better courses of action. Good luck is something to be desired—having good luck means that the chance events you experienced had positive outcomes. And bad luck is something you hope you don't get. Which naturally leads to the question: Is there anything we can do to make ourselves luckier? We could try to do this by changing what we regard as a good outcome, but that seems unreasonable. Slipping on ice and breaking your leg seems unlucky however you look at it, whereas it's hard to see winning the lottery as unlucky. So perhaps instead we should look for ways to alter the chance, the probability, that different outcomes will occur. And the world is full of beliefs that we can change our chances, and hence our luck. Superstitions are examples: baseball pitcher Turk Wendell drawing three crosses in the dirt before pitching; Manchester United soccer player Phil Jones putting his left sock on first when the team played at home, but his right sock on first when the team played away; you taking your favorite pen into the examination room. Unfortunately, there's precious little evidence that any such things increase the chance of a favorable outcome. On the other hand, there's an old saying, attributed in various forms to Thomas Jefferson, Stephen Leacock, Sam Goldwyn, and others: "The harder I work, the luckier I get." It's certainly true that if you train hard you are more likely to win a sporting event, but it clearly does not explain everything. Your hard work does not reduce the chance of being kept awake by noisy neighbors the night before, or slipping on a wet patch as you run during the race. And people seem to win lotteries regardless of how dissolute a life they lead.
Louis Pasteur said something similar: “Chance favors the prepared mind”—making sure you are able to recognize and grasp opportunities when they arise. One kind of preparation is to take advantage of what’s called the law of truly large numbers.1 This is not the same as the statistician’s law of large numbers, which describes how averages get closer and closer to a fixed value the more numbers you put into the average. It’s something quite different. Let’s begin with the truism that, while you have a tiny chance of winning the lottery if you buy a ticket, you can *guarantee* that you *won’t* win if you don’t buy one. So that’s quite a big difference between two chances, from nothing to something, even if that something is still very small. But we can then take this idea further. Obviously, the more (differently numbered) tickets you buy, the greater your chance of winning. Buy 1,000 tickets instead of just one, and your chance of winning is 1,000 times greater. Buy 1 million—a truly large number—and your chance is even greater. I should comment parenthetically that I am not encouraging you to buy lottery tickets. With the U.K. National Lottery a single ticket has a 1 in 45 million chance of winning the jackpot. If you buy 1,000 tickets you still have only a 1 in 45,000 chance of winning. That’s less likely than getting 15 heads in a row tossing a fair coin. Is that something you’d want to bet on? Still, this example shows that if we increase the number of opportunities for a very improbable event to occur (drawing the winning ticket), we can increase dramatically the chance that it will happen. Put another way, if we give ourselves lots of chances for something good to happen, we can increase our chance of succeeding. This is beginning to look very much as if we can improve our luck. Actually, we can take this further. If you had a spare £90 million lying around (that’s £45 million times two; times two because each ticket costs £2), you could buy all possible combinations of six numbers between 1 and 59 (each ticket in the U.K. National Lottery consists of six numbers chosen from 1 to 59), and so guarantee holding the jackpot winning ticket. This is going beyond the law of truly large numbers and entering the realm of the law of inevitability. This simply says that one of the set of all possible outcomes *must* happen: It’s *inevitable* that one of them will occur because, by definition, there isn’t anything else. Returning to the law of truly large numbers, it obviously does not apply solely to winning the lottery, and in fact the numbers need not always be truly large. Di Coke lives in Brighton in the U.K., and has won over £300,000 worth of prizes from competitions. These have included overseas holidays, a trip to the Brazilian Grand Prix, a trip to New Zealand, a car, five iPods, two computers, a ticket to the British Academy of Film and Television Arts awards, and money. Overall, she averages wins equivalent to more than £15,000 per year. What a lucky woman, you might think. But the fact is Di doesn’t rely on blind chance; she uses the law of truly large numbers to increase her chance of getting lucky. She does this by entering over 400 competitions per week. She might have a small chance of winning any one of them, but with so many competitions over the course of a year the chance that she won’t win *any* is vanishingly small. She can pretty well guarantee some wins. She’s making her own luck. 
And have you ever read enviously of an entrepreneur who has just sold his start-up for millions of dollars? If so, ask yourself, was that their first attempt? Did they in fact keep going, through failure after failure, until they happened to hit the jackpot? And what about job applications? Appointment to a position always involves an element of chance. Who else has applied? What exactly was the appointments panel looking for? Will all the members of the panel see eye to eye? You may be unlucky not to get a particular job, but keep applying and you increase your chance of winning *some* job. The key to all this is to give chance the opportunity to produce an outcome in your favor: to give yourself a chance of getting lucky. If you don't apply for the job because you believe you will be unlucky, then you are certain not to get it. By increasing the chance of a favorable outcome, you can make your own luck. But when Pasteur commented on a prepared mind, he didn't just mean keep trying until you succeeded. He also meant something deeper: that he was *ready* to see an opportunity when it arose, and to see the links and relationships that others wouldn't notice. This notion also applies in job applications as much as in science. Prepare for that application by carefully studying what they are after and you are more likely to be lucky. At a higher level, people who regard themselves as lucky will tend to be more outgoing. And there's certainly a causal relationship here, although it works in both directions. An outgoing and positive person will be more open to new ideas, new people, and new experiences, and so give themselves more opportunities for positive things to happen. They are more likely to get lucky. But then, positive experiences—learning that good things happen—are likely to make someone more prepared to risk new things. Leading to a self-reinforcing cycle. We've already seen that luck is different from chance: It's chance viewed through the mind's eye. This is illustrated very clearly by people's attitudes to accidents. For example, people often describe as *good* luck events that an objective observer might describe as horribly *bad* luck. Someone involved in a traffic accident in which their car rolled over three times and was a write-off, but from which they emerged unscathed, might well remark on how lucky they were. Others might think they were very unlucky to have been involved in the accident in the first place. If a rock falls off a cliff face just as you are walking beneath, and narrowly misses you, you might well say how lucky you were that it missed, instead of saying how unlucky you were to be there at the time it fell. But whether you see it as lucky or unlucky, the chance event is the same. The fact that luck is a human construct is forcefully brought home when we experience a sequence of chance events, one after the other. For example, an accumulator bet involves betting on just such a sequence of events, winning only if all of the events in the sequence happen. In August 2015, a Manchester United fan from Lichfield in the U.K. placed a 30p bet on a sequence of outcomes of 15 soccer games, leading to winnings of half a million pounds. But although this outcome has almost the same *chance* as getting the first 14 soccer matches right and the last one wrong, we would have described him as extremely *unlucky* had he lost that last bet.
And in 2008, when Yorkshireman Fred Craggs placed a 50p stake on an eight-horse accumulator, and watched all of his horses romp home, giving him a cool £1 million winnings, we would have described him as unlucky had he got all but the last one right. It doesn't always go as planned. Indeed, it rarely does so. Joe McGuire placed an accumulator bet on a sequence of six horse races, watching his £10 million jackpot get closer and closer as the first five winners came in according to his predictions, only to lose it all when Escape to Glory and Justonefortheroad, the two horses he had bet on in the final race, came in second and sixth. And guess what? One report described him as "Britain's unluckiest punter." Suppose we are pretty good at picking winners in horse races, and can pick the winner of any race with a probability of ½. That means that we can expect to get about half right. Now suppose we bet on a five-step accumulator. Then the chance of getting the first right is ½, the chance of getting both of the first two right is ½ × ½ = 1/4, the chance of getting the first three all right is ½ × ½ × ½ = 1/8, and so on up to a chance of getting all five right of 1/32. This idea has been used in unethical stock price movement predictions. We begin by claiming to be able to predict stock movements; in particular, we will aim to predict whether the market will move up or down next week. We identify 1,024 people, and to 512 of them we send an email saying the price will move up next week, and to the other 512 we send an email saying it will move down. We will be right for one of these two groups, and we discard the other. Next week, for 256 of the 512 we got right, we send an email saying the price will move up and to the other 256 an email saying it will move down. Again we must get it right for one of these two groups. And we go on in this way, always dropping the group we sent the wrong prediction to, and turning our attention to the other. After 10 weeks like this we have one person left, who has seen us make 10 successive correct predictions about the stock market movements—*and does not know about the other 1,023 people*. To this person we then send an email saying something like "you can see that our algorithm works. If you want our prediction for next week, it will cost you $10,000." This scam has made use of the law of inevitability: There are only 1,024 possible up/down patterns in 10 steps, so one of them must come up. And it has also made use of another law, the law of selection, which says you can make probabilities as large as you want if you choose after the event. In this case, step by step, you appear to have chosen the single pattern of 10 correct predictions—*and you can always do this*. A less extreme version of this arises accidentally in the legitimate investment advisor space. Imagine a large population of such advisors, and suppose, for the sake of argument, that none of them are any good—that their predictions are no better than chance. And let's follow their fortunes over 10 weeks of up and down market movements as before. Now, each week, some of them (about half) will get it right, purely by chance—we might say they are lucky that week. Of course, the chance that any particular one of them will get their predictions right for all 10 weeks is just 1/1024, since there are 1,024 patterns.
But if there are enough of these people then we would expect some of them to get all 10 right, just by accident. Or by chance, since they don’t have any skill. But those lucky ones, the ones who got many predictions right by chance simply because it was likely that some would, are the ones who see investors flocking to them. Only to be disappointed as they move into the future and the predictions turn out to be no better than chance, with about half right and half wrong. Di Coke, the multiple competition winner, also uses a variant of the law of selection, though in a less extreme way. Perhaps counter-intuitively, she recommends focusing on competitions that take time and effort. Her argument is that fewer people will enter these competitions, so her chance of winning is greater. This clearly makes sense. At an extreme, if only one person gives the right answer, then that person would win. For this same reason, she recommends against entering competitions that involve random draws (like the lottery). Such competitions are easy to enter, so you get huge numbers of people entering, and if there is to be only one winner—for that exotic holiday say—then you are proportionately much less likely to be that person. Wherever we are in life, we can look back and identify a chain of events that led to us being there. If I hadn’t been raised in *that* village at *that* time, I wouldn’t have met the teacher who introduced me to playing that musical instrument, so I’d never have played in that band, and I wouldn’t have met the woman who had a shared interest in antiques and introduced me to the dealer who offered me the job. “How incredibly lucky,” we might think, “that just those things happened which resulted in me being where I am now.” But that’s misleading. Wherever we’ve got to now must have been preceded by some sequence of chance events that had to play out as they did to lead to us being where we are now. So we can *always* find such a chain. The bottom line is that *stuff happens*. Chance, the essential unpredictability of the natural world just rolls on, flipping things this way and that way at random. But we look at the outcomes, we relate them to our lives, and we interpret them differently. We say, “Wasn’t I lucky?” or “Wasn’t I unlucky?” as the case may be. Luck is our attempt to find meaning in a meaningless universe. *David J. Hand is Emeritus Professor of Mathematics at Imperial College, London, where he was previously a professor of statistics. He studied mathematics at Oxford, and statistics at Southampton University in the U.K. He has published 29 books and over 300 scientific papers.*
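As a quick way to see the "about 1 in 1,024" arithmetic above in action, here is a minimal simulation sketch of the no-skill advisor scenario: 1,024 advisors call the market up or down at random for 10 weeks, and on average about one of them compiles a perfect record purely by chance.

```python
# Minimal simulation of the "lucky advisor" arithmetic: with 1,024 advisors
# guessing at random over 10 weeks, roughly one perfect 10-for-10 record is
# expected per trial (1,024 x (1/2)^10 = 1).
import random

def perfect_records(advisors: int = 1024, weeks: int = 10) -> int:
    market = [random.choice("UD") for _ in range(weeks)]            # actual up/down moves
    calls = ([random.choice("UD") for _ in range(weeks)] for _ in range(advisors))
    return sum(guess == market for guess in calls)                  # advisors right every single week

trials = 500
average = sum(perfect_records() for _ in range(trials)) / trials
print(f"Average number of 'perfect' advisors per trial: {average:.2f}")  # hovers around 1.0
```

Run it a few times and the average stays close to one, which is exactly why a handful of no-skill advisors will always end up looking like stars.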
true
true
true
Nature makes chance, humans make luck.
2024-10-12 00:00:00
2017-01-09 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
8,519,036
http://www.macrumors.com/2014/10/27/amazon-launches-fire-tv-stick/
Amazon Launches $39 'Fire TV Stick' to Compete With Chromecast, Apple TV
Juli Clover
Amazon today announced the launch of a new Fire TV Stick, designed to compete with Google's Chromecast and Apple's Apple TV. The Fire TV Stick is a media streaming stick much like the Chromecast, designed to allow users to plug it in to the HDMI port of a television to access content like TV shows, movies, games, and more. The Fire TV Stick offers a dual-core processor, 1GB of RAM, 8GB of built-in storage, and dual-band/dual-antenna MIMO Wi-Fi. It can be controlled with a remote control, a smartphone, or voice control through an app. According to Amazon, it offers 50 percent more processing power than the Chromecast, along with 2x the memory and 32 times more storage. "Fire TV Stick is the most powerful streaming media stick available--a dual-core processor, 1 GB of RAM, 8 GB of storage, dual-band and dual-antenna Wi-Fi, included remote control, voice search with our free mobile app, easy set-up, an open ecosystem, and exclusive features like ASAP for instant streaming," said Jeff Bezos, Amazon.com Founder and CEO. "The team has packed an unbelievable amount of power and selection into an incredible price point--Fire TV Stick is just $39." In addition to allowing users to access Amazon Prime content, the Fire TV Stick also supports third-party apps like Netflix, Hulu Plus, WatchESPN, Spotify, Pandora, and more, delivering much of the content that's also available on competing products like the Chromecast, the Apple TV, and products from Roku. The Fire Stick offers "Fling" technology, letting users switch between viewing content on their televisions and Fire Phone or Fire tablet, and it provides wireless mirroring from both compatible Amazon products and those that support Miracast. It also supports various Amazon technologies like X-Ray for looking up movie, music, or TV show information, and it utilizes Whispersync to sync all of a user's content. Amazon also has the Fire TV, a set-top media streaming box that's a closer competitor to the Apple TV than the Fire TV stick, but the slimmed down plug in-based media sticks from Amazon and Google are far cheaper than the Apple TV, which still costs $99. While Google and Amazon have concentrated on offering a slimmer portable media solution to consumers, Apple is said to be working on a revamped set-top box that might include support for third-party apps and games along with deeper integration with cable TV channels. It is unclear when Apple might launch its revised set top box, as development has reportedly been delayed several times over the course of the last two years. Amazon's Fire TV Stick is priced at $39, but for the next two days, Amazon Prime members will be able to purchase the device at a discounted price of $19.
true
true
true
Amazon today announced the launch of a new Fire TV Stick, designed to compete with Google's Chromecast and Apple's Apple TV. The Fire TV...
2024-10-12 00:00:00
2014-10-27 00:00:00
https://images.macrumors…ire_TV_Stick.jpg
article
macrumors.com
MacRumors.com
null
null
38,296,103
https://dollarsanity.com/multiple-income-streams-for-busy-people/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,416,498
http://gigaom.com/2010/06/08/xing-founder-tries-euro-twist-on-y-combinator-model/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+OmMalik+%28GigaOM%29
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,260,773
http://news.bbc.co.uk/2/hi/entertainment/8616413.stm
LSD inspired Doctor regeneration
null
Previous Doctors all faced hostility from viewers **Doctor Who's regenerations were modelled on bad LSD trips, internal BBC memos have revealed.** The Doctor's transformations were meant to convey the "hell and dank horror" of the hallucinogenic drug, according to papers published on the BBC Archive. Regenerations were introduced in 1966 to allow writers to replace the lead actor. New Doctor Matt Smith is the 11th Time Lord. The papers also reveal the difficulties of bedding in a new Doctor. In an internal memo dating from 1966, producers outlined how the original Doctor, William Hartnell, would be transformed for his successor Patrick Troughton. It also tackled the "horrifying experience" of the regeneration. "The metaphysical change... is a horrifying experience - an experience in which he relives some of the most unendurable moments of his long life, including the galactic war," it said. "It is as if he has had the LSD drug and instead of experiencing the kicks, he has the hell and dank horror which can be its effect," the memo added. Discussing his appearance, the document stated: "His hair is wild and his clothes look rather worse for wear (this is a legacy from the metaphysical change which took place in the Tardis)." **'Half-witted'** The documents also reveal how new Doctors have faced hostility from viewers. Some members of the audience felt Troughton "exaggerated the part". William Hartnell regenerates into Patrick Troughton "Once a brilliant but eccentric scientist, he now comes over as a half-witted clown," said one viewer. Another told the BBC's Audience Research Department: "I'm not sure that I really like his portrayal - I feel the part is exaggerated - whimsical even - I keep expecting him to take a great watch out of his pocket and mutter about being late like Alice's White Rabbit." His successor Jon Pertwee fared a little better in 1970, although a research report following his first appearance declared: "Reaction to this first episode of the new Dr Who series can hardly be described as enthusiastic." Tom Baker's debut also drew much criticism. "General opinion was that the new Doctor Who is a loony - he is an eccentric always, but the way it was presented made him stupid," said one viewer. **Approval rating** And in 1984, Colin Baker proved to be a turn-off, with one viewer finding him "too stern" and another "too aggressive". Reaction to Sylvester McCoy's debut in 1987 was even worse. His "approval rating" was considerably lower than Colin Baker's, although the reception given to his sidekick Mel, played by Bonnie Langford, was worse. Roly Keating, the BBC's director for archive content, said: "The whole idea of regenerating the Doctor was a flash of genius that's kept Doctor Who fresh and exciting for 47 years now. "As we welcome Matt Smith and Karen Gillan into the Tardis, it's the perfect moment to remember his predecessors and also to celebrate the work of the BBC Archive in preserving these documents and photographs for future generations."
true
true
true
Doctor Who's regenerations were modelled on bad LSD trips, internal BBC memos released on the BBC Archive reveal.
2024-10-12 00:00:00
2010-04-12 00:00:00
null
null
null
BBC
null
null
18,581,404
https://www.reddit.com/r/haskell/comments/2cv6l4/clojures_transducers_are_perverse_lenses/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,976,293
https://www.zdnet.com/article/cisa-says-62000-qnap-nas-devices-have-been-infected-with-the-qsnatch-malware/
CISA says 62,000 QNAP NAS devices have been infected with the QSnatch malware
Catalin Cimpanu
# CISA says 62,000 QNAP NAS devices have been infected with the QSnatch malware Cyber-security agencies from the UK and the US have published today a joint security alert about QSnatch, a strain of malware that has been infecting network-attached storage (NAS) devices from Taiwanese device maker QNAP. In alerts [1, 2] by the United States Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC), the two agencies say that attacks with the QSnatch malware have been traced back to 2014, but attacks intensified over the last year when the number of reported infections grew from 7,000 devices in October 2019 to more than 62,000 in mid-June 2020. Of these, CISA and the NSCS say that approximately 7,600 of the infected devices are located in the US, and around 3,900 in the UK. "The first campaign likely began in early 2014 and continued until mid-2017, while the second started in late 2018 and was still active in late 2019," the two agencies say. ### QSnatch malware has exfiltration capabilities CISA and the NCSC say that the two campaigns used different versions of the QSnatch malware (also tracked under the name of Derek). The joint alert focuses on the latest version, used in the most recent campaign. According to the joint alert, this new QSnatch version comes with an enhanced and broad set of features that includes functionality for modules such as: **CGI password logger**- This installs a fake version of the device admin login page, logging successful authentications and passing them to the legitimate login page.**Credential scraper****SSH backdoor**- This allows the cyber actor to execute arbitrary code on a device.**Exfiltration**- When run, QSnatch steals a predetermined list of files, which includes system configurations and log files. These are encrypted with the actor's public key and sent to their infrastructure over HTTPS.**Webshell functionality for remote access** However, while CISA and the NCSC experts managed to analyze the current version of the QSnatch malware, they say that one mystery has still eluded them -- namely how the malware initially infects devices. Attackers could be exploiting vulnerabilities in the QNAP firmware or they could be using default passwords for the admin account -- however, none of this could be verified beyond a doubt. But once the attackers gain a foothold, CISA and the NCSC say the QSnatch malware is injected into the firmware, from where it takes full control of the device and then blocks future updates to the firmware to survive on the victim NAS. CISA and NCSC urge companies to patch QNAP NAS devices The joint alert says that the QSnatch group's server infrastructure that was used in the second series of attacks is now down, but that QSnatch infections still remain active around the internet, on infected devices. The two agencies are now urging companies and home users who use QNAP devices to follow remediation and mitigation steps listed in the Taiwanese vendor's support page to get rid of QSnatch and prevent future infections. Failing to remove the malware equates to allowing hackers a backdoor into company networks and direct access to NAS devices, many of which are used to store backups or sensitive files.
true
true
true
QSnatch malware, first spotted in late 2019, has grown from 7,000 bots to more than 62,000, according to a joint US CISA and UK NCSC security alert.
2024-10-12 00:00:00
2020-07-27 00:00:00
https://www.zdnet.com/a/…t=675&width=1200
article
zdnet.com
ZDNET
null
null
26,507,017
https://www.xda-developers.com/samsung-experience-10-theme-samsung-galaxy-s9-samsung-galaxy-s8/
Theme brings Samsung Experience 10 design to the Samsung Galaxy S9/S8
Arol Wright
We know that Samsung Experience 10 based on Android 9 Pie is in the works. We recently got our hands on a very early build for the Snapdragon Samsung Galaxy S9 and Galaxy S9+, and we're hoping to get an official beta program before the end of the year as well. Samsung's version of Android Pie looks very, very different from what we're used to on Samsung Experience 9.X based on Android Oreo. There are rounded corners everywhere and lots of white cards among other UI changes in the upcoming Samsung Experience 10. If you dig the look, you don't necessarily have to wait until the Android Pie rollout to get it. In fact, you can get it right now with a theme if you have a Samsung Galaxy S9 or Samsung Galaxy S8 on Android 8.0 Oreo. Obviously, not every single aspect of the newest iteration of Samsung Experience 10 can be recreated as there are changes not attainable with a simple theme. Stock Android Pie is already radically different from Android Oreo both aesthetically and functionally, and Samsung Experience 10 takes those new changes even further and also adds some changes of their own. But for what it's worth, the theme is pretty good at emulating the Android Pie look on Samsung phones. If you're interested in seeing what the newest version of Samsung's software looks like, we have a detailed hands-on article as well as an in-depth video you should check out. The theme brings rounded corners for notifications, the new navigation bar design, and a lot more, so if you like what Samsung is doing with Android 9 Pie, then this is totally worth a shot. It does have a few bugs, particularly a lack of full Android Nougat support (it's optimized for Android 8.0 Oreo, specifically on the Samsung Galaxy S8 and Samsung Galaxy S9) and a lack of rounded corners for notifications on Android 8.1 Oreo. You can download the theme from our forums below. **Download the Samsung Experience 10 Theme for the Samsung Galaxy S8 and Samsung Galaxy S9** ## How to install - Open the Theme Store app and apply the default theme. - Download and install the theme APK from the above link. - Open the Theme Store app via the Wallpapers and Themes section in Settings. - You should see the new Samsung Experience 10 theme here. Apply the trial and then restart your phone. Even though it says it's a trial, it's actually the full theme.
true
true
true
You can now get a taste of Samsung's new Samsung Experience 10 theme on your Samsung Galaxy S8 or Samsung Galaxy S9 without updating to Android Pie!
2024-10-12 00:00:00
2018-09-30 00:00:00
https://static1.xdaimage…eature-Image.jpg
article
xda-developers.com
XDA
null
null
25,165,424
https://www.smithsonianmag.com/science-nature/scientists-create-buzz-first-ever-global-map-bee-species-180976348/
Scientists Create a Buzz With the First Ever Global Map of Bee Species
Smithsonian Magazine; Corryn Wetzel
# Scientists Create a Buzz With the First Ever Global Map of Bee Species Most of the insects avoid the tropics and choose treeless environments in arid parts of the world From the collapse of honeybee colonies to the arrival of bee-eating “murder hornets” in the United States, bees have received swarms of attention recently, yet scientists know surprisingly little about where these animals live. Now a pioneering study, published today in *Current Biology*, reveals that bees avoid moist, tropical ecosystems and instead favor dry, treeless landscapes. The research shows the greatest diversity of species lives in two bands around the globe—mostly in temperate zones—an unusual distribution pattern. Experts say this first-ever map of bee species around the world is a leap forward in understanding and protecting the pollinators that our food supply and ecosystems rely on. “Nobody has, to my knowledge tried to produce a map of bee diversity previously,” says Paul Williams, an entomologist at the Natural History Museum in London who was not involved in the work. “I think it's a fantastic move in the right direction.” “Humans are pretty good at just going for what's easy, which is why we've got really great data on mammals, but then we overlook all the invertebrates, despite the fact they contribute some really important services within ecosystems,” says Alice Hughes, associate professor at the Chinese Academy of Sciences and author of the paper. “If we don't understand what those patterns of diversity look like, we've got no means of trying to conserve them.” Mapping animals of any kind on a global scale is a challenge, but when assessing tiny, similar-looking species with patchy data, the task is particularly daunting. The team looked at nearly six million public records of where bees appeared around the world from five publicly-accessible open source databases. They then compared that information with a comprehensive checklist of species compiled by entomologist John Ascher available on DiscoverLife, an encyclopedia of global species diversity. The checklist includes verified observations, collected specimens and published records. In the public open source records, a bee could be logged in the wrong location because someone misplaced a minus sign when documenting the species, for example, says Hughes. If a species name was misspelled, the team wanted to make sure it wasn’t logged as a new species. The researchers eliminated misidentifications, inaccurate location points and other errors by checking the public entries against that DiscoverLife checklist. One of the issues with open-source data repositories is that they are riddled with errors and biases that can be misleading, says Daniel Cariveau, a professor in the department of entomology at the University of Minnesota and leader of the Cariveau Native Bee Lab who was not involved in the research. “Bees aren't like birds — they're really hard to identify. You need really good taxonomists to do this,” says Cariveau. “And this paper, these authors, are really some of the best taxonomists in the world.” Hughes and colleagues also set standards for the quantity of the data used in each region to make sure the results weren’t weighted unfairly toward places with more records. She says the end result was a map that was as accurate as possible. 
The research revealed that bee species were most numerous in two bands around the globe, with more species in the Northern Hemisphere—in areas including California, Morocco and the Himalayas—than in the Southern Hemisphere—in regions including South Africa and the Andes. While most plant and animal species are richest in tropical areas, bees avoid these ecosystems along with the colder areas near the poles. This two-banded distribution is an anomaly, says Cariveau. “If you were to study beetles, or butterflies, or moths, or things like birds, you see this unimodal pattern where you get this increase in the tropics. So this is a really unique thing." Though rare, some marine species and mammals can also follow this distribution. Williams says this work brings into focus what many bee researchers suspected from smaller-scale efforts to map the diversity of bees on local levels. Williams thinks the bees’ avoidance of tropical and forested environments likely has to do with food abundance and nesting choice. Most bees aren’t social honey-producers. They often live alone and don’t sting. And because many of these solitary species nest in the ground, the water-logged earth of tropical environments means fungi could spoil their food stores, threatening the bee’s survival. Moisture isn’t the only reason bees seem to dislike tropical ecosystems. Drier deserts environments have super blooms that can support a huge number of bees at once. “In the deserts and on the desert edges, you often get great flushes of flowers after there's been rain,” says Williams. Bees can exploit these resources quickly and feed off their pollen stores in hotter and drier seasons. But their environment can’t be too dry. Bee species were at their most abundant near deserts that have surrounding vegetation and are ripe for plant growth. One barrier to creating comprehensive species maps is a lack of open, accessible data on bees. Countries are not incentivized to share their records, says Hughes, which bars other researchers from benefiting from their work. Both Hughes and Williams say that sharing data internationally would be a boon to their work and could produce even more accurate results. Williams says he’s curious to see how species patterns look when broken down into sub-groups, like ground-nesting bees, stingless bees or honeybees. With the impacts of climate change mounting, Cariveau says this work could point to bee habitat that needs protection now, and to areas bees might live in the future. “Whether the plant communities can migrate given climate change, whether bees can follow those I think, is a pretty interesting and important thing to be figuring out as we move forward,” says Cariveau.
true
true
true
Most of the insects avoid the tropics and choose treeless environments in arid parts of the world
2024-10-12 00:00:00
2020-11-19 00:00:00
https://th-thumbnailer.c…jmc-unsplash.jpg
article
smithsonianmag.com
Smithsonian Magazine
null
null
35,177,577
https://arstechnica.com/gaming/2023/03/why-game-archivists-are-dreading-this-months-3ds-wii-u-eshop-shutdown/
Why game archivists are dreading this month’s 3DS/Wii U eShop shutdown
Kyle Orland
In just a few weeks, Nintendo 3DS and Wii U owners will finally completely lose the ability to purchase new digital games on those aging platforms. The move will cut off consumer access to hundreds of titles that can't legally be accessed any other way. But while that's a significant annoyance for consumers holding onto their old hardware, current rules mean it could cause much more of a crisis for the historians and archivists trying to preserve access to those game libraries for future generations. "While it's unfortunate that people won't be able to purchase digital 3DS or Wii U games anymore, we understand the business reality that went into this decision," the Video Game History Foundation (VGHF) tweeted when the eShop shutdowns were announced a year ago. "What we don't understand is what path Nintendo expects its fans to take, should they wish to play these games in the future." ## DMCA headaches Libraries and organizations like the VGHF say their game preservation efforts are currently being hampered by the Digital Millennium Copyright Act (DMCA), which generally prevents people from making copies of any DRM-protected digital work. The US Copyright Office has issued exemptions to those rules to allow libraries and research institutions to make digital copies for archival purposes. Those organizations can even distribute archived digital copies of items like ebooks, DVDs, and even generic computer software to researchers through online access systems. But those remote-access exemptions explicitly leave out video games. That means researchers who want to access archived game collections have to travel to the physical location where that archive resides—even if the archived games themselves were never distributed on physical media. People like us are actually taking the time to do things "legitimately" by requesting limited exemptions. Yet we still get treated like random pirates, because a penny of profit could potentially get pilfered, somehow. It makes little sense when you consider that cat's already out of the bag for many games in terms of emulation. Inflexible stances like that of the ESA will only push some scholars to turn to less approved of means. For the 3DS and Wii U eShop in particular, I know for a fact people have been making their own "unauthorized" repos and backups. In the end, I think a number of scholars might just end up not caring how they come to analyze games in the future. Kind of like how some teachers and professors download songs from YouTube for class without a second thought. And it's not as if the Internet Police are on such cases. The ESA's position just encourages people to look for alternative methods to research things.
true
true
true
Industry lobbying against remote access leaves researchers cut off from game archives.
2024-10-12 00:00:00
2023-03-15 00:00:00
https://cdn.arstechnica.…22/05/eshops.jpg
article
arstechnica.com
Ars Technica
null
null
27,996,311
https://www.awingu.com/secure-vpn-alternative/
Secure VPN alternative: simple, secure remote access from anywhere
null
# Secure VPN alternative Why are you using technology built in the nineties to enable and secure your business today? ## Zero trust security Parallels Secure Workspace is a core enabler for adopting a Zero Trust IT Security strategy. Based on really simple principles, it will significantly increase the security of remote access. What’s more, it is especially powerful combined with a BYOD policy. #### Secure authentication as default Parallels Secure Workspace comes with built-in multi-factor authentication (MFA) at no extra cost. It’s super easy to activate. #### Context-aware restrictions Define the context for which apps/file shares can be accessed by which users (e.g., block access to ERP from all foreign countries). #### No local data Parallels Secure Workspace is built on a server-based computing concept. There is no data stored locally on the device. #### Fully audited All usage going through Parallels Secure Workspace is audited and available to the platform admin. #### Granular usage controls Not all users should have equal access. With Parallels Secure Workspace, it’s easy to give user (groups) different rights (e.g., file sharing, copying & pasting, download to desktop, etc.). #### Encryption over HTTPS Traffic between the browser and Parallels Secure Workspace is encrypted over HTTPS. It also includes a built-in SSL certificate service at no extra cost. ### Parallels Secure Workspace scales easily - Remote access is not static. It requires scaling up and down. Often, organizations need to scale quickly, and Parallels Secure Workspace enables that by: Hardware requirements are limited (up to 500 concurrent sessions can run on 1 VM with 8Gb and 8vCPU), and bandwidth requirements are limited with approximately 100kbps up/down per user (/session). - Roll-out does not require any local installations on the end-user device. As such, it does not create a big workload for your IT ops & support organization. - Parallels Secure Workspace can be (securely) rolled out to devices that are not owned or managed by the IT department. This saves time and reduces costs by eliminating the need to acquire new laptops. *Dr. Chase “Zero Trust” Cunningham about why we should get rid of insecure VPN solutions* Not only does VPN require a client to connect to the end users’ devices (who are not always tech-savvy enough to set it up effortlessly), but you are digging a tunnel to our environment. Parallels Secure Workspace makes us feel comfortable about what happens to our data, and vice versa. Because it works 100% through the browser, I know that I can make Parallels Secure Workspace available to everyone, without fearing that my environment will be contaminated by what might be on the user’s device. Peter Lemmens IT Manager, Boeckmans
true
true
true
Need a secure alternative to VPN? Try Parallels Secure Workspace. Simple, secure remote access from any device, anywhere. No VPN necessary.
2024-10-12 00:00:00
2018-01-01 00:00:00
null
article
parallels.com
parallels.com
null
null
23,220,265
https://droitthemes.com/ux-factors-about-wordpress-website/
7 UX Factors You Should Know About Your WordPress Website - DroitThemes
Kevin David
**Useful, usable, findable, credible. Desirable, accessible, and valuable – these are the main things that make a website's UX design praiseworthy. Let's learn all about this in this article.** For several reasons, a large number of people choose **WordPress** for their personal or corporate websites. But one of the main reasons we have found that people are using this CMS is its extensive collection of different themes and templates, and its ease of customization. So, in recent years, WordPress developers have also put more effort into the UX and overall look of their themes and templates, which has given this CMS more variety than the traditional themes you could find a few years back. However, a good theme and design hinge on how users experience their journey while visiting your website. **This term is called UX**. A WordPress theme with better UX design can hold your visitors for a longer time within your website, provide them with a better experience, and earn their satisfaction: that is what you want as a website owner, right? So, today, we are going to tell you about the essential UX Factors You Should Know about Your WordPress Website. "**UX**" stands for "**User Experience.**" It mainly consists of two parts: User Interface Design and Usability. It determines how useful your website's layout is and how users interact with it. It also depends on the smoothness, effectiveness, and ease of use of your website. A good UX design can enhance customer satisfaction, improve usability, and provide a better interaction experience between the customer and the product. If a WordPress website has a good combination of both UX and UI, you can put it on the best WordPress theme list. If you want a top-notch theme with excellent UX and UI design, try WordPress themes like BeTheme, Studio 8, or **Saasland**. But if you want a WordPress theme specifically for your SaaS, start-up, agency, software, mobile app, or any related website, try one of the best-selling WordPress themes of 2019: "Saasland – A Multi-Purpose Theme for Start-ups, Business, Agencies." You can use this as a start-up WordPress theme or as your software or IT services WordPress theme. First of all, let us share a diagram of 7 serious factors that directly affect any website's user experience. After analyzing these, **we have made a list of some UX factors** that you must know to improve the experience and interaction of your users. Below, you can learn about the key factors that influence your website and can help you build a smooth and interactive WordPress website. If your website has small fonts, a low contrast ratio, and hard-to-read text, this can be a problem for all of your users. So, the first two criteria of a good website are its **interaction** and **usefulness**. If your website is not useful to your users, they will not care about your services or your online presence. **Your website should be easy to use (usable) and useful to convert the target readers into your customers**. Also, it should contain the features and content that might be helpful for them. Besides, the usability of your website is related to how easily your users can land on a page and complete their desired action without any problems or distractions (e.g., large click areas placed where users know they can click).
The **speed** and the **uptime** – these two are significant factors for every website that exists on the world wide web. Web hosting companies mainly handle these two vital factors, however. So, choosing the right host for your website is essential. It is also vital when it comes to **Google ranking factors**. However, speed and performance also depend on the theme, lightweight coding, images, and other elements used to build the theme. WordPress themes with good UX are always lightweight, fast, and perform better on any server. Human eyes are easily attracted to colors. The right, eye-pleasing colors always attract our eyes and give us a wonderful feeling. So, a website interface with **attractive colors is fundamental** for its users. It can enhance the customer's mood, make them stay longer on your website, and it has an impact on web marketing as well. Digital content like free PowerPoint templates, images, videos, and infographics is nowadays one of the most effective methods of getting the **customer's attention**. A good quality picture can speak or explain without writing a thousand words. Excellent images and infographics also keep users from getting bored by reading thousands of words without seeing any visual content. So, to make your WordPress UX better, **images on your web pages play a vital role**. In 2019, **mobile phone internet users were 63.4% globally**. That means 63.4% of all online traffic comes from smartphones and tablets. This statistic shows us how vital responsive design is for your website. If you want to ensure a great UX for your WordPress website, you need to make it work well across all devices. Your website should be 100% mobile-friendly and should have good interaction and usability on mobile devices too. If your users find any ”**error page**” like a 404 error or a bad gateway error, they might be annoyed and leave your website without further browsing. So, checking and solving all the errors on your WordPress site is another important UX factor. If users do not find any value while browsing your website, they will not be satisfied with your products or services and will not come back again. So, your **website must deliver value through its contents** to convince users and make them pleased with your website. Now let's get to another important point: how to improve or implement the UX factors on your website. Here are our **pro tips:** Before optimizing or implementing any UX improvements, you should invest your time in UX research, as it is the backbone of any successful website design. For that, you need to know how to research thoroughly and accurately. For this research, in short, you need a **card sorting tool, UX expert review, usability test, user persona**, etc. All of them together can provide a useful research result. Next, you should give your attention to the speed and performance of your WordPress website. Buy a good hosting plan from a well-known hosting provider. Make sure they provide 99% uptime on average. Also, perform speed tests and solve any problems the results show. Then make some **attractive images** and infographics and put them within your written content. Make them colorful, meaningful, and eye-catching. But make sure the images are well-optimized before using them on your website. Besides, you need to pay attention to whether the **font size, font colors, menu bars**, and other elements of your website are working well or not. **Also,** check the responsiveness of your theme.
Finally, check all the errors, remove 404 pages, and always keep your theme updated, and check whether it is working/performing well or not. If you are not that techie person, there are plenty of WordPress themes with excellent **UX design**. You can check them from here: "10 Best WordPress Themes with Excellent UX and UI Design". A good UX depends on the design of your WordPress website, images, logo, elements, interaction, usability, and aesthetics. The more gorgeous and desirable your website is, the more the users get pleased and would want to come back. Not only that, if they love the UX of your website, honestly speaking, they tell other people about your website too. So, try to use a WordPress theme with great UX and modern design.
true
true
true
By improving these 7 UX factors, you can make your WordPress website stand out and achieve a good number of users.
2024-10-12 00:00:00
2019-10-24 00:00:00
https://droitthemes.com/…-droitthemes.png
article
droitthemes.com
DroitThemes
null
null
20,499,789
https://eirify.com/and-were-up-an-running
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,749,247
https://www.nbcnews.com/science/environment/some-locals-say-bitcoin-mining-operation-ruining-one-finger-lakes-n1272938
A bitcoin business is polluting Seneca Lake, say critics. Here's how.
Gretchen Morgenson
Summer on Seneca Lake, the largest of the Finger Lakes in upstate New York, is usually a time of boating, fishing, swimming and wine tasting. But for many residents of this bucolic region, there's a new activity this season — protesting a gas-fired power plant that they say is polluting the air and heating the lake. "The lake is so warm you feel like you're in a hot tub," said Abi Buddington of Dresden, whose house is near the plant. The facility on the shores of Seneca Lake is owned by the private equity firm Atlas Holdings and operated by Greenidge Generation LLC. They have increased the electrical power output at the gas-fired plant in the past year and a half and use much of the fossil-fuel energy not to keep the lights on in surrounding towns but for the energy-intensive "mining" of bitcoins. Bitcoin is a cryptocurrency — a digital form of money with no actual bills or coins. "Mining" it, a way of earning it, requires massive high-performance computers. The computers earn small rewards of bitcoin by verifying transactions in the currency that occur on the internet around the world. The math required to verify the transactions and earn bitcoins gets more complex all the time and demands more and more computer power. At Greenidge, the computers operate 24/7, burning through an astounding amount of real energy, and producing real pollution, while collecting virtual currency. An estimate from the University of Cambridge says global bitcoin miners use more energy in a year than Chile. When the energy comes from fossil fuels, the process can add significantly to carbon emissions. The Greenidge plant houses at least 8,000 computers and is looking to install more, meaning it will have to burn even more natural gas to produce more energy. Private equity firms like Atlas buy companies, often using debt, and hope to sell them later at a profit. They are secretive operations with investments that can be hard to track. The number of such firms has grown significantly in recent years, and they oversee $5 trillion for pension funds, insurance companies, university endowments and wealthy people. In the past 10 years, private equity firms have poured almost $2 trillion into energy investments, according to Preqin, a private equity database. About $1.2 trillion has gone into conventional energy investments, such as refineries, pipelines and fossil-fuel plants, compared to $732 billion in renewables like solar and wind power, Preqin said. As investor criticism prompts some public companies to dump fossil fuel assets, private equity firms are ready buyers. In 2019, for example, powerhouse Kohlberg, Kravis & Roberts, or KKR, acquired a majority stake in the troubled Coastal GasLink Pipeline project, a 400-mile fracking gas pipeline in British Columbia that has drawn citations from a regulator and protests from First Nations people whose land it crosses. In a report last fall, the Environmental Assessment Office, a provincial agency, said the project failed to comply on 16 of 17 items inspected. As a result, Coastal GasLink was ordered to hire an independent auditor to monitor its work to prevent site runoff that can pollute streams and harm fish. Because private equity firms expect to hold their investments for only a few years, they often keep alive fossil-fuel operations that would otherwise be mothballed, said Tyson Slocum, director of the energy program at Public Citizen, a nonprofit consumer advocacy group. "Private equity thinks it can squeeze a couple more years out of them," Slocum said. 
"And they are often immune from investor pressures." In 2016, for instance, the private equity firm ArcLight Capital Partners of Boston bought into Limetree Bay, an oil refinery and storage facility in St. Croix in the U.S. Virgin Islands. The operation had gone bankrupt after a series of toxic spills, but it reopened in February. Just three months later, it was shuttered after it unleashed petroleum rain on nearby neighborhoods. ArcLight, which has invested $23 billion since it was founded in 2001, gave up operational control of Limetree Bay early last year, a person briefed on the matter said, and it exited in a restructuring in April, just before the accident. A spokeswoman for ArcLight said the firm "takes its responsibilities to protect the environment and support local communities seriously and will continue to strive to meet the highest standards." Because private equity firms are secretive, their investors may not know what they own or the risks, said Alyssa Giachino of the Private Equity Stakeholder Project, a nonprofit organization that examines the industry's impact on communities. She said pension funds and their beneficiaries may end up with more fossil fuel exposure than they realize and may not have a full appreciation of the risks. They include heavy impacts on communities of color, risks of litigation and environmental penalties and long-term climate effects, she said. KKR is a huge energy investor on behalf of endowments, public pensions and other institutional investors. Like many of its private equity brethren, KKR has deployed far more money in conventional energy assets like the Coastal GasLink Pipeline than in renewables. From 2010 to 2020, KKR invested $13.4 billion in conventional energy assets, compared to $4.9 billion in renewables, according to a recent estimate by Giachino. KKR didn't dispute those figures in emails. KKR's spokeswoman said the firm is "committed to investing in a stable energy transition, one that supports a shift to a clean energy future while recognizing the ongoing importance of supplying the conventional energy needed for well-being and economic growth around the world today." The company said it communicates its investment approach, progress and goals transparently to stakeholders. KKR recently added a team focused on energy transition investments in North America. Private equity investors sometimes "leave behind messes for someone else to clean up," said Clark Williams-Derry, energy analyst at the Institute for Energy Economics and Financial Analysis. "The real trouble happens when the private equity firm comes in and is just trying to strip mine the company and the workers for whatever they're worth," he said. Not so Greenidge, the Atlas-owned operator of the Seneca Lake power plant, said Jeff Kirt, its CEO. "The environmental impact of the plant has never been better than it is right now," he said. The lakeshore facility is operating within its federal and state environmental permits, he said, and it has created 31 jobs, a company-commissioned report shows. Williams-Derry said cryptocurrency's potential profits add to the appeal of buying low-cost and carbon-intensive power plants. While natural gas-fired plants like Greenidge's in New York aren't as problematic as those that use coal, they still generate damaging greenhouse gases, he said. Kirt said that after Greenidge took over the plant, it sought ways to earn higher returns on its surplus energy. It struck gold with bitcoin mining. During the 12 months that ended Feb. 
28, it mined 1,186 bitcoins at a cost of about $2,869 each, the company said. Bitcoin, which gyrates feverishly, currently trades at around $34,000. ## 'A horrible business model' Greenidge's owner, the private equity firm Atlas, is on a roll. It recently raised $3 billion from investors, doubling its assets to $6 billion. Atlas owns stakes in 23 companies; two are power generators — Greenidge in New York and Granite Shore Power in New Hampshire. Atlas bought the 150-acre coal-fired Greenidge plant in 2014, three years after it had closed. Converted to natural gas, the almost 80-year-old plant began operations in 2017, generating energy to the grid only at times of high demand. In 2019, Greenidge began using the plant to power bitcoin mining and increased its output. It still supplies surplus power to the local electrical grid, but a lot of the power it generates is now used for bitcoin mining. And it has plans for expansion at Greenidge and elsewhere, company documents show. Last week, Greenidge announced a new bitcoin mining operation at a retired printing plant Atlas owns in Spartanburg, South Carolina. In March, Greenidge said its Bitcoin mining capacity of 19 megawatts should reach 45 megawatts by December and may ramp to 500 megawatts by 2025 as it replicates its model elsewhere. Larger gas-fired plants in the U.S. have capacities of 1,500 to 3,500 megawatts. Also in March, Greenidge announced a merger with Support.com, a struggling tech support company whose shares trade on the Nasdaq exchange. The deal, which is expected to close in the third quarter of this year, will give Atlas control of the merged company and access to public investor money. Andrew Bursky, founder of Atlas, owns half to three-quarters of Atlas, a regulatory filing shows. Neither Atlas nor Bursky would comment for this article. "These crypto operations are looking for anywhere that has relatively cheap power in a relatively cool climate," said Yvonne Taylor, vice president of Seneca Lake Guardian, a nonprofit conservation advocacy. "It's a horrible business model for all of New York state, the United States and for the planet." Greenidge, which disputes that view, said last month that its operations would soon be carbon neutral. It is buying credits that offset the plant's emissions from an array of U.S. greenhouse gas reduction projects. Judith Enck, a former regional administrator for the Environmental Protection Agency who is a senior fellow and visiting faculty member at Bennington College in Vermont, has doubts. "Carbon offsets is not a particularly effective way to reach greenhouse gas reduction goals," she said in an email, "and there is no system in place to regulate it in New York." One reason bitcoin mining is seen as a threat to the environment, critics say, is that new operators of power plants may continue to use permits issued years earlier without undergoing in-depth environmental assessments. So far, legal challenges to the Greenidge operation have failed. Greenidge's air permit is up for renewal in September, said Mandy DeRoche, deputy managing attorney in the coal program at Earth Justice, a nonprofit environmental advocacy group. "We've asked the Department of Environmental Conservation to take a hard look and think about it as a new permit, not just a renewal," DeRoche said. Materials issued by Greenidge say state environmental authorities have determined that the plant "does not have a significant impact on the environment." Still, emissions from the plant are rocketing. 
At the end of last year, even though it was operating at only 13 percent capacity, the plant's carbon dioxide equivalent emissions totaled 243,103 tons, up from 28,301 tons in January, according to regulatory documents Earth Justice received under an open records request. Before it began mining bitcoins, the plant generated carbon emissions of 119,304 tons in 2018 and 39,406 tons in 2019, federal documents show. On June 5, residents staged a protest against the plant at a nearby Department of Environmental Conservation office in Avon. If regulators don't rein in the Greenidge plant, they say, 30 other power plants in New York could be converted to bitcoin mining, imperiling the state's emission-reduction goals. "New York had established a goal in law of reducing greenhouse gas emissions by 40 percent by 2030," Enck said. "The state will not reach that goal if the Greenidge Bitcoin mining operation continues." Greenidge declined to comment on Enck's statement. Maureen Wren, a spokeswoman for the Department of Environmental Conservation, or DEC, said in a statement that it is closely monitoring Greenidge. "DEC will ensure a comprehensive and transparent review of its proposed air permit renewals with a particular focus on the potential climate change impacts and consistency with the nation-leading emissions limits established in the state's Climate Leadership and Community Protection Act. As the greenhouse gas emissions associated with this type of facility may be precedential and have broader implications beyond New York's borders, DEC will consult with the U.S. EPA, the state's Climate Action Council, and others as we thoroughly evaluate the complex issues involved." Water usage by Greenidge is another problem, residents said. The current permit allows Greenidge to take in 139 million gallons of water and discharge 135 million gallons daily, at temperatures as high as 108 degrees Fahrenheit in the summer and 86 degrees in winter, documents show. Rising water temperatures can stress fish and promote toxic algae blooms, the EPA says. A full thermal study hasn't been produced and won't be until 2023, but residents protesting the plant say the lake is warmer with Greenidge operating. Greenidge recently published average discharged water temperatures from March 1 to April 17, during the trout spawning season; they were around 46 degrees to 54 degrees, with differences between inflow and outflow of 5 degrees to 7.5 degrees. From June 7 to July 6, Greenidge said, water temperatures recorded at a buoy about 10 miles north of the Greenidge plant and at a depth of three-and-a-half feet have averaged 67.3 degrees. The low of 61 degrees occurred on June 7 and the high of 73 was recorded on July 1. Over longer periods, temperatures have spiked, however. NBC News reviewed a February email from the DEC to a resident stating that since 2017, the plant's daily maximum discharge temperatures have been 98 degrees in the summer and 70 degrees in winter. The Greenidge spokesperson said, "The limits already protect the lake's fishery and the public health, and they have been clearly validated as not concerning." Not everyone wants Greenidge gone. The Dresden Fire Department welcomed the company's $25,000 donation for a jaws-of-life machine, and the school district was grateful for a $20,000 gift to develop education and enrichment programs. Gwen Chamberlain, a former local newspaper editor, is one of three members of a community advisory board working with Greenidge to advance the region's economy.
"The tax base is growing, and that's helping the school, the county and the town tremendously," Chamberlain said. "Their employment has always been good, solid jobs for local workers." A recent economic study commissioned by Greenidge said the company made payments to local authorities in lieu of real property taxes of $272,000 last year. Peter Mantius, a former journalist who writes about environmental politics in the region, said the payments, while greater than zero, are far less than what the plant once generated, thanks to a favorable tax assessment arrangement. "The amount they paid instead of regular real estate taxes to the town and local schools and county — when you add those together, it's a fraction, maybe a quarter, of what the old owner paid," Mantius said. Meanwhile, residents like Buddington feel compelled to keep fighting. "My concern is if we don't do something now," she said, "it's going to be so much harder to undo."
true
true
true
"The lake is so warm you feel like you're in a hot tub," said a woman who lives near a gas-fired New York plant that powers 8,000 computers mining bitcoins.
2024-10-12 00:00:00
2021-07-05 00:00:00
https://media-cldnry.s-n…ution-2x1-cs.png
article
nbcnews.com
NBC News
null
null
20,270,327
https://medium.com/@justinamiller_1857/microservices-architecture-problems-you-will-have-to-solve-4153cfbde713
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,370,266
http://www.policymic.com/articles/84601/the-countries-with-the-highest-number-of-female-executives-are-not-the-ones-you-d-expect
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,603,662
https://github.com/dezmou/cryptoghost.art
GitHub - dezmou/Cryptoghost.art: On chain generated Non Fungible Token
Dezmou
## Cryptoghost is a NFT that doesn't rely on external hosting for pictures assets. Everything is on the Ethereum blockchain. As long as Ethereum exists, pictures exist. Non Fungible Token is a growing phenomenon appeared in 2017, It allows people to "own" some image by writing an entry in a Ethereum smart contract. Once your personal ethereum address written in the registry, it is the proof that some token ID is affected to your adress, then by calling some methods in the smart contract, you can trade and transfer your NFT to other ethereum addresses. NFTs are often criticized because image sources aren't stored on the blockchain. This is simply because writing data in a smart contract is so expensive that storing an image would cost thousands of dollars of gas fees per image. Pictures assets must be hosted elsewhere, generally on IPFS or private networks. The art lifespan depends on the lifespan of the ethereum blockchain and the external hosting, if one may disappear, the art disappears. Storing pictures on blockchain is not possible, but what about generating files on-demand ? Cryptoghost represent cryptographics fractals generated from a chosen string In this case, no need to store the picture somewhere but just the input string in the contract. Then with solidity the image file can be generated on the fly. Go the the Etherscan page of the contract then call the `getBitmapFromghostKey` function with some string input The result is some hex-encoded result of a .bmp file, you can quickly generate the file with this website ``` function getBitmapFromGhostKey(string memory value) public view returns (bytes memory) { // The header of the bitmap file, with color palette bytes memory headers = hex"424d384400000000000036040000280000008000000080000000010008000000000002400000120b0000120b0000000000000000000000000000000000000100010002010200030103000401030005020400060205000703060008030600090307000a0408000b0409000c0409000d050a000e050b000f060c0010060c0011060d0012070e0013070f0013070f001408100015081100160912001709130018091300190a14001a0a15001b0b16001c0b16001d0b17001e0c18001f0c1900200c1900210d1a00220d1b00230e1c00240e1c00250e1d00260f1e00260f1f00270f1f0028102000291021002a1122002b1122002c1123002d1224002e1225002f132600301326003113270032142800331429003414290035152a0036152b0037162c0038162c0039162d0039172e003a172f003b172f003c1830003d1831003e1932003f19320040193300411a3400421a3500431a3500441b3600451b3700461c3800471c3900481c3900491d3a004a1d3b004b1e3c004c1e3c004c1e3d004d1f3e004e1f3f004f1f3f00502040005120410052214200532142005421430055224400562245005722450058234600592347005a2448005b2448005c2449005d254a005e254b005f264c005f264c0060264d0061274e0062274f0063274f006428500065285100662952006729520068295300692a54006a2a55006b2a55006c2b56006d2b57006e2c58006f2c5800702c5900712d5a00722d5b00722d5b00732e5c00742e5d00752f5e00762f5f00772f5f0078306000793061007a3162007b3162007c3163007d3264007e3265007f326500803366008133670082346800833468008434690085356a0085356b0086356b0087366c0088366d0089376e008a376e008b376f008c3870008d3871008e3972008f39720090397300913a7400923a7500933a7500943b7600953b7700963c7800973c7800983c7900983d7a00993d7b009a3d7b009b3e7c009c3e7d009d3f7e009e3f7e009f3f7f00a0408000a1408100a2408100a3418200a4418300a5428400a6428500a7428500a8438600a9438700aa448800ab448800ab448900ac458a00ad458b00ae458b00af468c00b0468d00b1478e00b2478e00b3478f00b4489000b5489100b6489100b7499200b8499300b94a9400ba4a9400bb4a9500bc4b9600bd4b9700be4c9800be4c9800bf4c9900c04d9a00c14d9b00c24d9b00c34e9c00c44e9d00c54f9e00c64f9e00c74f9f00c850a000c950a100ca50a100cb51a200cc5
1a300cd52a400ce52a400cf52a500d053a600d153a700d153a700d254a800d354a900d455aa00d555ab00d655ab00d756ac00d856ad00d957ae00da57ae00db57af00dc58b000dd58b100de58b100df59b200e059b300e15ab400e25ab400e35ab500e45bb600e45bb700e55bb700e65cb800e75cb900e85dba00e95dba00ea5dbb00eb5ebc00ec5ebd00ed5fbe00ee5fbe00ef5fbf00f060c000f160c100ffffff00"; bytes memory pixels = new bytes(16384); // Start position of the next pixel to be colored int256 x = 64; int256 y = 64; // Amount of points already painted, is reset to 0 if next pixel get out of canvas int256 totalIndex = 0; string memory directions = value; bytes memory directionsBytes; // Fill the bitmap with blank for (uint256 iFill = 0; iFill < 16384; iFill++) { pixels[iFill] = bytes1(uint8(255)); } // We will paint 151 x 64 pixels for (uint256 i = 0; i < 151; i++) { // Array of characters, each characters are one direction : up, down, left, upper left etc... directions = BytesUtils.sha256HexString(directions); directionsBytes = bytes(directions); // Calculate position for the next pixel for (uint256 iChar = 0; iChar < 64; iChar++) { if (uint8(directionsBytes[iChar]) == 48 || uint8(directionsBytes[iChar]) == 49) { x += 1; } else if (uint8(directionsBytes[iChar]) == 50 || uint8(directionsBytes[iChar]) == 51) { x += -1; } else if (uint8(directionsBytes[iChar]) == 52 || uint8(directionsBytes[iChar]) == 53) { y += 1; } else if (uint8(directionsBytes[iChar]) == 54 || uint8(directionsBytes[iChar]) == 55) { y += -1; } else if (uint8(directionsBytes[iChar]) == 56 || uint8(directionsBytes[iChar]) == 57) { x += 1; y += 1; } else if (uint8(directionsBytes[iChar]) == 97 || uint8(directionsBytes[iChar]) == 98) { x += -1; y += 1; } else if ( uint8(directionsBytes[iChar]) == 99 || uint8(directionsBytes[iChar]) == 100 ) { x += 1; y += -1; } else if ( uint8(directionsBytes[iChar]) == 101 || uint8(directionsBytes[iChar]) == 102 ) { x += -1; y += -1; } // If next pixel is out of canvas, reset it to center of canvas if (y >= 128 || y < 0 || x >= 128 || x < 0) { y = 64; x = 64; totalIndex = 0; } // Set the pixel color palette index pixels[uint256(x) + (128 * uint256(127 - y))] = bytes1( uint8(totalIndex / 38) ); totalIndex += 1; } } // concatenate headers and pixel values return BytesUtils.MergeBytes(headers, pixels); } ```
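If you would rather generate the file locally instead of using the website mentioned above, a minimal sketch of a decoder is shown below. This helper script is our own illustration and is not part of the repository; it assumes the value returned by `getBitmapFromGhostKey` is the raw hex of the .bmp bytes, optionally prefixed with `0x`.

```python
# Hypothetical helper (not part of this repository): save the hex output of
# getBitmapFromGhostKey as a .bmp file that any image viewer can open.
import sys


def hex_to_bmp(hex_output: str, path: str = "ghost.bmp") -> None:
    # Etherscan and web3 tooling sometimes prefix byte strings with "0x"; strip it if present.
    cleaned = hex_output.strip()
    if cleaned.startswith("0x"):
        cleaned = cleaned[2:]
    # The remaining characters are the raw bytes of the bitmap (header, palette and pixels).
    with open(path, "wb") as f:
        f.write(bytes.fromhex(cleaned))


if __name__ == "__main__":
    # Usage: python ghost_to_bmp.py <hex-output> [output.bmp]
    out = sys.argv[2] if len(sys.argv) > 2 else "ghost.bmp"
    hex_to_bmp(sys.argv[1], out)
```

Open the resulting `ghost.bmp` in any image viewer to see the fractal generated for the chosen ghost key.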
true
true
true
On chain generated Non Fungible Token. Contribute to dezmou/Cryptoghost.art development by creating an account on GitHub.
2024-10-12 00:00:00
2021-03-26 00:00:00
https://opengraph.githubassets.com/992e00011be8a85a258d3e73d069234a8175d193359235460dbaaffad7096167/dezmou/Cryptoghost.art
object
github.com
GitHub
null
null
31,601,040
https://hirrolot.github.io/posts/rust-is-hard-or-the-misery-of-mainstream-programming.html
Rust Is Hard, Or: The Misery of Mainstream Programming
Hirrolot'S Blog
## Functions that handle updates: First try We are programming a ~~blazing fast~~ messenger bot to make people’s lives easier. Using long polling or webhooks, we obtain a stream of server updates, one-by-one. For all updates, we have a vector of handlers, each of which accepts a reference to an update and returns a future resolving to `()` . `Dispatcher` owns the handler vector and on each incoming update, it executes the handlers sequentially. Let us try to implement this. We will omit the execution of handlers and focus only on the `push_handler` function. First try (playground): ``` use futures::future::BoxFuture; use std::future::Future; #[derive(Debug)] struct Update; type Handler = Box<dyn for<'a> Fn(&'a Update) -> BoxFuture<'a, ()> + Send + Sync>; struct Dispatcher(Vec<Handler>); impl Dispatcher { fn push_handler<'a, H, Fut>(&mut self, handler: H) where H: Fn(&'a Update) -> Fut + Send + Sync + 'a, Fut: Future<Output = ()> + Send + 'a, { self.0.push(Box::new(move |upd| Box::pin(handler(upd)))); } } fn main() { let mut dp = Dispatcher(vec![]); dp.push_handler(|upd| async move { println!("{:?}", upd); }); } ``` Here we represent each handler using a dynamically typed `Fn` restricted by an HRTB lifetime `for<'a>` , since we want a returning future to depend on some `'a` from the `&'a Update` function parameter. Later, we define the `Dispatcher` type holding `Vec<Handler>` . Inside `push_handler` , we accept a statically typed, generic `H` returning `Fut` ; in order to push a value of this type to `self.0` , we need to wrap `handler` into a new boxed handler and transform the returning future to `BoxFuture` from the `futures` crate using `Box::pin` . Now let us see if the above solution works: ``` error[E0312]: lifetime of reference outlives lifetime of borrowed content... --> src/main.rs:17:58 | 17 | self.0.push(Box::new(move |upd| Box::pin(handler(upd)))); | ^^^ | note: ...the reference is valid for the lifetime `'a` as defined here... --> src/main.rs:12:21 | 12 | fn push_handler<'a, H, Fut>(&mut self, handler: H) | ^^ note: ...but the borrowed content is only valid for the anonymous lifetime #1 defined here --> src/main.rs:17:30 | 17 | self.0.push(Box::new(move |upd| Box::pin(handler(upd)))); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` Unfortunately, it does not work. The reason is that `push_handler` accepts a *concrete* lifetime `'a` that we try to boil down to an HRTB lifetime `for<'a>` . By doing so, we try to prove that `for<'a, 'b> 'a: 'b` (with `'b` being `'a` from `push_handler` ), which obviously does not hold. We can try to approach this differently: instead of the `Fut` generic, we can force a user handler to return `BoxFuture` bounded by `for<'a>` (playground): ``` use futures::future::BoxFuture; #[derive(Debug)] struct Update; type Handler = Box<dyn for<'a> Fn(&'a Update) -> BoxFuture<'a, ()> + Send + Sync>; struct Dispatcher(Vec<Handler>); impl Dispatcher { fn push_handler<H>(&mut self, handler: H) where H: for<'a> Fn(&'a Update) -> BoxFuture<'a, ()> + Send + Sync + 'static, { self.0.push(Box::new(move |upd| Box::pin(handler(upd)))); } } fn main() { let mut dp = Dispatcher(vec![]); dp.push_handler(|upd| { Box::pin(async move { println!("{:?}", upd); }) }); } ``` It compiles fine now but the final API is defected: ideally, we do not want a user to wrap each handler with `Box::pin` . After all, this is one of the reasons why `push_handler` exists: it transforms a statically typed handler into its functionally equivalent counterpart in the dynamic type space. 
But what if we force handlers to remain static? We can accomplish it using heterogenous lists. ## Second try: Heterogenous list A heterogenous list is indeed just a fancy name for a tuple. Thus, we want something like `(H1, H2, H3, ...)` , where each `H` is a different handler type. But at the same time, the `push_handler` and `execute` operations require us to be able to iterate on this tuple – a possibility that is missing in vanilla Rust. It does not mean, though, that we cannot express a similar thing via some freaky type machinery! First of all, this is the representation of our heterogenous list (playground): If you think this is a bit senseless, you are not far from true. All we want is to be able to construct types like `Dispatcher<H1, Dispatcher<H2, Dispatcher<H3, DispatcherEnd>>>` , an equivalent form of the `(H1, H2, H3)` tuple. With this in mind, we can now define the `push_handler` function using simple type-level induction: ``` trait PushHandler<NewH> { type Out; fn push_handler(self, handler: NewH) -> Self::Out; } impl<NewH> PushHandler<NewH> for DispatcherEnd { type Out = Dispatcher<NewH, DispatcherEnd>; fn push_handler(self, handler: NewH) -> Self::Out { Dispatcher { handler, tail: DispatcherEnd, } } } impl<H, Tail, NewH> PushHandler<NewH> for Dispatcher<H, Tail> where Tail: PushHandler<NewH>, { type Out = Dispatcher<H, <Tail as PushHandler<NewH>>::Out>; fn push_handler(self, handler: NewH) -> Self::Out { Dispatcher { handler: self.handler, tail: self.tail.push_handler(handler), } } } ``` If you are new to type-level induction, you can think of it as of regular recursion, but applied to types (traits) instead of values: - The **base case**is`impl<NewH> PushHandler<NewH> for DispatcherEnd` . Here we construct a dispatcher with only one handler. - The **step case**is`impl<H, Tail, NewH> PushHandler<NewH> for Dispatcher<H, Tail>` . Here we only propagate our induction to`self.tail` . We implement `execute` in the same way: ``` trait Execute<'a> { #[must_use] fn execute(&'a self, upd: &'a Update) -> BoxFuture<'a, ()>; } impl<'a> Execute<'a> for DispatcherEnd { fn execute(&'a self, _upd: &'a Update) -> BoxFuture<'a, ()> { Box::pin(async {}) } } impl<'a, H, Fut, Tail> Execute<'a> for Dispatcher<H, Tail> where H: Fn(&'a Update) -> Fut + Send + Sync + 'a, Fut: Future<Output = ()> + Send + 'a, Tail: Execute<'a> + Send + Sync + 'a, { fn execute(&'a self, upd: &'a Update) -> BoxFuture<'a, ()> { Box::pin(async move { (self.handler)(upd).await; self.tail.execute(upd).await; }) } } ``` But that is not all we need. The final move is to abstract `execute` *for all* lifetimes of updates, since our implementation of `Execute<'a>` relies on some concrete `'a` , whereas we want our dispatcher to handle updates of variying lifetimes: ``` async fn execute<Dp>(dp: Dp, upd: Update) where Dp: for<'a> Execute<'a>, { dp.execute(&upd).await; } ``` Fine, now we are ready to test our bizzare solution: ``` #[tokio::main] async fn main() { let dp = DispatcherEnd; let dp = dp.push_handler(|upd| async move { println!("{:?}", upd); }); execute(dp, Update).await; } ``` But it does not work either: ``` error: implementation of `Execute` is not general enough --> src/main.rs:83:5 | 83 | execute(dp, Update).await; | ^^^^^^^ implementation of `Execute` is not general enough | = note: `Dispatcher<[closure@src/main.rs:80:30: 82:6], DispatcherEnd>` must implement `Execute<'0>`, for any lifetime `'0`... 
= note: ...but it actually implements `Execute<'1>`, for some specific lifetime `'1` ``` Still think that programming with borrow checker is easy and everybody can do it after some practice? Unfortunately, no matter how much practice you have, you cannot cause the above code to compile. The reason is this: the closure passed to `dp.push_handler` accepts `upd` of a *concrete* lifetime `'1` , but `execute` requires `Dp` to implement `Execute<'0>` for *any* lifetime `'0` , due to the HRTB bound introduced in the `where` clause. However, if you try your luck with regular functions, the code will compile: ``` #[tokio::main] async fn main() { let dp = DispatcherEnd; async fn dbg_update(upd: &Update) { println!("{:?}", upd); } let dp = dp.push_handler(dbg_update); execute(dp, Update).await; } ``` This will print `Update` to the standard output. This particular behaviour of borrow checker may seem irrational – and, in fact, it is; functions and closures differ not only in their respective traits but also in how they handle lifetimes. While closures that accept references are bounded by *specific* lifetimes, functions such as our `dbg_update` accept `&'a Update` for *all* lifetimes `'a` . This divergence is demonstrated by the following example code (playground): ``` let dbg_update = |upd| { println!("{:?}", upd); }; { let upd = Update; dbg_update(&upd); } { let upd = Update; dbg_update(&upd); } ``` Due to calls to `dbg_update` , we obtain the following compilation error: ``` error[E0597]: `upd` does not live long enough --> src/main.rs:11:20 | 11 | dbg_update(&upd); | ^^^^ borrowed value does not live long enough 12 | } | - `upd` dropped here while still borrowed ... 16 | dbg_update(&upd); | ---------- borrow later used here ``` This is because the `dbg_update` closure can handly only one specific lifetime, whereas the lifetimes of the first and the second `upd` are clearly different. In contrast, `dbg_update` as a function works perfectly in this scenario (playground): ``` fn dbg_update_fn(upd: &Update) { println!("{:?}", upd); } { let upd = Update; dbg_update_fn(&upd); } { let upd = Update; dbg_update_fn(&upd); } ``` We can even trace the exact signature of this function using the handy `let () = ...;` idiom (playground): The signature is `for<'r> fn(&'r Update)` , as expected: ``` error[E0308]: mismatched types --> src/main.rs:9:9 | 9 | let () = dbg_update_fn; | ^^ ------------- this expression has type `for<'r> fn(&'r Update) {dbg_update_fn}` | | | expected fn item, found `()` | = note: expected fn item `for<'r> fn(&'r Update) {dbg_update_fn}` found unit type `()` ``` That being said, this solution with a heterogenous list is not what we want either: it is quite flummoxing, boilerplate, hacky, and does not work with closures at all. Also, I do not recommend going too far with complex type mechanics in Rust; if you suddenly encounter a type check failure somewhere near the dispatcher type, I wish you good luck. Imagine that you are maintaining a production system written in Rust and you need to fix some critical bug as quickly as possible. 
You introduce the necessary changes to your codebase and then see the following compilation output: ``` error[E0308]: mismatched types --> src/main.rs:123:9 | 123 | let () = dp; | ^^ -- this expression has type `Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update0}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update1}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update2}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update3}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update4}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update5}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update6}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update7}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update8}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update9}, DispatcherEnd>>>>>>>>>>` | | | expected struct `Dispatcher`, found `()` | = note: expected struct `Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update0}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update1}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update2}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update3}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update4}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update5}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update6}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update7}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update8}, Dispatcher<for<'_> fn(&Update) -> impl futures::Future<Output = ()> {dbg_update9}, DispatcherEnd>>>>>>>>>>` found unit type `()` ``` (In a real-world scenario, the above error would probably be 20x bigger.) ## Third try: Using Arc When I was novice in Rust, I used to think that references are simpler than smart pointers. Now I am using `Rc` /`Arc` almost everywhere where using lifetimes causes too much pain and performance is not a big deal. Believe or not, all of the aforementioned problems were caused by that single lifetime in `type Handler` , `'a` . Let us just replace it with `Arc<Update>` (playground): ``` use futures::future::BoxFuture; use std::future::Future; use std::sync::Arc; #[derive(Debug)] struct Update; type Handler = Box<dyn Fn(Arc<Update>) -> BoxFuture<'static, ()> + Send + Sync>; struct Dispatcher(Vec<Handler>); impl Dispatcher { fn push_handler<H, Fut>(&mut self, handler: H) where H: Fn(Arc<Update>) -> Fut + Send + Sync + 'static, Fut: Future<Output = ()> + Send + 'static, { self.0.push(Box::new(move |upd| Box::pin(handler(upd)))); } } fn main() { let mut dp = Dispatcher(vec![]); dp.push_handler(|upd| async move { println!("{:?}", upd); }); } ``` Hell yeah, it compiles! We even do not need to manually specify `Arc<Update>` in each closure – type inference will do the dirty work for us. ## The problem with Rust “Fearless concurrency” – a formally correct but nonetheless misleading statement. Yes, you no longer have *fear* of data races, but you have **PAIN**, much pain. Let me elaborate. 
In the previous sections, I have not even loaded you with all the peculiarities and inadequacies of Rust that affected the final solution – but there were plenty of them. First of all, notice the heavy use of boxed futures: *all* of the aforementioned `BoxFuture` types, as well as the corresponding `Box::new` and `Box::pin` twiddling, were irreplaceable by generics. If you know at least a little bit of Rust, you know that `Vec` can only contain fixed-sized types, so the occurrence of `BoxFuture` inside `type Handler` makes sense; however, using `BoxFuture` instead of an `async` function signature in the `Execute` trait is not that apparent. The awesome essay *“Why async fn in traits are hard”* by Niko Matsakis explains why. In short, at the moment of writing this blog post, it is impossible to define `async fn` functions in traits; instead you should use some type erasure alternative like the `async-trait` crate or boxing futures manually, as in our examples. In fact, `async-trait` performs quite a similar thing, but honestly I avoid using it because it mangles compile-time errors with procedural macros. The technique of returning `BoxFuture` also has disadvantages – one of them is that you need not forget to specify `#[must_use]` for *each* `async fn` , otherwise the compiler would not warn you if you call `execute` without `.await` ing it 1. In essence, boxing static entities is so common that the `futures` crate exposes other dynamic variants of common traits, including `BoxStream` , `LocalBoxFuture` , and `LocalBoxStream` (the last two come without the `Send` requirement). Secondly, explicit type annotation for `upd` breaks everything (playground): ``` use tokio; // 1.18.2 #[derive(Debug)] struct Update; #[tokio::main] async fn main() { let closure = |upd: &Update| async move { println!("{:?}", upd); }; closure(&Update).await; } ``` Compiler output: ``` error: lifetime may not live long enough --> src/main.rs:8:34 | 8 | let closure = |upd: &Update| async move { | _________________________-______-_^ | | | | | | | return type of closure `impl Future<Output = ()>` contains a lifetime `'2` | | let's call the lifetime of this reference `'1` 9 | | println!("{:?}", upd); 10 | | }; | |_____^ returning this value requires that `'1` must outlive `'2` ``` (Try to remove the type annotation `: &Update` and the compilation will succeed.) If you have no idea what this error means, you are not alone – see issue #70791. Looking at the list of issue labels reveals `C-Bug` , which classifies the issue as a compiler bug. At the moment of writing this post, rustc has 3,107 open `C-bug` issues and 114 open `C-bug` +`A-lifetimes` issues. Remember that `async fn` worked for us but an equivalent closure did not? – this is also a compiler bug, see issue #70263. There are also many language-related issues dated earlier than 2020, see issue #41078 and issue #42940. You see how our simple task of registering handlers has seamlessly transcended into wandering in rustc issues with the hope to somehow circumvent the language. Designing interfaces in Rust is like walking through a minefield: in order to succeed, you need to balance on your ideal interface and what features are available to you. Yes, I hear you. No, it is not like in all other languages. 
When you program in some stable production language (not Rust), you can typically foresee how your imaginary interface would fit with language semantics; but when you program in Rust, the process of designing APIs is affected by numerous arbitrary language limitations like those we have seen so far. You expect that the borrow checker will validate your references and the type system will help you to deal with program entities, but you end up throwing `Box`, `Pin`, and `Arc` here and there and fighting with type system inexpressiveness. To finish the section, this is the full implementation in Golang: `dispatcher.go` ``` package main import "fmt" type Update struct{} type Handler func(*Update) type Dispatcher struct { handlers []Handler } func (dp *Dispatcher) pushHandler(handler Handler) { dp.handlers = append(dp.handlers, handler) } func main() { dp := Dispatcher{handlers: nil} dp.pushHandler(func(upd *Update) { fmt.Println(upd) }) } ``` ## Why Rust is so hard? Sometimes it is helpful to understand why shit happens. “Because X is bad” is not an answer; “Because people who made X are bad” is not an explanation either. So why is Rust so hard? **Rust is a systems language.** To be a systems PL, it is very important not to hide underlying computer memory management from a programmer. For this reason, Rust pushes programmers to expose many details that would be otherwise hidden in more high-level languages. Examples: pointers, references and associated stuff, memory allocators, different string types, different `Fn` traits, `std::pin`, et cetera. **Rust is a static language.** This is better explained in my previous essay *“Why Static Languages Suffer From Complexity”*. To restate, languages with static type systems (or equivalent functionality) tend to duplicate their features on their *static* and *dynamic* levels, thereby introducing *statics-dynamics biformity*. Transforming a static abstraction into its dynamic counterpart is called *upcasting*; the inverse process is called *downcasting*. Inside `push_handler`, we have used upcasting to turn a static handler into the dynamic `Handler` type to be pushed to the final vector. In addition, Rust is committed to making all these things intuitive and memory safe. This kick-ass combination stresses the human bounds of computer language design. By now it should be completely understandable why Rust feels full of holes from time to time; in fact, it is almost a miracle that it is functioning at all. A computer language is like a system of tightly intertwined components: every time you introduce a new linguistic abstraction, you have to make sure that it plays well with the rest of the system to avoid bugs and inconsistencies. Perhaps we should grant free health insurance or other life benefits to those who develop such languages full-time. ## How things can be different? Now imagine that **all of Rust’s issues disappear**. Also, the whole of rustc and std are formally verified. It would also be fairly nice to have a complete language specification with multiple tier-1 implementations, the same support for hardware platforms as GCC, a stable ABI (though it is unclear how to deal with generics), and similar stuff. That would probably be an ideal language for systems programming. Or imagine that **Rust’s issues disappear and it is now completely high-level**. That would kick the shit out of all mainstream programming languages. Rust has adequate defaults, it supports polymorphism, it has a very convenient package manager.
I will not enumerate here all the faults of mainstream PLs: the cursed JavaScript semantics, the enterprise monstrosity of Java, `NULL` pointer problems in C, the uncontrollable UB of C++, numerous ways of doing the same job in C#, et cetera. The modern programming language scene is rather a freak show. Yet, you see, even with all of these drawbacks, people write working software, while Rust (in its current state) is far from being the most used PL.

Moreover, my prediction is that Rust will never be as popular as Java or Python. The reason is more social than technical: due to the innate complexity of the language, there will always be fewer professional software engineers in Rust than in Java or Python; to make matters even worse, they will require higher salaries, mind you. As an employer, you will have much more trouble finding good Rustaceans for your business.

Finally, imagine that **Rust’s issues disappear, it is high-level, and it has a uniform feature set.** That would presumably be close to the theoretical ideal of a high-level, general-purpose programming language for the masses. Funnily enough, designing such a language might turn out to be a less intimidating task than the original Rust, since we can hide all low-level details under an impenetrable shell of a language runtime.

## Waiting for a better future

So if I “figured it all out”, why should I not develop a sublime version of Rust? I do not want to spend my next twenty years trying to do so, given that the chance that my language will stand out is infinitely small. I think the current set of most used production languages is pretty random to some extent – we can always say why a specific language got popular, but generally we cannot explain why better alternatives sank into oblivion. Backing from a big corporation? Accidentally targeting an IT trend of the future? Again, the reasons are rather social. Harsh reality: in life, sometimes hope plays a much more vital role than all of your skills and self-dedication. If you still want to create a PL of the future, I wish you good luck and strong mental health. You are endlessly courageous and hopelessly romantic.

## Related ideas

- *“Garbage Collection Makes Rust Easier to Use: A Randomized Controlled Trial of the Bronze Garbage Collector”*
- *“Shifgrethor I: Garbage collection as a Rust library”*
- *“Revisiting a ‘smaller Rust’”*
- *“The Rust I Wanted Had No Future”*
- *“Dada, an Experiment by the Creators of Rust”*

Feel free to contact me if you wish to extend this list.

## Update: Addressing misinterpretations

Since publication, this post has gained 500+ upvotes on r/rust and 700+ comments on HN. I did not expect so much attention. Unfortunately, before publishing anything, it is very hard to predict all possible misinterpretations.

Some people pointed out that the dispatcher example was concerned with the problems of library maintainers, and that application programmers usually do not have to deal with such peculiarities. They are right to some extent; however, the reason I wrote this essay was mainly to talk about *programming language design*. Rust is ill-suited for generic `async` programming – this is the harsh truth. When you enter `async`, you observe that many other language features suddenly break down: references, closures, the type system, to name a few. From the perspective of language design, this manifests a failure to design an orthogonal language 2. I wanted to convey this observation in my post; I should have stated this explicitly.
Additionally, how we write libraries reveals the true potential of a language, since libraries tend to require more expressive features from language designers – due to their generic nature. This also affects mundane application programming: the more elegant libraries you have, the more easily you can solve your tasks. Example: the absence of GATs does not allow you to have a generic runtime interface and change Tokio to something else in one line of code, as we do for loggers.

One gentleman also outlined a more comprehensive list of `async` Rust failures, including function colouring, asynchronous `Drop`, and library code duplication. I did not try to address all of these issues here – otherwise the text would be bloated with too much information. However, the list pretty much sums up all the bad things you have to deal with in generic `async` code, such as library development.

1. Actually, I forgot this `#[must_use]` while writing the example and then did not understand for a while why `stdout` was clean in the case of two or more chained handlers. 🤡↩︎
2. A language is orthogonal when its features “play well” with each other. E.g., arrays as function parameters in C just boil down to pointers, which is admittedly not orthogonal.↩︎
true
true
true
null
2024-10-12 00:00:00
2022-06-02 00:00:00
null
null
null
null
null
null
7,008,986
https://github.com/knizhnik/imcs
GitHub - knizhnik/imcs: In-Memory Columnar Store extension for PostgreSQL
Knizhnik
# knizhnik/imcs

In-Memory Columnar Store extension for PostgreSQL.
true
true
true
In-Memory Columnar Store extension for PostgreSQL. Contribute to knizhnik/imcs development by creating an account on GitHub.
2024-10-12 00:00:00
2013-12-12 00:00:00
https://opengraph.githubassets.com/503b6ffb84cd26ee69f00b5096973f3c8c942533bfa1eeaf900bcd5bd07a2c6b/knizhnik/imcs
object
github.com
GitHub
null
null
18,696,821
https://www.owasp.org/index.php/OWASP_Cheat_Sheet_Series
OWASP Cheat Sheet Series
null
# OWASP Cheat Sheet Series

## Our Goal

The OWASP Cheat Sheet Series was created to provide a set of simple good practice guides for application developers and defenders to follow. Rather than focusing on detailed best practices that are impractical for many developers and applications, they are intended to provide good practices that the majority of developers will actually be able to implement.

The cheat sheets are available on the main website at https://cheatsheetseries.owasp.org. If you wish to contribute to the cheat sheets, or to suggest any improvements or changes, then please do so via the issue tracker on the GitHub repository. Alternatively, join us in the `#cheatsheets` channel on the OWASP Slack (details in the sidebar).

## Bridge between the projects OWASP Proactive Controls, OWASP ASVS, and OWASP CSS

A work channel has been created between OWASP Proactive Controls (OPC), OWASP Application Security Verification Standard (ASVS), and OWASP Cheat Sheet Series (OCSS) using the following process:

- When a Cheat Sheet is missing for a point in OPC/ASVS, the OCSS will handle the gap and create one. When the Cheat Sheet is ready, the reference is added by OPC/ASVS.
- If a Cheat Sheet exists for an OPC/ASVS point but its content does not provide the expected help, then the Cheat Sheet is updated to provide the required content.

The reason for creating this bridge is to help the OCSS and ASVS projects by providing them:

- A consistent source for the requests regarding new Cheat Sheets.
- A shared approach for updating existing Cheat Sheets.
- A usage context for the Cheat Sheet and a quick source of feedback about the quality and the efficiency of the Cheat Sheet.

It is not mandatory that a request for a new Cheat Sheet (or for an update) comes only from OPC/ASVS; it is just an extra channel. Requests from OPC/ASVS are flagged with a special label in the GitHub repository issues list in order to identify them and set them as a top-level priority.

## Contributors V1

**From 2014 to 2018:** V1 - Initial version of the project hosted on the OWASP WIKI.

## Contributors V2

## Special thanks

A special thank you to the following people for their help provided during the migration:

- Dominique Righetto: For his special leadership and guidance.
- Elie Saad: For valuable help in updating the OWASP Wiki links for all the migrated cheat sheets and for years of leadership and other project support.
- Jakub Maćkowski: For valuable help in updating the OWASP Wiki links for all the migrated cheat sheets.
true
true
true
The OWASP Cheat Sheet Series project provides a set of concise good practice guides for application developers and defenders to follow.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://owasp.org/www--site-theme/favicon.ico
website
owasp.org
owasp.org
null
null
25,571,580
https://den.dev/blog/air/
Unlocking My Air Data Through API Analysis
Den Delimarsky
# Unlocking My Air Data Through API Analysis ## Table of Contents I am naturally curious about the APIs1 that the devices in my house use, so when I got an air quality monitor, one of the first things I did was fiddle with the REST APIs that were made available through the device. As it turns out - more than I expected. In this post, I will discuss the use of an undocumented API, so no warranties are implied - it might stop working tomorrow for all I know. ## Contents # - Overview - Getting data through app APIs - Building out custom analysis - Discovering the web APIs - Conclusion ## Overview # Let’s get started by taking a look at *what* device I have, exactly. It’s an IQAir AirVisual Pro2 - a bit on the pricey side, but it gets the job done and has all the data that I need, like CO2 concentration, current AQI3, temperature, humidity, and PM2.5 concentration. Neat little device, but the application that it comes with, along with the web experience is a bit underwhelming. Mostly because it only shows data for a short period of time, and doesn’t allow any kinds of pivots or transformations, which can be a bit boring. Say I want to know at what hours I have the highest CO2 concentration inside the house, or compare the humidity over time - none of this is an option with the default app *or* the online service. If I wanted to go the easiest route, I could explore some of the built-in functionality. The device exposes a SMB share for the data that you can grab if you are on the local network, but that means that I need to boot device off of my guest network and onto my main one, which I don’t want to do. So what’s an engineer to do? Man-in-the-middle the app4 to figure out what servers it talks to, because I just *assumed* that the data is not only stored locally (there is a joke about Internet of Things here somewhere5). ## Getting data through app APIs # The folks at IQAir seem to have created several branches for their API, and at least two are known to me - the one for the app, and the one for the web interface. By looking at the traffic that originated from my mobile device, I realized that there is an endpoint that can actually channel all the info in one call - all I needed to do was send requests to the following URL: ``` https://app-api.airvisual.com/api/v5/devices/{device_id}/measurements ``` The device ID is something you can grab from the device itself or through the app. Because with `mitmproxy` I can also inspect the headers, it was relatively easy to spot that there is a pre-baked `x-api-token` header that I could grab directly from the app. That is, if I just want to grab the data - but what if the token expires? Is there a way to get a new one? Well, as it turns out, the token is hard-coded into the application (or so it seems), which makes my job that much easier - this means I can just run all the requests I need directly. By using the aforementioned header, I am able to get a JSON representation of the data that originated from my device, along with the comparison information for the location I set the air quality sensor to use as the baseline. Great, so I am mostly where I want to be. I now can access the data, and ideally store it locally. By accessing the data directly from the service, I can now write a `cron` job that can take regular snapshots of the environment and place those somewhere. What is somewhere, though? 
There could be many choices, including writing everything to a CSV file or maybe even to a document database, if I would need to access the information remotely. For now, however, I just needed to run local analysis, so I opted for SQLite. By using SQLite, I am able to create SQL queries on the data, and slice-and-dice it in a way that makes the most sense for scenarios that I care about right now, or might care about in the future. I could create a very simple table with the help of this SQL snippet: ``` CREATE TABLE "AirQualityData" ( "Timestamp" TEXT, "IndoorTemperature" REAL, "IndoorHumidity" REAL, "OutdoorTemperature" REAL, "OutdoorPressure" REAL, "OutdoorHumidity" REAL, "OutdoorWindSpeed" REAL, "OutdoorWindDirection" REAL, "OutdoorWeatherIcon" TEXT, "IndoorPM25AQI" REAL, "IndoorPM25Concentration" REAL, "IndoorCO2Color" TEXT, "IndoorCO2Concentration" REAL, "IndoorPM10AQI" REAL, "IndoorPM10Concentration" REAL, "IndoorPM1AQI" REAL, "IndoorPM1Concentration" REAL, "OutdoorAQI" REAL, "OutdoorPollutant" TEXT, "OutdoorConcentration" REAL, PRIMARY KEY("Timestamp") ); ``` I could already hear someone being utterly horrified by the fact that I chose the timestamp as the primary key, but worry not - this decision was made based on the fact that the environment data is captured on an hourly basis. What that means is that every entry should *technically* be unique, and if a new or updated entry is added with the same timestamp, it should just overwrite whatever is already in the database. The function to store data in this table (written in Python) then becomes very easy because I just need to run a `INSERT OR REPLACE` statement: ``` def StoreMeasurements(database_name, measurements): data_connection = sqlite3.connect(database_name) statement = f'INSERT OR REPLACE INTO AirQualityData (Timestamp, IndoorTemperature, IndoorHumidity, OutdoorTemperature, OutdoorPressure, OutdoorHumidity, OutdoorWindSpeed, OutdoorWindDirection, OutdoorWeatherIcon, IndoorPM25AQI, IndoorPM25Concentration, IndoorCO2Color, IndoorCO2Concentration, IndoorPM10AQI, IndoorPM10Concentration, IndoorPM1AQI, IndoorPM1Concentration, OutdoorAQI, OutdoorPollutant, OutdoorConcentration) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' for measurement in measurements: pst = pytz.timezone('US/Pacific') target_date = dateutil.parser.parse(measurement.timestamp) localized_timestamp = target_date.astimezone(pst) data_connection.execute(statement, (localized_timestamp.isoformat(), measurement.indoor_temperature, measurement.indoor_humidity, measurement.outdoor_temperature, measurement.outdoor_pressure, measurement.outdoor_humidity, measurement.outdoor_wind_speed, measurement.outdoor_wind_direction, measurement.outdoor_weather_icon, measurement.indoor_pm25_aqi, measurement.indoor_pm25_concentration, measurement.indoor_co2_color, measurement.indoor_co2_concentration, measurement.indoor_pm10_aqi, measurement.indoor_pm10_concentration, measurement.indoor_pm1_aqi, measurement.indoor_pm1_concentration, measurement.outdoor_aqi, measurement.outdoor_pollutant, measurement.outdoor_concentration)) data_connection.commit() data_connection.close() ``` Feel free to ignore some timezone changes - the default values returned by the IQAir services are in UTC, and I wanted them stored in PST for ease of processing. You could entirely skip this step, and delegate that work to the future rendering/analysis layer. 
With the above, the data acquisition function should be referred to as the “parse a metric crapton of JSON” function that takes the data and places it inside an object: ``` def GetMeasurementData(api_key, device_id): url = f'https://app-api.airvisual.com/api/v5/devices/{device_id}/measurements' headers = { 'x-api-token': api_key } response = requests.request('GET', url, headers=headers) try: raw_data = json.loads(response.text) measurements = [] weather_measurements_count = len(raw_data['data']['hourlyWeathers']) device_measurements_count = len(raw_data['data']['hourlyMeasurements']) print(f'[info] There are {weather_measurements_count} weather measurements.') print(f'[info] There are {device_measurements_count} device measurements.') measurements = [] # Ideally, the assumption is that weather measurements are the same number # as device measurements. We will test this assumption as the tool is used. # In this case, I chose the first array in the returned JSON as the baseline. for measurement in raw_data['data']['hourlyWeathers']: timestamp = '' indoor_temperature = '' indoor_humidity = '' outdoor_temperature = '' outdoor_pressure = '' outdoor_humidity = '' outdoor_wind_speed = '' outdoor_wind_direction = '' outdoor_weather_icon = '' indoor_pm25_aqi = '' indoor_pm25_concentration = '' indoor_pm10_aqi = '' indoor_pm10_concentration = '' indoor_pm1_aqi = '' indoor_pm1_concentration = '' indoor_co2_color = '' indoor_co2_concentration = '' outdoor_aqi = '' outdoor_concentration = '' outdoor_pollutant = '' timestamp = measurement['ts'] indoor_temperature = measurement['temperature'] indoor_humidity = measurement['humidity'] if 'outdoor' in measurement: outdoor_temperature = measurement['outdoor']['temperature'] outdoor_pressure = measurement['outdoor']['pressure'] outdoor_humidity = measurement['outdoor']['humidity'] outdoor_wind_speed = measurement['outdoor']['windSpeed'] outdoor_wind_direction = measurement['outdoor']['windDirection'] outdoor_weather_icon = measurement['outdoor']['weatherIcon'] device_measurement = [x for x in raw_data['data']['hourlyMeasurements'] if x['ts'] == timestamp][0] indoor_pm25_measurement = [x for x in device_measurement['pollutants'] if x['pollutant'].lower() == 'pm25'][0] indoor_pm25_aqi = indoor_pm25_measurement['aqius'] indoor_pm25_concentration = indoor_pm25_measurement['conc'] indoor_pm10_measurement = [x for x in device_measurement['pollutants'] if x['pollutant'].lower() == 'pm10'][0] indoor_pm10_aqi = indoor_pm10_measurement['aqius'] indoor_pm10_concentration = indoor_pm10_measurement['conc'] indoor_pm1_measurement = [x for x in device_measurement['pollutants'] if x['pollutant'].lower() == 'pm1'][0] indoor_pm1_aqi = indoor_pm1_measurement['aqius'] indoor_pm1_concentration = indoor_pm1_measurement['conc'] indoor_co2_measurement = [x for x in device_measurement['pollutants'] if x['pollutant'].lower() == 'co2'][0] indoor_co2_color = indoor_co2_measurement['color'] indoor_co2_concentration = indoor_co2_measurement['conc'] if 'outdoor' in device_measurement: if 'aqius' in device_measurement['outdoor']: outdoor_aqi = device_measurement['outdoor']['aqius'] if 'mainus' in device_measurement['outdoor']: outdoor_pollutant = device_measurement['outdoor']['mainus'] if 'pollutants' in device_measurement['outdoor']: outdoor_pollutant_measurement = [x for x in device_measurement['outdoor']['pollutants'] if x['pollutant'].lower() == outdoor_pollutant.lower()][0] outdoor_concentration = outdoor_pollutant_measurement['conc'] key_measurement = 
mmodel.Measurement(timestamp=timestamp, indoor_temperature=indoor_temperature, indoor_humidity=indoor_humidity, outdoor_temperature=outdoor_temperature, outdoor_pressure=outdoor_pressure, outdoor_humidity=outdoor_humidity, outdoor_wind_speed=outdoor_wind_speed, outdoor_wind_direction=outdoor_wind_direction, outdoor_weather_icon=outdoor_weather_icon, indoor_pm25_aqi=indoor_pm25_aqi, indoor_pm25_concentration=indoor_pm25_concentration, indoor_co2_color=indoor_co2_color, indoor_co2_concentration=indoor_co2_concentration, indoor_pm10_aqi=indoor_pm10_aqi, indoor_pm10_concentration=indoor_pm10_concentration, indoor_pm1_aqi=indoor_pm1_aqi, indoor_pm1_concentration=indoor_pm1_concentration, outdoor_aqi=outdoor_aqi, outdoor_pollutant=outdoor_pollutant, outdoor_concentration=outdoor_concentration) measurements.append(key_measurement) return measurements except Exception as ex: exc_type, exc_obj, exc_tb = sys.exc_info() print('[error] There was a problem with getting the data.') print(f'[error] Exception details: {ex}') print(exc_tb.tb_lineno) return None ``` Again, I don’t expect you to dive super deep into the code above, but it gives you an idea of how the data can be read and processed. ## Building out custom analysis # I ran this scheduled job for a couple of days, capturing information in the morning and in the evening, and ended up with a pretty good corpus of data. I could now attempt to render it. Because I need something fast and easily tweakable, I went with a Jupyter notebook, where I can just plug the SQL queries and get the raw output. Like this: ``` import sqlite3 import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns %matplotlib inline data_connection = sqlite3.connect("../airdata.db") statement = f""" SELECT Timestamp, IndoorTemperature, OutdoorTemperature FROM AirQualityData GROUP BY 1 ORDER BY 1 ASC """ df = pd.read_sql_query(statement, data_connection) df['OutdoorTemperature'] = pd.to_numeric(df.OutdoorTemperature) plt.figure(figsize=(20,10)) plt.xticks(rotation='vertical') plt.plot(df.Timestamp, df.IndoorTemperature) plt.plot(df.Timestamp, df.OutdoorTemperature) ``` This would give me a very nice graph (X axis aside), allowing me to compare the indoor and outdoor temperature over more than just 7 days: The beauty of being able to grab hourly snapshots is in the fact that I no longer need to rely on IQAir’s pre-calculated aggregate values (e.g. weekly averages) - I have full control over the information that my sensor generates. Want to calculate the average AQI over every hour and every day of the week? 
That’s possible now: ``` statement = f""" SELECT printf("%.2f", AVG(IndoorPM25AQI)) PM25Concentration, case cast (strftime('%w', Timestamp) as integer) when 0 then 'Sunday' when 1 then 'Monday' when 2 then 'Tuesday' when 3 then 'Wednesday' when 4 then 'Thursday' when 5 then 'Friday' else 'Saturday' end as DayOfWeek, strftime('%H:00', Timestamp) PM25MeasurementHour FROM ( SELECT Timestamp, IndoorPM25AQI FROM AirQualityData GROUP BY 1 ORDER BY Timestamp DESC) GROUP BY DayOfWeek, PM25MeasurementHour """ df = pd.read_sql_query(statement, data_connection) df['PM25Concentration'] = pd.to_numeric(df.PM25Concentration) pivoted_df = df.pivot(index='PM25MeasurementHour', columns='DayOfWeek', values='PM25Concentration') pivoted_df = pivoted_df.reindex(columns=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']) pivoted_df = pivoted_df.replace(np.nan,0) pivoted_df.style.background_gradient(cmap='Blues') ``` CO2 concentration averages in the same heatmap format? Yup - can do! ``` statement = f""" SELECT printf("%.2f", AVG(IndoorCO2Concentration)) AvgCO2Concentration, case cast (strftime('%w', Timestamp) as integer) when 0 then 'Sunday' when 1 then 'Monday' when 2 then 'Tuesday' when 3 then 'Wednesday' when 4 then 'Thursday' when 5 then 'Friday' else 'Saturday' end as DayOfWeek, strftime('%H:00', Timestamp) CO2MeasurementHour FROM ( SELECT Timestamp, IndoorCO2Concentration FROM AirQualityData GROUP BY 1 ORDER BY Timestamp DESC) GROUP BY DayOfWeek, CO2MeasurementHour """ df = pd.read_sql_query(statement, data_connection) df['AvgCO2Concentration'] = pd.to_numeric(df.AvgCO2Concentration) pivoted_df = df.pivot(index='CO2MeasurementHour', columns='DayOfWeek', values='AvgCO2Concentration') pivoted_df = pivoted_df.reindex(columns=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']) pivoted_df = pivoted_df.replace(np.nan,0) pivoted_df.style.background_gradient(cmap='YlOrRd') ``` Possibilities are truly endless, because now I manage my own data and a new insight is a SQL query away. ## Discovering the web APIs # But what if you are less adventurous than I am, and for one reason or another you can’t quite MITM your device? Well, as it turns out, there is a web API that does exactly the same thing that the app API does, but with a different auth format that does not require you to obtain the hardcoded API token. The endpoint you should use is this: ``` https://website-api.airvisual.com/v1/users/{user_id}/devices/{device_id}?units.temperature=celsius&units.distance=kilometer&AQI=US&language=en ``` Query parameters are, of course, modifiable. This request requires only one custom header - `x-login-token` . To get said token, you can issue a POST request to the following URL: ``` https://website-api.airvisual.com/v1/auth/signin/by/email ``` The payload you need to send should be your email and password in JSON form, as such: ``` {"email":"email@address","password":"YourP@ssw0rdGo35H3r3"} ``` The response you will get will be another JSON document, of the following form: ``` { "id": "YOUR_ID", "email": "email@address", "name": "Johny Pineappleseed", "loginToken": "YourLoginToken=" } ``` Cool - so this also answers the question as to what you need to insert in `{user_id}` in the first API endpoint in this section. But what about `{device_id}` ? Is it the same identifier you would use in the mobile API? Apparently not, because every API comes with its own identifiers, apparently (and not the share code). 
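Before moving on to the device list, here is a small Python sketch of that sign-in exchange (the endpoint and field names are taken from the JSON shown above; since this is an undocumented API, treat it as an assumption that may break at any time):

```
import requests

SIGNIN_URL = "https://website-api.airvisual.com/v1/auth/signin/by/email"

def get_login_token(email, password):
    # POST the credentials as JSON, exactly as the web app does.
    response = requests.post(SIGNIN_URL, json={"email": email, "password": password})
    response.raise_for_status()
    payload = response.json()
    # The response carries both the user id and the login token needed
    # for the `x-login-token` header on subsequent calls.
    return payload["id"], payload["loginToken"]

user_id, login_token = get_login_token("email@address", "YourP@ssw0rdGo35H3r3")
headers = {"x-login-token": login_token}
```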
But that’s quite alright, because we can get the right device ID by executing yet another API call, this time to this endpoint: ``` https://website-api.airvisual.com/v1/users/{user_id}/devices?page=1&perPage=10&sortBy=aqi&sortOrder=desc&filters[]=aqi&filters[]=pm25&filters[]=co2&filters[]=tvoc&filters[]=hcho&filters[]=humidity&filters[]=temperature&units.temperature=celsius&units.distance=kilometer&AQI=US&language=en ``` Query parameters are optional, but you will need to append the same `x-login-token` header to this request as well. In return for your effort, you will get a list of devices: The `id` field is what I need here, and once I grab this value, I can now execute the very first call I wrote about in this section. But, as you might’ve already noticed, some of the measurement information is returned in the `/devices` call as well, so you can pick and choose whichever API call suits your needs best. ## Conclusion # This was a fun project to put together not the least because I am a big fan of discovering undocumented APIs that allow me to get more insights about the tools that I use. For my personal use, I’ve wrapped this entire post in a CLI, that I hope to share more about in the future - it makes data storage and inspection significantly easier in a non-interactive session (e.g. running inside a GitHub action). If you have an IQAir device, I hope that this little adventure pointed you in the direction where you can take control of your own data and try to get a better understanding of the air in your dwelling over time. - I still remember figuring out how to query the Xbox Live Marketplace. Surprisingly, those APIs still work, ten years later. ↩︎ - You can read more about it on the vendor page. According to South Coast Air Quality Management District, the sensor is fairly reliable. ↩︎ - https://den.dev/blog/intercepting-iphone-traffic-mac-for-free/ ↩︎ - It’s good that I already put one together. ↩︎
true
true
true
I learned that owning your data is powerful, and it’s even more powerful when you are able to slice-and-dice it for better insights.
2024-10-12 00:00:00
2020-12-28 00:00:00
https://assets.den.dev/i…a/air/header.jpg
article
den.dev
den.dev
null
null
39,823,209
https://www.sciencedirect.com/science/article/pii/S0149763421003456#bib1385
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,824,450
https://ucsdneuro.wordpress.com/2018/12/31/what-facilitates-the-extreme-maternal-behaviors-in-octopuses/
What facilitates the extreme maternal behaviors in octopuses?
null
The octopus, a highly intelligent invertebrate with a uniquely complex central nervous system, has caught the attention of Dr. Clifton Ragsdale for investigation of cephalopod genomics. As a professor under the Department of Neurobiology at The University of Chicago, Dr. Ragsdale and his lab are applying modern cellular and molecular techniques to studying octopus neurobiology. In 2015, they published findings with collaborators from Rokhsar’s group at UC Berkeley on the genome of the California two-spot octopus, *Octopus bimaculoides*. Since then, the Ragsdale lab has been investigating comparative cephalopod genomics, octopus arm regeneration, octopus embryogenesis, and neocortex development. A recent paper published in August of 2018 in the Journal of Experimental Biology revealed multiple signaling pathways in the octopus optic gland that facilitate maternal behaviors and death. A female octopus that has mated will undergo extreme maternal care behavior of starvation and death after a single reproductive cycle. The optic gland is analogous to the vertebrate pituitary gland, and when removed, the octopus reverses the brooding behavior and abandons her clutch to resume feeding and mating. The molecular features underlying this optic gland signaling were explored by Ragsdale with transcriptome and behavioral analyses. They examined four behavioral stages in the adult life of the sexually mature female octopus. The first being non-mated females that are active predators outside their dens. Next were mated females that actively guarded their dens and egg clutches but exhibited reduced predatory behavior. Following that is the fasting stage, and then the final stage of rapid physiological and behavioral decline with excessive self-grooming and self-cannibalization resulting in death. Ragsdale used RSEM and edgeR analysis methods to identify nearly 1200 transcripts that were differentially expressed. 25 subclusters were determined, of which 22 were excluded due to sample scarcity or lack of monotonicity (either entirely increasing or decreasing trends). The remaining three subclusters contained 343 genes of interest with differential expression across the four behavioral stages. The first cluster included transcripts for neural signaling and neurotransmitter receptors that increased after mating and remained elevated through brooding stages (Fig. 5A-C). The transition from feeding to fasting stages revealed increased expression of insulin signaling genes (Fig. 5D), which promote cell survival under starvation conditions. Genes related to feeding circuit-related neuropeptides in the third cluster decreased expression between unmated and mated stages (Fig. 5E), which may regulate energy expenditure or the drive to hunt for food. To assess whether these gene expression changes were localized in the optic gland or globally present in other tissues, Ragsdale compared the transcriptomes of the optic gland to those of tissues from different parts of the octopus arm using BLASTP and TBLASTN. They found these molecular markers of senescence are only present in the optic glands and not in other tissues. Figure 5. Expression profiles of genes relevant to optic gland signaling. The Ragsdale lab has uncovered multiple signaling systems in the octopus optic glands. Upregulations and downregulations of catecholamine, steroid, insulin, and feeding peptide pathways tightly regulate maternal behavioral feeding behaviors. 
These functions parallel those of the anterior pituitary gland and adrenal glands in vertebrates, thus prompting further investigation of optic gland targets. Ragsdale and his team have demonstrated the significant organization and function of the optic gland in octopus physiology for maternal behavior. To hear more about the work being done in Dr. Ragsdale’s lab, please join us at 4 PM, Tuesday 1/8/2019 at the Marilyn G. Farquhar Seminar room in CNCB. To read the paper, visit: http://jeb.biologists.org/content/221/19/jeb185751.long To learn more about the octopus genome research project in the Ragsdale lab, check out https://www.youtube.com/watch?v=7QaPmCRhr80 and https://www.nature.com/articles/nature14668 for their 2015 Nature paper. *Vivian Ko is a first-year PhD student. *
true
true
true
The octopus, a highly intelligent invertebrate with a uniquely complex central nervous system, has caught the attention of Dr. Clifton Ragsdale for investigation of cephalopod genomics. As a profes…
2024-10-12 00:00:00
2018-12-31 00:00:00
https://ucsdneuro.wordpr…/12/Picture1.png
article
wordpress.com
UCSD Neurosciences
null
null
35,986,297
https://thenewstack.io/gitops-as-an-evolution-of-kubernetes/
GitOps as an Evolution of Kubernetes
Steven J Vaughan-Nichols
# GitOps as an Evolution of Kubernetes

VANCOUVER, British Columbia — Many people talk about GitOps and Kubernetes, but when Brendan Burns, a Microsoft Corporate Vice President, a Distinguished Engineer at Microsoft Azure, and, oh yeah, co-founder of Kubernetes, talks, I listen. Burns spoke at The Linux Foundation’s GitOpsCon about how GitOps is an evolutionary step for Kubernetes.

How? Burns started by explaining how it’s deeply rooted in the development of continuous integration, deployment, and delivery. What really motivated him to help create Kubernetes was, “When we were starting out, we tried to put together reliable deployments. They worked on this using the DevOps tools of the time with a mixture of Puppet, Chef, Salt, and Ansible — and Bash obviously — it worked about 85% of the time. And then you’d massage it, and it eventually would work maybe 95% of the time.”

However, the journey was often fraught with difficulties and uncertainties, which birthed the idea of Kubernetes. Kubernetes’ inception was essentially a response to the arduous and unreliable nature of the deployment process. It was a fusion of the DevOps challenges and the innovative strides Docker made in the container revolution. Docker’s focus on hermetically sealing and packaging applications was a vital prerequisite to reimagining how deployments could be executed. Over the past decade, this approach has transformed into the standard modus operandi within the tech community.

## Advent of GitOps

But the tech world has now moved a step further with the advent of GitOps. It’s no longer aimed at redefining the deployment process itself. It is no longer just about the deployment that Kubernetes orchestrates but the entire journey — from sourcing configurations to deploying them into the world where Kubernetes can utilize them. GitHub, with its declarative configuration, now plays a pivotal role in ensuring reliable delivery and contributes to the ongoing evolution of the community.

“While it’s universally accepted now,” said Burns, “the idea was a subject of contention at the time.” Scripting was rampant. Notably, the CI/CD pipeline, even when described in YAML, was an imperative program execution. Burns thinks GitOps, with its inherent declarative nature, is a welcome reinforcement to the Kubernetes ecosystem.

Moreover, empowering people to do more was another central theme of the initial thought process. The goal was to alleviate the burdens that plagued developers daily. This, in essence, is the journey of the community — from its inception rooted in deployment and continuous delivery to the present day, where GitOps reigns, offering a more reliable, declarative, and user-empowering approach to managing deployments. It does this in several ways:

- Separation of Concerns: With Kubernetes and GitOps, teams can be compartmentalized, focusing on specific tasks and responsibilities. This clean delineation can help avoid confusion, improve efficiency, and make it clear where one team’s responsibilities end and another’s begin.
- Multiple Personas: In modern software development, there are many personas involved, such as developers, platform engineers, and security teams. Each has a specific role and responsibilities, and all need to work together in the same environment.
- GitOps as a Solution: GitOps can help manage this complex environment. It allows each persona to manage a Git repository, rather than needing to directly interact with the cluster.
This can reduce the risks associated with one group having too much control and can make it easier for teams to work together. It essentially allows for a clearer division of labor and less risk of overlap or conflict. - Automated Updates: GitOps can also facilitate automatic updates. Tools such as Dependabot can monitor repositories and propose updates when necessary. This process reduces the time and effort required to stay up to date, increasing efficiency and reducing the risk of falling behind on important updates. - Security and Compliance: GitOps also supports better security and compliance. Through a well-managed Git repository, it can ensure that every change is tracked and auditable, which is important for meeting compliance requirements. The GitOps workflow and its intersection between platform engineering and the developer is particularly significant for programmers who prefer not to be bogged down by the intricacies of deploying their code into Kubernetes. Irrespective of their preferred programming language — be it Java, Python, Dotnet, Rust, or Go — they simply want to push their code, generate a container image, and have it deployed immediately. GitOps enables them to do this. ## Scalability Burns continued, the beauty of GitOps lies in its scalability. Developers need not be overly concerned with the number of clusters in their organization or their specific locations. The shift from a push model of pipelines to a GitOps pull model allows a level of abstraction where the number of clusters becomes somewhat irrelevant. Developers only have to deal with a Git repository. If a new cluster emerges or an old one disappears, developers may not even notice. The consistency of the workflows remains even when transitioning from early pre-production to staging to production in the application lifecycle. This decreases the cognitive load on developers, allowing them to concentrate more on their code rather than where it goes post-deployment. Thus, in GitOps, the Git repository becomes the ultimate source of truth, and the platform engineering team can concentrate on initializing that Git repository, thus empowering developers to efficiently deploy their code. Burns also reminded us that historically, the concept of “snowflakes” (One-off unique servers impossible to reconstruct if they “melted”) was a cause of concern. True, containers and orchestration eliminated this problem at the individual container level. However, we now face the issue of “snowflake clusters” — clusters of machines that are uniform internally but differ from others. GitOps, Burns said, offers a robust solution for this issue. The shift from a push to a pull model makes GitOps relatively indifferent to the scale or number of clusters. Each cluster is configured to point to the same Git repository. When you make the Git repository initialization part of creating clusters, it automatically creates clusters that are initialized with the correct software versions. Thus, this process ensures consistency across the platform. For example, it also eliminates the chances of forgetting to include a cluster in a pipeline that deploys a new version of security software or having to inform a development team about changes in regions. This consistency and reliability are among the main advantages of GitOps. Interestingly, the application of GitOps is not restricted to Kubernetes but extends to public cloud resources through service operators. 
Users are leveraging the Kubernetes control plane to manage containerized resources and instances of a Postgres database or blob storage system. GitOps can manage resources within your cluster as well as those in the cloud, thus widening its scope and utility.

## No Be-All, End-All

However, GitOps is not the be-all and end-all solution. There’s a place for both CI/CD pipelines and GitOps: “It’s not a fight, but rather it’s two very complementary technologies, one that is very good at easily making the state real and one that is very good at orchestrating stages of what you want the world to look like.”

Drawing parallels with robotics, which Burns worked on before he came to software, where there is a constant handoff between control and planning, one can understand the relationship between traditional CI/CD pipeline systems and GitOps. GitOps is like a controller, quickly making a state reality, but it’s not ideal for software rollouts on a global scale that require slow, gradual deployments. This is where traditional CI/CD systems, or “planners,” come into play.

So, Burns concluded, CI/CD pipelines and GitOps each have their strengths — GitOps in bringing a specific state into reality with ease, and traditional CI systems in orchestrating stages of what the world should look like. Understanding the value of GitOps in the container context and its interplay with traditional CI systems can significantly enhance efficiency and productivity. And all, of course, will work well in a Kubernetes-orchestrated world.
true
true
true
Brendan Burns, Kubernetes' co-founder shared his thoughts on GitOps and Kubernetes at GitOpsCon.
2024-10-12 00:00:00
2023-05-16 00:00:00
https://cdn.thenewstack.…684243423339.jpg
article
thenewstack.io
The New Stack
null
null
10,797,328
http://www.engadget.com/2015/12/27/uk-claims-spying-bill-fights-cyberbullies/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,519,252
https://pyedifice.github.io/
Edifice#
null
# Edifice# Edifice is a Python library declarative framework for application user interfaces. Modern **declarative**UI paradigm from web development.**100% Python**application development, no language inter-op.A **native**Qt desktop app instead of a bundled web browser.Fast iteration via **hot-reloading**. Edifice uses PySide6 or PyQt6 as a backend. Edifice is like React, but with Python instead of JavaScript, and Qt Widgets instead of the HTML DOM. If you have React experience, you’ll find Edifice easy to learn. Edifice has function Components, Props, and Hooks just like React. ## Getting Started# ``` pip install PySide6-Essentials pip install pyedifice ``` ``` from edifice import App, Label, Window, component @component def HelloWorld(self): with Window(): Label("Hello World!") if __name__ == "__main__": App(HelloWorld()).start() ``` For more, see the Tutorial. To understand the core concepts behind Edifice, see Edifice Core. ## Table of Contents# ## Why Edifice?# **Declarative** Most existing GUI libraries in Python, such as Tkinter and Qt, operate imperatively. To create a dynamic application using these libraries, you must not only think about *what* widgets to display to the user, but also *how* to issue the commands to modify the widgets. With Edifice the developer need only declare *what* is rendered, not *how* the content is rendered. User interactions update the application state, the state renders to a widget tree, and Edifice modifies the existing widget tree to reflect the new state. Edifice code looks like this: ``` number, set_number = use_state(0) with VBoxView(): Button("Add 5", on_click=lambda event: set_number(number+5)) Label(str(number)) if number > 30 and number < 70: Label("Number is mid") ``` The GUI displays a button and a label with the current value of `number` . Clicking the button will add 5 to the `number` . If the `number` is “mid” then another label will reveal that fact. **Developer Tools** Dynamic hot-reloading of source code changes. Element Inspector. See Developer Tools for more details. **Edifice vs. Qt Quick** Qt Quick is Qt’s declarative GUI framework for Qt. Qt Quick programs are written in Python + the special QML language + JavaScript. Edifice programs are written in Python. Because Edifice programs are only Python, binding to the UI is much more straightforward. Edifice makes it easy to dynamically create, mutate, shuffle, and destroy sections of the UI. Qt Quick assumes a much more static interface. Qt Quick is like DOM + HTML + JavaScript, whereas Edifice is like React. QML and HTML are both declarative UI languages but they require imperative logic in another language for dynamism. Edifice and React allow fully dynamic applications to be specified declaratively in one language. **Extendable** Edifice does not support every feature of Qt, but it is easy to interface with Qt, either incorporating a Qt Widget into an Edifice component, use Qt commands directly with an existing Edifice component, or incorporating Edifice components in a Qt application. ## Poetry Build System# The Poetry `pyproject.toml` specifies the package dependecies. Because Edifice supports PySide6 and PyQt6 at the same time, neither are required by `[tool.poetry.dependencies]` . Instead they are both optional `[tool.poetry.group.dev.dependencies]` . A project which depends on Edifice should also depend on either PySide6-Essentials or PySide6 or PyQt6. 
The `requirements.txt` is generated by

```
poetry export -f requirements.txt --output requirements.txt
```

## License and Code Availability#

The source code is available on github/pyedifice.

Edifice is released under the MIT License.

Edifice uses Qt under the hood, and both PyQt6 and PySide6 are supported. Note that PyQt6 is distributed with the *GPL* license while PySide6 is distributed under the more flexible *LGPL* license.

Can I use PySide for commercial applications? Yes, and you don’t need to release your source code to customers. The LGPL only requires you to release any changes you make to PySide itself.

## Support#

Submit bug reports or feature requests on Github Issues.

Submit questions on Github Discussions.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
Edifice 2.5.0 documentation
null
null
27,157,845
https://kea.js.org/blog/2021/05/14/data-first-frontend-revolution
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,706,048
https://spectrum.ieee.org/automaton/robotics/robotics-hardware/why-we-need-robot-sloths
Why We Need Robot Sloths
Evan Ackerman
An inherent characteristic of a robot (I would argue) is embodied motion. We tend to focus on motion rather a lot with robots, and the most dynamic robots get the most attention. This isn’t to say that highly dynamic robots don’t deserve our attention, but there are other robotic philosophies that, while perhaps less visually exciting, are equally valuable under the right circumstances. Magnus Egerstedt, a robotics professor at Georgia Tech, was inspired by some sloths he met in Costa Rica to explore the idea of “slowness as a design paradigm” through an arboreal robot called SlothBot. Since the robot moves so slowly, why use a robot at all? It may be very energy-efficient, but it’s definitely not more energy efficient than a static sensing system that’s just bolted to a tree or whatever. The robot moves, of course, but it’s also going to be much more expensive (and likely much less reliable) than a handful of static sensors that could cover a similar area. The problem with static sensors, though, is that they’re constrained by power availability, and in environments like under a dense tree canopy, you’re not going to be able to augment their lifetime with solar panels. If your goal is a long-duration study of a small area (over weeks or months or more), SlothBot is uniquely useful in this context because it can crawl out from beneath a tree to find some sun to recharge itself, sunbathe for a while, and then crawl right back again to resume collecting data. SlothBot is such an interesting concept that we had to check in with Egerstedt with a few more questions. *IEEE Spectrum:* Tell us what you find so amazing about sloths! **Magnus Egerstedt: **Apart from being kind of cute, the amazing thing about sloths is that they have carved out a successful ecological niche for themselves where being slow is not only acceptable but actually beneficial. Despite their pretty extreme low-energy lifestyle, they exhibit a number of interesting and sometimes outright strange behaviors. And, behaviors having to do with territoriality, foraging, or mating look rather different when you are that slow. **Are you leveraging the slothiness of the design for this robot somehow?** *Sadly, the sloth design serves no technical purpose. But we are also viewing the SlothBot as an outreach platform to get kids excited about robotics and/or conservation biology. And having the robot look like a sloth certainly cannot hurt.* **Can you talk more about slowness as a design paradigm?** The SlothBot is part of a broader design philosophy that I have started calling “Robot Ecology.” In ecology, the connections between individuals and their environments/habitats play a central role. And the same should hold true in robotics. The robot design must be understood in the environmental context in which it is to be deployed. And, if your task is to be present in a slowly varying environment over a long time scale, being slow seems like the right way to go. Slowness is ideal for use cases that require a long-term, persistent presence in an environment, like for monitoring tasks, where the environment itself is slowly varying. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations. **How do sloths inspire SlothBot’s functionality?** Its motions are governed by what we call survival constraints. These constraints ensure that the SlothBot is always able to get to a sunny spot to recharge. 
The actual performance objective that we have given to the robot is to minimize energy consumption, i.e., to simply do nothing subject to the survival constraints. The majority of the time, the robot simply sits there under the trees, measuring various things, seemingly doing absolutely nothing and being rather sloth-like. Whenever the SlothBot does move, it does not move according to some fixed schedule. Instead, it moves because it has to in order to “survive.” **How would you like to improve SlothBot?** I have a few directions I would like to take the SlothBot. One is to make the sensor suites richer to make sure that it can become a versatile and useful science instrument. Another direction involves miniaturization - I would love to see a bunch of small SlothBots “living” among the trees somewhere in a rainforest for years, providing real-time data as to what is happening to the ecosystem. Evan Ackerman is a senior editor at *IEEE Spectrum*. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
true
true
true
Georgia Tech’s SlothBot is more than a cute face
2024-10-12 00:00:00
2020-07-01 00:00:00
https://spectrum.ieee.or…%2C179%2C0%2C180
article
ieee.org
IEEE Spectrum
null
null
4,575,816
http://www.engadget.com/2012/09/25/iphone-5-lumia-920-image-stabilization-face-off/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,583,522
http://www.latimes.com/science/sciencenow/la-sci-sn-kidney-stones-roller-coaster-20160926-snap-story.html
Try riding a roller coaster to dislodge those painful kidney stones
Melissa Healy
# Try riding a roller coaster to dislodge those painful kidney stones A Michigan State University urologist reported that riding a medium-intensity roller coaster can result in the painless passing of small, and even a few large, kidney stones. Just ask any one of the 300,000 Americans who, in any given year, develop kidney stones: What if the excruciating pain of passing one of those little devils could be prevented by strapping yourself into a make-believe runaway mine train, throwing your hands in the air and enduring G-forces as high as 2.5 for about three minutes? Would you do it? Hell yeah, they’d do it. In a bit of medical research inspired by strange and remarkable patient accounts, a Michigan State University urologist reports that, yes, riding a medium-intensity roller coaster such as the Disney theme parks’ Big Thunder Mountain Railroad can result in the painless passing of small, and even a few large, kidney stones. For best results, ride in the back, where — roller coaster afficionados all seem to agree — the thrills are greatest. Independent of kidney stone volume and location, findings reported Sunday in the Journal of the American Osteopathic Assn. showed that sitting in the back of the roller coaster resulted in an average passage rate of 63.89%. Front-seat rides resulted in a far more modest passage rate of 16.67%. In what magical kingdom, you may well ask, does someone think to conduct such research? Dr. David D. Wartinger, a professor emeritus at Michigan State University’s College of Osteopathic Medicine, initiated the study after a series of patients reported something almost too strange to believe: In the wake of riding Big Thunder at Walt Disney World in Orlando, Fla., these patients said their kidney stones passed painlessly from the kidney through the narrow duct of the ureter and into the bladder. In one case, a patient told Wartinger that he passed one kidney stone after each of three consecutive rides on the roller coaster. Using a 3-D printed model of that patient’s kidney, Wartinger and his colleagues implanted three kidney stones of various sizes into the upper, middle or lower passageways of the clear silicone model. Two of those mineral clusters, which can form as the kidney filters waste from the bloodstream, were small-to-moderate size — 4.5 millimeters and 13.5 mm. Those might pass through the duct leading to the bladder without incident but could also cause considerable pain and discomfort as they passed unaided. But a third measured 64.6 mm, a size that would rarely pass without treatment — the administration of ultrasound shock waves, called lithotripsy, designed to break up the deposit and allow it to pass. The researchers received permission from Walt Disney World first, then concealed the kidney model in a backpack and rode Big Thunder 20 times, varying their seat position between front and back. After analyzing the location of those three kidney stones at the end of each ride, the researchers concluded that “findings support the anecdotal evidence that a ride on a moderate-intensity roller coaster could benefit some patients with small kidney stones,” Wartinger said. When the kidney stone was large, the initial position of the kidney stone affected the likelihood of its passing during the ride. But even those passed two in three times while the silicone model rode the thrill ride. 
“Many people in the United States probably live within a few hours’ drive of an amusement park containing a roller coaster with features capable of dislodging calyceal renal calculi,” wrote Wartinger and co-author Dr. Marc A. Mitchell of the Doctor’s Clinic in Poulsbo, Wash. Roller coaster therapy might be a good preventive treatment for people who are at high risk of developing obstructive kidney stones, wrote Wartinger and Mitchell. They suggested that patients who have had kidney stones in the past, or women who have had kidney stones and are thinking of becoming pregnant, consider a thrill ride or two in a bid to clear tiny stones before the deposits grow larger. Kidney stone sufferers who have had their deposits broken up by lithotripsy might also consider a roller-coaster ride to finish the job, they said. **Follow me on Twitter @LATMelissaHealy and “like” Los Angeles Times Science & Health on Facebook.** **MORE IN SCIENCE** **United Nations takes on antimicrobial resistance** **How scientists virtually unwrapped an ancient, burned scroll and read the words inside** **Studies on the perils of polyester underwear and the personality of rocks win Ig Nobel Prizes**
true
true
true
Just ask any one of the 300,000 Americans who, in any given year, develop kidney stones: What if the excruciating pain of passing one of those little devils could be prevented by strapping yourself into a make-believe runaway mine train, throwing your hands in the air and enduring G-forces as high as 2.5 for about three minutes?
2024-10-12 00:00:00
2016-09-26 00:00:00
null
newsarticle
latimes.com
Los Angeles Times
null
null
4,663,350
http://ilia.ws/archives/254-Introduction-to-PHP-5.4.7-IPC2012-Mainz,-Germany.html
Introduction to PHP 5.4.7 - IPC2012 - Mainz, Germany
null
Introduction to PHP 5.4.7 - IPC2012 - Mainz, Germany

Posted by Ilia Alshanetsky in PHP, Talks on Tuesday, October 16, 2012

My slides introducing PHP 5.4.7 from the talk at IPC 2012 are now available online and can be downloaded at: http://ilia.ws/files/ipc12_php54.pdf

Comments

pluriels on 2012-10-16: Maybe a typo in the link ? http://ilia.ws/files/ipc2012_php54.pdf

Ilia on 2012-10-16: The link was fixed, had the typo in the filename. Thanks for the heads-up.
true
true
true
null
2024-10-12 00:00:00
2012-10-16 00:00:00
null
null
null
Introduction to PHP 5.4.7 - IPC2012 - Mainz, Germany - iBlog
null
null
32,266,767
https://fiberplane.dev/blog/why-we-chose-jsonnet-over-webassembly/
Why we chose Jsonnet over WebAssembly for Fiberplane Templates
null
Fiberplane Templates supercharge incident runbooks. Instead of having static runbooks (which most of the team probably ignores), Fiberplane Templates are triggered from alerts to create fresh collaborative notebooks already filled with debugging steps and live infrastructure data to help you debug your systems faster. This blog post explains how we evaluated different templating technologies, why we opted for Jsonnet in this use case, and how we built Fiberplane Templates.

## Fiberplane Templates Use Case

Fiberplane is a collaborative notebook for infrastructure and service debugging, resolving incidents, and running postmortems. At a high level, we wanted to support the following template use cases:

- **Automatically creating a notebook from a template with an API call**. For example, a team could set up PagerDuty or Prometheus AlertManager to create a notebook from a service-specific runbook. The incident response notebook would be pre-populated with metrics and logs from the affected service and recommended debugging steps.
- **Manually creating a notebook from a template**. For example, a team could use templates to run a structured process for postmortems or root cause analyses.
- **Sharing best practices**. We want to make it easy to share best practices within and across organizations to help all DevOps and SRE teams work more efficiently.

## Template Technology Criteria

Our main criteria for choosing a template technology were:

- **Power** - Templates need to be able to fill in fields based on input parameters. It is also useful to have basic programming constructs such as loops and if-statements to be able to define behavior like “for each of these services, add this group of notebook cells”.
- **Ease of learning** - Templates should be easy for any developer and for less technical team members to read, understand, and contribute to. You shouldn’t need to be an expert with a specific programming language to grok what a template is doing.
- **Version control** - Everything important should be in git. ‘Nuff said.
- **Stability** - We don’t want to be breaking runbooks because the template technology is still in development.
- **Rust implementation** - Our backend stack is written in pure Rust, so a Rust library was a plus, though not strictly necessary.

## Template Approaches Considered

### Pure User Interface

The first option we considered was making templates a purely UI-driven feature. This would mean defining and using templates in Fiberplane, similar to templates in Notion or Google Docs. UI-driven templates would be easy to use and learn for non-developers. However, it would be hard to represent programming concepts like for-loops and if-statements in a WYSIWYG editor, and this option would not easily support checking templates into version control. As a result, we opted for a more code-based template approach.

### WebAssembly

We were already doing lots of fun things with WASM, so we also considered making templates WASM modules that take in the input parameters and output Fiberplane notebooks as JSON objects. On the plus side, using WASM would enable templates to be written in many different programming languages and we could leverage some of our existing tooling. Additionally, we could support having templates proactively fetch additional data before creating a notebook (though we debated whether that is useful or overly complicated).
A major downside of the WASM approach is that templates would be distributed as binary blobs, which would be difficult to inspect and understand when shared or checked into git. Moreover, having templates written in different languages would make it more difficult to share or copy code between templates (at least until the WebAssembly Component Model is finalized). After concluding that WebAssembly wasn’t the right tool for the job, we searched around for text-based templating technologies.

### Mustache

Mustache is an extremely minimal text-based template language most commonly used for HTML templates. Mustache is intentionally “logic-free” (no for-loops or if-statements), which unfortunately makes it a bit too limited for our use case.

We also considered a variety of Mustache-inspired tools including Jinja, Handlebars, Askama, and Tera. Most of these are designed for HTML templating, though they support other text-based formats. For our use case, it made the most sense for our templates to either output the JSON objects our API uses to represent notebooks, or an equivalent data structure encoded in YAML, TOML, or another markup language. While this approach meets many of our selection criteria, it would unfortunately make it quite easy to write templates that fail to create valid notebooks. For example, it would be difficult to discover the fields required for a particular notebook cell type or ensure that you had specified all of them.

### Jsonnet

Jsonnet is a data templating language originally developed at Google that extends JSON with basic control logic and pure functions. Its syntax is somewhat similar to that of dynamically typed scripting languages like Python or Javascript. Companies including Grafana, Databricks, and Akamai use Jsonnet to enable external users to configure their products or to template their internal configuration files.

One of our favorite things about Jsonnet – and one of the design considerations for the language – was: **“Familiarity: …Computation constructs should behave in standard ways and compose predictably.”** And it shows! Without having used it before, we found it easy to look at Jsonnet code and understand what it’s doing. It also took just a couple of minutes of looking over the interactive tutorial to start writing it (Learn jsonnet in Y minutes is only 117 lines).

As is obvious from the title of this post, Jsonnet was the option we picked, so we mostly have nice things to say about it. 😉 We liked that Jsonnet offered a text-based format with all of the programming constructs we could imagine wanting, without being difficult to pick up. We’ll describe more of the details of our implementation below.

### Cue, Dhall, and Nickel

Cue, Dhall, and Nickel are all in the same space as Jsonnet but add static types. As avid Rust users, we definitely like static types for programming but had some reservations about the tradeoffs they presented for this use case.

Cue (https://cuelang.org/) can do a lot, but it is arguably the most complex of these languages. It combines data validation, configuration generation, schema definition, code generation, and data querying. In Cue, types are values, which is a clever construct that, unfortunately, takes some time to grasp for those that are less well-versed in type theory.

Dhall extends JSON with functions, types, and imports. It represents something of a middle ground between the dynamically typed Jsonnet and Cue. Unfortunately (or fortunately, depending on your perspective), it clearly draws inspiration from Haskell.
This is great for everyone that loves Haskell and less than ideal for everyone else (begin flame war).

Nickel is the smallest project of the group and is an effort to separate the language behind the Nix package manager from the package manager itself. It describes itself as similar to Jsonnet with added types. Unfortunately, there are a number of important aspects of the language that are still being designed, so it felt a bit too early in Nickel’s development to be a good solution for our use case.

Ultimately, we decided the benefits of static types were not worth the added complexity for template developers and editors in this use case. We envisioned using helper functions to build up the notebook JSON object, which would then be validated by our API. As a result, strict schema validation offered fewer benefits than it might for an organization dealing with the configuration of many disparate services that each expect different formats.

### Scripting Languages

The final category we considered was embedded scripting languages, including Lua, Javascript, Starlark (a subset of Python used by Google’s Bazel build system), and Rhai. All of these offered relatively similar functionality to Jsonnet but were not specifically intended to produce JSON or a template function at the end of a script. We could have required that the scripts return a notebook object, or added a built-in function the script could call to create one or even multiple notebooks. However, this level of scripting seemed like overkill for the template feature and we preferred the approach of treating a template as a program that would generate a single notebook.

## Building Fiberplane Templates

Once we’d chosen the templating language, we settled on the design of our templates implementation. The Fiberplane Templates library consists of a Jsonnet library of helper functions, an evaluator that wraps the Rust Jsonnet implementation, and a converter that can export an existing notebook as Jsonnet.

### Fiberplane Jsonnet Library

First, we have a library of helper functions, written in Jsonnet, that enables users to build up the Fiberplane notebook JSON object. Each cell type (for headings, text cells, code cells, graphs, etc.) has its own helper function that takes the relevant parameters and provides sensible defaults. To generate API docs for the library, we use JSDoc comments and jsdoc-to-markdown. While JSDoc is technically meant for Javascript, it has a handy “comments only” plugin that enables it to be used to document code written in other languages.

### Template Evaluator and Converter

The template evaluator uses the Rust Jsonnet implementation, jrsonnet, and injects some use-case-specific values into the runtime:

- Fiberplane Jsonnet Library - the evaluator includes the latest version of the helper function library so that templates can be standalone Jsonnet files and do not need to use a package manager like jsonnet-bundler.
- The current date - this enables using time-based values when configuring the notebook.
- Data sources - Fiberplane enables users to pull live data about their infrastructure into notebooks. The template runtime includes the data sources available when the template is executed to make it easy to pre-populate notebooks with specific data sources or exact queries needed to debug an incident.

Finally, the converter takes an existing notebook JSON object as input and outputs a template as Jsonnet. This enables users to write simple templates in the WYSIWYG editor before exporting them.
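To make the helper-library approach a little more concrete, here is a minimal sketch of what a template along these lines could look like. It is only an illustration: the import path and the helper names (`notebook.new`, `addCells`, `cells.heading`, `cells.text`) are assumptions made for the example, not the exact Fiberplane API.

```
// Illustrative only: the import path and helper names are assumed, not the real API.
local fp = import 'fiberplane.libsonnet';

// A template is a top-level function; the evaluator supplies the arguments
// (for example from an alert payload) when a notebook is created from it.
function(incidentName, services=['api', 'db'])
  fp.notebook.new('Incident: ' + incidentName)
    .addCells(
      [fp.cells.heading('Debugging checklist')] +
      [fp.cells.text('Check logs and metrics for ' + service) for service in services]
    )
```

Because a template like this is plain text that evaluates to JSON, it can be reviewed, diffed, and versioned in git like any other source file.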
## Conclusion

We chose Jsonnet for Fiberplane Templates because it offered a sweet spot between power and ease of use. After building our template evaluator and helper function library, we’re happy with our choice and would recommend it for similar use cases involving templating API objects.

**Major thanks to the Jsonnet contributors and special thanks to @CertainLach and the other contributors to the Rust Jsonnet implementation.**
true
true
true
How we chose Jsonnet for Fiberplane templates
2024-10-12 00:00:00
2022-07-26 00:00:00
null
article
fiberplane.com
Fiberplane
null
null
25,047,576
https://www.cockroachlabs.com/blog/cockroachdb-20-2-release/
Announcing CockroachDB 20.2: Build more, deploy easier, innovate faster
null
Here at Cockroach Labs, we want to arm you with tools you need to build better products, deliver better customer experiences, and maybe even create the next billion-dollar idea. Our goal with CockroachDB is to make it easier for any and every developer to deliver data-intensive applications, allowing them to easily take advantage of high availability and elastic scale. With our latest release, CockroachDB 20.2, we have added updates to make developers even more productive with a broader range of workloads. We’ve also continued to improve the security and management capabilities of the database and, as always, have made considerable improvements to the performance of CockroachDB. With this release we are also incredibly happy to note that the majority of these new capabilities are available in our free option, CockroachDB Core. With CockroachDB 20.2 you can now: Store and index spatial data using PostGIS-compatible SQL syntax Deploy and manage your cloud-native stack with greater ease, using our new CockroachDB on Kubernetes offering, which packages CockroachDB with an Operator Use core Backup and Restore capabilities (previously Enterprise-only) in our free, community option, CockroachDB Core Work more efficiently with easier debugging, added SQL functionality, and improved support for Java and Ruby Take advantage of improved performance—CockroachDB 20.2 passed TPC-C with 140k warehouses and a maximum throughput of 1.7M transactions per minute (tmpC), representing a 40% performance improvement over the past year Enjoy generally enhanced performance and stability with our new storage engine, Pebble Save time and improve security with new and updated management and security features As a general-purpose, distributed SQL database, CockroachDB is the right choice for any data-intensive application. The updates in 20.2 expand CockroachDB’s support to more workloads and give access to more developers. We’re excited to see what you build! Read on for more details, then head over to the 20.2 Docs for a full list of what’s new. Note that all features mentioned in this blog post are available for free in our open-source option. If you want to try out CockroachDB 20.2 yourself, you can download the release here or try it for free on CockroachDB Dedicated. #### Build more with CockroachDB 20.2 In 20.2, we focused on giving developers more tools, so they can build more types of applications and realize the full potential of those applications. ##### Spatial data types and indexing in CockroachDB Spatial data powers some of the world’s most innovative apps and services, letting you answer questions like, “Where’s the nearest gas station?” and “How long will it take for my ride-sharing vehicle to arrive?” and even, “Where can I catch a Pokémon?” The only problem is, this data has been locked away in brittle legacy or separate specialized databases, making it difficult for developers to support large datasets in the cloud. With 20.2, we give spatial data the same first-class treatment as other data types, bringing it into the cloud age and making it easier to develop applications that use it. CockroachDB is the first SQL database to build this functionality from the ground up for a distributed environment. This means you can now effortlessly scale your spatial data and have the confidence it will survive outages. And you can serve all your customers with fast, always-on experiences no matter where they are on the globe. 
CockroachDB now supports the following, all of which are open-source, available for free, and accessible with PostGIS-compatible SQL: External formats (GeoJSON, Well-known Text (WKT), and Well-known Bytes (WKB)) Common spatial shapes (e.g., line, polygon, geometry collections) ##### New transactions and sessions pages in the DB Console Our DB Console (formerly Admin UI) displays metrics like SQL performance, network traffic, and storage capacity and is critical for troubleshooting and debugging. CockroachDB 20.2 adds two new pages to help developers introspect and understand query performance: Sessions Page: See live database sessions, and cancel them easily from the DB Console. For a given session, you can see whether there’s a live transaction, which statements are currently running, and how long sessions have been running. Transactions Page: See historical SQL transactions and the statements that comprise them, so you can better understand application performance. This is in addition to the existing statements page, which lets you troubleshoot individual statements. ##### Additional SQL functionality in CockroachDB 20.2 CockroachDB is wire compatible with PostgreSQL and delivers standard SQL syntax, so you can use our database as your next generation relational store. In 20.2, we’ve improved our SQL capabilities adding: User-Defined Schemas: Structure your data hierarchy with schemas, which are commonly used in relational databases including PostgreSQL. This update makes CockroachDB more familiar for developers, more compatible with PostgreSQL applications and tools, and more flexible in its support for different data isolation patterns such as microservices. Partial Indexes: Index only the subset of rows needed for fast reads. More precise indexing reduces the amount of data stored by your indexes and therefore the performance impact on writes to data that does not need to be indexed. Materialized Views: Reduce costs for frequently-run queries by caching query results in-memory and only updating when necessary. Enumerated types (ENUMs): With this popular data type, you can restrict inputs to a defined set of values like a drop-down list. Improved performance of Foreign Keys: As a crucial component of relational databases, foreign keys protect data integrity by creating references between two tables to ensure the entry into one table is a valid entry into the other. In 20.2, performance improvements in foreign keys will let more customers use them. ##### Better support for Java and Ruby in CockroachDB 20.2 CockroachDB supports a variety of popular data access tools, including ORMs, making it easier to develop in your preferred programming language. Specifically, for 20.2 we improved support for Java by adding better compatibility with Hibernate, MyBatis, and Spring Data JDBC; and Ruby by adding compatibility with Active Record. We also built out an adaptor for the Go data access layer upper/db. Many thanks to all the community developers who collaborated with us on these projects. Don’t hesitate to let us know in Slack if there’s a tool you wish CockroachDB supported, or if you’d like to collaborate on building out support! ##### Improved database performance in CockroachDB 20.2 As with every release we are committed to constantly improving CockroachDB’s performance and we’ve made significant advances with 20.2. 
- **TPC-C: CockroachDB passes 140,000 warehouses at 1.7M transactions / minute**: TPC-C is the industry standard transactional database benchmark, simulating an e-commerce environment. We’ve written a lot about TPC-C in the past as we think it is the best measure of OLTP database workloads. CockroachDB 20.2 passed TPC-C with a maximum volume of 140k warehouses (previously we reported 100K) and a maximum throughput of 1.7M transactions per minute (tpmC), which represents a 40% performance improvement over the past year.
- **TPC-H: CockroachDB decreased query latencies on 20 of 22 queries**: We also ran TPC-H, which extends our benchmarking work with complex analytic queries. While CockroachDB is primarily a transaction-oriented database, it can also perform complex joins and aggregations that are best measured through a benchmark like TPC-H. On the TPC-H benchmark, we saw a decrease in query latency for 20 out of the 22 queries, with query 9 latency improving by 80x.

#### Deploy easier with CockroachDB 20.2

Your database should make you more efficient, not slow you down, and 20.2 introduces updates to let you more seamlessly deploy and manage CockroachDB—both with an Enterprise license and for free.

##### Introducing CockroachDB on Kubernetes

CockroachDB is already the easiest database to run with Kubernetes—indeed, it is the only database architected and built from the ground up to deliver on the core distributed principles of atomicity, scale and survival. This means you can manage your database in Kubernetes, rather than alongside it. And hundreds of our customers are doing just that.

Today we’re introducing CockroachDB on Kubernetes, a version of our distributed database that packages up CockroachDB with our brand new, open-source Kubernetes Operator. We’ve learned a whole lot about Kubernetes over the past few years by using it for our own database-as-a-service, CockroachDB Dedicated, and we’ve packaged many of these learnings into an open-source Kubernetes Operator. This offering makes CockroachDB even easier to deploy on Kubernetes. With CockroachDB on Kubernetes, you get a truly cloud-native database plus automated management and best practices with our new Operator:

- **Deployment**: Deploy with an operator that handles cluster securing (certs) and configuration (persistent volume size, number of CockroachDB nodes, and more).
- **Management**: Simply scale your cluster up and down on pods in Kubernetes without any manual manipulation of the data. Add a node (or remove a node) by spinning it up in a pod and the database will rebalance the data for you.
- **Rolling upgrades**: Execute rolling updates according to CockroachDB Dedicated’s best practices to perform upgrades and apply security patches. And the database naturally handles online schema modifications as well, even for primary keys, so you can avoid any downtime.
- **Resilience**: Pods are ephemeral, but databases (nodes) are not; however, with CockroachDB, we use our core survivability capability combined with StatefulSets to elegantly recover from any pod failure.

##### Basic distributed Backup and Restore are now in CockroachDB Core

We want you to be able to build scalable production applications on our community option, CockroachDB Core. And with each release, we carefully review all of our capabilities to determine if any existing or new features should be placed into Core. We’ve outlined a set of guidelines to help us make these determinations and it seems we increasingly err on the side of Core these days.
In our last major release (20.1), we added Role-Based Access Control (RBAC) to CockroachDB Core, giving community users more control over security. In this release, we’ve already spoken to the new Spatial capabilities that have been added to Core, and we’ve also added more advanced backup/restore capabilities to Core, including BACKUP, RESTORE, and EXPORT. We’ve been delighted to see CockroachDB Core clusters grow to support terabytes of data, and we recognize that scalable, distributed backups are crucial for these types of production applications. We hope these additions will let our community users achieve both effortless scale and peace of mind in production, with rock-solid disaster recovery plans:

- BACKUP: Captures native binary data with very high reliability and reproducibility, writes to a number of different storage options such as AWS S3, Google Cloud Storage, NFS, or HTTP storage, and distributes work across the nodes to maximize performance.
- RESTORE: Restores cluster databases or tables from a BACKUP.
- EXPORT: Exports tabular data or the results of arbitrary SELECT statements to CSV files.

More advanced Backup features, such as Incremental Backups, Encrypted Backups, Locality-Aware Backups, and Revision History, remain only in CockroachDB Self-hosted and CockroachDB Dedicated. We’re incredibly grateful for the community feedback that challenges us to make our product better—for every discussion thread, Slack post, and email. Keep them coming!

##### Automated, more seamless management

To make CockroachDB as low-touch as possible, we’ve added more automated management capabilities and general improvements to operations.

- Native scheduled Backups: Schedule backups directly from your CockroachDB cluster, instead of having to run a separate backup scheduler. As they are native to the database, scheduled backups have the same resilience as your cluster.
- Userfile upload: Import data more easily with the new command `cockroach userfile upload`, which lets non-admins load files from their client into a cluster. User-scoped files protect from malicious actors and are a simple way to upload data from a laptop.
- Faster imports: Bulk imports of data into CockroachDB are now significantly faster, so you can minimize time spent waiting for data to load. Imports are accomplished using the SQL command IMPORT, and let you import CSV/TSV files, Postgres dump files, MySQL dump files, and more.

##### Stronger security and compliance in CockroachDB 20.2

With 20.2, we’ve continued to improve CockroachDB’s security and compliance story to meet enterprise requirements and data privacy regulations.

- Certificate Revocation: Revoke a TLS client certificate before its expiration date, helping improve security in the case a certificate is compromised. With this update, CockroachDB now supports Online Certificate Status Protocol (OCSP).
- More granular Role Based Access Control (RBAC): 20.2 introduces a list of new privileges and role options for CockroachDB’s RBAC, meaning finer-grained permissions for database users.

##### CockroachDB’s new storage engine: Pebble

With the goal of continuous improvement of CockroachDB, our team built a new storage engine from scratch, called Pebble. Previously, CockroachDB used RocksDB, and while it has served us well, we saw an opportunity to further enhance CockroachDB with a purpose-built storage engine. Pebble is an open-source key-value store written in Go, and it brings a number of improvements to CockroachDB: Better performance and stability.
Avoids the challenges of traversing the Cgo boundary. Gives us more control over future enhancements tailored for CockroachDB’s needs. Pebble is the default storage engine in CockroachDB 20.2, with the option to enable RocksDB if desired. Learn more about Pebble in this blog post. #### Innovate Faster with CockroachDB 20.2 With the updates in CockroachDB 20.2, we continue to move towards a world where the database automatically handles all rote operations, scale, and resilience. It gives you a flexible platform you don’t have to worry about. It works, and it opens up a world of possibilities. And all that means you can focus your energy and output on developing, creating, and innovating. CockroachDB 20.2 is the database for modern cloud applications—let’s see what you can build! ##### Try CockroachDB 20.2 for Free This blog post covers just a sampling of the updates in 20.2. For a full list, head over to the 20.2 Docs. To try out these features yourself, just download CockroachDB Core or spin up an instant, free cluster on CockroachDB Dedicated, our CockroachDB-as-a-Service offering. Finally, we love feedback and would love to hear from you. Please join our Slack community and connect with us today!
true
true
true
CockroachDB 20.2 includes spatial data types and indexing, improved performance on TPC-C and TPC-H benchmarks, and introduces CockroachDB on Kubernetes. All available in CockroachDB Core.
2024-10-12 00:00:00
2024-04-21 00:00:00
https://images.ctfassets…announcement.jpg
null
cockroachlabs.com
cockroachlabs.com
null
null
6,334,477
http://www.fastcompany.com/3016840/majority-of-americans-want-better-privacy-protection-online
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,135,732
https://github.com/Homebrew/brew/issues/7857
macOS 11 Big Sur compatibility on Apple Silicon · Issue #7857 · Homebrew/brew
Homebrew
# macOS 11 Big Sur compatibility on Apple Silicon #7857

## Comments

@FigBug Please don't ask us for help while you're running an unsupported version of macOS.

Submit PRs to fix things. Almost every issue we have had so far has been already known. We know things aren't working. We need help fixing things not telling us what isn't working.

Yeah, don't let the lack of a big announcement turn you off. I have an M1 Mac, and all I've installed is the ARM-compatible version of HB. Took a while to get some of the complex languages/compilers working, but there is a TON working now. Don't hold back! BTW, it would be a good idea to make an announcement, esp. if you can link to a dynamic list of apps/formulae/bottles working on M1/ARM. That would reassure a lot of people, I think.

@fxcoudert @jimtut - oh I know support is good! Been using it for since M1 launch. Just at some point its worth publishing this is ready, and removing the warnings, many ppl still believe its not working on M1.

The warnings were removed a in brew 2.7.1

We're working on an announcement.

One more question.
I would like to know if My real interest is if in the medium term I will have to update configurations by changing library and binary paths. | From the installation docs: Not sure if Intel would ever be changed but looks like for Apple silicon it will remain | | Thank you very much for your answers @mvllow and @fxcoudert | I installed a bunch of stuff into | | I can not install kafka on my M1 mac by using | It's possible that | I try | Looks like it doesn't work on ARM then. You'll need to check with the | @dasNavy As suggested in the error message Homebrew on ARM should be installed in | Not entirely sure what "bottle block" refers to. Is there a way to get that piece of information for all installed formulas? | `brew unbottled --tag=arm64_big_sur $(brew list --formula)` `brew unbottled --tag=arm64_big_sur $(HOMEBREW_NO_AUTO_UPDATE=1 brew bundle list --formula)` | To understand the output: IIUC | It means all of its dependencies (if it has any) have been bottled, but it hasn't been bottled itself. | I don't know whether this is the best place to post this, but after I upgraded to a MBP 13" M1, SwiftGen just dies with a | May be best for the discussions area. Some similar discussions: https://github.com/Homebrew/discussions/search?q=killed&type=discussions | clauicommented## Latest news on native ARM compatibility (2020-12-26) We currently have 3168 formulas bottled for Apple Silicon in homebrew-core. At this point, the table below is probably not really relevant anymore (except for historical reference) and the best way to know if a formula is working is “does it have an ARM bottle” (a`:arm64_big_sur` line in the bottle block).That means Apple Silicon bottles are coming soon. Bottle-building may start next week, which is a few weeks earlier than we thought. Expect bottling to drag along. It definitely won’t be finished in 2020. Homebrew remains unsupportedon Apple Silicon, and will become supported once enough bottles are built and once everything feels stable enough.Major blockerssection for a quick overview.`mkmf` in`Ruby.framework` resolved (FB7836181).All `brew` commands that depend on that gem, e. g.`brew audit` , seem to work now.code signingnow resolved thanks to @fxcoudert, @mistydemeo and others.won’t be any supportfor native ARM Homebrew installationsfor months to come.See both macOS 11 Big Sur compatibility on Apple Silicon #7857 (comment) about CI infrastructure and macOS 11 Big Sur compatibility on Apple Silicon #7857 (comment) about GCC for details.(once support arrives). See macOS 11 Big Sur compatibility on Apple Silicon #7857 (comment) for details.`/opt/homebrew` ## A detailed description of the proposed feature This is an overview of compatibility issues and work items related to native ARM Homebrew installations on macOS 11.0 (Big Sur). Homebrew doesn’t support it right now but we need to track and triage those items nonetheless. ## The motivation for the feature macOS 11.0 (Big Sur) has been released to the public, and our goal is for Homebrew to support it. ## How the feature would be relevant to at least 90% of Homebrew users In the long run, more than 90 % of Homebrew (macOS) users are going to run Apple Silicon hardware. ## What alternatives to the feature have been considered No alternatives. 
## Major blockers `arm64_big_sur` bottles## Status of core formulae 1on 11.0 `ack` `adns` `adwaita-icon-theme` `aircrack-ng` `ansible` `ant` `openjdk` works`aom` `apache-spark` `apr-util` `apr` `argon2` `arpack` `asciidoc` `asdf` `aspell` `atk` `augeas` `autoconf` `autojump` `automake` `aws-elasticbeanstalk` `aws-iam-authenticator` `go` works`awscli` `distutils.errors.DistutilsClassError` , see logs.Possibly related to setuptools: pypa/setuptools#2231 `azure-cli` `bash-completion` `bash` `make` says,`redefinition of 'sys_siglist' with a different type: 'char *[32]' vs 'const char *const [32]'` . Logs`bat` `rust` works`bazel` `openjdk@11` works`bdw-gc` `berkeley-db` `binutils` `bison` `blueutil` `boost` `brotli` `c-ares` `cabal-install` `ghc` works`cairo` `cargo-c` `rust` prereleases; will work when a stable Rust with Apple Silicon support ships`carthage` `cask` `emacs` works`ccache` Used to work, no longer builds. Logs Says: `ceres-solver` `certbot` `cfitsio` `cgal` `qt` works`circleci` `go` works`clang-format` `cloc` `cmake` `cocoapods` `Unrecognized Mach-O load command: 0x80000034` in`ffi_c.bundle` `colordiff` `composer` `consul` `go` works`coreutils` `cscope` `ctags` `cunit` `curl` `curl-openssl` `cython` `dav1d` `daemontools` `deno` `llvm` and`rust` work`dep` `go` works`dialog` `direnv` `go` works`dnsmasq` `docbook-xsl` `docbook` `docker` `go` works`docker-completion` `docker-machine` `go` works`doctl` `go` works`dos2unix` `doxygen` `duti` `eigen` `elasticsearch` `gradle` and`openjdk` work`elixir` `erlang` works`emacs` `gnutls` works`epsilon` `epstool` `ghostscript` works`erlang` Same with erlang/otp#2687. `exiftool` `expat` `fastlane` `fd` `rust` works`ffmpeg` `gnutls` ,`libbluray` and several other dependencies work`fftw` `gcc` and`open-mpi` work`fig2dev` `ghostscript` and`netpbm` work`figlet` `findutils` `fish` `flac` `fltk` `fontconfig` `fontforge` `freetds` `freetype` `freexl` `implicitly declaring library function 'printf'` Logs`frei0r` `fribidi` `fswatch` `fzf` `gawk` `gcal` `gcc` @iains has some work in progress on https://github.com/iains/gcc-darwin-arm64 to port the GCC backend to Apple Silicon. Mind that Apple Silicon support is going to require GCC 11 even in the best case. The first stable release of GCC 11 may come out in mid-2021 or later. If you absolutely require a stable GCC, or any formula that depends on it, you may want to hold off your Apple Silicon Mac purchase decisions until it’s clear if or when GCC will support it.For limited testing on Apple Silicon, Homebrew mayconsider shipping an unstable GCC 11 but that’s yet to be decided.`gdal` `expat` ,`freexl` ,`geos` ,`hdf5` and a dozen of other dependencies work`gdbm` `gdb` `gdk-pixbuf` `gd` `geckodriver` `geos` `BasicSegmentString` in`inlines.o` vs.`libnoding.a` . Logs`gettext` `gflags` `ghc` `[email protected]` `ghostscript` `giflib` `git` `Undefined symbols for architecture arm64` . Possibly related to`libintl` and`pcre2` . Logs`git-flow` `git-gui` `git-lfs` `gitlab-runner` `gl2ps` `glew` `glib-networking` `glib` `glog` `glpk` `gmp` `gnu-getopt` `gnu-sed` `gnu-tar` `gnupg` `gnutls` works`gnuplot` `gnutls` `gobject-introspection` `go` Bootstrapped `go` (x86_64) is killed at build time. 
LogsRe-check when upstream 1.16 is released `gpatch` `gpgme` `gradle` `openjdk` works`grafana` `graphicsmagick` `graphite2` `graphviz` `gts` works`grep` `groonga` `groovy` `grpc` `gsettings-desktop-schemas` `gsl` `gst-plugins-bad` `gstreamer` `gtk+3` `gtk+` `gtk-mac-integration` `gts` `netpbm` works`guile` `harfbuzz` `hdf5` `gcc` works`helm` `go` works`helm@2` `glide` and`go` work`hicolor-icon-theme` `highlight` `htop` `httpd` `httpie` `hub` `go` works`hugo` `go` works`hwloc` `icu4c` `ideviceinstaller` `ilmbase` `imagemagick@6` `imagemagick` `ghostscript` ,`libheif` and`libomp` work`inetutils` `ios-deploy` `ios-webkit-debug-proxy` `iperf3` `ipython` `isl` `itstool` `jansson` `jasper` `jemalloc` `jenkins` `openjdk@11` works`jenkins-lts` `openjdk@11` works`jenv` `jmeter` `jpeg` `jq` `json-c` `jupyterlab` `pandoc` works`kafka` `openjdk` (or some other form of Java) and`zookeeper` work`kops` `kotlin` `openjdk` (or some other form of Java) works`krb5` `kubectx` `kubernetes-cli` `go` works`kustomize` `lame` `ldns` `leptonica` `libarchive` `libassuan` `libass` `libb2` `libbluray` `openjdk` (or some other form of Java) works`libcbor` `libcerf` `libcroco` `libdap` `libde265` `libepoxy` `libevent` `libev` `libexif` `libffi` `libfido2` `libgcrypt` `libgeotiff` `libgit2` `libgpg-error` `libheif` `libde265` works`libiconv` `libidn2` `libidn` `libilbc` `libimobiledevice` `libksba` `liblqr` `libmagic` `libmaxminddb` `libmetalink` `libmpc` `libnet` `libogg` `libomp` `make install` fails while trying to make sense of x86_64 assembly for Linux. Logs`libp11` `libplist` `libpng` `libpq` `libpsl` `librdkafka` `libressl` `librsvg` `libsamplerate` `libscrypt` `libsmi` `libsndfile` `libsodium` `libsoup` `libsoxr` `libspatialite` `libspiro` `libssh` `libssh2` `libtasn1` `libtermkey` `libtiff` `libtool` `libuninameslist` `libunistring` `libusb-compat` `libusbmuxd` `libusb` `libuv` `libvidstab` `libvirt` `libvorbis` `libvpx` `libvterm` `libwebsockets` `libxml2` `libxslt` `libyaml` `libzip` `little-cms2` `llvm` `HEAD` does and 11.0.0 will be compatible.`lua` `[email protected]` `luajit` `luarocks` `lynx` `lz4` `lzo` `macvim` `mad` `/bin/ksh ./config.sub -apple-darwin20.0.0 failed` Logs`make` `mariadb` `groonga` works`mas` `maven` `openjdk` works`mbedtls` `mcrypt` `mecab` `mecab-ipadic` `memcached` `mercurial` `meson` `metis` `midnight-commander` `minikube` `minizip` `mitmproxy` `mkcert` `mkvtoolnix` `mono` `mosh` `mpfr` `mpv` `msgpack` `mtr` `mujs` `mutt` `mysql` `mysqld_safe` fails with`syntax error near unexpected token 'then'` in line 831.`[email protected]` `make` errors out after building the target`event_extra` . Logs`[email protected]` `mysqld_safe` fails:`syntax error near unexpected token 'then'` in line 804.`mysql-client` `nano` `nasm` `ncdu` `ncurses` `neofetch` `neovim` `netcdf` `netpbm` `subversion` works`nettle` `nghttp2` `nginx` `ninja` `nmap` `node` Patched for now. See also nodejs/node#34043 and nodejs/TSC#886 for upstream progress. `node@10` `node@12` `node-build` `nodebrew` `npth` `nspr` `nss` `softokn3` . Logs`ntfs-3g` `numpy` `nvm` `ocaml` 4.10 backport in progress, see ocaml/ocaml#10026. 4.10 formula-patches PR: Homebrew/formula-patches#318 `octave` `oniguruma` `opam` `open-mpi` `gcc` works`openblas` `openconnect` `opencore-amr` `opencv` `openexr` `openjdk` `openjdk@11` `openjpeg` `openldap` `openssh` `openssl` aka`[email protected]` Patched for now. Works well enough until the upstream fix is released. 
`openvpn` `opusfile` `opus` `orc` `p11-kit` `p7zip` `packer` `go` works`pandoc` `cabal-install` and`ghc` work`pango` `parallel` `pcre2` `pcre` `perl` `[email protected]` `php` . Might want to triage as 🚫.`[email protected]` `php` . Might want to triage as 🚫.`php` `pinentry` `pipenv` `pixman` `pkcs11-helper` `pkg-config` `plantuml` `poppler` `nss` and`qt` work`popt` `portaudio` `postgis` `gdal` ,`geos` ,`gpp` and`sfcgal` work`postgresql` `[email protected]` `postgresql@10` `postgresql@11` `pre-commit` `proj` `protobuf` `protobuf-c` `pstoedit` `pstree` `pulumi` `putty` `py3cairo` `pyenv` `pyenv-realpath.dylib` as a builtin. Log`pyenv-virtualenv` `pygobject3` `pyqt` `[email protected]` `[email protected]` Patched for now but `brew test` fails.Re-check after Homebrew/homebrew-core#64872 is merged. `python` aka`[email protected]` `brew test` currently fails.Re-check after Homebrew/homebrew-core#64869 is merged. `qemu` `qhull` `qrupdate` `qt` `find_sdk.py` late in the build. (logs, full make log)`rabbitmq` `erlang` works`rav1e` `cargo-c` and`rust` work`rbenv` `rclone` `readline` `redis` `rename` `ripgrep` `rsync` `rtmpdump` `rubberband` `ruby-build` `[email protected]` `ruby` `rust` `rustup-init` `s-lang` `s3cmd` `sbcl` `sbt` `scala` `scrcpy` `screenresolution` `sdl2` `sdl` `sfcgal` `cgal` works`shared-mime-info` `shellcheck` `cabal-install` ,`[email protected]` and`pandoc` work`sip` `skaffold` `snappy` `socat` `source-highlight` `sox` `mad` works`spandsp` `speedtest-cli` `speex` `sphinx-doc` `sqlite` `sqlmap` `srt` `'GLES/gl.h' file not found` during`make install` . Logs`ssh-copy-id` `sshfs` `sshpass` `sshuttle` `starship` `stoken` `subversion` `brew test` fails. Logs.`suite-sparse` `sundials` `swagger-codegen` `swiftformat` `swiftlint` `swig` `szip` `tbb` `tcl-tk` `telnetd` `telnet` `terraform` `go` works`terragrunt` `tesseract` `texinfo` `tfenv` `tflint` `thefuck` `theora` `the_silver_searcher` `tidy-html5` `tig` `tmux` `tomcat` `tor` `tree` `uchardet` `unar` `unbound` `unibilium` `unixodbc` `unrar` `utf8proc` `v8` `vala` `graphviz` works`valgrind` `vapoursynth` `vault` `vde` `vim` `vips` `watchman` `watch` `webp` `wget` `wimlib` `winetricks` `wireshark` `wxmac` `x264` `x265` `xcodegen` Re-check with upstream version > 2.17.0 once released. `xerces-c` `xmlto` `xvid` `xxhash` `xz` `yara` `yarn` `yasm` `youtube-dl` `yq` `go` works`zeromq` `zimg` `zlib` `zookeeper` `ant` works`zsh` `zsh-autosuggestions` `zsh-completions` `zsh-syntax-highlighting` `zstd` ## Source 1ForWorks on 11.0, the key is:`brew install -s` succeeds on Apple Silicon. The software works well enough natively.`depends_on :arch => [:x86_64, :build]` . The software works well enough on Rosetta.Commentsfield.`depends_on :arch => :x86_64` . The software has been deemed to work on Intel only (for now).The text was updated successfully, but these errors were encountered:
true
true
true
Latest news on native ARM compatibility (2020-12-26) We currently have 3168 formulas bottled for Apple Silicon in homebrew-core. At this point, the table below is probably not really relevant anymo...
2024-10-12 00:00:00
2020-06-30 00:00:00
https://opengraph.githubassets.com/c788b7d409a5fdeaab5e9c8f033a9437cade46d8b1493d165dcdfb5a89c05ef8/Homebrew/brew/issues/7857
object
github.com
GitHub
null
null
39,173,633
https://kobzol.github.io/rust/2024/01/28/process-spawning-performance-in-rust.html
Process spawning performance in Rust
Kobzol's blog
# Process spawning performance in Rust

As part of my PhD studies, I’m working on a distributed task runtime called HyperQueue. Its goal is to provide an ergonomic and efficient way to execute task graphs on High-Performance Computing (HPC) distributed clusters, and one of its duties is to be able to spawn a large number of Linux processes efficiently. HyperQueue is of course written in Rust1, and it uses the standard library’s `Command` API to spawn processes2. When I was benchmarking how quickly it can spawn processes on an HPC cluster, I found a few surprising performance bottlenecks, which I will describe in this post.

Even though most of these bottlenecks are only significant if you literally spawn thousands of processes per second, which is not a very common use-case, I think that it’s still interesting to understand what causes them.

Note that this is a rather complex topic, and I’m not sure if I understand all the implications of the various Linux syscalls that I talk about in this post. If you find something that I got wrong, please let me know! :)

# High-Performance Command spawning

My investigation into Rust process spawning performance on Linux started a few years ago, when I was trying to measure the pure internal overhead of executing a task graph in HyperQueue (HQ). To do that, I needed the executed tasks to be as short as possible, therefore I let them execute an “empty program” (`sleep 0`). My assumption was that since running such a process should be essentially free, most of the benchmarked overhead would be coming from HyperQueue.

While running the benchmarks, I noticed that they behave quite differently on my local laptop and on the HPC cluster that I was using. After a bit of profiling and looking at flamegraphs, I realized that the difference was in process spawning. To find out the cause of it, I moved outside HyperQueue and designed a benchmark that purely measured the performance of spawning a process on Linux in Rust. Basically, I started to benchmark this:

```
Command::new("sleep").arg("0").spawn().unwrap();
```

Notice that here I only benchmark the *spawning* (i.e. starting) of a process. I’m not waiting until the process stops executing3. If you’re interested, the benchmark harness that I have used can be found here.

On my laptop, spawning 10 000 processes takes a little bit under a second, not bad. Let’s see what happens if we do a few benchmarks to compare how long it takes to spawn `N` processes (displayed on the X axis) locally vs on the cluster:

Uh-oh. For 25 thousand processes, it’s ~2.5s locally, but ~20s on the cluster, almost ten times more. That’s not good. But what could cause the difference? The cluster node has `256 GiB` RAM and a `128-core` AMD Zen 2 CPU, so simply put it is *much* more powerful than my local laptop. Well, spawning a process shouldn’t ideally do that much work, but it will definitely perform some syscalls, right? So let’s compare what happens locally vs on the cluster with the venerable `strace` tool (I kept only the interesting parts and removed memory addresses and return values):

```
(local) $ strace ./target/release/spawn
clone3({flags=CLONE_VM|CLONE_VFORK, …}, …) = …
```

```
(cluster) $ strace ./target/release/spawn
socketpair(AF_UNIX, SOCK_SEQPACKET|SOCK_CLOEXEC, 0, [3, 4]) = …
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, … = …
close(4) = …
recvfrom(3, …) = …
```

Okay, it does indeed look a bit different.
A different syscall (`clone3` vs `clone` ) is used, different flags are passed to it, and in addition the program opens up a Unix socket on the cluster for some reason (more on that later). Different syscalls could explain the performance difference, but why are they even different in the first place? We’ll find out soon, but first we’ll need to understand how process spawning works in Linux. # The perils of forking Apparently, there are many ways how to create a process that executes a program on Linux, with various trade-offs. I’m not an expert in this area by any means, but I’ll try to quickly describe my understanding of how it works, because it is needed to explain the performance difference. The traditional way of creating a new process on Linux is called *forking*. The `fork` syscall essentially clones the currently running process (hence its name), which means that it will continue running forward in two copies. `fork` lets you know which copy is the newly created one, so that you can do something different in it, typically execute a new program using `exec` , which replaces the address space of the process with fresh data loaded from some binary program. Back in the days of yore, `fork` used to literally copy the whole address space of the forked process, which was quite wasteful and slow if you wanted to run `exec` immediately after forking, which replaces all its memory anyway. To solve this performance issue, a new syscall called `vfork` was introduced. It’s basically a specialized version of `fork` that expects you to immediately call `exec` after forking, otherwise it results in undefined behavior. Thanks to this assumption, it doesn’t actually copy the memory of the original process, because it expects that you won’t modify it in any way, and thus improves the performance of process spawning. Later, `fork` was changed so that it no longer copied the actual contents of the process memory, and switched to a “copy-on-write” (CoW) technique. This is implemented by copying the page tables of the original process, and marking them as read-only. When a write is later attempted on such a page, it is cloned on-the-fly before being modified (hence the “copy-on-write” term), which makes process memory cloning lazy and avoids doing unnecessary work. Since `fork` is now much more efficient, and there are some issues with `vfork` , it seems that the conventional wisdom is to just use `fork` , although we will shortly see that it is not so simple. So, why have we seen `clone` syscalls, and not `fork` /`vfork` ? That’s just an implementation detail of the kernel. These days, `fork` is implemented in terms of a much more general syscall called `clone` , which can create both threads and processes4, and can also use “vfork mode”, where it doesn’t copy memory of the original process before executing the new program. Armed with this knowledge, let’s compare the syscalls again: ``` (local) clone3({flags=CLONE_VM|CLONE_VFORK, …}, …) = … (cluster) clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, … = … ``` The `clone3` call is essentially a `vfork` , since it uses the `CLONE_VM` and `CLONE_VFORK` flags, while the `clone` call used on the cluster is essentially a `fork` . So, what causes the difference? We’ll have to take a look inside the Rust standard library to find out. The Unix-based `Command::spawn` implementation is relatively complicated (partly because it supports multiple operating systems and platforms at once). 
It does a bunch of stuff that I don’t completely understand, and that would probably warrant a blog post of its own, but there is one peculiar thing that immediately caught my attention - it exercises a different code-path based on the version5 of `glibc` (the C standard library implementation) of your environment:

- If you have at least `glibc 2.24`, it will use a “fast path”, which involves calling the `posix_spawnp` `glibc` function, which then in turn generates the efficient `clone3(CLONE_VM|CLONE_VFORK)` syscall (effectively `vfork`).
- If you have an older `glibc` version (or if you use some complicated spawn parameters), it instead falls back to just calling `fork` directly, followed by `execvp`. In addition, it also creates a UDS (Unix Domain Socket) pair to exchange some information between the original and the forked process6, which explains the `socketpair` and `recvfrom` syscalls that we saw earlier.

When I saw that, I was pretty sure that this is indeed the source of the problem, since I already had my share of troubles with old `glibc` versions on HPC clusters before. Sure enough, my local system has `glibc 2.35`, while the cluster still uses `glibc 2.17` (which is actually the oldest version supported by Rust today).

Good, now that we at least know why different syscalls are being generated, let’s try to find out why their performance is different. After all, shouldn’t `fork` be essentially as fast as `vfork` these days?

# `fork` vs `vfork` (are you gonna need that memory?)

To better understand what is happening, and to make sure that the effect is not Rust-specific, I wrote a very simple C++ program that tries to replicate the process spawning syscalls executed by the Rust standard library by executing the `posix_spawn` function. To select between `fork` vs `vfork` “semantics”, I use the `POSIX_SPAWN_USEVFORK` flag. Let’s see what happens locally vs on the cluster again:

Okay, it’s indeed slower on the cluster, which can be probably attributed to an older kernel (`3.10` vs `5.15`) and/or older `glibc` (`2.17` vs `2.35`), but it’s nowhere as slow as before. So what gives? Is it Rust’s fault? Well, let’s see what happens if we benchmark the spawning of `10000` processes, but this time we will progressively increase the RSS (the amount of allocated memory) of the original process by allocating some bogus memory up-front:

…is it just me or does one of the bars stand out?

First, let’s try to understand why `fork` is still so fast on my laptop, even though it became much slower on the cluster. We can find the answer in the documentation of the `POSIX_SPAWN_USEVFORK` flag:

```
POSIX_SPAWN_USEVFORK
    Since glibc 2.24, this flag has no effect. On older
    implementations, setting this flag forces the fork() step to
    use vfork(2) instead of fork(2). The _GNU_SOURCE feature test
    macro must be defined to obtain the definition of this constant.
```

In other words, if you have at least `glibc 2.24`, this flag is basically a no-op, and all processes created using `posix_spawn` (including those created by Rust’s `Command`) will use the fast `vfork` method by default, making process spawning quite fast. This basically shows that there’s no point in even trying to debug/profile this issue on my local laptop, since with a recent `glibc`, the slow spawning will be basically unreproducible.

Note that Rust doesn’t actually set the `POSIX_SPAWN_USEVFORK` flag manually. It just benefits from the faster spawning by default, as long as you have `glibc 2.24+`.

Now let’s get to the elephant in the room.
Why does it take 5 seconds to spawn 10 000 processes if I have first allocated 1 GiB of memory, but a whopping 25 seconds when I have already allocated 5 GiB? The almost linear scaling pattern should give it away - it’s the “copy-on-write” mechanism of `fork`. While it is true that almost no memory is copied outright, the kernel still has to copy the *page tables* of the previously allocated memory, and mark them as read-only/copy-on-write. This is normally relatively fast, but if you do it ten thousand times per second, and you have a few GiB of memory allocated in your process, it quickly adds up.

Of course, I’m far from being the first one to notice this phenomenon, but it still surprised me just how big of a performance hit it can be. This also explains why this might not show up in trivial programs, but can be an issue in real-world applications (like HyperQueue), because these will typically have a non-trivial amount of memory allocated at the moment when the processes are being spawned.

Just to double-check that I’m on the right track, I also checked a third programming language, and tried to benchmark `subprocess.Popen(["sleep", "0"])` in Python 3:

Sure enough, it’s again much slower on the cluster. And if we peek inside with `strace` again, we’ll find that on the cluster, Python uses `clone` without the `CLONE_VFORK` flag, so essentially the same thing as Rust, while locally it uses the `vfork` syscall directly.

Ok, so at this point I knew that some of the slowdown is obviously caused by the usage of `fork` (and some of it is probably also introduced by the Unix sockets, but I didn’t want to deal with that). I saw that even on the cluster, I can achieve much better performance, but I would need to avoid the `fork` slow path in the Rust standard library and also add the `POSIX_SPAWN_USEVFORK` flag to its `posix_spawnp` call.

By the way, there is also another common solution to this problem, to use a “zygote” process with small RSS, which is forked repeatedly to avoid the page table copying overhead. But it doesn’t seem like it would help here, because in Rust, even the small benchmark program with small RSS was quite slow to `fork` using the slow path approach. Perhaps the overhead lies in the UDS sockets, or something else.

# Should we use `vfork`?

After I learned about the issue and saw that the `POSIX_SPAWN_USEVFORK` flag fixes the performance problem on the cluster, I created an issue about it in the Rust issue tracker7. This led me to investigate how other languages deal with this issue. We already saw that Python uses the slow method with older `glibc`. I found out that Go (which AFAIK actually doesn’t use `glibc` on Linux and basically implements syscalls from scratch) switched to the `vfork` method in `1.9`, which produced some nice wins in production.

However, I was also directed to some sources from people much more knowledgeable about Linux than me, that basically explained that there is a reason why `vfork` wasn’t being used for older `glibc` versions, and that reason is because these old `glibc` versions implemented it in a buggy way. So I decided that it’s probably not a worthwhile effort to push this further and risk the option of introducing arcane `glibc` bugs, and I closed the issue.

As we’ll see later, this wasn’t *the only* bottleneck in process spawning on the cluster, though.
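If you want to reproduce the copy-on-write effect described above yourself, a minimal benchmark sketch could look something like the following. It is only an illustration of the technique, with scaled-down numbers; the `sleep` binary and the exact sizes are assumptions about your system, and on a recent `glibc` (2.24+) you will mostly hit the fast `vfork` path anyway.

```
use std::process::Command;
use std::time::Instant;

fn main() {
    // Allocate (and touch) 1 GiB of bogus memory to inflate the RSS of this
    // process, which gives fork() more page tables to copy for every spawn.
    let bogus = vec![1u8; 1024 * 1024 * 1024];
    assert_eq!(bogus[bogus.len() - 1], 1); // keep the allocation alive

    let start = Instant::now();
    let mut children = Vec::new();
    for _ in 0..1_000 {
        // On an old glibc (< 2.24) this goes through the fork()-based slow path,
        // so each spawn pays for copying the page tables of `bogus`.
        children.push(Command::new("sleep").arg("0").spawn().unwrap());
    }
    println!("spawned 1000 processes in {:?}", start.elapsed());

    // Reap the children so that we do not leave zombie processes behind.
    for mut child in children {
        let _ = child.wait();
    }
}
```

Comparing runs with and without the `bogus` allocation (or with different sizes) should reproduce the scaling pattern from the charts above when the slow path is in use.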
# Aside: modifying the standard library

When I first learned about the issue and saw that the `POSIX_SPAWN_USEVFORK` flag fixes the performance problem on the cluster, I was a bit sad that what would have been essentially a one-line change in C or C++ (since they do not really have any high-level standard library API for spawning processes) would require me to either propose a change deep within, or fork, the Rust standard library (neither of which is trivial).

However, I realized that this line of thinking is misleading. Yes, it would be a one-line change in C or C++, but only because in these languages, I would have to first write the whole process spawning infrastructure myself! Or I would have to use a third-party library, but then I would encounter a similar issue - I would either have to fork it, copy-paste it into my code, or get a (potentially controversial) change merged into it.

I’m actually really glad that it’s so easy to use third-party libraries in Rust and that the standard library allows me to use fundamental functionality (like process spawning) out of the box. But there are trade-offs everywhere - one implementation can never fit all use-cases perfectly, and if you’re interacting with an HPC cluster with a 10-year-old kernel, the probability of not fitting within the intended use-case increases rapidly.

So why couldn’t I just copy-paste the code from the standard library into my own code, modify it and then use the modified version? Well, this would be an option, of course, but the issue is that it would be a lot of work. I need process spawning to be asynchronous, and thus in addition to modifying the single line in the standard library, I would also need to copy-paste the whole of `std::process::Command`, and also `tokio::process::Command`, and who knows what else.

If it were possible to build a custom version of the standard library in an easier way, I could just keep this one-line change on top of the mainline version, and rebuild it as needed, without having to modify all the other Rust code that is built on top of it (like `tokio::process::Command`). And since Rust links to its standard library statically by default, it shouldn’t even cause problems with distributing the final built binary. Hopefully, `build-std` will be able to help with this use-case one day.

As I already stated in the introduction, it’s important to note that the bottleneck that I’m describing here has essentially only shown up in microbenchmarks, and it usually isn’t such a big problem in practice. If it were a larger issue for us, I would seriously consider the option of copy-pasting the code, or writing our own process spawning code from scratch (but that would increase our maintenance burden, so I’d like to avoid that if possible).

# It’s all in the environment

I thought that the `fork` vs `vfork` fiasco was the only issue that I was encountering, but after performing more benchmarks, I found another interesting bottleneck. I was still seeing some unexpected slowdown on the cluster that couldn’t have been explained by the usage of `fork` alone. After another round of investigation, I noticed that on the cluster, there is a relatively large number (~180) of environment variables, which amounted to almost 30 KiB of data. Could this also pose a performance problem? Let’s see what happens if we spawn 10 000 processes again, but this time we progressively increase the number of environment variables set in the process:

Clearly, the number of environment variables has an effect!
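For reference, a benchmark along these lines can be sketched as follows. It is my own reconstruction rather than the actual harness behind the charts, and it assumes the 2021 edition (where `std::env::set_var` is still a safe function) plus made-up variable names. The second measurement, which sets a variable on the spawned `Command`, anticipates the “slow path” discussed just below:

```rust
use std::process::Command;
use std::time::Instant;

// Spawn 10 000 commands produced by `make`, timing only the spawning itself.
fn bench(label: &str, make: impl Fn() -> Command) {
    let start = Instant::now();
    let children: Vec<_> = (0..10_000).map(|_| make().spawn().unwrap()).collect();
    let elapsed = start.elapsed();
    // Reap the children outside of the measured region.
    for mut child in children {
        child.wait().unwrap();
    }
    println!("{label}: {elapsed:?}");
}

fn main() {
    // Inflate the parent's environment, similar to the ~180 variables on the cluster.
    for i in 0..200 {
        std::env::set_var(format!("BOGUS_VAR_{i}"), "x".repeat(100));
    }

    // Fast path: no per-command environment changes.
    bench("no env change", || {
        let mut cmd = Command::new("sleep");
        cmd.arg("0");
        cmd
    });

    // Slow path: a single .env() call forces the child environment to be rebuilt
    // from scratch for every spawned command.
    bench("with .env()  ", || {
        let mut cmd = Command::new("sleep");
        cmd.arg("0");
        cmd.env("EXTRA_VAR", "1");
        cmd
    });
}
```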
As usual, the effect is much bigger on the cluster than on my local laptop. Increasing the number of environment variables from `50` to `250` makes the spawning 50% slower! We will need to take another look inside the Rust standard library to see what happens here.

The chart above actually shows a “fast path”. Since I only execute `Command::new(...).spawn()`, without setting any custom environment variables and without clearing the environment, the stdlib recognizes this and doesn’t actually process the environment variables in any way. It simply passes the `environ` pointer from `glibc` directly to `posix_spawn` or `execvp` (as we already know, depending on your `glibc` version). I’m not sure why the spawning gets so much slower with more environment variables, but I assume that it’s simply caused by the kernel having to copy all these environment variables into the address space of the new process.

What if we don’t go through the fast path? If we set even just a single environment variable for the spawned command, it will no longer be possible to just copy the environment of the original process, and therefore it will need to be built from scratch for each spawned command (the `-set` variants set an environment variable, the `-noset` variants do not):

What does the building of a fresh environment entail? The current environment is combined with the environment variables set on the `Command` being spawned, and the result is stored in a `BTreeMap`. It is then converted to an array of C strings, which is eventually passed as an array of `char*` pointers to `posix_spawnp` or `execvp`. During this process, each environment key and value is copied several times, and the environment list is also sorted (by being inserted into the `BTreeMap`). This contributes to the spawning overhead if you create a lot of processes and your process has hundreds of environment variables.

I tried to take a shot at optimizing the fresh environment preparation, to reduce some of the allocations and also remove the sorting, which (AFAIK) isn’t needed on Linux. I performed some benchmarks using the new version, but they were quite inconclusive, so I won’t even show them here, because I’m not really confident that my change produced a real speed-up. And even if it did, I’m not sure if it’s worth the additional complexity, especially when the bottleneck only shows up when you have a large number of environment variables. I wanted to also try benchmarking it on the cluster, but I would have to build a compiler toolchain that supports glibc `2.17` for that, and I wasn’t motivated enough to do that yet.

# Can we parallelize this?

One of the things that I tried in order to make process spawning faster (both for `fork` and `vfork`) was to parallelize it. While investigating this possibility, I realized a few things:

- `tokio::process::Command::spawn` is actually a blocking function. Since HyperQueue uses a single-threaded `tokio` runtime, it is not possible to truly concurrently spawn multiple processes. I will either need to use the multithreaded `tokio` runtime, or use `spawn_blocking`.
- The documentation of `CLONE_VFORK` (which is the`clone` flag that enables “vfork mode”) claims this:`CLONE_VFORK (since Linux 2.2) If CLONE_VFORK is set, the execution of the calling process is suspended until the child releases its virtual memory resources via a call to execve(2) or _exit(2) (as with vfork(2)).` In other words, it claims that the *whole process*(not just the calling thread) is suspended when a new process is being spawned. If this was indeed true, parallelization probably wouldn’t help that much. However, I did some experiments, and it seems that it indeed just stops the thread that spawns the new process, so this might be a bit misleading. The documentation of`vfork` supports this:`When vfork() is called in a multithreaded process, only the calling thread is suspended until the child terminates or executes a new program.` But I’m not sure if the semantics of `vfork` vs`clone(CLONE_VFORK)` are the same… I didn’t have a lot of energy left to examine this in detail, so I just executed a few benchmarks that spawn 20 000 processes in several asynchronous “modes”: - `single` : Use a single-threaded`tokio` runtime. - `singleblocking` : Use a single-threaded`tokio` runtime, but wrap the`Command::spawn` call in`spawn_blocking` . - `multi-n` : Use a multithreaded`tokio` runtime with`n` worker threads. Here are the results, again on my local laptop vs on the cluster: Locally, parallelizing doesn’t really seem to help, but on the cluster it provides some speedup. An even larger speedup is achieved if there is more (blocking) work to be done per spawn, e.g. when we set an environment variable on the spawned command, and thus we have to go through the dance of building the fresh environment: In HyperQueue, we can’t (and don’t want to) easily switch to a multithreaded runtime, but the `spawn_blocking` approach looks relatively promising. # Bonus: `sleep` vs `/usr/bin/sleep` Since I’m focusing on micro-benchmarks in this blog post, let’s see one more. Do you think that there is any performance difference between spawning a process that executes `sleep` vs spawning a process that executes `/usr/bin/sleep` ? No tricks are being played here, these two paths lead to the same binary. It turns out that there is: The difference is not large, but it is there. Why? When you specify an absolute path, the binary can be executed directly. However, if you just specify a binary name, then the operating system first has to find the actual binary that you want to execute, by iterating through directories in the `PATH` environment variable. For reference, my `PATH` environment variable had `14` entries and `248` bytes in total, so it’s not exactly gargantuan. But again, if you spawn thousands of processes, it all adds up :) # Conclusion I hope that you have learned something new about process spawning on Linux using Rust. Even though I think that the presented bottlenecks won’t cause any issues for the vast majority of Rust programs, if you’ll ever need to spawn a gigantic amount of processes in a short time, perhaps this blog post could serve as a reference on what to watch out for. I’m pretty sure that it would be possible to dive much deeper into this complex topic, but I already spent enough time on it, at least for now. If you have any comments or questions, please let me know on Reddit. - Before we knew Rust, we were developing distributed systems in our lab in C++. Suffice to say… it was not a good idea. 
↩
- Well, it actually uses `tokio::process::Command`, but that’s just an async wrapper over `std::process::Command` anyway. ↩
- Although my benchmark harness does wait for all the spawned processes to end after each benchmark iteration (this waiting is not part of the measured duration), otherwise Linux starts to complain pretty quickly that you’re making too many processes without joining them. ↩
- To the Linux kernel, these are essentially the same thing anyway, just with different memory mappings and other configuration. ↩
- Fun fact: the standard library parses the glibc version every time you try to spawn a process. Luckily, it’s probably just a few instructions. ↩
- I don’t really know why that happens, and I was too lazy to search `git blame` to find the original motivation. It seems to be related to signal safety. Anyway, this blog post is already long enough without delving deeper into this. ↩
- Funnily enough, I basically rediscovered all of what I describe in this blog post by performing many experiments 2.5 years later, rather than just re-reading my old issue… ↩
true
true
true
As part of my PhD studies, I’m working on a distributed task runtime called HyperQueue. Its goal is to provide an ergonomic and efficient way to execute task graphs on High-Performance Computing (HPC) distributed clusters, and one of its duties is to be able to spawn a large amount of Linux processes efficiently. HyperQueue is of course written in Rust1, and it uses the standard library’s Command API to spawn processes2. When I was benchmarking how quickly it can spawn processes on an HPC cluster, I found a few surprising performance bottlenecks, which I will describe in this post. Even though most of these bottlenecks are only significant if you literally spawn thousands of processes per second, which is not a very common use-case, I think that it’s still interesting to understand what causes them. Before we knew Rust, we were developing distributed systems in our lab in C++. Suffice to say… it was not a good idea. ↩ Well, it actually uses tokio::process::Command, but that’s just an async wrapper over std::process::Command anyway. ↩
2024-10-12 00:00:00
2024-01-28 00:00:00
null
article
github.io
Kobzol’s blog
null
null
6,901,990
http://www.theawl.com/2013/12/the-new-spammer-panic
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,728,301
https://restofworld.org/2020/the-life-and-death-of-snet-havanas-alternative-internet/
How young Cubans circumvented embargos to make their own DIY network
Priscila Bellini
Sometime in 2010 or 2011, José Javier Mena Mustelier’s friends invited him to join a *Defense of the Ancients *battle in eastern Havana. His *compadres* recalled a sort of LAN party, at which young people gathered on a local network to play pirated video games together. At the time, getting an internet connection in Cuba looked like a distant dream. The United States’ economic embargo had made it nearly impossible to find routers and other equipment, while the government kept a close watch on the circulation of information. Cables scattered throughout buildings created small, hyperlocal intranets. But they rarely went beyond the neighborhood. Mustelier joined his friend, but the game suffered delays as contestants struggled to stay connected. Back then, a Cuban citizen could legally buy a computer but not network equipment. Internet service was expensive and slow; only around 16% of the island’s population had access to the web in 2011. (Nowadays, monthly use of even the slowest private Wi-Fi connection comes to 120 convertible Cuban pesos a month, nearly four times the average Cuban salary.) As a response, in 2011, a group of more than 100 Havana residents decided to unify their hyperlocal networks into a larger structure. The Havana “street network” (or SNET) would soon become one of the largest such community networks in the world. At its peak, user estimates hovered around 100,000 IP addresses. Isolated from the internet and beyond the government’s control, young Cubans set their own terms on forums, social media platforms, and local websites. During the network’s decade-long golden era, it offered a rare example of citizen and community exchange in a country where the state carefully controls communication, until the state finally took it over. To many users, SNET’s amateur, volunteer intranet provided a better service than the network the Cuban government ultimately replaced it with. Mustelier was part of this effort to bring together groups of computers that were already connected from the beginning. This meant gathering necessary hardware, like longer cables and better routers and servers. To acquire it, SNET’s founders relied on Revolico, Cuba’s version of Craigslist, which runs classified ads on- and offline, as well as on friends and family who traveled abroad. The group would then link the small neighborhood networks, set up servers, and tinker with the equipment. Yenier Medina Chávez, another SNET founding member, told *Rest of World *that they “used $60 equipment for something that would require a $500 machine.” Routers meant for households were made into primary links to the system; 100-meter cables connected houses. Chávez also contacted the devices’ manufacturers. “When we told them the details of what we were doing,” he recalled, “they did not believe us.” SNET in time became a kind of citywide internet, one divided into neighborhoods with sites of all kinds. Some resembled social networking sites like Facebook; others offered copies of Wikipedia and video game platforms, like Steam. Members hacked popular multiplayer games, such as *World of Warcraft* and *Dota*, and ran them on SNET. Artists would release their latest works there, and cinephiles could stream their movies of choice. Users would contribute monthly to a tip jar to cover the costs. Workarounds of this sort have a long-standing tradition in Cuba. 
Since most can’t afford streaming, they rely instead, for example, on external hard drives called *paquetes semanales *(“weekly packages”), on which television shows, albums, and offline versions of entire sites are available for download. Administrators describe SNET as an attempt to “connect the Cuban family.” It came at very low cost to users, while also offering a glimpse of what was available outside the island. This access was limited. Websites for international publications were unreachable, unless someone hosted articles on a connected server. SNET’s moderators would often curate information for thematic forums and sites. To avoid government interference, users were obliged to obey strict ground rules. They were not to discuss religion, politics, or topics that could “destabilize” the Cuban state — including news and controversial public posts. Viewing pornography, posting insults, or attempting to connect SNET to the World Wide Web could lead to a temporary or permanent ban. Ensuring compliance with these rules required hundreds of volunteers, including software developers and gaming enthusiasts. Volunteers were tasked with tracking down users who harassed or leaked private photos of other members. They sometimes took extreme measures, like infecting a harasser’s computer with viruses or even attempting to delete their files from hard drives remotely. The street network felt homemade and small-scale. Although users could choose to be anonymous, they depended on their neighbors to connect their computers physically to the network as well as on local administrators to restore access, in case they lost their passwords. SNET was an open secret in Havana, its routers visible on the city’s avenues. In 2016, the state-run news site* Cubadebate *launched its technology section with an article about it, which members took as a sign the government knew of their endeavors. But the country’s growing access to the internet proved detrimental to SNET. In 2015, the state-run telecommunications giant ETECSA expanded its public Wi-Fi hot spots. Three years later, it provided nationwide 3G mobile internet. In mid-2019, the ministry of communications authorized private wired and wireless connections for local businesses and individuals; 63% of Cubans can now connect to the web, joining platforms like Facebook and Instagram as well as Cuban sites (which are more affordable to access). New laws restrict community-run networks, requiring each constituent to be registered, sanctioned, and overseen. As a result, Cuba’s network has lost its homemade feel and become a state-run institution, where decisions have little to do with LAN parties and *compadres*. Coming up with new sites, launching a tool for beta testing, and moderating forums have become complex bureaucratic processes. Authorities had promised to deliver a revamped nationwide intranet, previously only accessible at public computer centers. But to do so, they needed SNET; members were forced to donate their own equipment in order to expand the state-approved intranet into people’s homes. When the pandemic hit, and the computer centers closed down, 18,000 users could still access the new national network from home, including state-approved games, like Fighting Covid-19, and educational content. But the state-run site remains faulty. Parts of the original platforms have not migrated to official servers — instead languishing in the neighborhoods where they were conceived, as divided as they were before SNET. 
Former members regret the loss of their network: “In a way, the state was the big winner,” said Mustelier.
true
true
true
As Cuba sluggishly got its population online, the shadow internet developed by volunteers provided a lifeline for thousands of people.
2024-10-12 00:00:00
2020-10-08 00:00:00
https://149346090.v2.pre…407-1600x900.jpg
article
restofworld.org
Rest of World
null
null
3,363,600
http://www.youtube.com/watch?v=sc_cGRZNjLA&feature=player_detailpage#t=384s
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,494,738
http://googleonlinesecurity.blogspot.co.uk/2014/03/googles-public-dns-intercepted-in-turkey.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,639,390
https://thecommuterapp.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,536,712
http://science.sciencemag.org/content/360/6387/444.full
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,678,319
https://delta.chat/en/
Delta Chat: Delta Chat, decentralized secure messenger
null
# Delta Chat is a decentralized and secure messenger app 💬 Reliable instant messaging with multi-profile and multi-device support ⚡️ Sign up to secure fast chatmail servers or use classic e-mail servers 🥳 Interactive web apps in chats for gaming and collaboration 🔒 Audited end-to-end encryption safe against network and server attacks 👉 FOSS software, built on Internet Standards, avoiding xkcd927 :) Available on mobile and desktop.
true
true
true
Delta Chat is a decentralized and secure messenger app 💬 Reliable instant messaging with multi-profile and multi-device support ⚡️ Sign up to secure fast chatmail servers or use classic e-mail serv...
2024-10-12 00:00:00
2024-03-25 00:00:00
https://delta.chat/asset…/home/intro1.png
null
delta.chat
delta.chat
null
null
1,361,017
http://fsharpcode.blogspot.com/2010/03/generic-monadic-map-and-join-using.html
Generic Monadic Map and Join using statically resolved type variables
Holoed
    let inline mapM b f m =
        let unit x = (^x: (member Return: ^b -> ^n) b, x)
        let (>>=) m f = (^x: (member Bind: ^m -> (^a -> ^n) -> ^n) b, m, f)
        m >>= (fun x -> unit (f x))

    let inline joinM b m =
        let (>>=) m f = (^x: (member Bind: ^m -> (^n -> ^n) -> ^n) b, m, f)
        m >>= id
true
true
true
let inline mapM b f m = let unit x = (^x: ( member Return: ^b -> ^n) b, x) let (>>=) m f = (^x: ( member Bind: ...
2024-10-12 00:00:00
2010-03-30 00:00:00
null
null
blogspot.com
fsharpcode.blogspot.com
null
null
1,673,231
http://eepurl.com/00V2
StartupDigest: Demo for Free at BizTechDay
null
**Demo for Free at BizTechDay** Do you want to demo your startup alongside Wildfire Interactive, Weebly (YC '07), Flowtown, and Square? BizTechDay San Francisco is looking for primarily B2B startups to demo on October 23. Applications are due by September 15. There will be a total of 10 companies demoing for 5 minutes each during the conference, and it costs $0 to demo your product. Speakers include Naval Ravikant (investor in Twitter), Ben Parr (Mashable) Sue Kwon (CBS Reporter), and more. Also my good friend Yujin (Andreessen Horowitz) will be hosting a panel during the event with the corporate development groups of Yahoo!, Facebook, and Intuit so the crowd will be an interesting one. It's not every day you can reach the media, corporate budgets, investors, and other great entrepreneurs all in one place for free. **Apply to demo at BizTechDay** **here** Share this with a friend: ** Silicon Valley StartupDigest is curated by:** Chris McCann @Mccannatron, Co-Founder of StartupDigest **The Best Startup Events This Week** Sept 8 - Scrappy Startup Happy Hour (Free) Sept 9 - True Ventures Coffee Thursdays (Free) Sept 10 - VCs vs Super Angels (Free) Sept 11 - Teens in Tech miniConference ($40) Sept 11 - Facebook Developer Garage - Girls in Tech (Free) Sept 11 - Hacking 4 Health (Free) Top Upcoming Events **Scrappy Startup Happy Hour** **(Free)** **When:** Wednesday, September 8th @6pm **Where**: Kaama Lounge, San Jose Happy hour and tech demos. **True Ventures Coffee Thursdays** **(Free)** **When: **Thursday, September 9th @9am **Where:** Pier 38, San Francisco Get coffee with True Ventures. **VCs vs Super Angels** **(Free)** **When:** Friday, September 10th @8am **Where:** Orrick, Menlo Park Come watch Dave McClure of 500startups battle David Hornik of August Capital! **Teens in Tech miniConference**** ($40)** **When: **Saturday, September 11th @9am **Where:** KickLabs, San Francisco Meet some incredible teenagers starting/working with startups. **Facebook Developer Garage - Girls in Tech** **(Free)** **When:** Saturday, September 11th @12pm **Where:** Facebook, Palo Alto A hackathon with Girls in Tech, need I say more? **Hacking 4 Health** **(Free)** **When**: Saturday, September 11th @10am **Where**: HealthTap, Palo Alto Join and change the health industry. **The Best Upcoming Startup Events** Sept 24 - Smart Phone Games Summit (15% off with code SPSD) Sept 25 - Application deadline for Startup School Sept 30 - Mobilize 2010 by GigaOM (20% off with code StartupDigest) __________________________________ | StartupDigest serves 50+ startup communities around the world. To **subscribe** click here. To **unsubscribe** click here. | To subscribeclick here.To unsubscribeclick here.
true
true
true
null
2024-10-12 00:00:00
2014-09-13 00:00:00
null
null
null
null
null
null
4,933,188
http://spectrum.ieee.org/automaton/robotics/diy/2012-robot-gift-guide#.UM9TSFFlzh8.hackernews
2012 Robot Gift Guide
Evan Ackerman
Whatever holiday you celebrate, this season is all about ~~ family ~~ gettin' stuff from other people. Unfortunately, we don't usually get to decide what stuff that is. So if you want a robot, you have two options: option one is to send this gift guide to everyone you know with your top three choices highlighted and dire threats upon receipt of anything else. Option two is to use the guide yourself, buy whatever robot(s) you want for the least appropriate member of your immediate family, and then offer to "help" them with it. In either case, we've put together this list of twelve robots that any serious (or not so) roboticist would love to add to their collection. Enjoy!

**AR Drone 2.0** We've been playing around with Parrot's latest version of the AR Drone. It's *stupendously* impressive. It's a cinch to fly right out of the box both indoors and out, and you don't need experience with robotics (or flying R/C helicopters) to immediately enjoy it without destroying anything or killing anyone. There are plenty of safety features to make sure you don't lose the drone, it self-calibrates, self-hovers, and will even compensate for crosswinds. The AR Drone is more than a fancy toy, however. We've been seeing it show up as part of research projects more and more often, as researchers realize that they can get a cheap, reliable drone with plenty of sensors right off the shelf for what's effectively dirt cheap. Plus, you can take advantage of the AR Drone's inherent hackability as well, thanks to some wonderful AR Drone + ROS tutorials by Mike Hamer. No experience with AR Drones or ROS is necessary to begin controlling the robot directly from your computer, and it's free. How can you possibly pass that up? **Price: $270**, from Amazon. Also available locally in stores like Brookstone.

As robots go, Sphero is probably the simplest one on this list. It connects to your smartphone or tablet via Bluetooth, and a tilt or two will send it rolling all over the place. It's fun to play with, but it can be a lot more than just a toy: a full API and mobile SDK for both iOS and Android enables you to get the little robotic ball to do whatever it is you've always fantasized about little robotic balls doing. **Price: $99**, from Brookstone. Also available locally in stores like Target for a bit more.

**TurtleBot 2** For the serious roboticist, nothing beats a TurtleBot 2. With a sensor-laden mobile base, a laptop for a brain, a Kinect, and backed by the full power of ROS, the next-gen TurtleBot offers a platform that you can program to utilize many of the software innovations currently under development at universities with PR2s at their disposal. And it's even affordable, mostly. Check out our TurtleBot 2 preview for more information. **Price: $1,600**, from Clearpath Robotics or I <3 Engineering.

**Neato XV-11** As much as we like the Roomba, if you want to *really* impress someone with the gift of a robot vacuum, definitely take a close look at the Neato XV-11 or XV-21. The Neato performs at least as well as the Roomba for typical vacuuming (read our comparison for more head-to-head details), it's not as expensive as many Roombas, and it's got a frikkin' LASER SCANNER in it. The XV-21 is designed for pet owners, with an upgraded brush and a better air filter. **Price: $370**, from Overstock.com.

**Thymio II** It's always nice when someone decides to forego the customary profit margins and distribution costs and whatnot and just builds the best robot kit for as cheap as possible.
Thymio is Swiss in origin, from EPFL, and it's really got a lot of cool stuff going on for the price. Beginners can program it through a graphical interface, and for experts, everything (hardware and software) is open source and hackable. **Price: $200**, from TechyKids. If you can wait until February, it's $140.

**Mint** Robot vacuums are great, but if you have hardwood floors, you might rather have a robot Swiffer instead. Mint is small, clever, and nearly silent, and like the Neato, is able to localize itself thanks to a beacon that projects an invisible constellation of lights onto your ceiling. It cleans using wet or dry microfiber pads, and as we found, does a remarkably good job for something that appears so simple. **Price: $199**, from iRobot, for the original Mint. Also available locally in stores like Bed Bath & Beyond.

**3pi Robot Kit** Line-following and maze-solving are some of the most basic, and most fun, robotics competitions, and there's no better way to get into robotics than to throw yourself into a competitive environment. Pololu's 3pi kit is an easy way to get started (especially if you want to learn C at the same time). **Price: $100**, from Pololu Robotics.

**Scooba 230** iRobot makes big, Roomba-sized Scoobas, but we're fans of the cute little Scooba 230. It's designed for bathrooms, but will capably clean just about any hard surface, squirting out water, scrubbing, and then vacuuming the water back up again. It's tiny, it's determined, it does a good job, and it means you don't have to scrub the bathroom floor anymore. **Price: $280**, from iRobot.

**LEGO Mindstorms NXT** LEGO Mindstorms has been a great entry point for roboticists for years now, which is why we keep on recommending it. Just being LEGO makes it immediately accessible to kids with previous LEGO experience, and a drag and drop programming interface (that gets as complex as you want) is easy and fun to use. Mindstorms is part of a lot of educational programs already, which means that there's a lot of support out there, as well as a huge number of peripherals that you can use to grow your set. **Price: $280**, from LEGO. Also available locally in stores like Target.

**PhantomX Hexapod** We love how freakily lifelike this hexapod moves. We also love the fact that it's big, and fast, and strong. You can use it as a toy, if you like, or you can use it to destroy the world. Up to you. **Price: $1,200**, from Trossen Robotics.

**Hummingbird Robot Kit** Hummingbird comes from a CMU spinoff, and the objective of the kit is to make building (like, from scratch) a robot as cheap and easy as possible. You get a *lot* of stuff to work with, including a controller board, LEDs, motors, as well as light, temperature, sound, and distance sensors, and more. No structural components are included, so instead, you're encouraged to use anything you can find lying around, like cardboard. **Price: $200**, from Hummingbird Robotics.

If you *really* want to splurge this year, Nao's five-figure price tag ought to do it. One of the most advanced hobby-grade (or barely hobby-grade) humanoid robots available, Nao is fully mobile, with multi-fingered hands capable of human-like grasping and a ton of sensors. Like the AR Drone, Nao has been adopted by the research community as an affordable humanoid platform, meaning that all kinds of exciting things are being done with it, and you can be a little part of that. If, of course, you can afford it. **Price: $16,000**, from RobotShop. $5,000 off if you buy five!
**Robots for iPad App** Last but not least, if you'd like to send a cool robot gift to your friends (but you can't afford buying each a $16,000 Nao), here's a suggestion for those of them who have iPads: Robots for iPad is a fun app featuring real-world robots from around the world. It's now only $1.99 on the App Store. It's an awesome app, and not just because, *ahem*, *IEEE Spectrum* is its creator; it has 126 robots from 19 countries, with hundreds of interactives, images and videos, plus interviews with roboticists, a timeline of robotics, a glossary of robot and AI terms, rankings of robots, and much more. You can learn more about it here, and you can send it to your friends as a gift by using the "Gift This App" option in the App Store. **Price: $1.99**, from Apple's App Store. Evan Ackerman is a senior editor at *IEEE Spectrum*. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
true
true
true
We hope you've been saving up all year, because we have thousands of dollars of robots for you to blow all your hard-earned money on
2024-10-12 00:00:00
2012-12-17 00:00:00
https://spectrum.ieee.or…%2C122%2C0%2C122
article
ieee.org
IEEE Spectrum
null
null