Terminal tools that make my life easier - spudlyo
https://jonathanh.co.uk/blog/tools-that-make-my-life-easier.html
======
dontdieych
Upvote for fzf, ripgrep, fd, bat.
\- Shell history search with fzf (Ctrl+R) is the most productive use case, in
addition to shell autosuggestions (fish shell).
------
bengale
Bat sounds handy. I’ll have to give that a go tomorrow.
| {
"pile_set_name": "HackerNews"
} |
All Activities Monitored - dangerman
https://www.thenewatlantis.com/publications/all-activities-monitored
======
ohazi
So this surveillance technology started in the military to track insurgents,
expanded to law enforcement and government to track suspects and the general
public, and is now expanding to corporations to monitor customers.
This pattern makes it seem like the cat is out of the bag, and that pervasive
expansion and abuse of this technology is all but inevitable.
Let's extrapolate a bit and start thinking about what society will look like
when this sort of surveillance is made available to the general public.
Imagine a free, Google-Maps-like application that allows you to see individual
people and rewind with half-second resolution.
This would be such a massive wrench thrown into the gears of society that I'm
having trouble coming up with anything coherent.
~~~
dawg-
You just described Life360, which is running on my phone right now.
Also, corporations monitoring customers is not as scary as you make it out to
be with your connection to the military. Yes it sucks. But you can choose
which company to be a customer of. With the rise of corporate surveillance,
the choices we all make about consumption just got another layer of
importance.
~~~
smolder
With the rise of pervasive surveillance, I think the choices we all make are
going to cease to be important. We can be micromanaged into making the "right"
ones. We truly become cogs in an uncaring, world destroying machine, with no
more agency than rats in lab experiments. For people with a strong conformity
streak, this may sound fine. If you don't like being told what to do, and
don't like the direction of society in general, it's pretty dismal.
~~~
smolder
In retrospect this comment is hyper-cynical even for me, and it doesn't make a
lot of sense that our choices wouldn't matter. I just can't get past the idea
that my whole online life is there for someone with sufficient privilege to
dig through. I feel so violated by the betrayal of mass surveillance because I
invested most of my life into the internet thinking it was actually intended
to be safe and egalitarian, not a spy machine and a tool for social control.
------
gumby
For some perspective on this level of surveillance (and what it's like when
the technology extends to the consumer level) check out this 21 year old book
by David Brin, "The Transparent Society: Will Technology Force Us to Choose
Between Privacy and Freedom".
(Preferably get it from your local library, if you still have one, or a small
independent bookseller.)
~~~
JohnFen
I have a very hard time picturing how someone could have freedom without
privacy.
~~~
smolder
I agree, and it's specifically because everything is hierarchical. If you have
more power you are free to be more private than everyone else, and know more
about everyone else than they know about you. The ideal transparent society
can't exist without a level playing field, so it's not going to happen.
------
Animats
"It's 10 PM. Do you know where your Congressman is?"
~~~
Buttons840
Heh. If we can't stop it we can at least exploit it, right?
All this surveillance and transparency might actually make things better _iff_
we can force the government to be more transparent than the people. Imagine a
government that is so well surveilled by the people that the people could
trust it to surveil them in return. That doesn't seem to be where things are
heading though.
------
mlb_hn
Parts of this seem a bit misleading and exaggerate the capabilities. E.g.
>> Images from the cameras are in turn fed to computer programs that allow
analysts to track suspects, and even to rewind to look back over their paths,
like watching TiVo.
I'm pretty sure that really just means that they digitized footage and the
analysts did the actual tracking manually but it reads as having good image
recognition back in 2006.
~~~
sedachv
> I'm pretty sure that really just means that they digitized footage and the
> analysts did the actual tracking manually but it reads as having good image
> recognition back in 2006.
I do not know the specifics of the software used in the Gorgon Stare program,
but SRI started work on automated image surveillance under DARPA contract in
1982 with ImagCalc. The system was later expanded to video and continued
development until two years ago:
[http://www.ai.sri.com/software/freedius](http://www.ai.sri.com/software/freedius)
I am not sure what year they added automated video tracking. At the 2007
International Lisp Conference Christopher Connolly and/or Lynn Quam (can't
remember) showed a demo of FREEDIUS that, among other things, had automated
track analysis on aerial and CCTV surveillance footage; by that time the
problem was long solved, and they were working on automated event detection.
Same year (2007) the same SRI group also published this paper, "Recovering
Social Networks From Massive Track Datasets":
[http://www.ai.sri.com/pubs/files/1552.pdf](http://www.ai.sri.com/pubs/files/1552.pdf)
The "massive track datasets" were automatically derived from surveillance
motion sensors, of course.
Automated surveillance capabilities were very far along by 2006.
The really interesting thing is that FREEDIUS is publicly available under the
Mozilla Public License:
[https://github.com/SRI-CSL/f3d](https://github.com/SRI-CSL/f3d)
Drone away!
------
seph-reed
> "Eyes in the Sky tells the story of a top-secret surveillance system that
> helped turn the tide in Iraq."
Everyone loves a good under-dog story!
------
JohnFen
This is utterly terrifying. What can we ordinary people do to protect
ourselves from this stuff?
~~~
kylek
This wiki says they use smartphone cameras
[https://en.wikipedia.org/wiki/Gorgon_Stare#Phase_two](https://en.wikipedia.org/wiki/Gorgon_Stare#Phase_two)
Anyone want to build a drone-seeking high-powered-laser-pointer turret to
dazzle them?
~~~
JohnFen
I don't really think this would count as a solution, but it is an interesting
technical challenge.
I own a bunch of small, very powerful lasers (intended for engraving) that
would have a high chance of not just dazzling a camera, but permanently
damaging it. I wonder how feasible it would be to build a system that can
physically locate a moving drone with enough accuracy. Maybe by tracking its
RF signals?
Hmm, this might be a fun project, purely for intellectual purposes. I also own
a bunch of RC aircraft that I could use as targets...
------
whatshisface
> _The advent of this technology, combined with artificial intelligence and
> vast data banks, makes almost nonsensical the ideas of privacy and probable
> cause of an earlier age._
That's like saying that the invention of atomic weapons made almost
nonsensical the idea of not being vaporized. The fact that I _could_ be
vaporized more easily now than at any time in history has nothing to do with
my hardline anti-vaporization position, except possibly that it steels it.
~~~
X6S1x6Okd1st
The reason why we haven't seen use of atomic weapons is that it's clear that
use of atomic weapons will be met with more atomic weapons.
It's not clear that the state is worried about retaliation against wide area
surveillance.
~~~
whatshisface
I don't think it was ever clear that atomic weapons wouldn't be used,
especially not near the time of their invention. Remember the Cold War?
~~~
jdbernard
He didn't say they wouldn't be used, quite the opposite. He said there was
certainty that if they were used, your opponent would respond with more of the
same. This led to the doctrine of Mutually Assured Destruction from the Cold
War.
Stated another way: if we use them we know they will be used on us, so we'd
better not use them.
OP's point was that there is no similar guaranteed retaliation that might
motivate restraint in the use of surveillance.
HN Replies – Get notified of replies to your comments - sandebert
http://hnreplies.com
======
sandebert
I know this was posted six months ago, but it's so great that I thought
it deserved more exposure. Seriously, until HN has something like this built
in, dang & co should consider putting a link to it in the footer.
Enter your username and email to get up and running, that's it. (No password.)
Built by dangrossman.
------
amenghra
This should really be a default feature.
Google: 90% of our engineers use the software you wrote (Homebrew), but... - syrusakbary
https://twitter.com/mxcl/status/608682016205344768
======
mikek
At a certain point, your resume should speak for itself. The fact that
experienced engineers with impressive resumes are put through these types of
interviews is insulting and frustrating to the interviewees.
Succeeding at these whiteboard questions requires weeks of preparation. You
need to practice, practice, practice. After enough practice, you are pretty
likely to pass. So ultimately, it is more of a test of "how much do you want
to work here." If you care enough, you can pass. That said, these types of
interviews do favor fresh grads over experienced programmers (who have
algorithms and data structures fresh in their minds), which means that they
are flawed, IMHO.
~~~
kenrikm
These types of interviews work really well for the "I got 1600 on my SATs and
went to {insert high profile school here}" crowd. There are books out there
just to prep you for Google interviews; I see these as very similar to SAT
test prep books. I'm not so sure Google is really interested in hiring the
best engineers, but rather a specific type of engineer.
~~~
thethimble
Think of the problem from Google's perspective though. At some point, you have
tens of thousands of candidates and you need a system to quantify how good
they are. Further, it's reasonable to have false negatives (people you don't
hire that should have been hired) but really bad to have false positives
(people that you hire that you should not have). Together, these boil down
into the de facto whiteboarding interview process.
~~~
fsk
This has got to be the biggest hiring fallacy I've ever heard. "It's better to
reject a good candidate than hire a bad candidate."
That's completely false, and anyone who says that is completely ignorant of
Bayesian logic.
Here are some simple numbers.
Suppose that a "good" candidate is a 1-in-100 find. Suppose that a "bad"
candidate has a 1% chance of tricking you into hiring them anyway.
Every time you pass on a "good" candidate, that is greater opportunity for a
"bad" candidate to trick you into hiring them!
Counterintuitively, if you pass on too many of the "good" candidates, in your
overzealousness to reject bad candidates, you're actually INCREASING THE ODDS
OF A BAD HIRE.
This is management porn. "It's better to reject a good candidate than hire a
bad candidate." It makes the manager feel good. "Wow! That candidate seemed
smart, but I rejected him anyway! I'm such a great leader! I make the tough
decisions!"
tl;dr summary
Because "good" candidates are rare, every time you pass on a good candidate
that increases your odds of making a bad hire! This is simple Bayesian
reasoning!
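The arithmetic here can be checked directly with Bayes' rule, using the hypothetical numbers from this comment (the two pass rates for good candidates below are illustrative, not from the source):

```python
# Hypothetical numbers from the comment: "good" candidates are a 1-in-100
# find, and a "bad" candidate has a 1% chance of passing anyway.
P_GOOD = 0.01          # fraction of applicants who are good
P_BAD_PASSES = 0.01    # chance a bad candidate slips through

def bad_hire_rate(p_good_passes):
    """Bayes: P(bad | hired) = P(bad) * P(pass | bad) / P(pass)."""
    p_pass = P_GOOD * p_good_passes + (1 - P_GOOD) * P_BAD_PASSES
    return (1 - P_GOOD) * P_BAD_PASSES / p_pass

# A bar that passes 90% of good candidates vs. an over-strict bar
# that passes only 30% of them:
print(f"lenient bar: {bad_hire_rate(0.90):.0%} of hires are bad")  # ~52%
print(f"strict bar:  {bad_hire_rate(0.30):.0%} of hires are bad")  # ~77%
```

Tightening the bar rejects good and bad candidates alike, but because good candidates are so rare, the fraction of hires that are bad goes up, which is exactly the point being made.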
~~~
bskinny129
Those are simple numbers, but they don't get at the real issue. People say
that because 1 bad hire can have negative effects on the whole team, causing
others to get less done and leave. Missing a good hire doesn't poison your
team.
~~~
krschultz
But 1 bad hire is easily correctable - you fire the person. You won't know
that you missed on the good hires.
I personally have worked with several awesome people that have interviewed
with Google, and none of them got hired. They all said the interview process
was flat out insulting.
Interestingly enough, most of them ended up at Facebook.
~~~
enraged_camel
>>But 1 bad hire is easily correctable - you fire the person.
Not sure if you have ever been a manager, because firing someone is NEVER
easy. It hurts everyone emotionally. The person getting fired feels awful. The
person doing the firing feels awful (unless they are a real sociopath). And
team morale tends to take a big hit.
Here is my stance: if you hire someone who isn't a good fit, unless they
actively deceived you, it is your god damn job to find a way to make it work.
As a principle you should treat people with respect and dignity, and not as
easily disposable and replaceable cogs.
~~~
kogepathic
> it is your god damn job to find a way to make it work.
No, it isn't. This is why probation periods exist. If you hired someone and
they aren't working out, then at the end of the probation period you don't
keep them on.
Trying to force someone who doesn't fit the team dynamic to stay is going to
hurt your org. It doesn't make you a "good" manager to say to everyone "I know
this sucks but deal with it because letting them go would mean I was wrong."
> As a principle you should treat people with respect and dignity, and not as
> easily disposable and replaceable cogs.
Which is ironic because so many job descriptions I see today are basically
written as "we want to hire someone with exactly these skills, who requires
zero training, and can become an expert in our systems in their first week."
No one is that way, unless your system consists of pressing one button all
day, but then the job description would probably require that the person have
intricate knowledge of the button and is friends with the engineer who
designed it so they know what to do if the button suddenly stops working.
If companies actually treated people with respect and dignity, I'd get the
training I need to become a better employee, instead of going to management
and begging them for any training every 6 months like the industry is now.
/rant
~~~
tommorris
> Trying to force someone who doesn't fit the team dynamic to stay is going to
> hurt your org.
Doesn't this rather presume that your "team dynamic" is actually good to start
with?
Let's say you had a team of not very good engineers. Then you hired quite a
good engineer who looked at all the terrible practices (e.g. no version
control, shitty or nonexistent testing, poor build processes) and said to
themselves "look, I need to fix this shit or I'm leaving, and I've got ten
better offers in my inbox".
They might not be a very good team fit, but that's because the team is filled
with idiots who haven't figured out how to use version control or whatever.
So, you know, the best thing to do is to get rid of them for not being a
culture fit or for not being good for the team dynamic...?
~~~
kogepathic
> They might not be a very good team fit, but that's because the team is
> filled with idiots who haven't figured out how to use version control or
> whatever.
Yes, if you have a team like this and you hire someone who has a higher
standard, there is going to be some friction between them and the existing
team members. Letting that person go isn't your concern, because as you said
yourself:
> and said to themselves "look, I need to fix this shit or I'm leaving, and
> I've got ten better offers in my inbox".
I've been in that situation before. I was hired to a company and when I got
there I found out that every single day they were fighting fires because of
stupid decisions management made with little foresight into how it would
affect the team. Funny enough none of this was mentioned during the interview,
although it was a definite red flag that they had high churn for this
particular position.
I stayed there for my probation period trying to fix things so the team
wouldn't have to fight fires every day, and to change management's mentality,
but it wasn't happening, so I gave notice and went somewhere better.
> So, you know, the best thing to do is to get rid of them for not being a
> culture fit or for not being good for the team dynamic...?
No, you took me too literally. Obviously if you've got someone who has
friction with the team, but they're a hard working individual who is trying to
make your team better and more efficient, you should try to work through those
stressful periods because in the long run it will be better for your team's
health and the company's health. If some of your low performers leave during
this period, that's okay, they were only going to hurt you in the long run.
That being said, don't burn bridges with your existing employees. Try to find
a happy middle ground that results in a better work environment for everyone.
------
joshstrange
To all of those saying ranting on Twitter is the wrong move: I couldn't
disagree more. Twitter is often the ONLY tool that the average person can use
to communicate with and/or call out large companies on their actions. This is BS
and should be made known. Homebrew is an amazing tool and I'd be falling over
myself getting the offer papers in this guy's hands if he came to me looking
for a job. The fact that google turned him away only further cements my
opinion that google would be a terrible place to work.
~~~
philipwalton
My main complaint with the tweet is that it's almost certainly speculation.
1) Most companies (for legal reasons) don't tell candidates why they weren't
offered a job. Maybe it was because of the binary tree question, but maybe it
was for some other reason.
2) Homebrew is a Mac-only product, so the likelihood that 90% of Googlers use
homebrew is very low. Moreover, Google does not track the software its
employees download onto their laptops, so there's no way they would even know
the percentage.
~~~
austinjp
Really? In the UK companies are legally obliged to reveal why a candidate did
not get a job, if asked... in my understanding.
~~~
mdpopescu
Huh? I've been rejected by a UK company with just "you're obviously a very
experienced programmer, but we are not able to take your application any
further at this time".
I did get a code review out of it though, and it did point out a few real
issues, so I'm ok with it.
~~~
austinjp
Maybe I don't understand the legals fully. My experience comes from being on
the hiring side. Our HR department were very keen that we kept detailed notes
so that the decision not to hire could be justified in the case of a tribunal
or suing situation. For what it's worth, it did make me question my motives,
and I felt it ensured I gave every candidate a fair shot.
------
Alupis
Has no one stopped to question what Google may have been looking for in a
candidate?
The OP has written some great apps, sure, but there is a huge difference
between writing a package manager for Mac (among other Mac/iOS apps and
utilities) and writing incredibly complex, highly performant algorithms for
say search indexing, machine learning, ai, etc...
In that context, knowing CompSci basics like Binary Trees (usually taught to
pre-major CompSci students) has a lot of relevance. It's clear Google was
looking for a Computer Scientist here, not just another developer.
I am also put off by the extreme display of non-professionalism here. It's
like the OP took this personally as an insult. "I've written popular things,
how dare you not hire me if I want to work for you!". That's not a
professional I'd like to work on a team with. Rather, that's a throw-back to
the "Rockstar" programmer that nobody wants to work with.
This boils down to a person who was personally offended that a company did not
hire them at the drop of a hat, as if he was "blessing" Google with his presence.
I'd say, Google (and other companies) are better off without this type of ego.
They made the proper choice to weed this individual out.
~~~
emsy
If you follow the twitter discussion, it says he applied for iOS tools. I
don't know what tools they write but I'd be surprised if someone who manages
to write a (pretty good, actually) package manager can't solve the problems in
this position.
~~~
Alupis
> If you follow the twitter discussion, it says he applied for iOS tools
We cannot just assume that since he applied for iOS something at Google that
he's the best fit and what they were looking for.
Maybe the tools Google needs built are very CS heavy (hence the "Build us a
Binary Tree" question)? Maybe during the interview his arrogant attitude was
on full display? Maybe the interview team dug up past episodes of explosive
arrogance like this very one we're discussing right now?
Even if they were looking for someone that matched his profile exactly, his
post-interview display of non-professionalism is sure to hurt his chances of a
re-interview anytime in the future (and quite possibly at many more companies
than just Google).
He conveyed several things with this display, none of them good:
* he's incapable of handling rejection
* he feels entitled
* he feels he's better than everyone else
* he's unwilling to admit his own shortcomings
None of these are good qualities.
~~~
omegote
He actually didn't apply, it was Google who contacted him in the first place.
~~~
pound
Google contacts plenty of people. Reaching out to yet another engineer (not
even talking about this particular case) doesn't mean they really want that
person in particular. They're just going through the pool of potential
matches. The person who initiated contact may not even know what exactly
Homebrew is.
------
fishnchips
[Ex-Googler here] Truth be told this is a trivial question to be asked during
an algo interview and as an interviewer I'd consider this a warm-up. Otherwise
it's a rather poor question since either you know how to do it (ie, you have
an idea about recursion) or you don't - there aren't too many shades of grey
or possible follow-up questions that I can ask to probe the depth of your
knowledge.
That being said, if I asked this as a warm-up and we spent the whole
interview trying to get it done, then of course my verdict would be No Hire.
As an interviewer it is not my job to look at your GitHub profile - instead I
am assigned an area to check and I try to come up with the best understanding
of the candidate _in that area_. While failing to reverse a binary tree is a
total failure in algo/ds you can still be hired since there are several
interviewers (if you make it to onsites, that is), each probing a different
area.
My biggest problem with Google style interview process is that it's easily
passed by folks who already passed it in a different company. After having
interviewed hundreds of Google's candidates I moved to another big company
with the same interview type and the experience was really surreal. On my
algo/ds interview I got asked slight variations of the same questions I was
asking myself - and over time I've seen some totally brilliant, unexpected
solutions. Must have been a strange experience for the interviewer who got his
questions each answered in 3 minutes tops. I also made it a sport to solve
each one in a different language, because why not. Not sure, though, about the
validity of the signal the company got from this interview.
~~~
Aloisius
_> [Ex-Googler here] Truth be told this is a trivial question to be asked
during an algo interview and as an interviewer I'd consider this a warm-up.
Otherwise it's a rather poor question since either you know how to do it (ie,
you have an idea about recursion) or you don't - there aren't too many shades
of grey or possible follow-up questions that I can ask to probe the depth of
your knowledge._
It is a terrible question.
First, you can't invert a binary tree (as in flip upside down). If you did,
you'd end up with multiple roots and since all binary trees are rooted, you'd
no longer have a binary tree. It'd be a tree, just not a binary tree.
If the questioner meant mirror a binary tree (swap left & right at each node),
then it is a no-op. You do not need to modify memory to do it. Left & right
are just labels for two pointers. You can just swap the order of the variables
in whatever struct defines your node and cast it against your existing memory
(or change what your accessors return in a derived class or create a reverse
iterator or however you want to implement it) and there you go, a "reversed"
tree with zero overhead.
Either way, it is a terrible question unless you wanted to see if someone
understood the difference between how a data structure is drawn on a
whiteboard for humans vs. how it actually works. Maybe they were actually
asking that question, but that seems highly unlikely.
And if they actually meant for you to recurse down and swap left and right on
everything, it would dramatically lower my opinion of them because it would
make me wonder if _they_ knew the difference between how a binary tree is
drawn on a whiteboard vs. how it is laid out in memory.
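For concreteness, the recursive "swap left and right at each node" interpretation described above, the one this comment argues shouldn't require touching memory at all, is only a few lines (a hypothetical sketch, not necessarily what Google asked):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def mirror(node):
    """Recursively swap the left and right children at every node."""
    if node is not None:
        # The right-hand side is evaluated before assignment, so both
        # subtrees are mirrored and then swapped in one step.
        node.left, node.right = mirror(node.right), mirror(node.left)
    return node
```

The zero-overhead alternative described above amounts to doing this swap in the type definition or the accessors instead of in the data.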
~~~
gaustin
Would a multiply-rooted tree still be a tree? I thought a single root was part
of the definition of a tree. Would it instead be a graph?
Sorry for the elementary questions. I'm bad at algorithms and just trying to
get a grasp here.
~~~
kele
At my university it's common to call an acyclic, connected graph a tree. We
distinguish between rooted and unrooted trees. For example, a minimal spanning
tree doesn't really have to be rooted; it just has no cycles.
~~~
haversoe
> acyclic, connected graph a tree
This is the graph theoretic definition of a tree.
------
kazinator
WTF is "inverting a binary tree"? The smattering of search engine results
points to some operation that seemingly just generates garbage by
destructively manipulating the tree into a DAG in which the leaves of the
original tree are roots, which point downward to their parents. The original root
node is returned, and still points to its children. Unless you return some
aggregate (e.g. list) of all the roots which are not direct children of the
original root, or it is understood that something elsewhere holds reference to
them, they are leaked.
Be that as it may, _if the interviewers hand you a reasonably detailed
specification of what "invert a binary tree" means_ and you can't whiteboard
it, I don't see how you can expect the job.
If you're expected to know what it means, and get no hint, then that is just
stupid. "Whiteboard up an implementation for something we refuse to specify,
beyond citing its name."
~~~
tzs
My guess is that the potential leaking of some nodes is one of the points of
the question. A lot of people would not notice it.
Note that in the original tree each node needs storage for two pointers. After
inversion only one is needed. You can use that now unused storage to link
together the multiple roots to solve the leak problem.
But note that only consumes the second pointer storage in the roots. Interior
nodes end up with some free storage after inversion.
Since inversion is reversible, the binary tree and its inverse contain the
same information. Does this mean we should only need one pointer's worth of
storage in the binary tree nodes?
Is there something for binary trees similar to Knuth's xor trick for doubly
linked lists?
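For reference, Knuth's trick for doubly linked lists stores `prev XOR next` in a single link field, and traversal recovers one neighbor from the other. A sketch in Python (hypothetical names; integer node ids stand in for raw pointers, which Python doesn't have, with 0 as the null id):

```python
# XOR-linked list sketch: each node stores prev_id ^ next_id in one field.
nodes = {}  # node id -> (value, xor_link)

def build(values):
    """Build an XOR-linked list from a non-empty sequence; returns the head id."""
    ids = list(range(1, len(values) + 1))
    for i, (nid, val) in enumerate(zip(ids, values)):
        prev_id = ids[i - 1] if i > 0 else 0
        next_id = ids[i + 1] if i < len(ids) - 1 else 0
        nodes[nid] = (val, prev_id ^ next_id)
    return ids[0]

def traverse(head):
    """Walk forward from the head: next = link ^ prev."""
    out, prev, cur = [], 0, head
    while cur:
        val, link = nodes[cur]
        out.append(val)
        prev, cur = cur, link ^ prev
    return out
```

Whether an analogous single-field trick can work for binary trees is exactly the open question posed above; the XOR trick relies on always arriving at a node from one known neighbor, which tree traversal doesn't guarantee.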
~~~
kazinator
If you invert the tree such that each node points to its parent, and there are
no child pointers, you lose information. Namely, the order among children is
lost. A given interior node still has two children as before, but it is not
known which is the left child and which is the right child; there is just a
set of up to two nodes which point to the same parent.
The binary tree inverse specifications/implementations I have looked at
preserve the information by selectively using the left or right links. For
instance, given:
P
/ \
L R
The inverse would be:
L R
\ /
P
Both children point to the parent. But the left child uses its right pointer,
and the right child uses the left pointer. That's what preserves the
information about which child is which.
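A sketch of that scheme in Python (hypothetical; names match the diagram, and which pointer a child reuses encodes whether it was the left or right child):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(root):
    """Point every node at its parent: a left child points up via its
    right pointer, a right child via its left pointer, preserving
    which child is which. The old root ends up as a leaf."""
    if root is None:
        return
    left, right = root.left, root.right
    root.left = root.right = None  # the old root becomes a leaf
    if left is not None:
        invert(left)
        left.right = root   # left child reuses its right pointer
    if right is not None:
        invert(right)
        right.left = root   # right child reuses its left pointer
```

Note that, as observed earlier in the thread, nothing holds a reference to the new roots (the original leaves) afterwards unless the caller collects them.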
------
nostrademons
Sorta ironic, but remember that the point of an interview is to determine how
well you'd do _in the environment that's hiring you_, not _how good a
developer you are_. Because Google tends to reinvent the wheel for basically
everything, algorithmic knowledge really does matter. Package management only
matters within a few very specific subteams. If you're interviewing for a
general SWE position, you could be Mark Zuckerberg and still not qualify for
it.
FWIW, I find Facebook turning down Jan Koum in 2009 and then spending $19B to
acquire his company even more ironic.
~~~
apendleton
That may well be, but if you're considering hiring a guy who's an expert at
writing package management tools, don't you put him on one of the specific
subteams that needs that skillset? Surely it's something Google deals with?
~~~
nostrademons
The problem is that Homebrew is very different from the sort of package
management tasks that Google deals with. The design goals for Homebrew
include: make it easy for users, make it not require root, make it work on
Macs, handle dependencies robustly. If he were at Google it would be Linux-
only, he'd be using Linux containerization extensively, he'd be deploying
packages to thousands of machines instead of one, it'd be a virtual guarantee
that some of the machines would fail during the installation process, there
would be little or no user intervention and if there was a user it'd be a
trained SRE, and the installation procedure would probably need to be an order
of magnitude more efficient than Homebrew is.
I don't want to take away from his accomplishments as a programmer - I use
Homebrew too. But my point is that it's very easy to see "Good programmer, of
course he should get hired" from the outside, while the reality is that it may
not be all that similar to the tasks he'd be doing.
~~~
smackfu
If you are hiring the Homebrew dev, and your devs currently use Homebrew, why
wouldn't you hire him to work on Homebrew for you?
~~~
Alupis
> If you are hiring the Homebrew dev, and your devs currently use Homebrew,
> why wouldn't you hire him to work on Homebrew for you?
Macs account for something like 8% of the total PC market. Among developers,
they account for something like 20%... another 20% on Linux, and the
remainder on Windows or other.
So even if somehow having a paid Google employee work on Homebrew seemed
advantageous, it would only benefit 20% of Google's staff, and 0% of the
company itself (all Google servers are Linux).
~~~
loopbit
Except that Google 'banned' the use of Windows internally a few years ago
unless you had a really, really good reason for it.
Not sure what the status of that ban is (nor do I care), but it will skew the
numbers enough to invalidate your point.
~~~
Alupis
> enough to invalidate your point.
It might, except ex-Googlers in this thread have stated that Google "banned"
the use of Homebrew internally. So the net benefit to the company and/or
employees remains small to zero.
This is off topic though, since he was not being interviewed for homebrew
development.
------
robbrit
I'd like to present HN with a challenge. Come up with an interview process
that matches _all_ of these requirements:
Objective - the process avoids human bias (this guy was in my frat, therefore
is awesome).
Inclusive - someone doesn't need extensive past knowledge in a particular area
of coding to do well in the interview.
Risk Averse - Avoids false positives.
Relevant - it properly tests the abilities needed for a coding job, and not
irrelevant ones (like the ability to write an impressive resume).
Scales - this will be used for tens of thousands of interviews.
Easy-to-do - this will need to be done by hundreds/thousands of engineers, who
would probably rather be coding.
It's easy to poke fun at what is perceived to be a flawed process. It's much
harder to propose a solution that satisfies the above requirements. Google has
done extensive research on this topic and has done remarkably well with it
compared to other companies of similar size.
~~~
ridiculous_fish
Xoogler here. I can't meet your challenge. IMO the very premise of a company-
wide unified interview process for all software engineers is wrongheaded.
How can you make the interview relevant without knowing what the position is?
I was asked the typical CS-type questions in my interview, but the team I
ended up on required no theory.
How do you define false positives before you know what the candidate will work
on? A superstar in one team will be a dud in others.
And let me add another bullet point to your process wish-list: gives the
candidate a sense of whether they want the job. This is impossible when the
interviewer is a random engineer from an unrelated team, unable to speak to
what the candidate's work life will be like. A Google style process gives
candidate very little information.
I would instead propose something very old-fashioned: teams hire for
themselves. The usual reply is that this results in an "inconsistent hiring
bar", but so what? Teams have different requirements and need engineers with
different skills, so why shouldn't the hiring process reflect that? We are not
fungible engineering units.
------
carc
To be fair, inverting a binary tree is a pretty easy question. Google also
tells you BEFORE you start the interview process that it'll be very data
structure/algorithm oriented and asks that you please prepare (and take as
much time as you want doing so). They even say that they want you to prepare
because they know a bad candidate that prepares can look better than a good
candidate that doesn't prepare - then want all candidates on a level playing
field so they can make accurate judgements. All that being said, I still think
that there is lots of room for improvement in the process.
edit: really good english skills
~~~
jblow
I am dismayed by the way all the reactions on Twitter are piling on with
outrage and/or relating similar experiences.
Inverting a binary tree is pretty easy. It is not quite as trivial as
FizzBuzz, but it is something any programmer should be able to do. If you
can't do it, you probably don't understand recursion, which is a _very basic_
programming concept.
This isn't one of those much-maligned trick interview questions. This is
exactly the kind of problem one may have to solve when writing real software,
and though you may never have to do this specific thing, it is very related to
a lot of other things you might do.
I run a small software company and I very likely would not hire a programmer
who was not able to step through this problem and express a pseudocode
solution on a whiteboard.
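For reference, the recursive solution being discussed really is only a few lines. A sketch in Python (the `Node` class is a stand-in, and "invert" is taken here to mean mirroring each node's children, which is the common reading of the question):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Mirror the tree: recursively swap each node's children."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```

The whole trick is noticing that a mirrored tree is just a node whose children are the mirrored versions of each other, swapped.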
~~~
callum85
> If you can't do it, you probably don't understand recursion
No, I can't do it (don't even understand the question) but I certainly
understand what recursion is, and can solve problems and make things work far
more reliably than many of the more academic programmers I have worked with.
~~~
SamReidHughes
I believe "invert" here means to flip the left-right direction. A better word
for it would be "mirror."
------
seccess
What does it mean to invert a binary tree? I'm not familiar with this
operation on binary trees. Does it mean to swap parents with child nodes? Or
to swap siblings?
~~~
x3n0ph3n3
In this case, it means to reverse the binary tree, so that you get the largest
item by iterating down the left branch to the bottom of the tree. You would do
this by breadth-first scanning the tree, swapping the left and right pointers
as you go.
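That breadth-first scan can be sketched like this in Python (the `Node` shape is an assumption about the tree, not anything from the interview itself):

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert_bfs(root):
    """Scan the tree level by level, swapping left/right pointers as we go."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node is None:
            continue
        node.left, node.right = node.right, node.left
        queue.append(node.left)
        queue.append(node.right)
    return root
```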
~~~
attilaolah
Oddly enough, Google shows me interview questions where "inverting a binary
tree" means something quite different — for example, flipping it upside down,
and making the left leaf the root, the right leaf the left leaf and the root
the right leaf.
If this was really about "reversing" the tree, as you mention, the question
seems more likely to address how the candidate approaches the situation. Like,
he should start by making sure they both agree on what the question actually
means.
Once that's out of the way, it seems relatively easy to come up with a naive
solution, without having memorised any algorithms. It seems more like a case
of brainfreeze to me, which can be sort-of fixed with practice (which in turn
many candidates refuse to do: the dreaded "If I have to cram for the
interview, I don't want the job" statement.)
So maybe he really wasn't a good fit for Google, despite apparently being a
rockstar developer. Hey, startups need rockstar devs too.
~~~
DannoHung
I was gonna say, who calls reversing a tree's ordering inverting?
> Oddly enough, Google shows me interview questions where "inverting a binary
> tree" means something quite different — for example, flipping it upside
> down, and making the left leaf the root, the right leaf the left leaf and
> the root the right leaf.
Whaaaaa? I can't find this, but it seems like such a weird operation. Got a
link?
~~~
morpher
>I was gonna say, who calls reversing a tree's ordering inverting?
The guy writing the tweet (not necessarily the interviewer).
------
squiguy7
Interviewing seems broken to me too. I have spent the last few months trying
to get a job out of college and everyone seems to be interested in how well
you can regurgitate CS fundamentals.
They are seemingly less interested in seeing how you solve problems and work
through the process of software.
I would be more than happy to see this process change, I am just not sure what
it entails.
~~~
bstamour
Devil's avocado: if they're truly CS fundamentals, then they should be baked
into you good and deep during the course of your college education. It
shouldn't be painful at all.
~~~
gkoberger
I took CS 101 classes almost a decade ago, and since then, I have never once
needed to write a binary search tree outside of an interview.
I think "CS Fundamentals" are really just "abstract concepts used to teach
programming", and calling them fundamentals is disingenuous.
~~~
jblow
Maybe you're just not doing serious programming. Most people I know implement
data structure searches quite often.
If you're writing scripts, or JS code for web pages or something like that,
then maybe you don't use CS stuff, but ... are you able to write a web browser
if you had to? Are you able to write an operating system or navigational
software for a spacecraft? If not, then maybe just see this as revealing
sectors of your skill set that could be beefed up, rather than presuming that
none of that stuff is important.
~~~
Aloisius
> _Maybe you 're just not doing serious programming. Most people I know
> implement data structure searches quite often._
Wow. Really? Most serious people I know use other people's implementations
that have already been highly optimized and well tested because they have
better shit to do than reinvent the wheel.
I suppose if you want to write your own red-black tree from scratch, that's
your prerogative. The last time I did was 20 years ago and not only will I
never do that again, I will laugh at anyone who does it without a damn good
reason.
~~~
MaulingMonkey
> Wow. Really? Most serious people I know use other people's implementations
> that have already been highly optimized and well tested because they have
> better shit to do than reinvent the wheel.
Ditto. Those who decided to reinvent even basic data structure stuff left me a
huge string of bugs to fix, which I eventually got so fed up with that I
started replacing their code wholesale with off the shelf solutions to stem
the flow at my last job.
Aside from fixing an untold number of implementation bugs, the replacement
caught several _usage_ bugs as well, due to actually having some error
checking built in.
We had just plain broken hashtables, "lock free queues" that didn't use memory
barriers... or interlocked intrinsics... or even volatile, if my memory is
correct - and not a debug visualizer to be seen before I got my hands on them,
of course.
> I suppose if you want to write your own red-black tree from scratch, that's
> your prerogative. The last time I did was 20 years ago and not only will I
> never do that again, I will laugh at anyone who does it without a damn good
> reason.
Besides laughing, I'll tend to -1 the code review as well.
------
bane
I've phone interviewed with Google a couple times. I wasn't really interested
in working there, but wanted to see what it was like. Both times, the people
who interviewed me were decent, friendly folks and we had a good chat. They
then dug into algorithm questions on topics I hadn't seen since my undergrad
(I'm about 20 years into my career) and haven't touched since then -- though
I've done a bit of algorithm work outside of that they weren't particularly
interested in that.
I reached into my way-back machine and tried to derive some approaches where I
simply didn't remember the answer (and I was very open about it). I made it to
call backs both times, but they declined to move forward, probably because
they wanted a younger person who remembered their big-Os off the top of their
head, but I was okay with it. I told all the interviewers I had fun and I did.
Even if I had made it in, I'm not sure I would have taken the job at those
times. So my lack of motivation ended up turning what could have been
stressful into a fun look at their hiring practices.
However, I can see for people who are really dead set on working there, it can
be a harrowing experience.
------
notacoward
Humorous answer: Maybe I can't invert a tree, but watch me flip this table.
Serious answer: Companies that pull this crap deserve to starve and die.
~~~
westernmostcoy
What specifically are you objecting to? That specific question? Writing code
on a whiteboard?
How would you interview software engineering candidates?
~~~
gt565k
I think the best way is to give them access to a code base, give them 48 hours
and have them submit a pull request for a feature. That way you can see that
they can learn the codebase and implement the feature in their own time.
During the interview, you can discuss their code and the reasoning behind the
implementation details.
~~~
snom337
Which is fine and dandy until you get that one person that cheats by getting
help from someone else.
~~~
ruswick
As if people who interview for Google don't just look up the most common
interview questions and memorize answers before the interview...
~~~
snom337
Still I think that would become obvious pretty quickly. I've actually had
people start writing out a textbook algorithm that just solved the wrong
problem. And then completely stumble when trying to explain how the program
would arrive at the intended result.
I usually ask questions until the person is out of his comfort zone, and if he
is completely clueless on how to proceed at that point it's a red flag.
------
markbnj
Number of times I have had to invert a binary tree in my 25+ year career: 0.
Number of times I have been asked to invert a binary tree in an interview: 0.
What I would do if I had to invert a binary tree: look it up.
~~~
ksk
>Number of times I have had to invert a binary tree in my 25+ year career: 0.
Number of times I have been asked to invert a binary tree in an interview: 0
Well, if you wanted to pull that card, it would have been nice to mention the
sorts of problems you've worked on in those 25 years.
>What I would do if I had to invert a binary tree: look it up.
Unless you're already good at algorithms, it would net you a mediocre
solution.
E.g. you would only know that there are multiple ways to implement an
algorithm when you've actually done the work dozens of times and noticed that
some implementations won't be appropriate for your needs. Like with many
things, it's about having experience, knowing what's a good fit, and knowing
why something similar isn't a good fit.
Certainly - every single time you choose an algorithm, and then decide on a
particular way to implement it - you could, in theory, implement each algorithm
in 10 different ways and then choose the best one after benchmarking. But that
would be a huge drag on productivity. And if you had to choose 5 algorithms
for 5 different tasks then it quickly becomes a quadratic complexity time
sink. It would be far better to just know via experience. It's kind of like in
chess - the better players just KNOW that certain paths lead to less
favourable winning odds. Well, novices do take those paths and eventually find
out the hard way !
~~~
markbnj
>> Well, if you wanted to pull that card, it would have been nice to mention
the sorts of problems you've worked on in those 25 years.
Quite a range of stuff, unsurprisingly. I started on an HP3000 mainframe in
1976, if you want to go back to the very beginning, writing BASIC programs on
a teletype or one of the two early CRTs, and storing my programs on paper
tape.
Since then I've worked in DOS, 16-bit Windows, 32-bit Windows, 64-bit Windows,
Ubuntu, iOS, and Android, using Pascal, 8086 assembler, C, C++, C#,
javascript, Python, and Java. I've worked on applications in multimedia,
telephony, banking, insurance, pharmaceuticals, cosmetics, health care, and
I'm sure a few other things I've forgotten.
All of which will mean jack squat if, tomorrow morning, the most important
thing I have to do is invert a binary tree. But I'm fairly certain I would be
able to figure out what I needed to do, and I am fairly certain I could manage
to implement it well. It's what we're supposed to be able to do, and if you
think that having studied up on it so that you could pass a Google interview
means that, a few years down the road when you actually need it you'll just
whip it off the top of your head, then I think life may hold some surprises
for you.
~~~
ksk
Thanks for replying. I simply wanted to know what kinds of problems you worked
on. A binary tree can be inverted presumably on any OS and using most
languages. If you're going to claim that a basic binary tree operation has not
been necessary for you in your 25+ year career, you should have mentioned your
problem (!industry) domains. It was nothing personal.
>and if you think that having studied up on it so that you could pass a Google
interview means that, a few years down the road when you actually need it
you'll just whip it off the top of your head, then I think life may hold some
surprises for you.
That is a misrepresentation of what I said. I said that _RETAINING_ basic and
higher order CS fundamental knowledge is much more useful, in a way that
simply looking up an algorithm on wikipedia would not be.
~~~
markbnj
>> That is a misrepresentation of what I said. I said that _RETAINING_ basic
and higher order CS fundamental knowledge is much more useful, in a way that
simply looking up an algorithm on wikipedia would not be.
Sorry it was not my intention to misrepresent you. I was assuming that we all
agree that simply looking something up without having any fundamental basis
for understanding would not be of much use, and that we further assume the
person doing the searching is in need of a refresh, and not basic education in
the craft. In that context it is the idea that you might be called on to go up
to a whiteboard and trot out something you haven't done in three years, and
then be judged competent or not based on how successful the trotting is, that
gets people worked up.
------
el_fuser
Another instance where silicon valley favors the young... he probably would've
nailed this question if he'd been only a couple of years removed from school.
He also would've nailed it had he been given 10 minutes to do some research to
refresh.
Silicon valley (and the numerous companies that mock the interview style) are
testing for the wrong thing when they hire, then complaining about not being
able to find good engineers.
~~~
CydeWeys
You're given weeks to refresh your knowledge ahead of a Google interview. They
tell you the basic CS fundamentals that are going to be covered. Binary trees
are MOST ASSUREDLY on that list. They just don't have time during 45 minute
interviews to let the candidate go on the Internet to look things up they
should already have come prepared for.
Anecdotally, at a previous company, we tried an "open book" (i.e. you can use
the web) interview policy for a few interviews. It was a train wreck.
~~~
bitkrieg
Out of curiosity, can you elaborate why the open book interview policy failed?
~~~
eropple
At prior employers we've had very good luck with open-Google interview
policies. I mean, we'd watch what you're Googling, and if you're copy-and-
pasting we're going to drill you to make sure you actually _understand_ it,
but I expect to have a search engine when programming and I think you should
too.
I prefer not to ask code questions at all, though.
------
spiritplumber
A project manager at Google was upset that my 'bot literally ran circles
around theirs (it was 2010, Android based robots were just starting) and told
me that I was just a hobbyist and my project did not exist.
So I gave him one of the spare logic boards (open design anyway). And wrapped
my hand around his. And squeezed. And asked him, if it doesn't exist, why is
it making you bleed?
He watched impotently as the people who had been invited for the presentation
played with my robots and ignored his as two of his guys tried to get it to
work.
I finished the two projects I was doing with Google and did not call them
again.
(Before you downvote: Yes, there is some video, and I consider the small
amount of pain I inflicted that day a kindness compared to the much greater
amount of pain that an engineer is in danger of enduring if he says things
like "This thing that is in front of me, it does not exist", especially if he
works with big machines).
~~~
to3m
I think you need to work on your people skills.
~~~
eonw
or the other guy could work on his ego skills?
~~~
to3m
That can be dealt with when they post their side of the story.
~~~
eonw
agreed, i would love to hear both sides of this one.
------
bla2
Interviewing is stressful and all, but if the guy's reaction to not getting
hired is to flame on twitter, not hiring might've been the right call.
~~~
vezzy-fnord
I also somewhat laughed at the poster who stated "Apparently my GitHub wasn't
enough."
~~~
r0naa
Well, he is the author of numerous widely used open source projects. It is a
bit redundant (and useless) to give him a "homework assignment" when he has
such an impressive portfolio that speaks for itself and of which you can
assess the quality.
------
topher6345
In my office, opinions are 50/50 on this.
Interview with Lazlo Bock on Google's hiring practices:
[http://youarenotsosmart.com/2015/06/08/yanss-051-how-
google-...](http://youarenotsosmart.com/2015/06/08/yanss-051-how-google-uses-
behavioral-science-to-make-work-suck-less/)
Some of the claims Lazlo makes:
As large organizations grow, their workforce trends towards mediocrity. Google:

* takes special care to counter-act this effect.

* researches their hiring/interviewing practices just as much as their machine
learning.

* publishes their methodologies:
[https://www.workrules.net/](https://www.workrules.net/)
The algorithm in question is discussed in Coding Interviews by Harry He.
[http://www.apress.com/9781430247616](http://www.apress.com/9781430247616)
I feel the original tweet conveyed a bad attitude, was emotional, reactionary,
and ultimately a bad career move on the part of OP.
In my younger days I suspect I would have done something similar. I'd like to
think I would see the experience as a learning opportunity and be able to
react with humility and maturity, but who knows? Hopefully I can think of OP
and not tap the tweet button.
~~~
mildbow
Wow.
Really? Pretty much everyone recognizes that google style interviews weed out
perfectly good people. What Max went through is just an example of one such
obvious case. It's very much a case of the google hiring algo failing. Lot's
of people would have no doubt that Max can cut it when it comes to iOS dev.
That's all it is. Now, if you are going to read "tweet conveyed a bad
attitude, was emotional, reactionary," into a perfectly human tweet, holy crap
you are judgmental.
>> ultimately a bad career move on the part of OP.
Not at all. I've done a _lot_ of interviews and basically _none_ of them
required us trawling twitter. I think it would have to be something pretty
heinous for me to _not_ hire someone based on their social media crap.
Definitely not something as mundane as this. This sort of hilarious cowardice
about expressing feelings just makes me angry. At what point do we stop acting
like these trivial, humanizing glimpses into a person are something that is a
bad career move?
------
shanemhansen
I had a big comment here, but I erased it. I think one story proves my point.
When talking to google engineers one thing I noticed was that they considered
youtube to basically be a joke. The reason why is that youtube has a messy
python codebase. I asked them what they worked on while they were at google.
They had rewritten an internal web portal for a support tool. From everything
I can tell it was literally a mysql crud app.
If this is how success and failure are determined at google, it's no surprise
how many of their products that people actually use come from acquisitions.
------
harel
I don't know if I would rant on Twitter, but I would be as frustrated as well.
Those 'tests' are very academic and I am not an academic person. I've not even
officially finished high school. This test will not properly judge how well I
can code or design software. I've been doing just that for 20 years now, and
launched a few start ups, but I would fail that interview at the door.
I interviewed once for Google (at their request) and failed. For some reason
they interviewed me for a networking position instead of a code one, so
questions about TCP internals were not really my forte. I was just launching
my second start up at the time and would have declined the job had I gotten
it. I admit it does sting a bit to be declined - not just for Google, but for
any position - even those you would decline yourself.
------
yongjik
As others said, much more than 10% of Google's engineers don't even own an
Apple machine. So, even if Google's Mac management team somehow uses Homebrew
to manage the employees' machines (which may or may not be true: I have no
idea, even though I used a Macbook in Google until recently for years), the
percentage of Google engineers using that software is nowhere near 90%. The
percentage of Google engineers _knowingly_ using that software is certainly
closer to 10% than 90%.
------
chubot
I think there is some validity to the general point, but I'm not commenting on
that.
Just quibbling: I don't think anywhere near 90% of engineers use homebrew?
Google development is done on Linux, and Homebrew is a Mac thing AFAIK. I have
never used Homebrew.
Android development can be done on Macs but I doubt they use Homebrew.
Certainly not for anything important.
~~~
devy
Re: "and Homebrew is a Mac thing AFAIK." -> FYI, there is a fork of Homebrew
called Linuxbrew now: [http://brew.sh/linuxbrew/](http://brew.sh/linuxbrew/)
------
pearjuice
It's funny because the guy actually thinks having built something popular
equals being a good software engineer. Wordpress, PHPMyAdmin and so forth are
all really popular but the code is shit and though it's used by millions of
people, a _real_ software engineer will shudder looking at its source code.
Now, I have no idea what the code quality of Homebrew is, but just because he
built something popular doesn't mean he should get a green light in every
company. If Google is looking specifically for top-notch software engineers,
they are probably filtering them very well with their practices.
Maybe someone is only good on paper at the moment and doesn't have something
like "Homebrew" on their Github, but their knowledge is sufficient to perform
work at Google. So why pick someone who has fame to his name, probably wants to get
paid accordingly and thinks he is a hotshot because of his Twitter and Github
follower count over someone who proved himself in an interview?
The first is not necessarily better than the latter.
------
malkia
And then you could've been just cool all about it:
[http://www.businessinsider.com/facebook-rejected-whatsapp-
co...](http://www.businessinsider.com/facebook-rejected-whatsapp-co-founder-
brian-acton-for-a-job-back-in-2009-2014-2)
~~~
matheweis
This. @brianacton's response was super classy.
------
lukaslalinsky
I strongly believe that the best kind of technical interview is to talk with
the person about things they have done in the past, go into details and see if
they are telling you bullshit. If the things are interesting, at least somehow
relevant to what the company is doing and the person knows what they are
talking about, it's a good hire.
One problem is that only experienced developers can do these kinds of
interviews, because you need wide experience, be able to talk about various
technical topics and tell whether the other person is telling you stories from
their own experience or some quickly learned facts.
It's funny, but the best experience I had interviewing at Google and Amazon
was talking with the managers.
------
JamesBarney
This is especially ironic given how much Google has complained they need more
H1-B's because they can't find enough good devs.
------
sp332
Was it really because he couldn't do the problem, or was it that he didn't
handle himself well in the interview? At two different interviews I was given
logic puzzles just so they could watch how I went about trying to solve them.
~~~
sown
> At two different interviews I was given logic puzzles just so they could
> watch how I went about trying to solve them.
That's usually what they say, but I've found that if you don't solve the
puzzles, you don't get hired ever.
~~~
mrobins
That experience doesn't mean that's just what they're saying. It could mean in
those cases the person a) didn't solve the problem, and b) didn't demonstrate
the approach to solving the problem they were looking for.
~~~
esturk
OR c) didn't solve the problem fast enough
------
pan69
Personally I don't really care about Google's hiring process. It's unlikely
I'd ever want to work there anyway.
What does bother me is that other companies, who are not even in the same
league as Google, start to copy their hiring process.
I remember interviewing with a digital ad agency a few years back and I swear,
these guys thought they were Google. The number of academic trivia questions
that came up, it was ridiculous.
In a way, I think Google has done a lot of harm to the industry in general by
making others believe that everyone should have a hiring process like
Google's.
~~~
hiou
I've been there as well. I interviewed at a design agency a while back. I
nailed all of their puzzles pretty easily only to get there and realize I was
going to be doing Wordpress hacking and other CMS work from 2010. I left after
a few months because of how trivial I realized the work would be in the long
term.
On the exit meeting it was relayed to me that they were having trouble finding
people because no one could pass their tests and I was beside myself because I
couldn't understand how they would expect someone with that technical ability
to want to bang out Wordpress sites all day while there are 100s of people who
would love to do that job and be very successful without ever even knowing
what basic recursion let along the stuff they had in their test. Bizarre.
~~~
lucidguppy2000
Also - a big motivation for work is learning. Didn't someone say "never apply
for a position you're qualified for"?
~~~
stuxnet79
I don't know who said that, and while I strongly agree with it in principle
... in practice it just tends to not work out like that. Employers want a cog
in a machine. They don't want you to 'stretch, grow or learn' on their dime -
no employer is willing to take that risk. Further, career progression in the
tech industry is commensurate with the degree to which you've been pigeonholed
in a particular skill or task. So a high salary is usually indicative, not of
the diversity of your skillset but how well you perform within certain narrow
parameters.
------
nqzero
i had a similar experience with ita software, later acquired by google. i'd
submitted a solution to one of their puzzles (which was actually very close to
their business) which exploited a symmetry in the data and my solution
produced results significantly better than anything else that they'd seen (per
the engineer that managed that puzzle)
interview was brutal - lots of whiteboarding very artificial problems totally
unrelated to the business and i just couldn't get excited about it and didn't
end up getting an offer
hiring is tough, these things happen
------
sjg007
You just have to know the basic data structures and some algorithms for them.
Linked list, binary tree, Hash map etc... In the worst case, set up the data
structure, then derive the algorithm. Do this in Java FWIW and make your life
easy.
This is like knowing math and/or stats and applying it to a word problem.
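As a concrete instance of "set up the data structure, then derive the algorithm": a minimal sketch in Python (the commenter suggests Java, but the shape is the same) that defines a singly linked list and then derives in-place reversal from it:

```python
class ListNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    # Walk the list once, re-pointing each node at its predecessor.
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev
```

Once the structure is on the whiteboard, the algorithm falls out of asking what invariant holds after each step (here: `prev` is always the reversed prefix of the list).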
------
SubuSS
Actually my pet theory is this: Interviews for experienced folks are more
meant to keep them in their current companies rather than to filter the
incoming new ones. This acts as a gentleman's agreement between the giant
companies to keep their own talent pool semi-intact :) /s.
I mean imagine the amount of extra work someone has to put in to start
interviewing. Take a break from your current work, Prepare your CS basics
again, Prepare from interview question dumps online, read up / analyze
everything the new company is doing and form reasonable opinions, practice
white board coding / coding without an IDE, allocate time for any homework
projects given, Psych yourself up if you are introverted etc. The alternative
is just to stay in the current role and hope stuff gets better. 90% of the
folks I know choose the alternative over the dehumanizing process of
interviews. So many folks I know who are good engineers get chewed up in
interviews (both in my current company and elsewhere) because the process is
pretty cooked. We are trying to see how this can be improved, but yeah - I
just keep going back to my pet theory :)
I do agree with one of the commenters here though: at some point your resume
should speak for itself. These are the kind of problems I would like LinkedIn
to be solving instead of finding ways to spam me with recruiter deluge.
------
mentat
It's interesting the amount of hate and either rumors of bad experience or bad
experiences directly. I interviewed for an SRE position last September and
they were clearly trying very hard to make it a good experience no matter the
outcome. I flubbed a couple of questions and they didn't make an offer, but
the impression that they cared about my experience as an interviewee lasted. I
wonder why my experience was so dramatically different from many here.
------
TheMagicHorsey
I've been invited to interview at Google three times. And they've declined to
hire me three times. The last time I interviewed there the quality of the
people that interviewed me was much lower than the earlier two times. I was
still rejected, but I felt much better about working somewhere else.
I'm sure Google is still a great place to work, but its reminding me more and
more of 1999 Microsoft. In fact the similarity is spooky.
------
exacube
Firstly, you're not entitled to any job you want just because you wrote
Homebrew. If you accepted an interview with Google, then you accepted the fact
that Google will judge you based on your problem solving skills, just like
every other person was asked.
Secondly, I don't think this is a hard interview question; it's certainly
fair. Did you expect to be asked knowledge-based questions that Google knows
you're already good at? Questions specifically geared towards you? Or
questions where Google can watch you solve a problem and be comfortable with
the fact that you are able to solve coding problems? Did you think Google
would hire you to write Homebrew? Or solve problems on teams Google has?
I think this person is just being unreasonable.
~~~
ciupicri
If 90% of Google's engineers use his software, it's reasonable to expect to be
hired for continuing to work on that software.
~~~
exacube
That may be somewhat true, depending on how crucial Homebrew is to Google. But
90% of Googlers don't rely on Brew for work.
It is just a figure he made up to make a point about how popular his software
is. Using his software outside the context of our jobs is no grounds to justify
a hire. He should go through the same interview process as 90% (much higher
than that, actually) of Googlers.
~~~
ciupicri
My impression was that he was talking about usage for business, not personal
purposes. As for the popularity, even if the 90% figure is made up, I was only
trying to explain/justify his point of view.
------
plg
just playing devil's advocate, but how do we know the reason for the no-hire
was the reason the OP thinks it was?
~~~
hebdo
It seems that almost everyone here knows better than Google how to hire
employees for Google. Given that, you can see why it is trivial to see through
hours-long hiring committee decisions in just two seconds.
Edit: as pointed out by others, the hiring decision probably does not take a
few hours, but under an hour. Still, the point is valid.
~~~
nilkn
I'm pretty sure the hiring committees do not actually deliberate for hours on
one candidate. Maybe in very rare cases.
~~~
DannyBee
The most i've deliberated was 45-50 minutes on a single candidate in HC
itself. (often hours are spent reading packets and preparing notes before HC)
It's not because candidates aren't worth it, it's that if you can't come to
consensus in that time period, you are unlikely to be able to :)
------
myth_buster
Well, this thread escalated quickly! Am I wrong in my understanding that when
a company rejects they don't specify why and hence "rejected due to failure to
invert binary tree" may be a guess here?
------
adsr
"I never commit to memory anything that can easily be looked up in a book."
Albert Einstein
It seems like this tests a) how much you want to work at Google and b) how
good you are at memorizing things.
------
zyxley
A pair of good articles on just this kind of thing:
[http://www.unlimitednovelty.com/2011/12/can-you-solve-
this-p...](http://www.unlimitednovelty.com/2011/12/can-you-solve-this-problem-
for-me-on.html) [http://sockpuppet.org/blog/2015/03/06/the-hiring-
post/](http://sockpuppet.org/blog/2015/03/06/the-hiring-post/)
------
happinessis
Nowhere near 90% of Google engineers even use Apple, let alone use this
person's software for it.
~~~
grahamar
As Google bragged in 2013 that it managed a Mac fleet of over 40k and with a
workforce of 55,419 in Q1 2015 (not just engineers, 2013 numbers were about
10k engineers), that's 72%+ of Google's workforce using Macs.
Homebrew is at least one of the best package managers for Mac. I would be very
surprised if it was not at least near the 90% mark...
------
cheradenine01
Don't we have Universities for this?
I mean - what's the point of spending 3-4 years in an academic environment
that tests and grades students on exactly how good they are at the time - only
to perform the whole process over again some number of years down the road,
with fuzzier results?
Seems dumb to me.
I've worked with people who could likely do very well on algorithmic tasks -
(of which most software projects require precisely zero) - but actually
_deliver_ something of use... not so much.
------
overgard
When I used to interview people (I wasn't a manager, but I was senior enough
to be entrusted to the role), I'd just ask about projects they had placed on
their resume (to get a feel for their contributions) and then the rest of the
interview would be focused on what the job was and how that matched with their
career goals, why they thought they'd want the job, that sort of thing. The
latter part was a bit harder because people are naturally defensive during an
interview, so they can't be like "well I want an entry level web developer job
so I can parlay this into something better in two years" (which is, IMO,
totally an acceptable answer), but you can generally politely get the idea.
Maybe I'm weird, but I just sort of think interviews should be more about
determining fit than giving someone a lie detector test.
I get the puzzlers or whatever for a phone screens (quickly weed out people
that are obviously unqualified), or if you're hiring someone junior who
doesn't have work experience, but if you're at the point where you're bringing
someone in you probably think they're minimally qualified, so it should really
be about determining if goals are aligned IMO.
------
mattbillenstein
I like the spirit of this, but he may well have not been hired for a variety
of other reasons besides this -- including how he attempted to solve the
problem or how he handled not being able to solve it on a whiteboard in an
interview.
I hate whiteboard programming questions and I don't give them when I interview
someone - I give them a laptop with 10 different languages on it, and some
data to munge -- I think it's a pretty decent thing for both parties.
------
russtrotter
Is it possible that technical impression was not the sole reason somebody
isn't hired?
------
benol
I, for one, agree with this kind of hiring process.
From my own experience - people that do well in such interviews are good
generalists. On their own they will start discussing performance improvements
and ways to parallelize the solution, it's a pleasure to have such an
interview.
It's about enjoying problem solving and being willing to keep your brain fit.
It has nothing to do with memorizing solutions to some existing set of problems.
------
coldcode
I've always wondered if during an interview with Google you answered a
question with "I'd Google it" what the reaction would be.
~~~
CydeWeys
The response would be "I want you to come up with the answer yourself."
------
bkessler100
These interviews are biased towards new grads ...
GEORGE: You know what I do at the Yankees, when one of these old guys is
breathing down my neck?
ELAINE: What?
GEORGE: You schedule a late meeting.
------
mildbow
This thread is weird.
The people vilifying Max or saying "duh you didn't get hired, Google requires
awesome people" seem to have a totally warped sense of exactly what Google
engineers do day to day.
Blows my mind that there are so many people defending this (well-known and
pretty much taken as a trade-off) lapse in the Google hiring algo and instead
making it seem like Max's fault.
------
fallat
Google really does only hire individuals who are strong in theory.
Maybe we are jealous. We wish we had the brains of the people getting into
Google. I can say personally that I envy these people.
The fact that they don't mind being treated as numbers maybe says something
about these people too. They are cold. Their ego must be pretty big too if
they make it into Google.
Someone should do a study...hehe.
------
rbanffy
I don't think loudly (and impolitely) complaining about being rejected at a
job application can, ever, be the smart thing to do.
~~~
stuxnet79
It is not. IMO it's a pretty ballsy and idiotic thing to do. That's partly why
I am not too active on Twitter. I have a lot of controversial, unpopular
opinions that I wouldn't want a potential employer to get a whiff of.
------
greendesk
I remember a story from college graduation. A friend contacted an alumnus of
the university who worked in the semiconductor business. One of the questions
he asked the alumnus was: "How do you select a great employee?"
The alumnus responded that it is exceptionally tough, and shared an anecdote
about one of his best hires.
The alumnus wanted to test the interviewee's knowledge in different areas. He
asked a question on diodes - it might not have been diodes, but for the sake
of the story let's stick to diodes. The interviewee replied: "Hold on, I am
not one to know about it."
I have not worked at Google and I do not think I'd pass its interview process.
It is unlikely that I would be diligent enough to make Homebrew. Nevertheless,
I am inclined to the idea that being knowledgeable in all tested areas would
not reveal the personal fit necessary to make a great team.
------
doczoidberg
It seems that Google wants code monkeys instead of creative software
engineers. This is a common problem at big companies; IMO it is one of the
reasons they get stuck on innovation. Of course, the interviewers also don't
want to hire people who are smarter than they are, for the sake of their own
careers.
------
grahamar
As someone who struggles to learn by rote as opposed to learning by practical
means and has been both hired and declined by the Google recruitment process.
I can't help but agree with his sentiment.
The recruitment process (at least for experienced engineers) should be little
more than "can I work with this person". The 6-month probationary period that
follows the hiring process should be used for "can this person do the job
well". But that's just my experience, and it seems to have worked well.
Regarding the same academic questions everybody gets asked in every
development interview, I feel Einstein said it best with "[I do not] carry
such information in my mind since it is readily available in books. ...The
value of a college education is not the learning of many facts but the
training of the mind to think."
------
tlogan
Google is about monoculture (a certain personality type) - and from their
business perspective that approach seems to work. Why change it?
If they try to invent something new or enter a different market they might
need a different type of people, but as of now the ads business is a cash cow
and they would be crazy to try to change it.
------
dionidium
Without commenting on whether this is a good interviewing strategy, surely the
point is to sacrifice some potential good hires in favor of definite good
hires. In other words, you might be able to write pretty good code even if you
_can 't_ solve problems on a whiteboard, but, given a choice, why wouldn't we
just choose the people who can do that, _too_?
I think the _stated_ philosophy of interviews like this one is that a false
positive is worse than a false negative. Every single one of the responses to
that tweet either misses that point or sounds like little more than
defensiveness in the wake of a bruised ego.
You might _disagree_ with that interviewing strategy, but you're not
addressing it directly.
~~~
stuxnet79
More or less agree with this. I'm also a Google reject (didn't make it past
the phone interview). I didn't take the rejection personally. I don't see how
the interview process could be drastically improved. They get a lot of
applications and they need some way to filter - there is a standard and it has
to be met. With the sheer amount of applications that Google gets it's a
virtual guarantee that there will be a subset of people taking the piss
regardless of what the interview process is like.
I don't doubt that the engineers who manage to jump through all those hoops
are sensational. Personally in the end it just dawned on me that I didn't want
to work at Google that badly.
The whole Twitter exchange is a pitiful sour grapes circle jerk, and I'm
surprised that it's provoked such a massive response.
------
ajhc112
This guy sounds like a tool. Sure, he's accomplished, but his website and
linkedin are dripping with self aggrandization. "Splendid Chap" \-- cringe. My
hunch is that they didn't hire him because he didn't seem like a cultural fit.
------
timtas
How can we see if this technique works? There are two methods of trying to
know something: deduction and induction.
First a little deduction. Let's try to be explicit about the theory behind
this technique.
It's safe to assume that the job will NOT consist mainly of cranking out
binary tree inversions on whiteboards while being watched over. So obviously
we're hoping to make a correlation with something else. Assuming the candidate
was not tipped off and learned this particular puzzle, perhaps we are
correlating to an ability to rapidly create novel solutions of long-solved
algorithms without reference tools.
But is that what the new hire will be doing? Probably not.
We could continue down this path, identifying ever more removed correlations
until we get to something that the job actually demands. This probably
involves solving hard problems like naming things. [1] But by now our theory
stands on pretty thin ice indeed.
In any case, all of this deduction is theory making. It's not knowledge until
we attempt to falsify [2] it via induction. The human mind constantly induces
hoping to verify our deductions. We reason, observe, conclude and repeat.
We're good enough at it to survive, but that's about it. Lucky for us, science
came along. Today's technical hiring is at best alchemy.
An interesting company called TripleByte [3] is trying to apply induction
(first for YC companies). They specifically shun whiteboard coding and
puzzle-solving tests in general. I will be interested to see how they fare and
whether their learnings are adopted more broadly.
[1]
[http://martinfowler.com/bliki/TwoHardThings.html](http://martinfowler.com/bliki/TwoHardThings.html)
[2]
[http://en.wikipedia.org/wiki/Falsification](http://en.wikipedia.org/wiki/Falsification)
[3]
[http://techcrunch.com/2015/05/07/triplebyte/](http://techcrunch.com/2015/05/07/triplebyte/)
------
skizm
It is pretty well known there are a lot of false negatives in the hiring
process since it is so much worse to make a bad hire than it is to not make a
good hire. Sounds bad and it is, but no one has a better solution than try
again in a year.
------
icando9
IMHO, I don't think it requires any practice to be able to invert the binary
tree. It is so trivial that it only requires a very basic level of programming
skills. I agree whiteboard is generally broken, but for this particular case,
I don't think Google is doing anything wrong. Look at it another way: if a
company hires people based on reputation instead of the ability to do the
actual work, I don't think it will survive. In this particular case, you just
didn't show your ability to do the actual work, that's it. I am glad to see
that Google prefers the ability to do actual work over reputation.
------
philip1209
What's the due diligence like on the hiring side during an acquihire by
Google?
~~~
dguaraglia
In my very limited experience? None. They just trust the company they acquired
to have filtered you properly. They do ask for references (like academic
records and so on) but that was about it.
Without discussing too many details, I believe the issue with Google's
recruiting process is it was designed when the company was smaller and it
follows the philosophy that anyone that goes through the hiring process should
be ready to be thrown into any of the many Google projects and be able to
function immediately. That's not strictly true anymore.
You have some divisions that are extremely hardcore or require very good
knowledge of a particular field (think Google Cloud Platform vs. Android
Kernel vs. Chrome vs. Search, all completely disparate projects), but there's
also work for people that don't need to hold a PhD from MIT (think front-end
development.)
~~~
npkarnik
Hmm, it depends a lot on the size/type of company and reason for acquisition.
If it's closer to a acqui-hire where the employees of the "acquired" company
cease development on whatever they were doing and eventually just get staffed
on a Google project, then they will MOST LIKELY do technical due diligence on
each team member. It's common for only part of the team to get an offer to
join.
------
lmilcin
A good engineer you didn't hire is not much of a cost to the company (other
than resources wasted on the hiring process and perhaps some bad publicity).
On the other hand, a bad engineer will stay at the company, lower standards,
damage morale and set a bad precedent for other engineers.
Being an engineer myself, I feel much more motivated working in an environment
where you can just assume, even before meeting, that the other person is
intelligent and motivated. You trust the hiring process to filter everybody
else so you don't have to subconsciously distrust every person you meet.
This comes at the cost of situations like that.
------
Frenchiie
I don't want to be an ass, but how do you not know how to invert a tree?
Anyone who knows how to write a tree and traverse it should be able to do
this. If you ran out of time coding it, then that's different.
~~~
jbrukh
Not even. If you know what a tree is, and you've written a couple of recursive
problems on trees in your life, then you know most of them are approximately
5-6 lines of code.
If you're spending 45 minutes writing 5 lines of code, it is not definitive,
but certainly a red flag.
~~~
gaustin
Nobody in this thread has even been able to define what inverting a tree
means. (Reversing or mirroring? Sure.) My search for how to invert a tree led
to a bunch of fairly hairy academic papers.
If you have a definition, please elucidate.
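(For what it's worth, the most common reading of "invert a binary tree" is "mirror it": swap every node's left and right children, recursively. A minimal sketch in Python; the `Node` class here is a made-up example, not anything from the actual interview:)

```python
class Node:
    """Toy binary tree node (illustrative only)."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Mirror the tree in place: swap left/right children at every node."""
    if node is None:
        return None
    # Swap the subtrees, inverting each one on the way down.
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```

Under that reading it's a handful of lines; whether that makes it a fair whiteboard question is exactly the argument in this thread.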
------
randomsoul
An Indian eCommerce company wanted to test my mathematics while interviewing
me for a VP of Marketing role. I had to tell them I won't fit into their
company culture, let's not waste time further.
------
dj_doh
My 2 cents here - I respectfully declined to answer any JavaScript/CSS
questions prompted by a recruiter.
Being a front-end guy, I proactively request the hiring manager or a senior
front-end engineer from the team.
------
philippnagel
What's the practical point of performing such an operation?
~~~
x3n0ph3n3
It's similar to reversing a sorted array, though I can't think of a reason I'd
ever do it.
------
beliu
We're trying a different approach at Sourcegraph. In addition to looking at a
candidate's prior work in open source if available, we ask them to complete
tasks that approximate the job as closely as possible (i.e., coding on a
computer): [https://sourcegraph.com/blog/programming-
interview](https://sourcegraph.com/blog/programming-interview)
Would love to hear people's thoughts and feedback!
~~~
oxryly1
Yours sounds like an approach that measures how well someone codes in a
vacuum, instead of how they operate on a team. It very much skews the
results...
Not to mention for a qualified professional candidate, it feels an awful lot
like you're asking them to work for free.
~~~
beliu
In addition to asking them to write some code, we also have each member of the
team interview them onsite to get a sense of how they'd interact as a member
of the team. The challenge does take a few hours, which is longer than a
typical phone screen or single onsite interview, but because it lets us focus
on getting to know the person onsite rather than go through a gauntlet of
whiteboarding interviews, we think it actually saves time for everyone and is
a win-win. Obviously, every candidate is different; we think of this not as a
rigid template, but a better default option than whiteboard interviews. Thanks
for your thoughts!
------
oh_sigh
The problem with google interviews and tech interviews in general is that it
is almost impossible to capture what makes a successful candidate in a couple
of mini interviews. They don't even pretend that what you do in the interview
is what you will be doing in an actual job there. Most of a developer's time
is spent in meetings, understanding their problem domain, writing documents,
or reviewing other developers' documents.
------
utuxia
I don't even bother responding to any of the Big 4 when they reach out every
few weeks. They all ask these ridiculous questions.
------
mparramon
Wrote a blog post about exactly this problem: optimizing interviews for fancy
algorithm solving, when the position's daily work is nothing like that:
[http://www.developingandstuff.com/2015/05/why-i-dont-do-
codi...](http://www.developingandstuff.com/2015/05/why-i-dont-do-coding-
tests.html)
------
gjc
Can someone please help solve the problem? I have created a bounty:
[https://www.bountysource.com/issues/21606252-please-solve-
bi...](https://www.bountysource.com/issues/21606252-please-solve-binary-tree-
inversion-problem)
------
k4rtik
Anybody noticed an istx25 stalking each tweet-job suggestion and ultimately
getting busted[1]? :D
[1]:
[https://twitter.com/markmcerqueira/status/608914346706657280](https://twitter.com/markmcerqueira/status/608914346706657280)
------
treffer
Doesn't this contradict an interview with Senior Vice President of People
Operations
[http://www.wired.com/2015/04/hire-like-
google/](http://www.wired.com/2015/04/hire-like-google/)
------
lnkmails
This was years ago, but a Google interviewer asked me to make a complete copy
of a directed graph (I had to do cycle detection). I was given 45 mins total
and I failed. I cursed myself for not being good enough. I haven't forgotten
it yet.
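(For anyone curious, the usual trick for that question is to carry a map from original nodes to their copies, so each node is cloned exactly once and cycles terminate instead of recursing forever. A hedged sketch in Python; `GraphNode` is a made-up class for illustration:)

```python
class GraphNode:
    """Toy directed-graph node (illustrative only)."""
    def __init__(self, value):
        self.value = value
        self.neighbors = []

def clone_graph(node, copies=None):
    """Deep-copy the graph reachable from node, preserving any cycles."""
    if node is None:
        return None
    if copies is None:
        copies = {}
    if node in copies:
        # Already copied: reuse the copy, which closes any cycle.
        return copies[node]
    copy = GraphNode(node.value)
    copies[node] = copy  # register *before* recursing into neighbors
    copy.neighbors = [clone_graph(n, copies) for n in node.neighbors]
    return copy
```

The key detail is registering the copy in the map before visiting its neighbors; do it after and a cycle never terminates.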
------
zippy786
Google, if you really have smart people working for you, then make a new
problem-solving question for each person you interview! The current interview
standards highly promote memorizing stuff, which is pathetic.
------
atmosx
I wonder what an interview process at Google with Linus Torvalds or Theo de
Raadt would be like. I take for granted that the interviewer would not have a
clue about their accomplishments.
Would they manage to pass the process?
------
ised
Personally, I'd prefer to work at a company where 90% used pkgsrc.
------
z3t4
At least he didn't get put off because his CV had the wrong font. It was
actually a technical question. They probably have binary trees that need to
be reversed everywhere in Google :P
------
billconan
the tech interviews are so broken.
who does dynamic programming on a white board during a work day?
it's like you are recruiting an army, but testing people's gymnastic skills.
they should test street fight skills.
------
novaleaf
I don't know, inverting a binary tree seems like a pretty easy task. If I were
hiring a senior developer (to code as a primary task) I think it's a
reasonable FizzBuzz.
------
aayala
Hiring/Interview process is broken and not only in Google
------
junkilo
Was hoping to gain some insight to improve interview cycles here but instead
just have agita.
Jobs come and go, great work is always great work, but friends are what I
remember most.
------
SQL2219
yep, hiring is broken.
[https://news.ycombinator.com/item?id=9689232](https://news.ycombinator.com/item?id=9689232)
------
eyeareque
Let's hope he makes a product that google wants at some point and then pays
him millions (or more) for it. (like whatsapp and facebook).
------
rconti
He sure has an awful lot of his self-worth wrapped up in whether he gets a job
offer from one specific company. It reminds me of being interested in a
particular girl in high school.
At a certain point, you learn that there are other jobs out there, and maybe
the one with the biggest name isn't the best one. I certainly wouldn't be the
least bit offended. Not when the market is flush with high-paying-jobs-a-
plenty, particularly for someone with his background.
------
skorecky
Probably could have just Googled the answer.
------
elif
I took my google interview as a gift from them to me. It showed me that I was
mistakenly interviewing to become a cog in a terry gilliam-esque corporate
machine, and that made me think hard about my path in life, and what i really
wanted out of a career.
------
pducks32
The best part is that someone important at Google say this and no matter how
they respond it's just funny.
------
hectorxp
change the license, add a clause saying google can't use it, and sue them
tomorrow
~~~
tbg
I know this was supposed to be a joke, but he can't retroactively change the
license for the current released version. Any license changes will only apply
to future releases.
------
sciencesama
i didnt understand can some one explain TLDR ?
~~~
ljk
guy made Homebrew, an OS X package manager; interviewed at Google; guy was
rejected; guy ranted on Twitter
------
jowiar
Last week I walked away from a Google offer that included a 70% raise. A large
portion of this was a rather dehumanizing interview process, along with the
realization that the process doesn't select for people I want to work with,
and weeds out most people who I do. I managed to do just enough in my
interviews to squeak through, but in doing so realized that it wasn't for me.
Walking through the cafeteria made me feel like I was back in CMU CS again, in
a bad way.
~~~
sshumaker
I think you may have left with the wrong impression. If you ask Googlers or
Xooglers alike, most agree that the people here are actually the best thing
about Google. Like anywhere else, there are some bad apples, but compared to
most other places the people here are on average more talented, nicer human
beings and more helpful. Certainly compared to your typical startup or other
BigCo.
In my nearly 3 years here, that is my experience as well. I've also spoken to
many engineers who have left who lamented the quality of their fellow
engineering talent at their supposedly 'hot' startup, compared to their former
team at Google. Or having to deal with way more unprofessional (or crazy)
management, prima-donna teammates, etc.
~~~
jowiar
As far as my specific interviewers are concerned, I liked 7 of the 8 as
individuals, which is great. What I didn't like was the company felt like a
monoculture. Same schools, same majors, same pre-education background.
Everybody looks the same, dresses the same, etc.
The process, on the other hand. Ugh. I have zero respect for Google as a
company after that.
It starts with the standard phone screen/day-long onsite/hazing ritual. Then
come phone calls with teams, and teams saying "yeah, we want you", and me
saying "sure, sounds good". The recruiter basically said "time for the higher-
ups to rubber stamp this, and here's the $$ to expect". Someone up the chain
said "well, we're not so sure, let's haul him back here for another round of
onsite interviews". Which I did, it went through the same process, and the
response was "well, maybe not you for this role, but lets set you up with more
teams".
All of this finally goes through, and I get an offer. Then there's a
negotiation that goes something like:
Me: I'd like 4 weeks to think about it while a couple of other applications
come back (keeping in mind they've dragged this on about 2 months longer than
it needed to be).
Google: You get 2.
Me: I'm at a conference week #2, but I'll do what i can to get a decision to
you Friday.
<Fast forward to monday of week #2>
Google: Have you made up your mind yet?
Me: I'd like to look at my options, and I'll get back to you COB Friday
Google: It's really important that we hear beginning of day Friday
<Fast forward to wednesday>
Google: Have you made up your mind yet?
Me: No. One of the options I was looking at is now off the table.
Google: So WTF are you waiting for.
Me: The other options
<Fast forward to Thursday>
<Phone rings while I'm at the conference>
Me: You can't pay me enough to deal with this.
Maybe no individual involved is a prima-donna, but the ego showed by the
company as a whole through the recruitment process is stunning. It felt like I
was dealing with the star quarterback who never considered that when they
asked someone on a date, they might get turned down.
~~~
hartator
What was their wording for "WTF are you waiting for"? This story felt like
abuse.
~~~
jowiar
The exact exchange:
Google: "Just checking in to see if you have an update for me? Can we set a
time to speak on Friday?"
Me: "I decided to pass on [OTHER OPPORTUNITY]. I let my manager know about the
offer last Thursday. I am out at a conference this week, and we're going to
discuss options on Friday.
Does end-of-the-day Friday work for you? 5? 6?"
Google: "I need to speak to you in the morning on Friday.
if you are passing up [OTHER OPPORTUNITY, misspelled] why are we waiting till
Friday. Can we talk now?"
------
michaelvkpdx
As goes the recruitment, so goes the employment.
If you wanted to work for an organization where everyone likes to show off
their skills to one another in the interviews, you'd have gotten the job, and
you'd be one of them.
The best interviews I find are like a first day of work (but unpaid). Your
experience and skills are established by resume and portfolio. The interview
shows whether you can work with the team and wrap your head around the org's
problems. If you're having to show off- well, that's how your work will be,
too.
Google's interviews convinced me, years ago, that there's no way I'd ever want
to work there. And that feeling hasn't changed one bit. Much like FB, it's an
org whose coding needs are really pretty trivial and the real work was done
and finished a long time ago (but there's plenty of need for debugging, egos
especially). If you want to work there and surf the gravy train, cool for you.
~~~
trustfundbaby
> Much like FB, it's an org whose coding needs are really pretty trivial
I was with you till riiiiiight there. Google isn't just a search company
anymore they're working on lots of very interesting _non-trivial_ things all
around the company.
~~~
discardorama
> Google isn't just a search company anymore they're working on lots of very
> interesting non-trivial things all around the company.
Right. And I really doubt that people like Andrew Ng or Geoff Hinton or
Sebastian Thrun were asked to invert binary trees....
------
comrade1
And Google engineers aren't the shit either. Their Java libraries are large
and unwieldy. They would learn something by taking an example from the Apache
libraries - clean and straightforward, focused and small. I hate working with
Google code.
------
sagivo
I had very similar experience. This was one of the main reasons i decided to
create this:
[https://github.com/sagivo/algorithms](https://github.com/sagivo/algorithms)
------
Dewie3
Why is his profile picture the _attractions_ road sign as seen in Scandinavia?
:-)
~~~
jd3
Perhaps he's a fan of Susan Kare :-)
~~~
mymacbook
I don't get the message you're trying to send by this reply... and yes I know
who Susan Kare is.
~~~
jd3
Susan Kare designed the icon on Apple's command key. [0]
[0]:
[http://en.wikipedia.org/wiki/Command_key#Origin_of_the_symbo...](http://en.wikipedia.org/wiki/Command_key#Origin_of_the_symbol)
------
supergirl
big ego, looks like they made the right choice then
------
kzhahou
Max should add code in Homebrew to check if hostname contains
"corp.google.com", and exit with a message that Homebrew can't run inside
Google.
Petty, but fuck em. They don't want him, they shouldn't get the fruit of his
work.
------
smtddr
Disclaimer: GoogleFanBoy here, feel free to ignore or downvote.
So, I've interviewed with Google twice. Once was 3 years ago, the other was
like 2 weeks ago. They contacted me. The way I see Google's interviews is like
refereeing in Football _(soccer or "Futbol")_. Sure, you need a certain amount
of skill to play in the World Cup, but whether you win or lose can, and often
enough does, come down to a controversial referee call. You end up losing out
to a team that did a handball -
[https://www.youtube.com/watch?v=-ccNkksrfls](https://www.youtube.com/watch?v=-ccNkksrfls)
, but that's just how it is. What makes me like watching soccer so much is the
same thing that excites me about Google's interview process. Yes, there is
heartbreak and anger. Just like World cup fans get angry when their team loses
because they were denied a point for an off-sides call even when the player
was nowhere near off-sides.
\--- Is Google's interview process fair? Nope.
\--- Would I subject myself to their futbol-referee style of judging
candidates again? You bet.
\--- Do I think they should make their process more fair? Nope, let the drama
and __justified__ rant posts continue. Just like I want the unfairness in
futbol to stay as is. I was one of the people against putting the microchip
inside the ball to know for sure if it crosses the goal line. I want the drama
of a ref having to call it and sometimes getting it wrong.
I know people, especially on HN, love reliable & repeatable. I do too, except
when it comes to dealing with humans.
------
aaron695
Great, another self-entitled engineer who thinks Google owes them a job
because they are sooooo good (which, maybe they are good at some things).
But how about: Google didn't give them a job because this is how they handle
failure, embarrassing an entire company on Twitter to punish them.
| {
"pile_set_name": "HackerNews"
} |
Mock your HTTP responses to test your REST API - yotsumi
http://www.mocky.io
======
kanzure
Also, python people might be interested in
<https://github.com/gabrielfalcao/HTTPretty> or (bias disclaimer) my
serializer on top of requests+httpretty <https://github.com/kanzure/python-
requestions> for the httpetrified decorator. It loads and mocks an expected
response from a json file in your tests/.
There was a service called requests.in or something that acted like httpbin,
except it gave you a unique url to query against to view multiple requests
over a session. Does anyone know where that went?
~~~
johns
requestb.in
------
untog
It's a nice idea, but relying on a remote service for testing makes me worry.
I tend to mock HTTP responses locally, so that the tests can run when there
isn't even an internet connection available.
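A minimal stdlib-only sketch of that local approach: spin up a throwaway HTTP server on loopback that returns a fixed JSON body, so the test exercises real HTTP code paths without any internet connection. The endpoint path and response shape are illustrative.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockHandler(BaseHTTPRequestHandler):
    """Answers every GET with a fixed JSON body."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port; only loopback traffic is involved.
server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/v1/ping" % server.server_port
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
```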
~~~
johns
What do you use for your local testing?
~~~
fein
Telnet when I'm lazy or its quick, curl when I actually want to write a full
harness.
------
simons
There's a post here: [http://artemave.github.io/2012/05/27/stub-like-a-
surgeon-spy...](http://artemave.github.io/2012/05/27/stub-like-a-surgeon-spy-
like-james-bond-with-rest-assured/) that talks about using a similar service
(the BBC's REST-assured <https://github.com/BBC/REST-assured>) to aid in BDD
using cucumber.
------
rschmitty
Why not use SinonJS? <http://sinonjs.org/>
Same ability to mock responses and errors, but everything is local. Check your
responses into git and every developer is testing the same stuff, no reliance
on a 3rd party
Makes for lightning fast automatic background testing.
------
memoryfault
Would someone provide an example on how this tool could be used to test a REST
API? I think I'm missing something here. I'm not seeing how a fake response
endpoint lets me test my REST API (shouldn't my test code invoke the API and
validate that the real response was correct?)
~~~
sanderjd
It seems to me that it isn't for testing a REST API but rather for testing
something that _depends_ on one without having to deal with real integration
issues.
~~~
yotsumi
Yes, you explain better than me ;)
------
rco8786
Been working on something similar for local use by running nodejs to both
serve static files and mock API responses.
<https://github.com/rco8786/apimok>
------
wilig
For those looking for a local alternative have a look at
<http://wilig.github.io/mockity/>
Full disclosure: I'm the author.
------
donatj
I use <http://frisbyjs.com/> FrisbyJS for most of my front end API testing
needs.
------
gulbrandr
<http://www.hurl.it/>
I recommend this service for this kind of testing.
~~~
quarterto
Hurl is requests. Mocky is responses.
~~~
quarterto
In fact, here is Mocky serving a response to Hurl:
[http://www.hurl.it/hurls/18e4da3bfc0c2159abd1c8e769915c360a8...](http://www.hurl.it/hurls/18e4da3bfc0c2159abd1c8e769915c360a8de8ce/6dc5f86a8ac115dc0870e88cf260d5b7dcb49c15)
~~~
farmdawgnation
If a hurl falls in a forest, and only a mocky is around to hear it, did it
happen?
------
bruth
Nice idea. Are these stored as gists under my account? Can I choose to modify
an existing response so it's versioned?
------
ericmoritz
this isn't any better than using a live server for testing.
Build a good client library for your applications to use, mock the client
library and don't worry about tests failing because of availability problems.
~~~
johns
If the live server has side effects when making the call (send an email,
charge a card, etc) and you just want to test against the response
headers/body, it can be very useful. A local mocking library is also good for
that, but for quicker tests this is nice.
------
rajanikanthr
I use a mocking framework (Moq for .NET) to mock my service responses, and
various XML responses I save in test XML files. Anyway, I will try to use it
to test over the network rather than with local mocking.
------
gstroup
I prefer to run my own local test server to return mock responses. I built
this little project, that you can install using NPM:
<https://npmjs.org/package/apimocker> It's intended for sandbox development as
well as automated tests. There's no UI, but you can return whatever data you
want. The features are pretty basic right now, but it works well for most
tests, and it's easily configured on the fly.
------
city41
It's a neat idea but I can't imagine I'd ever actually use this for real
testing. Relying on a third party server for your tests can be a problem. We
also have thousands of tests that rely on mocked REST responses, setting them
up with Mocky would be a ton of work.
If Mocky could be ran onsite and had a nice API for easily generating mock
responses, then I think it would be more useful.
~~~
yotsumi
It's an open source project, created 2 days ago. All is possible, this website
is just a proof of concept. And you can fork the project to run it locally.
~~~
city41
Yeah I realize that. I hope I didn't come across too harsh. It is a good idea,
and I'd like to see it grow some more.
------
sinkingfish
I just launched something almost exactly the same a fortnight ago -
e.ndpoint.com - POST/PUT/DELETE support coming soon.
~~~
johns
The URL scheme you're using makes it really easy to view everyone else's
mocks.
~~~
sinkingfish
Yea, I'm not looking to obfuscate. I'm planning on introducing user accounts
whereby people can create, save, edit, and alias their mocks, bypassing that
issue. Anonymous mocks will simply be sequential base62.
------
tjpd
I've heard good things about <http://apiary.io> on this front as well...
~~~
nyam
i'm using it on first project and it's very nice. they let you export your
complete api definition to apiary.apib file which can be parsed with their
github.com/apiaryio/blueprint-parser into json and used with your custom
server locally. it's also nice for synchronizing between devs, when added to
vcs ...
------
aespinoza
This is very cool. It is kind of a Fiddler on the web. It is interesting that
I saw something similar, but with Fiddler, this morning:
[http://www.devcurry.com/2013/05/testing-crud-operations-
in-a...](http://www.devcurry.com/2013/05/testing-crud-operations-in-aspnet-
web_3.html)
------
jnettome
I've sent a pull request to add portuguese brazilian translation. I hope it
helps! Scala is really cool :)
------
alpb
Nice project! My suggestion would be adding JSON editor or JSON syntax
validator for JSON responses saved.
~~~
yotsumi
You already have a light JSON editor. Syntax validation is a very good idea,
thanks!
------
austengary
In case anyone was wondering about licensing:
"[1]DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE Version 2, December 2004
Copyright (C) 2004 Sam Hocevar [email protected]"
[1] <https://github.com/studiodev/Mocky>
------
nym
You can't mock HTTP responses, the responses mock you.
[http://www.flickr.com/photos/girliemac/6508102407/in/set-721...](http://www.flickr.com/photos/girliemac/6508102407/in/set-72157628409467125/)
------
yotsumi
@misframer The app is updated in real time, so there can be some sporadic
errors.
quarterto
Welcome to Hacker News. We don't have @replies here. We do, however, have nice
friendly reply buttons. Great app, by the way!
------
antonpug
How exactly does this work? A short little tutorial would help ^.^
------
meryn
Wouldn't it be better if it would just say "mock your HTTP responses to test
your HTTP client", or "to test your HTTP client code"?
The service makes a lot of sense otherwise.
------
thekingshorses
This failed :(
<div>what</div><div>Not</div>this<ul id="list" data-list="f"><li
class="first">one</li></ul>
------
PuercoPop
Nice, Emacs has a mode for doing the same too:
<https://github.com/pashky/restclient.el>
~~~
yotsumi
Emacs has a REST client, but this app is like a REST server: Mocky serves an
HTTP response, whereas RestClient sends an HTTP request.
------
guyht
I have wanted a service like this forever! Thank you.
------
misframer
I'm occasionally getting hit with
"HTTP/1.1 500 Internal Server Error"
when that's not what I want.
------
rcoh
Ironically, I'm seeing 500s. Great for testing my reliability in the face of
errors!
~~~
yotsumi
Yes, sorry for that, I didn't expect such traffic from HN. Things will be
stable in a few hours :)
------
rross
E.endpoint.com
A very similar offering released a week ago on github.
------
jstoja
That's cool !
Senate hearing on Bitcoin [video] - sinak
http://www.hsgac.senate.gov/hearings/beyond-silk-road-potential-risks-threats-and-promises-of-virtual-currencies
======
eof
I've been watching since about 20 minutes in; it is overwhelmingly positive.
There seem to be 2 'groups' of testimonies, the first coming from law
enforcement and government bodies; the overwhelming consensus was that
existing laws are satisfactory to prosecute bad actors using bitcoin.
The second group is still talking; again extremely positive.. analogies to
bitcoin being similar to the internet in the mid-nineties; it's scary but
overall good. The lawyer from the bitcoin foundation is really driving home
the idea that the main problem right now is that bankers are too scared to
give bitcoin business bank accounts.
I see overall nothing negative; and the chairman seems extremely reasonable
and open to the notion of bitcoin and brought up the bitcoin-now <-> internet-
in-the-nineties analogy himself.
No one is asking for stricter laws, everyone is asking for clarity. Much, much
more positive than I expected; only the secret service guy seemed to be very
cautious; he never said the word bitcoin and mostly spent his time saying the
secret service was awesome.
~~~
Varcht
I think a possible reason they're overwhelmingly positive is because the FBI
has a pretty good cache of bc.
~~~
maxerickson
They wouldn't care if those coins were worth $5 billion. The federal budget is
silly compared to a few hundred thousand bitcoins. Or maybe that's the other
way around.
~~~
jarin
The FBI and CIA can use Bitcoin to anonymously pay off foreign informants
though
~~~
TomGullen
We'd see them spend it and ask what they've spent it on.
~~~
saraid216
I'd expect the CIA to know how to launder money competently. It seems like a
ridiculously obvious thing for them to learn how to do.
------
jdreaver
The chairman asked the panel about the real identity of Satoshi Nakamoto, the
pseudonym used by the creator or creators of Bitcoin. When one of the
panelists was going to respond, the chair cut him off and said "you don't
think it was Al Gore, do you?" The panelist said "well, he hasn't denied it!"
I've never seen decent humor in a Senate hearing haha
~~~
sliverstorm
The anonymity of Satoshi definitely gives the origins of BTC some mythos. I
can picture historical fiction a hundred years from now speculating (assuming
BTC takes over)
~~~
jlgreco
I suspect that historians, given free access to a very, very wide range of
writings (and of course all the works attributed to Satoshi), and whatever
the analysis techniques of the day will be, will make quick work of figuring
out who Satoshi was.
Few writings remain truly anonymous if the author also wrote significant
amounts non-anonymously (and these days, who doesn't?).
For his sake, I hope he isn't uncovered during his lifetime, but assuming
bitcoin remains relevant I greatly suspect that history books will eventually
have his name.
------
nostromo
It's interesting to watch Jeremy Allaire repeatedly say that only well-funded
companies should be trusted to hold consumers' bitcoins.
Companies like, well, his.
~~~
sliverstorm
It's nothing new. You _want_ banks to have considerable reserve capital, and
if you are a company whose function is to "hold BTC" you are a bank.
~~~
julespitt
The definition of bank is not an entity that holds currency, otherwise any
company or you or I are a bank.
To keep it simple, Banks hold reserves because they lend. Paypal, for
instance, is not a bank.
If your tiny Bitcoin startup holds $1000 in bitcoin, and your obligations are
$1000, I don't see the problem.
~~~
camus2
> Paypal, for instance, is not a bank.
Paypal is a bank in Europe. A paypal account is legally a bank account here.
------
sehugg
WSJ: _Despite interest in bitcoin, Monday 's hearing was attended by only one
member of the Senate panel, Sen. Tom Carper (D., Del.), who chairs the
committee. Other senators were still en route to Washington after spending the
weekend in their home states._
Marketwatch: _More than one hour into the hearing, and Sen. Carper is the sole
lawmaker to ask questions. It appears no others are there._
Embarassing.
------
mindslight
Alright, the market has remained irrational longer than I've remained solvent
in resolve. So I'll admit it: This would have been a nice bandwagon to have
jumped on.
But hollow disruption porn and worse-is-better are still fucking tragedies.
~~~
chm
> "But hollow disruption porn and worse-is-better are still fucking
> tragedies."
What do you mean?
------
presidentender
Positive attitudes regarding Bitcoin by the powers that be give me pause.
Why would law enforcement be eager to support a technology which, on the face
of it, seems to reduce the power of law enforcement?
~~~
crygin
I'm unclear on why everyone seems to think that Bitcoin reduces governmental
power. It makes all transactions public -- no more cash deals, hard-to-
subpoena international wire transfers, etc. It makes the job of law
enforcement much easier.
~~~
avar
It doesn't reduce police powers, but in the long term if it pans out it'll
reduce the power of fiat currency. That's what people talk about when they say
it reduces the power of the government.
~~~
PeterisP
Can't NSA datacenters with specialized crypto chips simply get 51% of bitcoin
mining power for some time, if they want? Now _that_ would be 'fiat currency'.
~~~
tlrobinson
51% attacks don't allow you to mine arbitrary amounts of Bitcoin. At worst
they could DoS the Bitcoin network and double-spend their funds, but the
recipients (their own citizens? other governments?) would find out and be
_pissed_.
~~~
moron4hire
The citizenry is _already_ pissed and it means nothing. They do what they
want.
------
sjcsjc
"You don't think it was Al Gore, do you?" \- quip from the senator running the
hearings on the subject of Satoshi's real identity.
------
pilom
I'm reminded of a course I took which was co-taught by the CTO at the
Department of Homeland Security. He once said that "The criminals will always
be better at using cryptography than the people who use them in positive
ways."
~~~
rhizome
Except that the NSA and FBI have been defining crypto as suspicious activity,
so kind of a double-bind there.
~~~
maxk42
Well it's no wonder: they only use it for suspicious activities.
------
EricDeb
The FBI and government have considerable bitcoin holdings; assuming they
acquired the Silk Road wealth, plus what they already have, their total BTC is
at least 524,000. src:
[https://bitcointalk.org/index.php?topic=321265.0](https://bitcointalk.org/index.php?topic=321265.0)
~~~
igravious
Is this the same entity that has ~ $16,000,000,000,000 external debt?
source: a part of the government
[https://www.cia.gov/library/publications/the-world-
factbook/...](https://www.cia.gov/library/publications/the-world-
factbook/rankorder/2079rank.html)
------
ademarre
It's refreshing that the overall tone of the hearing is not one of 'we don't
understand this cryptocurrency thing, we need to shut it down', but rather,
'this thing is happening, we have some concerns, but we need to adapt so we
aren't left behind'.
------
namuol
Kicking myself for all eternity...
EDIT: Or not... [1]
[1]: [http://imgur.com/cp01D2m](http://imgur.com/cp01D2m)
~~~
nisa
You are not alone.
------
nilkn
Slightly off-topic, but I don't understand how a deflationary currency could
be practical. Why would you ever spend it, except begrudgingly, if whatever
you buy will be worth less tomorrow (thinking exclusively "in bitcoin,"
without regard to other currencies)? This is a question, not a criticism.
~~~
eurleif
Why would you spend your USD if you can convert it to BTC instead and it will
be worth more tomorrow?
------
vijayboyapati
This sounds very encouraging to me. A number of panelists have talked about
how banks aren't allowing btc businesses to setup up basic checking accounts
and how this needs to be fixed if the US isn't going to be left behind.
------
andy_ppp
Once goods and services are available ubiquitously to buy in bitcoin, fiat
money will become worthless.
This isn't a bubble IMO; it's just that USD won't hold its value compared to
an available currency that has at least _some_ value.
:-D
------
Ellipsis753
Just sold off my bitcoin at $700. Who knows where the price will go next. Hope
I won't regret this too much... :)
~~~
asciimo
Congratulations on your massive profit, and thank you for stimulating the
Bitcoin economy! However, I think that you will eventually regret this
decision.
~~~
Ellipsis753
Hehe. How do you know I made a massive profit? I could have bought it just
hours ago. ;) You're correct though, I got mine for $100 a little while back.
I may regret it, but $700 is not to be sniffed at. I feel that although it may
go up massively, it could of course go down too. The economy is just too
unstable for me at the moment.
------
thinkcomp
My comments:
[http://www.aarongreenspan.com/writing/20131118.hsgacstatemen...](http://www.aarongreenspan.com/writing/20131118.hsgacstatement.pdf)
Jerry Brito's testimony was quite good.
------
vinchuco
This is great for those that invested just before the hearings. All the
attention given to it will get more people in the US involved. I wonder how
much longer can this growth last.
~~~
oxalo
I think that depends on to what extent Bitcoin will replace the current money
system. [1] looks into the price of Bitcoin if it becomes as widespread as
Bitcoin or PayPal.
[1] [http://www.dailyfinance.com/2013/11/17/bitcoin-bubble-or-
val...](http://www.dailyfinance.com/2013/11/17/bitcoin-bubble-or-value/)
------
hgsigala
As a staffer who works down the hall from this hearing, I am glad to see all
the anti-Congress sentiment usually expressed on HN set aside for true
discussion of the subject matter.
------
presty
related: [http://www.businessinsider.com/ben-bernanke-on-
bitcoin-2013-...](http://www.businessinsider.com/ben-bernanke-on-
bitcoin-2013-11) BERNANKE: Bitcoin 'May Hold Long-Term Promise'
------
shmerl
This looks very positive indeed.
------
Zoomla
I heard PATRIOT Act....
~~~
dangrossman
Considering the PATRIOT Act is what established the "know your customer" due-
diligence rules for financial institutions (i.e. why your bank wants to see ID
when you open an account), there's nothing ominous about the name of that bill
coming up.
------
sneak
Can't view without Flash, I won't install Flash, therefore I can't view.
~~~
saraid216
Do the Hacker News thing and spin up a VM with Flash on it, watch, and then
delete the VM afterwards.
~~~
idupree
That works for technical reasons but not legal reasons. You can't "virtually"
agree to the terms & conditions and then delete your agreement along with the
VM. (Unless the terms or the law let you do so. Sometimes they do.)
~~~
gknoy
What are the terms of the Flash EULA that you might violate while spinning up
a VM to read something? You're not planning to copy it, publish it, etc, and
since you're destroying the VM later you don't care about stability/etc --
outside of the reading event, you are not doing any actions with the product.
I'm genuinely curious. I can understand an argument from principle like RMS
might use, but I am unsure what the difference is between agreeing on a VM and
agreeing not-on-a-VM.
I'm really trying to see an edge case where doing it on a VM that you destroy
later is in any way worse (or even different) than doing it on a laptop that
you buy, use, and then later incinerate.
~~~
idupree
My point applies in the same way to a laptop you buy, use, then incinerate.
Even after you incinerate it, you are still bound by the terms you agreed to
(to the debatable extent to which [1] is enforcable at all, anyway).
[1] The download page states "By clicking the "Download now" button, you
acknowledge that you have read and agree to the Adobe Software Licensing
Agreement.". That statement is hundreds of pixels away from the actual
download button.
The Trip Treatment - juanplusjuan
http://www.newyorker.com/magazine/2015/02/09/trip-treatment
======
bdm
It's brilliant. I wonder why people have a knee-jerk reaction to drugs as
being "bad". Research and writing like this are quite promising.
> Only 10 percent of drug users have a problem with their substance. Some 90
> percent of people who use a drug—the overwhelming majority—are not harmed by
> it. This figure comes not from a pro-legalization group, but from the United
> Nations Office on Drug Control, the global coordinator of the drug war. Even
> William Bennett, the most aggressive drug czar in U.S. history, admits:
> “Non-addicted users still comprise the vast bulk of our drug-involved
> population.” - Why Animals Eat Psychoactive Drugs
> [[http://goo.gl/7vB8Eu](http://goo.gl/7vB8Eu)]
Drugs not only can be used responsibly, but they should be. We are needlessly
Luddite about this type of stuff. I think within the next decade we'll see a
paradigm shift towards a much wider societal acceptance of "brain drugs."
Yes, websites are starting to look more similar - afrcnc
https://theconversation.com/yes-websites-really-are-starting-to-look-more-similar-136484
======
KKPMW
Not only the websites, but art within them as well is starting to become more
and more similar.
In particular I have noticed the style with disfigured colorful human figures
displayed in various weird poses. I am not sure what is the name of this
drawing style and how it got popular, for some reason it repulses me, but here
is one example I remember:
[https://todoist.com/](https://todoist.com/)
~~~
sidpatil
> am not sure what is the name of this drawing style
Corporate Memphis. [https://www.are.na/claire-l-evans/corporate-
memphis](https://www.are.na/claire-l-evans/corporate-memphis)
~~~
itcrowd
Also @humansofflat on twitter. Although the tweets are now protected, so I
don't know what the current status of the account is.
------
themodelplumber
I like seeing articles like this. I would just point out what I think is a
not-so-discussed aspect: The transition from layout-consideration to mobile-
consideration, a real issue that was capable of turning every project into a
mess of puzzling layout questions from the start. This has been and remains a
huge energy-level blow to a design effort, if you are a layout-design-first
thinker, or just feel like approaching a given project in that way.
The article seems to be rooted in this kind of design-first preference, and I
don't see that as a huge problem, but it needs to be pointed out because a lot
of us here on HN can swing both ways, so to speak--cover your economically-
necessary base and go with the flow (see frameworks, below), or focus on
beautiful creative design.
While this shift to accommodation of mobile device was troubling from a
workload perspective early on, it soon became clear that if many / most of
your audience were going to consume your content on a mobile, then there was a
huge energy-expenditure incentive to focus on layouts that stack up,
component-by-component. This became a strong hidden incentive for web
designers to say, "I see your fancy designs but things like this need to be
mobile-friendly or even mobile-first these days. Also, simplicity is going to
be huge for a lot of different reasons." A big reason: If you color outside of
the lines, you now have a huge number of ways and places in which you need to
test and troubleshoot your layout.
So the economics of site design quickly shifted: Get a vertically-stacked,
mobile-friendly site with a relatively boring layout and save money, OR go all
sorts of creative with the layout and pay more by creating more of a fractal
of work for yourself. Maybe you'd pay just a little more, but still: More--
maybe more time, or more money, or both.
Eventually as this pattern locked into place, the broader "website creation
economy" rediscovered a certain level of energy efficiency as frameworks and
libraries were developed around the mobile-consideration standard.
And as it turns out, these frameworks and libraries now underwrite a specific,
stacking-friendly, tech-first approach to design. If you don't want to work
that way, you need to go looking in the manual or cross your fingers and spend
some time with Google. Or you have to hire someone who does. Or you have to
look at DIY platforms with other trade-offs, like fragile designs that stack
up a huge load of technical debt over time, gradually introduce subtle design
bugs, and then go unsupported.
IMO these various effects are a big contributor to what the authors discuss.
CMC Cartonwrap box packing machine [video] - vinnyglennon
https://www.youtube.com/watch?v=9rP1wjEsbak
======
jcrites
This appears not to be a machine made by Amazon, but rather a commercially
available machine called CartonWrap 1000 made by an Italian firm named CMC
Srl. According to news reports I found online, Amazon is piloting the machines
in its warehouses, however. (Edit: I wrote this in reply to the original
article title which described the machine as Amazon's.)
News report: [https://www.reuters.com/article/us-amazon-com-automation-
exc...](https://www.reuters.com/article/us-amazon-com-automation-
exclusive/exclusive-amazon-rolls-out-machines-that-pack-orders-and-replace-
jobs-idUSKCN1SJ0X1)
Product website: [https://www.cmcmachinery.com/portfolio-
item/ecommerce1-cmc-c...](https://www.cmcmachinery.com/portfolio-
item/ecommerce1-cmc-cartonwrap/)
The effect on waste should be interesting. I assume that everyone who has
ordered a product online has had the experience of receiving that product in a
box that was quite a bit larger; that happens because standard shipping
processes use a limited and pre-determined set of cardboard box sizes. If you
look on the box somewhere, you should see a label like "1A5" which is the box
size. With past technology, you had to round up to the next largest box that
fits the product, which sometimes leaves quite a bit of waste, both in
cardboard and the packing material (plastic pillows) used to fill up large
voids in the box. It looks like this machine can cut boxes exactly to the
product dimensions, which will presumably save both on box and filler material
to lower costs, and generate less waste.
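The "round up to the next standard box" logic can be sketched in a few lines. The box codes and dimensions below are invented for illustration (they are not Amazon's actual sizes); the point is just the selection rule and the wasted volume it implies.

```python
# Hypothetical standard box catalog: code -> (length, width, height).
STANDARD_BOXES = {
    "1A1": (20, 15, 10),
    "1A3": (30, 22, 15),
    "1A5": (45, 35, 20),
}

def fits(item, box):
    # Compare sorted dimensions so the item's orientation doesn't matter.
    return all(i <= b for i, b in zip(sorted(item), sorted(box)))

def pick_box(item_dims):
    """Smallest standard box (by volume) the item fits in, plus wasted volume."""
    candidates = [
        (code, dims) for code, dims in STANDARD_BOXES.items()
        if fits(item_dims, dims)
    ]
    if not candidates:
        return None  # no standard box is big enough
    code, dims = min(candidates, key=lambda c: c[1][0] * c[1][1] * c[1][2])
    waste = dims[0] * dims[1] * dims[2] - item_dims[0] * item_dims[1] * item_dims[2]
    return code, waste

box = pick_box((18, 14, 9))
```

A machine that cuts the box to the product's exact dimensions effectively drives the `waste` term toward zero instead of rounding up.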
~~~
dzhiurgis
I once received 5 toothbrush heads from Amazon, package size where ~4 MBP's
would fit (~2" by 15").
The problem was huge underlying package.
~~~
aequitas
But they can never beat HP, which will send you a single ps2 mouse strapped to
a pallet or a stack of software licence documents, each of which individually
packed in a box.
[https://www.theregister.co.uk/2008/07/23/enormouse/](https://www.theregister.co.uk/2008/07/23/enormouse/)
------
Spivak
So to the manufacturing engineers around here, is this actually impressive?
Because it's definitely impressive on the "this is a really cool machine"
level but I was left with a nagging feeling that it wouldn't actually be all
that useful.
The input seems to be single items that are rectangular-ish within a certain
size and don't need any wrapping, padding, or air bags. This probably
describes a lot of Amazon's products but for this use-case wouldn't a machine
that wraps the item between two sheets of plastic on rolls be easier/cheaper?
Is this a stepping stone to the machine that can handle
multiple/delicate/irregular items?
~~~
simongr3dal
I would assume that air-bags and other padding aren't needed when the box fits
this well around the products.
I'm not a manufacturing engineer, but I have watched a lot of How It's Made,
and the machine doesn't seem more impressive than the plethora of other
automated manufacturing machines that were available then.
That's not to say it isn't a handy thing to have running, it looks like a crew
of 4-6 persons with the right setup for folding and packing boxes could
probably keep up with the machine as it is running in the video, so as all
things in business it's gonna be a cost-benefit calculation that makes the
decision.
------
hhjjkkll
How do us meatbags compete with this?
~~~
throwaway180118
There's always going to be a human component to logistics. I wouldn't worry
Father built a machine to transport his kids' teeth straight to the tooth fairy - Brajeshwar
http://thenextweb.com/shareables/2013/09/13/this-guy-built-a-machine-to-transport-his-kids-teeth-straight-to-the-tooth-fairy/
======
doug1001
i know a guy who did the same thing a few years ago but when the tooth reached
the tooth fairy (no such thing really, it's more like a consortium of
fairies), he was told that he still had to wait in the queue because priority
is based on exact date-time at which the loss of tooth occurred--so you can't
really expedite reimbursement by jumping the queue. nice try though
------
vezzy-fnord
Quite entertaining, indeed. Good on the father.
On the other hand, if he can do this, I wonder why he's encouraging these age-
old childhood myths. I personally grew up with the knowledge that they weren't
real from the beginning, and can't say I missed some sort of enchantment or
anything. It speeds up your rational thinking.
Still, pretty cool idea.
Why Refback Still Matters - kiyanwang
https://gkbrk.com/2016/08/why-refback-still-matters/
======
floatboth
Webmention implementations usually automatically check that there's an actual
link. You can go even further and require not just a link, but a proper
microformats2 reply/like/repost.
~~~
onli
> Webmention implementations usually automatically check that there's an
> actual link.
So do trackback and pingback implementations of every major blogging software
------
abstractbeliefs
The corner popup is super annoying here - not because it popped up at all, but
because it stole KB focus and my down arrow key stopped working to cycle
through my available email autofill options. :(
~~~
gkbrk
Sadly, I can reproduce this. I will look into this issue and disable the
scroll box for now.
------
davidzimmerman
Sounds great. How do we implement this?
~~~
abstractbeliefs
I'm going to be a bit obtuse and say "start with IndieWeb". The author talks
about webmentions being super interesting, but low penetration.
Best thing to do is start with yourself, and indieweb solves a bunch of
related problems at the same time.
[http://indieweb.org/](http://indieweb.org/)
Show HN: Tiger Boss - I Kick Your Ass & Make You Achieve Your Goals Faster - themost123
https://tigerboss.co/
======
gus_massa
If there is no free tier to try, I think it is not a good example of a Show
HN, because we can't try it and give feedback.
Also, all-caps in the title will get this flagged. (Even if only a part is in
all-caps.)
~~~
themost123
Thank you very much for your suggestion! A free tier has just been added!
~~~
mooreds
I signed up for the free trial and just got a popup saying "We'll be in touch
soon". Here are the Show HN guidelines:
[https://news.ycombinator.com/showhn.html](https://news.ycombinator.com/showhn.html)
~~~
themost123
I emailed you but it failed to deliver. Did you enter the correct email
address?
~~~
mooreds
Ah, it should have got to me, but I'll try another one.
Share Your Thoughts - theopencode
http://theopencode.org
======
billyrobinson1
Great website! I think this is a very good initiative, and I can't wait to see
what you do. One suggestion, though; maybe you could get on social media? It
may help you out.
~~~
theopencode
Thanks @billyrobinson1 for the kind words. We actually do have a Twitter
account, and we're working towards using it to help our mission. Check it out
here: [https://twitter.com/theOpenCode](https://twitter.com/theOpenCode).
PC-XT Emulator on a ESP8266 (2018) - DanBC
https://mcuhacker.wordpress.com/2018/02/22/forsta-blogginlagget/
======
elliottkember
I'm always fascinated to see these projects on ESP8266. The board is great,
but the ESP32 is a lot better - Bluetooth LE, WiFi and dual-core at 240 MHz,
vs the WiFi and 80 MHz available on the 8266. The firmware wasn't as robust
until recently, but these days I use it constantly for little projects.
~~~
sjwright
The ESP32 is a lot better in many ways, but it suffers a bit from version 2
syndrome. The decision to go dual-core in particular. Personally I'm looking
forward to the ESP32-S2 coming in quantity.
~~~
leggomylibro
The S2 lacks Bluetooth though, doesn't it?
The second core is meant to handle the network stack, leaving the first core
to focus on program logic. With ESP8266s, it can be hard to write complex
applications while keeping heavy WiFi usage stable.
Although, I'll bet the 8266 firmwares and libraries have improved a lot since
I was using them.
~~~
sjwright
The ESP32-S2 is best thought of as a spiritual successor to the ESP8266. If
you need Bluetooth, I’m pretty sure the ESP32 is still being produced.
------
qwerty456127
What I (and, probably, a lot of people) would actually like to have is a 486
emulator (with at least 8 MB RAM) with a working ISA bus I could connect old
extension cards to. That would be way more practical (in fact insanely cool),
although a ghost of a genuine antique like a 640K XT is still surely a fun
thing to touch. That could even have commercial applications - I believe there
are many 486/ISA-based solutions still running in production in the wild.
~~~
kristopolous
There are ISA-to-USB devices for under $40, so you can hook the hardware up to
a modern machine.
Then I'm sure you can use one of the many emulator solutions on the market to
bridge the rest and if none are suitable it can't be that hard, the data is
making it to the machine...
I'll happily make it happen if you need to hack something like qemu, sounds
like fun.
~~~
userbinator
I think a lot of these ISA cards are for industrial control in realtime
systems, so the added latency of USB is not going to work.
~~~
kristopolous
Are you sure? We aren't in the world of USB 1.1 latency any more. Things have
improved vastly since then.
~~~
userbinator
I found two people who measured latencies, one a PCIe parallel port and the
other USB 3.0:
[https://stackoverflow.com/questions/41987430/what-is-the-
low...](https://stackoverflow.com/questions/41987430/what-is-the-lowest-
latency-communication-method-between-a-computer-and-a-microco)
[https://stackoverflow.com/questions/13831008/what-is-the-
min...](https://stackoverflow.com/questions/13831008/what-is-the-minimum-
latency-of-usb-3-0)
PCIe parallel port: 4-8us
USB 3.0: 30us
I believe a regular PCI or even ISA parallel port can be below 1us. Those are
"real" buses, unlike USB and PCIe which are more similar to packet-switched
networks.
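To put those numbers in loop-rate terms (pure arithmetic; 6 µs is an assumed midpoint of the 4-8 µs PCIe figure), one bus round trip per sample caps a synchronous control loop at roughly:

```python
# Round-trip latency per sample caps the synchronous poll rate.
# Latencies taken from the measurements above; 6 us is an assumed midpoint.
latencies_us = {
    "ISA/PCI parallel port": 1,   # "below 1 us", rounded up
    "PCIe parallel port": 6,
    "USB 3.0": 30,
}
max_rate_khz = {name: 1000 / us for name, us in latencies_us.items()}
# USB 3.0 tops out around 33 kHz per round trip; ISA/PCI around 1 MHz.
```

That two-orders-of-magnitude gap is why tightly-coupled realtime control cards are a harder fit for USB than for a "real" bus.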
~~~
FPGAhacker
I don’t know about ISA, but pci (not e) can easily be under 1us. It would be
an odd design to have even that much latency.
As measured on the bus.
------
basementcat
The same person also has a C-64 emulator working on the same board.
[https://mcuhacker.wordpress.com/2018/03/03/running-
the-c64-o...](https://mcuhacker.wordpress.com/2018/03/03/running-the-c64-on-
the-esp8266/)
------
userbinator
_1MB of the flash is used as a swapfile and creates virtual RAM space to the
emulation through a MMU caching system_
That sounds like it would wear out the flash very quickly, especially given
that the embedded flash on MCUs like these is not really designed for many
write cycles (the usual cases being firmware updates and configuration
changes, neither of which is a high-frequency operation.)
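A back-of-the-envelope sketch supports that worry (every figure below is an assumed round number for illustration, not a measurement from this project or a datasheet value):

```python
# All figures are assumed round numbers, not datasheet values.
swap_bytes = 1 * 1024 * 1024      # 1 MiB swap region
erase_cycles = 100_000            # optimistic flash endurance rating
write_rate = 10 * 1024            # assumed swap traffic: 10 KiB/s

# Ideal wear leveling spreads writes evenly over the whole region.
total_writable = swap_bytes * erase_cycles      # bytes before wear-out
lifetime_days = total_writable / write_rate / 86_400
```

Under those assumptions the region lasts only around four months of continuous use, and real flash translation is far less even than ideal wear leveling.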
Interesting hack nonetheless, it reminds me of this:
[https://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bi...](https://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit)
~~~
eschaton
Yeah, it’d almost be better to use SD for virtual memory to at least put the
flash wear on something consumable.
~~~
lann
> something consumable
ESP8266s can be had for ~$1...
------
GekkePrutser
Ok that is impressive... Very well done. And then to imagine I paid thousands
for one back in the day (through my dad's work "PC at home" project).
I could have just waited 35 years and spent 1 buck for the ESP8266 :)
~~~
Koshkin
Yet, “you get what you pay for” is still true...
------
DeathArrow
Can it run Wolfenstein 3D and Duke Nukem?
~~~
DanBC
He has another post where he runs a super low Res Wolfenstein on an ESP8266.
[https://mcuhacker.wordpress.com/2018/03/06/esp8266-tvout-
lib...](https://mcuhacker.wordpress.com/2018/03/06/esp8266-tvout-library/)
How misaligning data can increase performance 12x by reducing cache misses - luu
http://danluu.com/3c-conflict/
======
cowsandmilk
I see many comments here acting like this means not aligning on word
boundaries (e.g. using the packed pragma, Sandy Bridge's support for unaligned
access, etc.). This has nothing to do with word alignment. As the article
states at the beginning, this is about aligning to page boundaries, which is
on a very different level than word boundaries or structure packing. Let's not
get these confused.
------
barrkel
It's important to at least know about the n in n-way set associative cache
(i.e. that it exists) and this article is a good reminder. Next time you see
data accesses that ought to be fast (ought to be hot in cache) but seem not to
be, this is another thing you can look for.
It's easy, from a software engineer perspective, to know that your CPU has
cache, and just think it's like any other cache you might implement in
software. But the implementation details - the hashing by masking the address,
and only having a few slots available for things that hash to the same bucket
- are actually important, as shown here.
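To make the "hashing by masking the address" concrete, here's a toy Python sketch (the cache parameters are typical figures I've assumed, not taken from the article):

```python
# Assumed, typical L1 parameters: 32 KiB, 8-way, 64-byte lines -> 64 sets.
LINE_BYTES = 64
WAYS = 8
CACHE_BYTES = 32 * 1024
SETS = CACHE_BYTES // (LINE_BYTES * WAYS)  # 64 sets

def cache_set(addr):
    # The set index is just the low-order bits of the line number.
    return (addr // LINE_BYTES) % SETS

# Sixteen objects at page-aligned addresses (4 KiB apart) all map to
# set 0, so only WAYS (8) of them can live in the cache at once.
page_aligned = [cache_set(i * 4096) for i in range(16)]

# Offsetting each object by one extra cache line spreads them out.
staggered = [cache_set(i * 4096 + i * LINE_BYTES) for i in range(16)]
```

With page alignment all sixteen objects compete for the same 8 slots; with the one-line stagger each lands in its own set.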
------
mtanski
One of the things to keep in mind here is that this trick won't work, or won't
work as well, for every architecture or every processor family in that
architecture.
Some architectures do not support unaligned memory access and will raise an
exception. If you're using things like the packed attribute with your structs,
your compiler will generate the correct code, but that code will be slower. In
almost all cases it will generate many more instructions, and because of that
your instruction cache will be less effective (due to larger code size), your
decoder cache will be less effective, etc...
The author has a more modern Intel processor. The x86 family has always
supported unaligned access, albeit slower in terms of cycles. More recent
Intel processors have made this penalty much smaller. I believe this was
driven by network applications, many of which focus on packing as many bytes
down the channel as efficiently as possible and less on alignment
requirements.
[http://www.agner.org/optimize/blog/read.php?i=142&v=t](http://www.agner.org/optimize/blog/read.php?i=142&v=t)
~~~
cube13
You're talking about word-aligned boundaries, which is absolutely an issue
with certain architectures (I remember dealing with the issue with older SPARC
processors). The article is talking about L2 and L3 processor cache hash
collisions, which can result in lost performance as the caches are
overwritten.
This optimization doesn't preclude those architectures necessarily; it's
saying that instead of allocating at address 512, 1024, etc., there might be a
boost from allocating at off-page addresses.
~~~
mtanski
You are correct.
------
ori_b
The usual term for this is "cache line colouring", and many allocators do this
for you.
[http://en.wikipedia.org/wiki/Cache_coloring](http://en.wikipedia.org/wiki/Cache_coloring)
------
joe_the_user
I understand this only enough to understand that it may or may not be a
surprising and paradoxical result.
As a non-systems C++ programmer, it seems to reinforce the usual lesson: don't
optimize your structures initially for anything but readability; once you have
your system running and can pinpoint the bottlenecks, then try things like
aligning structures to various boundaries. But naturally always have a real-
world-like test suite to verify you are improving things.
Sorry if this is boring.
------
mikerg87
Interesting. One question: how would I take advantage of this? Are there
special flags for the compiler or a special memory allocator that I would
need? Do libraries like BLAS already account for this?
~~~
acqq
I'm not aware of any C compiler flags that would "fix potential cache-slot-
collisions" for you automatically, nor of any allocators. Intel processors
have some profiling registers that can point you to this kind of problem, but
typically you first have to know what you want to profile.
Compilers are actually built to align the structures you write, as there are a
lot of processors which have significantly slower access to misaligned
values. Allocators also have to return aligned addresses for each "malloc."
The newest Intel iX processors are actually an exception in being able to
amortize misaligned accesses.
I've actually used hand-made unaligned "string" stores. If you allocate a
bigger memory block and store variable-size character sequences one after
another, their starts won't be aligned unless you want that. For doubles and
other fixed-size values it's still better to keep them aligned.
Moreover, I wasn't able to extract much actionable value from the article.
Something constructed can be constructed to be slow or fast, fine. But I don't
see anything that would inspire me to improve my real code. Maybe somebody who
reads the article manages to produce such examples?
~~~
alexkus
You can always write your own incremental allocator to provide word aligned
(or whatever alignment) blocks of memory (from a large block of memory you
obtain via a usual malloc() call) for the individual structures such that
their page offsets are spread evenly (and avoid other problems such as
spanning pages).
[EDIT] although it is tricky to do optimally given that different processors
will have different cache set characteristics (as the article shows).
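A toy sketch of such an incremental allocator (entirely hypothetical; a real one would also handle alignment guarantees, freeing, and thread safety):

```python
class StaggeredArena:
    """Toy bump allocator that gives each allocation a different page
    offset, stepping by one cache line, so page-sized objects don't all
    hash to the same cache sets. Sketch only: no free(), no real memory,
    it just hands out addresses."""
    PAGE = 4096
    LINE = 64

    def __init__(self):
        self.cursor = 0
        self.count = 0

    def alloc(self, size):
        # Start each object on a fresh page, then stagger it within that
        # page by count * LINE bytes.
        page_start = -(-self.cursor // self.PAGE) * self.PAGE  # round up
        addr = page_start + (self.count * self.LINE) % self.PAGE
        self.count += 1
        self.cursor = addr + size
        return addr
```

Allocating a run of page-sized objects from this arena yields page offsets 0, 64, 128, ... instead of all zeros, which is exactly the spreading being discussed.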
~~~
acqq
"Spreading" which will produce optimal cache use is something that depends on
dynamic and not static properties of the program, so you'd have to "spread"
differently depending on the use patterns. I can't imagine any universal
solution.
And most programmers make much bigger mistakes than those mentioned in this
topic, like using the wrong algorithms, the wrong libraries, doing too many
allocations, having badly structured data... So this topic's effects are
invisible unless you've already fixed the other issues.
~~~
alexkus
True, but having every structure aligned to exactly the same offset into each
page is extremely unlikely. Given a page size of 4KB, a whole bunch of 32-byte
structures aligned in such a way would represent a huge waste of memory. No
sane allocator will allocate things like this.
If your structures happen to be very close to the system's page size it could
easily happen; then you'd need to avoid this yourself with your own
incremental allocator (or other tricks).
Definitely agree with your last point, I've seen lots of code (in commercial
applications) where people are optimising completely the wrong thing.
------
nkurz
Hi Dan --
Great example. I looked briefly at the source, and wasn't sure whether
"pointer_chase" was on or off in your graphs. Or maybe it didn't make a
difference?
_Page-aligned accesses like these also make compulsory misses worse, because
prefetchers won’t prefetch beyond a page boundary. But if you have enough data
that you’re aligning things to page boundaries, you probably can’t do much
about that anyway._
To the contrary, I think this is one of the relatively rare cases that
explicit prefetching can help you. But maybe this helps only once your sets
are too large for L3?
I wrote a little a few months ago on my attempts to speed up a Stream
benchmark for Sandy Bridge that might have some overlap with your post:
[http://software.intel.com/en-
us/forums/topic/480004#comment-...](http://software.intel.com/en-
us/forums/topic/480004#comment-1763753)
------
danso
> _Well, I’ve now managed to blog about three of the areas where I have the
> biggest comparative advantage. Three or four more blog posts and I’ll be
> able to write myself straight out of my job. I must be moving up in the
> world, because I was able to automate myself out of my first job with a
> couple shell scripts and a bit of Ruby. At least this requires some human
> intervention._
This was a great explanation of caching architecture, but I really want to
hear the story of how the OP automated himself out of his first job with some
shells scripts and Ruby.
------
dded
Many modern processors hash some of the address bits before using them as a
cache index to avoid these problems.
To avoid them in code, it's often sufficient to round to convenient decimal
numbers when allocating arrays, instead of powers-of-2. That is, allocate an
array of 1000, instead of an array of 1024.
------
scott_s
The linked usenet discussion is worth reading, partially because a student
asks for help on their homework, and it eventually pulls in Linus Torvalds:
[https://groups.google.com/d/msg/comp.arch/_uecSnSEQc4/jkdcQc...](https://groups.google.com/d/msg/comp.arch/_uecSnSEQc4/jkdcQcRatXoJ)
The actual discussion is good, too.
------
colanderman
Why don't caches store data based on a linear hash of the address, rather than
simply the low-order bits? This would retain the property that an aligned
block of data can be stored without collision, and would extend this benefit
to page-strided data. Even a "hash" as simple as (low order bits) XOR (middle
bits) would provide this benefit.
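That suggestion can be sketched quickly (the set count and line size are assumed round numbers; real hardware index hashes are more involved than a single XOR):

```python
SETS = 64  # assumed: 64-set cache with 64-byte lines

def plain_index(addr):
    # Conventional indexing: low-order bits of the line number.
    return (addr >> 6) & (SETS - 1)

def hashed_index(addr):
    # XOR the low set bits with the next group of bits up.
    line = addr >> 6
    return (line ^ (line >> 6)) & (SETS - 1)

strided = [i * 4096 for i in range(16)]   # page-strided addresses
```

With plain indexing all sixteen page-strided addresses collide in one set; with the XOR hash they spread across sixteen different sets, while any aligned block smaller than the cache still maps without self-conflict.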
------
bwzhou
Why would you want to make an in-memory data structure page-aligned instead of
cache-line-aligned?
~~~
rossjudson
It's not that you would want to. It's that certain data structures
(particularly in kernel land) naturally form around powers-of-2 sizing. Since
the hashing in an n-way associative cache is done by masking away bits, you
can get into this nasty situation where multiple elements of your data
structure end up hashing to the same location set (the N in an N-way
associative cache). You don't want that if you are going to walk the elements
in your structure.
Either avoid walking the elements, or do whatever it takes to ensure that you
don't end up getting stuck behind the N.
Hardware is inescapable.
It's also worth noting that you can use this to your advantage, ensuring that
accesses to certain elements do _not_ push much else out of the associative
cache.
------
brokenparser
Can someone explain those charts?
~~~
jbl
The OP is plotting the _ratio_ of page-aligned vs. unaligned access time
against problem size. So, on the y-axis, a value of 2 means the page-aligned
access took twice as long as the unaligned access.
Took me a few moments to grok the charts too, since I haven't had my coffee
yet.
~~~
jere
Ah, good explanation. The use of "vs" in a graph title makes me assume it's
describing X vs Y, which is confusing.
It also helped when I realized the graph starts at 1 for a working set size of
8, and the author correspondingly says:
>Except for very small working sets (1-8), the unaligned version is noticeably
faster
------
ww520
Darn, really unexpected. Learn something new every day. Page-aligned access is
common because I/O is usually page-aligned. CPU-intensive cases are different
because of cache line usage.
The transgender populist fighting fascists with face glitter - kantord
https://www.economist.com/open-future/2018/12/21/the-transgender-populist-fighting-fascists-with-face-glitter
======
IronWolve
Interesting YouTube philosopher-pundit Natalie Wynn thinks everyone not on the
left is a Nazi wanting to wipe out LGBT people; that the right is accepting of
everyone, so they are winning people to their side, and thus can't be allowed
to speak on campus; and that politically this is a problem due to the lack of
anti-capitalism and anti-nationalism pushed in mainstream politics.
I disagree. We are in an information golden age where minorities and different
groups of people can publish their issues. We have made great strides in LGBT
rights due to the Internet and Media. I just can't fathom "wipe out" as a
majority view in the West.
Ask HN: Do you blog using wordpress? - sharmi
My personal site is not very active, but when I have something interesting I tend to post.<p>https://www.minvolai.com/blog I used to blog using WordPress in 2007. Then I migrated to the static generator mynt and have been on it for at least 5 years. Mynt is not so well maintained now and I plan to move to Nikola, another static blog generator (written in Python).<p>On the other hand, most of the web uses WordPress. So if I move back to WordPress, I believe I will have a better understanding of other people's workflows and issues. I do not mind keeping the installation up-to-date etc.<p>One thing that used to bug me when I was using WordPress was embedding code snippets as part of blog content. WordPress would often replace embedded code symbols with HTML encodings, like ">" by "&gt;". It got really annoying to open every post where it happened and set it right manually.<p>So my questions are:<p>* Has anyone moved a programming blog from a static blog generator to WordPress? How is the experience?<p>* Has anyone faced the code replacement situation recently and if so, how do you handle it?
======
kernelcurry
1\. Wordpress has come light years in the past 2-3 years and allows for auto
updating now
2\. I moved my site [https://kernelcurry.com](https://kernelcurry.com) from
WordPress to a static site generator a few years ago and I LOVE IT!
If you are looking to have GitHub host and deal with scaling (for free) I say
go for it! Jekyll, Hugo, etc... There are a thousand of them. If you just want
to write posts and have people view them... Maybe even use a comment service
(some of those are also free) then make the move...
But be warned, it did take a few days of me banging my head against a wall to
understand all the nitty-gritty BS that comes with these static site
generators. -shrug- isn't that how it always goes?...
The Burden: Fossil Fuel, the Military and National Security - westurner
http://www.theburdenfilm.com/watch_the_film
======
westurner
Here's a link to the video:
[https://vimeo.com/194560636](https://vimeo.com/194560636)
Ask HN: Are you going to participate in this years Google Code Jam? - holdenk
Are you planning on participating in this year's Google Code Jam? It starts on Friday.<p>I'm curious, what language, libraries or tools are you planning on using?<p>Do you have any pre-written contest code that you find useful?
======
_genova
pertamax
It’s Time to Get over That Stored Procedure Aversion You Have - fastbmk_com
https://rob.conery.io/2015/02/20/its-time-to-get-over-that-stored-procedure-aversion-you-have/
======
alunchbox
Erm. No. There are plenty of reasons why, as "rock star developers" and plenty
of blogs discuss. An ORM is only as good as the developer using it. If devs
don't understand an ORM they're bound to abuse it, because it's easy to do so.
My biggest issue with stored procedures is change management. I've seen plenty
of crazy custom tools that try to use Git, SVN, or just file system copies of
stored procedures. All of them have been 'lovely' to work with, to say the
least. Having your core business logic in application code is amazing due to
version control. If performance is the bottleneck, I'd say 95% of the time (my
POV) it's a developer who doesn't understand what the ORM is doing. The beauty
of using EF or another micro ORM (Dapper) is that SQL performance is optimized
by caching the SQL hash. And the option is still there to execute a stored
procedure.
There are absolutely times for using stored procedures, but I'd sacrifice a
little bit of performance for maintainability.
~~~
fastbmk_com
What's the problem with maintainability of stored procedures?
For example, one can keep them in 'stored-procedures.sql' under Git version
control and deploy them via one command, like `psql ...`.
Won't it work that way?
~~~
alunchbox
Maintainability is more than just version control. It's about being able to
discover complex business logic, refactor, and find patterns to reduce
duplicated code. There's only one useful tool I found to quickly manage
thousands of stored procedures, and it was DataGrip; unfortunately, the
company I was working at would not allow devs to use anything but SSMS. Have
you tried filtering thousands of stored procedures that don't follow naming
conventions and are riddled with bugs?! (Even with DataGrip it's still bad;
not horrible, but still bad.)
Tooling is a dev's best friend. Using an IDE or vim/emacs with the correct
extensions allows developers to easily see where a class is being used,
allowing quick inspection. I do believe stored procedures have a place but
they've been abused too much.
~~~
fastbmk_com
> Have you tried filtering thousands of stored procedures that don't follow
> naming convention and are riddled with bugs?!
No. I was thinking about something like a greenfield project, small or medium
size. Where about 100 stored procedures are all named nicely and 2-3
developers working on the project fully understand what they are doing with
them :)
------
purple_ducks
2015...
~~~
fastbmk_com
Yeah, so? Things have significantly changed since then?
~~~
grzm
It's common on HN to include the year in the submission title if it's over a
year old. As only the submitter or a mod can edit the title, commenters often
prompt an edit as your parent did. It's not a comment on the worthiness of the
submission: it's an aid to the readers.
~~~
fastbmk_com
Looks like the 'edit' link has disappeared for an unknown reason.
~~~
monkeydreams
It disappears very quickly after you post. You only have a short time
(minutes?) before your post belongs to the ages.
~~~
fastbmk_com
What a cruel world!
Intel Proposes to Use USB Type-C Digital Audio Technology - njaremko
http://www.anandtech.com/show/10273/intel-proposes-to-use-usb-typec-cables-to-connect-headsets-to-mobile-devices
======
moskie
It's recently occurred to me that the ubiquity of the 3.5mm audio connector is
something to revere. Wikipedia says it's been around since _1964_, with its
fame really coming with the Walkman in 1979. So, a connector introduced _over
fifty years ago_ is still in wide use today. How is that even possible? RJ11
phone jacks were, I think, introduced around the same time, but they seem
archaic and old-fashioned in a way the 3.5mm audio jack doesn't. It just
astonishes me that the audio connector has been a solved problem for this
long, and that with all the other advances in tech we've seen... audio
connectors didn't need any improvements.
It makes me very skeptical of any replacements. If a fifty year old connector
has been essentially flawless all this time, it's gonna be a tough sell to
convince me that something else is really needed now.
~~~
rpgmaker
I think it needs to die and be replaced by a _standard_ wireless audio
technology of some sort, more efficient than bluetooth. I honestly don't know
how people can go about their daily commutes with cables dangling from their
ears. To me, that is very uncomfortable. Bluetooth has served me well over the
years but I think we need something better and way more _energy_ efficient.
~~~
hamburglar
And I honestly can't figure out how people can tolerate connection flakiness,
quality issues, and battery drain for a set of headphones. I have tried many
BT audio solutions and this is not wanna-be-audiophile pickiness, but the
experience is universally substandard as far as I can tell.
~~~
nisse72
Watching a video on my mac, and listening over bluetooth, the audio lags the
video by at least 0.5 second, it's unwatchable if there's any sort of
dialogue.
------
tristor
From everyone who has invested heavily in high-quality audio equipment over
the years and understands the dangerous path to DRMed digital, I say "Fuck you
Intel!". We don't want your USB-C. We'll happily stick to 3.5mm stereo analog
outputs.
~~~
bryanlarsen
The proposal includes stereo analog output.
~~~
rbanffy
For now. Think DVI-A.
~~~
bryanlarsen
Great example. VGA and DVI-I are still supported by lots of new video cards
and laptops even though analog video has been dead for quite a long time.
~~~
pdkl95
Yes, it's a great example, because VGA _is_ broken by the newer standards
because of HDCP. As others in this thread have mentioned, HDCP is included in
this new USB spec.
There has been a lot of effort over the years to close the so-called "analog
hole" at the end of the DRM chain of trust. The stereo output exists only to
distract people during the transition period. The analog out hardware may
exist in the future, but the _software_ will refuse to send it any data. This
is not theoretical; we already see this with Blu-ray software, where you get
an error message if you use a VGA monitor without HDCP.
Bonus: with Win 10 forcing updates, once the hardware is common, you won't get
a choice about the update that disables the "analog hole".
------
fpgaminer
As others have pointed out, this may be a means to foist DRM on the audio
output of phones, tablets, and possibly desktops/laptops. But the assumption
following that is that DRM is being implemented in an attempt to prevent
piracy. That is, in my opinion, not the case. DRM's primary purpose with
respect to the video/audio industry is market control.
Let's start at the top of the market, the content producers (Warner, Fox,
etc). They make a movie/TV show/etc, and publish it to disc. But that disc has
encryption on it. So a company like Sony wants to make a device that plays the
disc. To do so legally they have to sign a bunch of agreements with the owners
of the encryption license. Part of those agreements requires the use of HDCP.
Okay, so now the disc is playing, and outputting an encrypted video signal. So
a company like LG makes a TV, but the video signal is encrypted. So, they have
to sign an agreement with the holders of the HDCP license. But the holders of
that license, and the license of disc encryption, are all held, ultimately, by
the same media industry oligarchy that holds the rights to the content that
started this chain of DRM in the first place.
The end result is that the content owners get to use DRM as a means to force
all the companies along the food chain to sign agreements with them, and thus
they can exercise power over the entire market. Not a single legit Blu-ray
player gets manufactured without signing agreements with these companies. Not
a single TV, cable box, repeater, receiver, projector, etc. DRM is not a
padlock, it's a parasite.
The icing on the cake is that it also nets them a tidy profit. HDCP requires
both a yearly licensing fee, and a per-device royalty. It ain't cheap. And
there are more aggressive requirements if you plan to implement HDCP yourself,
rather than using a pre-made device. So you can either use a pre-made device,
which is conveniently manufactured by the same oligarchy and is rather pricey,
or make your own and suffer further agreements and expenses.
Intel is the guy that the media industry hired to create HDCP, and who
currently manages it. It wasn't long ago that Intel was gung-ho about pushing
video DRM on PCs along with Microsoft. Luckily that mostly died, but here we
are again, same story, different day.
~~~
signal57
Wouldn't it still be possible to record the non-DRM analog signal with a USB-C
to 3.5 mm adapter? The change would put the amp and DAC inside the headphones.
Take it apart and connect directly to the "last leg" going to the analog
speakers. If anything, it should be a better quality signal because of the
shorter analog transmission distance.
~~~
pdkl95
An adapter like that will have the exact same problems as DVI->VGA adapters:
HDCP. The software will refuse to send the data to the port without an HDCP
negotiation. And just like the VGA adapters, I'm sure it will be possible to
make a "striper" that fakes the HDCP handshake, with all the associated legal
problems.
------
ramses0
Is USB-C basically the trojan horse for DRM audio? Likely yes, right? HDCP on
HDMI out, equivalent on USB-C out?
~~~
JonnieCache
_" Usage of digital audio means that headsets should gain their own
amplifiers, DACs and various other logic, which is currently located inside
smartphones. Intel proposes to install special multi-function processing units
(MPUs) ... The MPUs will also support HDCP technology, hence, it will not be
possible to make digital copies of records using USB-C digital headset
outputs."_
~~~
nalllar
Yet another form of irritating DRM which won't actually prevent piracy.
Take apart any device which supports the copy protection, connect to output of
DAC, bypassed.
/facepalm
~~~
MichaelGG
And just like Apple TV[1], enjoy getting HDCP errors requiring restarting
playback several times.
1: Might have been the Toshiba screen it was connected to.
~~~
duaneb
Dunno why you're being downvoted, that's exactly the type of frustrating
experience that exemplifies HDCP.
------
Sephr
Judging by all of the comments complaining about DRM and having to buy new
audio equipment, it seems like few people are actually reading the whole
article. USB-C can support analog audio output via audio adapter accessory
mode[1].
In the near future you'll be able to buy passive USB-C→3.5mm cables for
cheaper than normal USB-C→USB-C cables. You will be able to use them like any
other aux cable.
Personally, I still prefer the 3.5mm jack so that I don't need a passive
adapter for using my earbuds. Requiring a passive adapter that gives you a
perpendicular 3.5mm jack, no matter how small, would be ugly and obtrusive.
Fortunately, at least for over-the-ear headphones with a 3.5mm jack, this will
at least be much less of a hassle. Passive USB-C→3.5mm cables can just be your
new aux cable.
[1] [https://i.imgur.com/y6xCS9u.png](https://i.imgur.com/y6xCS9u.png)
~~~
dingo_bat
IMO the 3.5 mm jack is better because it can rotate 360 degrees. The USB
connector is rigidly fixed, which puts stresses on the cord and connector.
I've rarely seen a 3.5 mm fail to make proper connections or wear out the port
with use, whereas both are really common with USB.
~~~
NegativeLatency
Generally I agree with you, but my macbook's headphone connector won't hold
the cable in anymore.
------
cm3
They cannot be serious. Replacing what works great with all kinds of cheap or
expensive, easy-to-repair equipment with what? Another device on the Universal
Serial Bus. I see the point of attaching storage devices, sound cards and
anything else that is not simple and needs a controller and logic to USB, but
audio output?
I do sometimes use a USB headset because it has its own sound card built in,
but I actually prefer a simple headset with a 3.5mm mic-in/head-out
connection. Why? Well, it works always, it doesn't load another driver, and it
doesn't rely on the USB bus. USB is nice but occasionally hiccups and resets
itself, which is not something you'll see happen with your mic and headphones,
because they are not overcomplicated.
To be totally honest, I'm also one of those who prefer keyboards attached via
PS/2, because I've had USB keyboards reset when I attached another USB device
to the shared bus. That PS/2 has real interrupts is an added bonus for
something as crucial as keyboard input.
With all that being said, as long as this happens in the phone and tablet
space, I guess I can live with it, but having to carry around an adapter from
USB to 3.5mm audio will be a PITA. This is just another "let's change it to
make money with adapters and sell new implementations due to bugs in the old
controllers and drivers" scheme.
Next year, USB power cords, and you'll have to rewire your house.
~~~
dpark
> _but I actually prefer a simple headset with a 3.5mm mic-in /head-out
> connection. Why? Well, it works always_
I can say with certainty that it does not always work. If I plug a pair of
Apple earbuds into my Android device, the audio up/down do not work. This
functionality isn't consistent even within the Android device world. Having a
consistent headphone jack that provides consistency here would be a pretty big
win. (Even bigger if Apple gets on board, which it won't.)
> _I guess I can live with it, but having to carry around an adapter from USB
> to 3.5mm audio will be a PITA_
Pretty sure the endgame here is that your headphones have a USB C connection
on them instead of a 3.5mm audio jack.
> _Next year, USB power cords, and you 'll have to rewire your house._
You can already buy outlets with USB ports. They're increasingly common, but
thankfully the transformer is in the outlet so you're not running low-voltage
wire all over.
~~~
teamfrizz
I think this is a false comparison for two reasons
1) Adding the microphone and volume functions to the headphone jack requires a
different cable (TRRS) than a normal TRS 3.5mm aux cable, which is the cable we
all love. I would rather get rid of the four-wire, microphone-enabled 3.5mm
than lose the 3.5mm altogether.
2) All of the problems you mentioned that revolve around software - Apple
cables not working on Android, etc. - are only going to get worse by introducing
USB into this. The idea is to remove proprietary things from analog audio;
that's the whole reason the standard survived.
It also goes without saying that there is an inherent loss of utility caused by
this switch, since so many of the products I already own use 3.5mm.
~~~
dpark
I'm not attached sentimentally to any cables. I'd love for audio, volume,
skip, and microphone functionality to work with all my devices and all my
headphones. Would it be annoying to lose compatibility (or require a
converter) for all my existing speakers and headphones? Of course. I'd still
take that penalty if it meant the overall experience was better and eventually
more consistent.
Frankly I'm not sure 3.5mm is going to survive in the mainstream anyway.
Bluetooth might eventually replace it for mainstream use cases.
~~~
click170
>Bluetooth might eventually replace it for mainstream use cases.
As someone who uses Bluetooth audio, I strongly disagree. I think Bluetooth
audio is going to remain niche until batteries get better. Who wants to have
to charge their headphones or Bluetooth (often battery powered) speakers every
few days?
~~~
cm3
I may be spoiled but whether I use bluetooth or old-school RF headsets, it's
always less reliable and lower quality than wired.
------
tremon
_Industry signaling a strong desire to move from analog to digital_
Which industry would that be, I wonder? The audio hardware industry, or the
content creation industry?
~~~
cstavish
This is ironic. It used to be the analog medium that had a sort of natural copy
protection built in.
------
leaveyou
>Industry signaling a strong desire..
Is this the same industry that gave us the HDMI "blessing" ?
"The headphones, audio cables and the jack adapters are too cheap.. We can
solve that !"
~~~
sliverstorm
HDMI does have nice features, like ARC and CEC.
(I'd rather use DisplayPort though)
~~~
pritambaral
Neither of which require DRM, I'm sure.
HDMI causes a lot of problems (and price increases) because of HDCP.
~~~
stordoff
> HDMI causes a lot of problems (and price increases) because of HDCP.
And the way around it (in the specific case of HDCP) is often to buy cheap,
probably non-standards compliant equipment, which isn't a good situation for
anyone. I recall reading that the VitaTV had HDCP, yet I hadn't encountered
that issue. Turns out the HDMI switcher I was using (the cheapest one I could
find on Amazon with two outputs) was just stripping it off.
~~~
pritambaral
Then you got lucky. If that practice was pervasive, 1) honest people wouldn't
face as many issues with HDMI; and 2) the Hollywood DRM lobby would go crazy
on manufacturers.
------
strgrd
I imagine as a mobile device manufacturer you would be pretty excited to get
to cut the total number of ports on your device in half _and_ sell high margin
cables/adapters/hubs as accessories.
Let the dongle wars begin...
------
sp332
So now instead of buying a single DAC in my phone, I have to buy a DAC for
every set of speakers and headphones I ever want to plug in to it?
~~~
derefr
Coming from the reverse perspective: now my expensive wireless bluetooth
headphones will make use of their own good DAC all the time, instead of
relying on my phone's crappy DAC whenever I plug them in directly.
~~~
plaguuuuuu
You can already use your own DAC with smartphones.
~~~
sp332
How does that work?
~~~
makomk
On Android phones that support USB-OTG (most of the better/more modern ones)
you can literally just plug a standard USB DAC into the onboard USB port.
Honestly, it shouldn't be that hard to just include a decent onboard DAC and
headphone amp these days though - Sandisk's MP3 players and Allwinner's range
of ARM SoCs have even managed to integrate both on the same die as the main
CPU no problem.
~~~
throwanem
Apple devices have generally excellent DACs as well - to the point where it's
basically a waste of money to buy an external DAC. I wouldn't be surprised to
learn that there are Android devices with good onboard DACs, but I also
wouldn't be surprised to learn that such devices are at the high end of their
manufacturers' ranges and come at a premium price.
------
gdamjan1
> A good thing about USB Type-C headsets with MPUs is that they are going to
> be software upgradeable and could gain functionality over their lifespan.
yeah, unless it never happens, like it typically doesn't :(
~~~
pdkl95
> > software upgradeable and could gain functionality over their lifespan.
That usually means "now we get to ship it broken".
~~~
plaguuuuuu
I just can't wait to have bugs in my HEADPHONES of all things.
Or for them to be connected to the IoT.
------
zanny
Why the hell does a digital serial bus interface have an analog audio "mode"?
I mean, USB-C is already supposed to do daisy chaining, some 100w power
bidirectional power transfer, and support video channels over it according to
DisplayPort spec. Oh, and it's also Thunderbolt.
Seriously, is this port supposed to cost more than its weight in gold to
manufacture, and be such an extreme nightmare to program for we should expect
exploits every other Tuesday? I am totally on board with a high bandwidth even
parallel standard port for digital data exchange with good power delivery
metrics, but all this specificity over how it performs with what data while
supporting analog modes is... feature creep, by definition, in my book.
~~~
Sephr
You just hook up your existing DAC to the USB-C port instead of a 3.5mm jack,
and the rest is just software. It's not like they need to use special
circuitry that isn't already in your phone. All phones in existence already
have a DAC (or you couldn't make phone calls using the built-in speaker).
------
stegosaurus
This reminds me of the Game Boy Advance SP and how it removed the 3.5mm jack
for seemingly no reason other than to have me buy an adapter.
I won't buy a phone without a 3.5mm jack. It just works. I don't need digital
audio. My (medium-price-range) headphones sound utterly glorious, there is no
utility here.
To be honest, I'm not sure whether I'll ever upgrade my 2013 Moto G. It'll
probably break eventually.
When did I become a luddite? It's like, at some point, things stopped getting
substantially better, and just became sidegrades with annoying tweaks for the
sake of it.
I want my toaster to take... bread. Not tomatoes. Bread is what I eat for
breakfast, not toasted tomatoes. :P
------
_wmd
Initial steps toward DRM on the audio path?
~~~
danarmak
From the article:
> The MPUs will also support HDCP technology, hence, it will not be possible
> to make digital copies of records using USB-C digital headset outputs.
------
revelation
It's bizarre. Why would we push the <5mm^2 that an audio amplifier requires in
a smartphone into the headphones, where it then inevitably requires local
decoupling, a circuit board, power management, something to decode the USB (or
can USB-C do analog?) and all the other hassle?
Makes absolutely zero sense.
~~~
jws
The amplifier in the headphones can be designed specifically for those
headphones. It may well be that a transducer with a bizarre frequency response
and an amp/EQ that compensates is a better solution in the end. Where we are
now, headphones need to have reasonably flat response (or be called Beats :-)
in order to be acceptable. Or perhaps a wildly different impedance makes a
better cost performance argument, can't go there now, but with active
headphones you can.
It may be a bit like the powered speaker market for PA gear. You can get light
weight powered speakers for not a lot of money that perform quite well. The
amplifiers do exactly what they need to for the exact speaker, they don't have
to be able to handle whatever mystery load you plug into them. They can build
in the crossovers and EQ compensation when the signal is small rather than in
the speaker cabinets.
Also, noise canceling ear buds become possible when you have power coming to
them.
Of course, the heap of Apple Earbuds I have whose clickers no longer work
doesn't really inspire me about my new active headphone future.
------
Eric_WVGG
I’m betting on next year’s Retina Macbook to nix the audio jack for a second
USB-C. Likely held off because they don’t want to detract from a big wireless
earbud launch with the iPhone 7 this fall…
~~~
TazeTSchnitzel
Maybe 2016 will be Apple's year of USB-C. A single USB-C on the iPhone 7, two
on the MacBook, and USB-C earphones?
It could happen.
~~~
akhilcacharya
They just released the refreshed MacBook, they'll probably wait for 2017 OR
they'll do that on the new Pros that will be announced at WWDC.
~~~
TazeTSchnitzel
Oh yeah, I know about the 2016 MacBook, but I'm guessing they could lay the
groundwork for the 2017 MacBook with their other products this year.
------
cornchips
"Industry Signaling a strong desire to move from analog to digital"... which
"industry"?
"New digital audio needs to offer significant value at higher end" ... Intel
market segmentation at its finest.
Long reign, analog.
Looks like they missed a few groups in the job cuts.
~~~
Zekio
Apparently Apple and a Chinese company are the whole industry.
------
c0nfused
Welp.
Here is to hoping for AMD's zen.
USB audio is nice but it's also nice to not have to buy a new headset or new
DAC for my new box.
Edit: the spec says analog audio over USB, so there's that.
~~~
Sephr
USB-C has an analog audio output mode (audio adapter accessory mode which uses
the device's internal DAC), so you won't have to do that.
There will eventually be passive USB-C→3.5mm cables for use with analog-only
headphones.
~~~
sp332
Do you have a link for this? I've never heard of it but it sounds cool.
~~~
Sephr
Here's a screenshot of part of the relevant section from the USB-C spec:
[https://i.imgur.com/y6xCS9u.png](https://i.imgur.com/y6xCS9u.png)
~~~
cnvogel
> The headset shall not use a USB Type-C plug to replace the 3,5mm plug.
So, the only sane way to connect an analog headset to a mobile phone is
forbidden by the spec... Assuming I _had_ a phone with an analog-audio-capable
USB Type-C port, I'd like to have a headset that directly plugs in, and for
which I don't have to purchase, carry, and lose a separate adapter.
~~~
Dylan16807
So attach the adapter and then pretend it's part of the wire for the lifetime
of the device.
------
anexprogrammer
"Industry signalling a strong desire to move from analogue to digital"
Industry can sod off. This is about DRM not any consumer benefit. The decent
quality is all in the hifi market that's quite happy with jack plugs.
------
thescriptkiddie
Why would I want a digital headphone jack? Headphones are analog devices, so
you're just going to have to cram a DAC into the headphones anyway, making
them heavier, more expensive, and DRM-encumbered.
------
narrator
I think this is a sign that technology is stalling. They are grasping at
straws trying to get us excited about technology that adds little value for
the consumer but drives another upgrade cycle and even removes features that
they can sell back to us. First we get the locked down no dd-wrt routers and
now this.
------
KamiCrit
Seeing how gaming keyboards and mice have gone these days, I wonder if we'll
need manufacturer-specific software and an account to access our future audio
settings and features.
------
okasaki
USB seems a lot more flimsy than 3.5mm.
~~~
PhasmaFelis
IIRC, USB-C is supposed to be the most resilient USB to date (in terms of
average plug/unplug cycles before failure). Too early to say if that's
actually true, or how that compares to 3.5mm in any case, but it's not
impossible that it's comparable--especially since USB-C finally eliminates the
"try to jam it in the wrong way 'round" problem.
~~~
pritambaral
> the most resilient USB
At the cost of the devices it's supposed to connect together[0]. When you have
to think about having to add cryptographic signatures and verification to
cables[1], I cannot see the connectivity standard you built as safe.
0:
[https://www.amazon.com/review/R2XDBFUD9CTN2R/ref=cm_cr_rdp_p...](https://www.amazon.com/review/R2XDBFUD9CTN2R/ref=cm_cr_rdp_perm)
1: [http://www.bit-tech.net/news/hardware/2016/04/13/usb-
type-c-...](http://www.bit-tech.net/news/hardware/2016/04/13/usb-type-c-
auth/1)
------
israrkhan
yet another form of DRM, for which consumers will pay and corporate will make
money, and yet it fails to solve the piracy problem. high fidelity analogue
Audio recorders are easily available.
------
astannard
It's a shame that Apple seems to be going with a similar but incompatible strategy:
[http://www.cultofmac.com/401014/apple-now-sells-lightning-
he...](http://www.cultofmac.com/401014/apple-now-sells-lightning-headphones-
that-are-super-expensive/)
------
batbomb
Remember when 2.5mm was a thing for phones that could play MP3s?
That sucked.
~~~
cm3
I never knew what that port was for, thanks for clarifying, really, I'm not
kidding.
~~~
batbomb
Some were just for one earphone (mono) and a mic. Some were effectively the
same as the iPhone (stereo+mic). Some had weird adapters for other ports if
they just had the earphone+mic version. I think I had one of each at different
times, all pre-2007.
------
rasz_pl
We already have a strong and widely adopted digital audio standard; it's called
I2S. HDMI audio is four I2S channels in parallel. Provision pins for raw I2S,
or a simple USB endpoint decoding into four I2S channels, but no more effing DRM
shit.
------
jhallenworld
I thought Bluetooth was supposed to eliminate the need for the 3.5mm jack...
------
Zekio
If phones don't get two Type-C connectors, there is no point to this.
~~~
bsharitt
The future will be one port on everything, breakout dongles everywhere.
~~~
Zekio
Well it is gonna suck not being able to charge your phone while listening to
music
~~~
wmf
Instead of metal weights[1], headphones will contain batteries so first you'll
charge your headphones and then your phone will charge off the headphones.
[1] [https://blog.bolt.io/how-it-s-made-series-beats-by-
dre-154aa...](https://blog.bolt.io/how-it-s-made-series-beats-by-
dre-154aae384b36)
~~~
pritambaral
/s ?
~~~
wmf
Personally I think this is a terrible idea (I already have too many things
that need charging and charging a battery from another battery is inefficient)
but I predict that it will happen.
(BTW, does anyone know what happens when you plug two battery-powered USB-PD
devices together? How do they decide which direction power should flow?)
------
visarga
What is the purpose of making the audio cables digital? We don't need more
audio resolution, we are already beyond the limits of human hearing. It's like
making phones with 2000 ppi resolution for no practical benefit other than
bragging rights.
I'd rather we had better wireless audio. Bluetooth is too weak for
streaming around the house, has slow connection times, is unstable, and
generally discourages wireless audio.
------
SwellJoe
I'm torn on this one. I know it's a move to push DRM further out the stack,
and that annoys. But, I also want higher quality recording and playback from
my small devices. The audio circuitry of one of my tablets and my phone is
abysmal; my Nexus 7 (second generation) is nice but I don't have a good
recording option. Presumably this audio will be two ways, so if I want to
stick an ADC in that port I'll be able to record at very high quality, and if
I want to stick a DAC in that port, I'll be able to play back at very high
quality.
DRM is stupid, of course, and it's just pushing the copying out one more step
in the chain (they can't stop you from converting it to analog at _some point_
, because it's gotta be analog to get into your ear holes). And, of course,
DRM is made to be broken.
Anyway, the 3.5mm jacks on my devices are about 50/50 unusably bad (either
they aren't grounded/filtered properly and end up with a variety of noise, or
they aren't loud enough, or they distort at modest volume, etc.), so on the
whole, I won't mourn the passing of the 3.5mm jack.
~~~
TD-Linux
You can already do digital audio on Android, either over OTG USB and normal
USB Audio, or via the Android Accessory protocol. Presumably this would work
just as well over a USB C connector. Adding DRM is a pure downgrade from this.
------
headgasket
I posted this on a thread about this that should be linked somehow:
A true advance, a Jobs and Ive worthy advance, would be to supersede it. Use
the 3 prongs of a jack-with-mic for ground, tx, rx, and keep it backward
compatible with analog-only devices.
------
justaaron
Not a shred of reality displayed here. The analog audio jack they seek to
replace is the final analog output of a digital-to-analog converter, or DAC.
One cannot speak of a digital speaker, and a digital amplifier does not
usually refer to using the loading of the speaker driver to smooth out the
pulse chain like some motors on a PWM line... not in any high-fidelity audio,
anyway. Where is the DAC or audio codec to be located? Are we not just
pushing it out to the device and pretending it went away? Why the assumption
for headphones or consumer audio? What about USB's derived clock and jitter?
why on
------
cm2187
What do they mean by going "digital"? At the end the signal that reaches the
speaker driver will have to be analog. What are they suggesting to happen
instead?
------
rbanffy
How about the mechanical loads headphone plugs are subjected to while the phone
is in a pocket and we are walking?
I am very sure a USB-C is nowhere near as robust as a mini P2.
------
ohazi
No.
------
linux_girl
You can put _analog_ audio over USB-C, too. From the article:
> In fact, USB-C can be used to transfer analog audio in accordance with the
> specification of the connector. It all comes down as to how that audio is
> transmitted.
------
mlvljr
Somehow, I prefer "plain" USB for headsets: it's one connector instead of two,
and feels (looks) more modern (plus, requires less force to (un)plug).
No sympathy for clunky DRM-"enhanced" hw, of course :)
| {
"pile_set_name": "HackerNews"
} |
Students advised to falsely claim to be racial minorities for college admissions - Hydraulix989
https://www.marketwatch.com/story/students-were-advised-to-falsely-claim-to-be-racial-minorities-in-college-admissions-scandal-2019-05-18
======
danielscrubs
It should be noted that calling yourself Asian is also disadvantageous, so it's
not all minorities.
~~~
dclusin
For those unaware, a group of Asian Americans is suing Harvard[1] for
discriminative admission practices. Similar allegations have been made against
UC Berkley and other UC's as well.
1 - [https://www.nbcnews.com/news/asian-america/harvard-
announces...](https://www.nbcnews.com/news/asian-america/harvard-announces-
high-admittance-asian-americans-judge-weighs-affirmative-action-n990051)
------
Viliam1234
If it worked for Rachel Dolezal, it would be unfair to deny the same strategy
to students.
~~~
YUMad
And for Elizabeth Warren.
------
Fjolsvith
Why not have a DNA test to determine ethnicity for college admissions?
~~~
Hydraulix989
What if I don't feel comfortable sharing my DNA?
~~~
Fjolsvith
I'm sure there are plenty of educational opportunities outside the USA.
~~~
Hydraulix989
I am a USA taxpayer and registered voter.
------
turtlecloud
With this open secret, most know the reality when walking around Harvard and
Ivy League.
If current trends continue based on self-reported race in college
admissions... in the future most "white" people in the Ivy League will be the
following: mixed race (mainly half or 3/4 Asian), Middle Eastern
(Arab/Israeli/Iranian), or light-skinned Indians. The rest are legacy WASPs.
Most Hispanic people will be white people with a Hispanic name, like Beto
O'Rourke. Most blacks will be from the Caribbean, African princes, or non-slave-descended blacks like Obama/Kamala Harris. The Asian quota will be reserved
for rich Chinese intl students. Native Americans will be Elizabeth Warrens.
So really most Americans get shafted here by affirmative action: Protestant
whites, slave-descended blacks, Asian Americans, non-white Hispanics, and
actual Native Americans.
To protest this, whenever I am asked for any race I always randomly put down a
different race even if my last name obviously shows what race I am.
~~~
cafard
Beto O'Rourke? The Ancient Order of Hibernians would like a word with you.
~~~
turtlecloud
That’s the point lol. His first name is Robert but he goes by Beto to appear
Hispanic to appeal to Latino voters in Texas.
Herpes simplex virus is present in at least four out of five people - pmoriarty
https://www.vice.com/en_ca/article/j5q3gy/herpes-welcome-to-the-disease-you-probably-have
======
eganist
Missing from this: all the research that points to a substantial link between
the virus (HSV1 specifically) and Alzheimer's in some subgroups (notably the
ApoE 4 group). Selected opinions and studies from the most basic of google
scholar searches:
[https://www.j-alz.com/editors-blog/posts/case-viral-role-
alz...](https://www.j-alz.com/editors-blog/posts/case-viral-role-alzheimers-
disease)
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019841/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019841/)
[https://www.frontiersin.org/articles/10.3389/fnagi.2014.0020...](https://www.frontiersin.org/articles/10.3389/fnagi.2014.00202/full)
But HSV1 is also the most commonly transmitted form, so... shrug. Unless
there's a viable cure for it, nothing any of us can do.
------
craftyguy
That, and HPV.
When Google talks about "do no evil", who do you think they were talking about? - johns
http://avc.blogs.com/a_vc/2008/05/why-cant-micros.html
======
pg
It's pretty impressive of Fred to talk openly about Microsoft being evil. Very
few VCs would do that, because they wouldn't want to alienate such a powerful
partner and/or potential acquirer for the startups they fund.
Founders: this is the kind of VC you want on your side.
------
Herring
"Microsoft messed with the technology industry for a decade..."
He's talking like it's past tense. OOXML was just a few weeks ago. They worked
hard for that bad reputation. And honestly given the billions they've made,
I'm not sure I'd choose different.
------
okeumeni
I want to reiterate my advice to Xobni: sell and run! These guys are beasts;
they will make a copy of Xobni and make you history. Remember Netscape.
paul
It's funny to see people making this argument. When was the last time that MS
released anything truly new or made any "competitor" history? Netscape was a
long time ago.
~~~
okeumeni
It’s easy to say that when never played on MSFT turf; asked anyone of those
companies building plug-ins or Add-ins for their platform. Ask your self why
the Europeans are so hard on MSFT these days >1 billion in fines.
------
sabat
Of course, it was "don't be evil", but that's just semantics. Mr. Buchheit
(sp?) came up with that famous motto. He meant it literally, from what he
says: don't be like the other guys. Don't be evil. I don't think that's the
same as "be perfect", but at least try to have pretty good intentions --
that's far and away better than most big companies.
A Peek Inside the Niantic Real World AR Platform - srameshc
https://nianticlabs.com/blog/nianticrealworldplatform/
======
ruytlm
As one of those who got stuck into Ingress in its early days, I'm always happy
to see Niantic pushing forward.
The only issue is they always seem to be a step ahead of themselves, in terms
of their ideas being just a little too far ahead of the technology.
It will be interesting to see how truly real-world AR shapes the world when it
becomes widespread. I'm sure that some day anthropologists will look back on
this era with fascination, at how both society and individual human
development/behaviour were shaped by technology, in terms of things like the
way that access to information and exposure to 'realities' not restricted to
the real laws of physics shape development.
~~~
digi_owl
Speaking of the laws of physics, i suspect the primary limitation for this
will be the power supply.
I seem to recall that there are ongoing jokes regarding existing AR games that
one basically have to bring a generator (aka car) to keep up with the battery
drain of the game.
~~~
erikpukinskis
I would expect the rendering efficiency to get better fast. The amount of
power spent on graphics relative to the actual amount of new information being
generated is absurdly high.
We should start seeing more experiences designed specifically for low power
renderability soon.
~~~
opencl
Why do you expect it get better at a significantly higher rate than it has
been over the past several decades of computer graphics? Is there some major
up-and-coming research on the topic?
~~~
erikpukinskis
No, just a kind of Hegelian turn. Computer graphics have been oriented towards
emulating photography for many years. That project is nearing its end and
still realism is lacking.
As people start to realize there is more to seeing than just photography, we
will enter an era of discovering more sparing uses of computation.
Look to painting for a possible progression. Realism/classicism was just an
early maximum. Impressionism followed. And then many other movements.
Contemporary painters can do much more with much less.
------
BLKNSLVR
The "Neon" demo displayed the horribly limiting nature of the device. The AR
aspect is great, but to have to see it through the lens of the device and to
have to occupy both hands to interact whilst full-bodily moving around the
space makes it seem quite "cacky" (I can't explain what I mean by cacky, but
it sounds like what I mean).
It's like playing a shooter game in the real world, but having to have the
device pointing in the right direction in order to see what's actually there.
Like using a torch in pitch blackness (Doom 3). This is where a Google Glass /
VR-style headgear kind of interface would be perfect.
But since this is all just in "tech demo" stage, I'm probably being too
critical. It is "cool".
Tertiary worry: AR advertising.
~~~
AndrewKemendo
It's building the communications infrastructure you need for glasses to "just
work."
You can't just come out with glasses because there is too much you don't know
about how multi-user and persistent content AR interface and communications
work from a standing start.
~~~
BLKNSLVR
My comment was based on my reaction to how unexpectedly badly the handling of
the device fitted into the activity, and how obvious it was that a different
interaction device was necessary.
And yes, I think Niantic are more of a technology / platform company than they
are a game developer, and I think that's the level of their involvement in the
Wizards Unite game.
Google "just came out with glasses", but that didn't go very well, probably
for the reasons you gave. Google Glass time may be upon us again soon,
however, given these tech demos.
~~~
AndrewKemendo
Yes, I'm agreeing with you and explaining why the big companies are making
such huge investments in this when the form factor and use case is still not
refined.
------
the-pigeon
Interesting. Niantic has done a really poor job managing Pokemon Go and
Ingress though.
Hopefully they've hired better management with boatloads of cash they've made
with Pokemon Go.
~~~
hrktb
Users have been frustrated and very vocal on social media, and there are clear
areas where pitchforks are pointed at Niantic.
Yet I am not sure I would call that bad management, in that they kept the game
running, core players are still there in decent numbers while casual players
seem to be coming back in waves.
They can surely do better and the game is riddled with bugs, but no one is
operating at that scale without significant issues, and they managed to not
ruin the game while making impacting changes to the whole system for two years
now.
I have my frustration with the games, but I genuinely think they made a very
decent job.
~~~
pedroaraujo
As an early player of Pokémon Go since the release day, I can say that the
whole game was poorly managed (and still is). Even after the initial spike of
users, they can't do a major event without messing it up:
\- [https://www.theverge.com/2017/7/25/16019404/pokemon-go-
fest-...](https://www.theverge.com/2017/7/25/16019404/pokemon-go-fest-refunds-
disaster-review)
\- [https://www.destructoid.com/niantic-is-handling-pokemon-
go-p...](https://www.destructoid.com/niantic-is-handling-pokemon-go-poorly-in-
spite-of-its-success-376241.phtml)
The people who play Pokémon Go nowadays do it for the novelty of it being
Pokémon, not because it is a good game. Also, Pokémon Go is popular the same
way Flappy Bird was popular: it's an effortless game to handle and it's very
convenient to play when you already spend a lot of time on the phone.
Niantic managed to turn a multi-billion dollar game into a multi-million
dollar one.
~~~
Wofiel
And yet, according to some sources, as of May had the most players they've had
since launch. [0]
Whatever beef you might have with Niantic or the players of Pokemon Go, the
numbers suggest that the game still has some stickiness beyond novelty.
[0] [https://www.eurogamer.net/articles/2018-06-27-pokemon-go-
pla...](https://www.eurogamer.net/articles/2018-06-27-pokemon-go-player-count-
at-highest-since-2016-summer-launch)
~~~
dwild
That HAS to be from other countries then. Where I lived, in Montreal, there
were HUGE AMOUNTS of people playing that game all around. I remember there is a
tiny park close by my work; it's cute and almost always empty, but there are 3
Pokestops there. During the first few months, there were literally 30-40 people
there constantly. It was always funny to see them all flee somewhere else when
there was no longer a "boost" on these Pokestops. That's not including the
people I was constantly seeing walking around playing. I now rarely see people
even play that game. It still happens, but it's pretty far from the first few
months when it was everywhere.
~~~
hrktb
There is a clear shift in who plays the game the most and how they play.
The first 3~4 months I remember seeing a lot of youngish and very active
people, the heavy players being those travelling all day around the city to
complete their dex.
Now I end up a lot more with older people who manage to play during their
jobs, do a lot less “grinding” but do it more efficiently, and can drop real
money here and there when it matters.
I am not surprised by the number of players rising again while there is no huge
30~40-person crowd rushing everywhere: we don’t need to rush anymore, and the
main events can easily be planned 15~30 min in advance.
If you are interested in huge crowds, public parks during community days might
be the remaining attraction.
------
pbw
Pokemon Go's AR was a joke, but the promise of AR was very real and compelling
to people, so they were wildly successful and now have the resources to
actually try and solve the technical problems for real.
A lot of startups work this way. They have an ambition to do something but
really no chance of doing it. But if they can attract enough attention and
raise enough money they might actually be able to attempt it for real.
Crowd funding overtly works like this, but it's really common in regular
companies as well. If you can show there's a chance, maybe you can raise
enough money to actually do it.
~~~
CharlesW
> _Pokemon Go 's AR was a joke…_
If you think of AR as video overlays on reality, for sure.
If you think beyond video, Pokémon's AR was a home run. My kids know where
Pokémon are most likely to live. They know where the gyms are. Their reality
has _definitely_ been augmented.
~~~
proto-n
Yeah, that's something many people get confused about. In Pokemon Go, AR in the
sense of 3D stuff rendered over the camera input is not really essential to the
game, and most people never even use it, as it drains the battery too much.
On the other hand AR, as in augmented world map, is the core of the game and
is brilliant.
~~~
birdman3131
Has nothing to do with battery drain and everything to do with the fact it is
significantly easier to hit the harder throws that give you a higher chance of
catching the pokemon.
------
MaxLeiter
Niantic is currently developing a Harry Potter AR game. I wouldn’t be
surprised if they allow you to cast spells at other players during duels and
what-not, as one of the demos shows
~~~
jerrysievert
as both someone who's been in the mobile location industry (having headed an
R&D center specifically focused on real-time location for mobile devices), and
is an avid pokemon go player (and thus Niantic customer), I don't expect them
to have anything close to their demos until a year after launch, and even then
in extreme beta.
that said, it's almost time for me to do another 1km walk to try to eke out
400m of "egg distance" in pokemon go.
------
madrox
Well done. The occlusion is far better than I would've ever thought possible
with a single camera on current mobile hardware.
Whenever they get the form factor right for AR, I think we'll get to see some
really interesting apps
------
wpietri
One of the big questions I have about AR is the extent to which it's a novelty
versus something that delivers lasting value. As an example, 3D movies and
especially 3D TVs were an impressive technical accomplishment that basically
nobody cared about. It'll be interesting to see how this plays out.
------
taneq
Good to see them working on real AR instead of "render a geolocated object
with the camera feed in the background" (original Pokemon Go). At least the
new ARKit version attempts to track the feed a bit.
------
kriro
I'm curious when Blizzard will enter the mobile AR/geo-game market. Walking
around questing in groups with dungeon spawns etc. using the WoW-IP would be
interesting as would hack and slashing around Diablo-style. Plenty of skinner-
box random loot material to keep people playing as well. The battle systems of
MMORPGs or Diablo should map nicely onto AR-games. I suppose actually using
distance could get tricky as people would try risky things to get battle
advantages, so just spawning and fighting round-based is probably the way to go.
I thought Pokemon Go got dull rather quickly (played up to level 35; I liked
Ingress a lot more) and I still can't understand why they didn't opt for
round-based battles. I've recently been playing Jurassic World Alive very
casually and like the overall design more. Each "catch" is sort of meaningful
and the DNA-extraction sequence is more fun than catching a pokemon. On top of
that the battle system is cooler and rewards good play to a certain degree.
Before playing Pokemon Go I thought of AR in terms of overlaying 3D models
over a camera feed (which is the feature I turned off in Pokemon Go). It
certainly gave me a different perspective as I thought of some other use cases
for geo-location overlay.
------
kauloswag
I wonder when the first AR adblocker will become available.
------
Animats
Not too bad. They definitely have Pokemon Go, the Next Generation. How good is
the phone location system? Staying locked to the real world is essential for
AR.
[1]
[https://www.youtube.com/watch?v=kPMHcanq0xM](https://www.youtube.com/watch?v=kPMHcanq0xM)
------
Applethief
This is awesome! I'm loving the advancements in AR.
| {
"pile_set_name": "HackerNews"
} |
Responsive Design Won’t Fix Your Content Problem - Ashuu
http://alistapart.com/column/responsive-design-wont-fix-your-content-problem
======
danso
It's astonishing how much "responsive design" gets thrown around as a
buzzword... however, unlike a lot of buzzwords, "responsive design" actually
means something, and implementing it has implications system-wide. Across
legacy sites, I've almost never seen it implemented in a way that didn't hide
critical information, often because the designers and the people in charge of
the legacy CMS probably don't coordinate enough. Things like "make everything
that isn't in a p or image tag go to the bottom of the page" can hugely affect
the context of certain elements.
For example, I worked on a site that had hand-coded captions for photos, and
so those captions ended up having tags that were display:none when the
device had a low enough width. That's not great for photos that require the
context of the captions.
~~~
kamjam
Indeed. The thing that annoys me about responsive design is when I browse on
my phone and can't find what I'm looking for so I "Request Desktop Site" and
it's the same! Grrr. Even worse when they STILL serve me all those images, but
they are just hidden, eating up my limited bandwidth. Double Grrrr.
~~~
Pxtl
Usually "Request desktop site" means booting you back to the homepage of the
desktop site.
~~~
kamjam
No, that's not correct, at least in my experience. "Request desktop site"
usually means "reload the same page, but send a User Agent string so the
server thinks I'm calling from a full desktop browser". This works for
websites where the server does some UA string sniffing and sends different
html+assets for different types of devices.
The same works in reverse. In Chrome Dev Tools I can set the UA to
iPhone/iPad/Android ([http://imgur.com/mJY6lP6](http://imgur.com/mJY6lP6)) and
I _should_ expect to see a mobile version of the site. Of course, with
Responsive this does not work since responsive looks at screen size, not UA
string.
For example, try changing your UA string in Chrome to iOS 6 and visit
[http://www.bbc.co.uk/](http://www.bbc.co.uk/)
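In code, that UA sniffing amounts to something like this (a toy sketch; the
template names and tokens are made up, not any particular site's logic):

```python
def select_template(user_agent: str) -> str:
    """Toy UA sniff: serve the mobile page to phone-like agents."""
    mobile_tokens = ("iPhone", "iPad", "Android", "Mobile")
    if any(tok in user_agent for tok in mobile_tokens):
        return "mobile.html"
    return "desktop.html"

# "Request Desktop Site" just resends the request with a desktop UA string:
print(select_template("Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X)"))  # mobile.html
print(select_template("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))              # desktop.html
```

A responsive site skips this branch entirely and keys everything off CSS media
queries on viewport width, which is why spoofing the UA string changes nothing
there.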
~~~
Pxtl
I mean literally if there's a button on the page that says "I want to see the
desktop site". Those inevitably stink.
------
ColinWright
Lessons I've learned the hard way that appear late in the article:
* Design your editorial workflow first
* You won’t have time to edit everything
* Plan for long-term governance
~~~
j_s
Both the OP and your reply emphasize the _what/why_... are there any
resources available explaining _how_? If not, it sounds like this is a great
opportunity for some blog posts!
------
beaker52
Content shouldn't just be defecated into pretty grids with a responsive label
slapped on for a boardroom demo.
~~~
beat
Depends. If you want to get a budget and work to do in a big corporation,
that's _exactly_ how you should do it.
You simply need to drop your petty concerns about the quality of your work and
start looking at building your own personal fiefdom within the empire. Learn
the critical formula, _success = ass_kissing + buzzword_compliance_ , and
you're off on your magical race to the middle! Within 20 years, you'll be
staring at the layoff pink slip in your hand, looking back on a life of
mediocrity and forward to being unhireable anywhere else, wondering what went
wrong.
~~~
timje1
Woah, I bet you're great fun at parties.
~~~
coldtea
I bet this tired cliche of a phrase doesn't make you very popular at parties
either.
Couldn't you stick to replying to what he said?
| {
"pile_set_name": "HackerNews"
} |
Font Awesome 4.1.0 Released – 71 New Icons - fortawesome
http://fontawesome.io/whats-new/?r=hn&v=4.1.0
======
hiharryhere
Thanks for the hard work. It's a great contribution to the community.
One thing, could be my eyes, but is the box on the top of the cab a little off
centre? Am I going mad?
[http://fontawesome.io/icon/taxi/](http://fontawesome.io/icon/taxi/)
~~~
fortawesome
Excellent catch! Want to open an issue?
------
kipple
Still no infinity symbol? Much sadness :'(
[https://github.com/FortAwesome/Font-
Awesome/issues/1647](https://github.com/FortAwesome/Font-Awesome/issues/1647)
------
saltado
There's 3 Pied Piper icons to chose from!
~~~
fortawesome
Well, really just 2. One's an alias.
~~~
saltado
ah yeah, the (alias) appears on the next line on Chrome. Great work on the new
release!
~~~
fortawesome
On it.
------
pzaich
Stanford tree!
------
mkempe
bouy -> buoy
~~~
fortawesome
Nice catch. Fixing.
| {
"pile_set_name": "HackerNews"
} |
How to sell your company to Microsoft - kitsguy
http://www.techvibes.com/blog/jon-gelsey-director-of-acquisitions-and-investments-at-microsoft-talks-tactical-at-banff-venture-forum
======
helveticaman
This appears to be the guy that makes acq decisitons, or works for the acq
department. I know I read this intently.
------
Flemlord
> have your investors deck be 100% complete, be prepared and be quick
Anybody know what this means?
~~~
brown
He refers to the Powerpoint presentation that you would show to VC's or other
potential investors. It includes the high level objectives of your company,
why you're different, market size, plans, etc.
Refer to Guy Kawasaki's famous blog post on the 10/20/30 rule for a good
intro:
[http://blog.guykawasaki.com/2005/12/the_102030_rule.html#axz...](http://blog.guykawasaki.com/2005/12/the_102030_rule.html#axzz0SqMyZEG9)
I also prefer to have about 20 backup slides at the end that address most
common questions. Usually these will be deeper drill downs into market sizing,
competitors, financials, short/medium/long term plans.
The successful entrepreneurs who I've worked with are almost fanatical about
the investor deck. They obsess over every word on every slide. It's both
incredibly inspiring and utterly painful.
| {
"pile_set_name": "HackerNews"
} |
Decades after Chernobyl disaster, engineers slide high-tech shelter over reactor - d_e_solomon
http://arstechnica.com/science/2016/11/decades-after-chernobyl-disaster-engineers-slide-high-tech-shelter-over-reactor/
======
Tempest1981
30 years later, what a project: "More than 40 governments have contributed to
funding its construction (€1.5 billion), which involved 10,000 workers."
------
d_e_solomon
I was really impressed that it was slid on rails into place instead of being
assembled in place in sections.
| {
"pile_set_name": "HackerNews"
} |
23andme replies to the GAO: GAO Studies Science Non-Scientifically - jamesbritt
http://spittoon.23andme.com/2010/07/23/gao-studies-science-non-scientifically/
======
jballanc
First, let me say that I'm very much in favor of the direction that 23andMe is
headed. Personalized genetic medicine _is_ the future. Generic small molecule
treatments (and by generic, I mean that they're given to patients based on the
disease and not based on the patient+disease profile) have pretty much hit
their limit of efficacy and biologics are a lot like supersonic airliners:
they'd be great in theory if there weren't so many problems with them in
practice.
I also sympathise with the way that the GAO, the FDA, and a number of other
groups that have been weighing in on this issue have been lumping together
23andMe with some of the less reputable players in this burgeoning industry.
That said, I still don't agree with what 23andMe is doing. I've nearly
completed a Ph.D. in computational biochemistry and have been helping my wife
with her thesis in cell signaling, and the only thing I can tell you about the
information contained in a genetic profile that 23andMe provides you is that
you can't really tell much, if anything, from the profile that 23andMe
provides you. Even supposedly straight-forward genetics like the presumably
Mendelian pattern of inheritance in CFTR SNPs can get complicated fast. For
example: <http://www.ncbi.nlm.nih.gov/pubmed/14966131>
Of course, the real problem with 23andMe isn't with 23andMe but with the FDA.
The FDA's job, first and foremost, is to keep people safe. _IT IS NOT THE
FDA's CONCERN TO SEE NEW TREATMENTS DEVELOPED!_ The result of this is that the
medical community and the pharmaceutical industry are caught in a bit of a
catch-22. Everyone that I've spoken to (including past CSOs of large-name
pharmaceutical companies) recognizes that including genetic profiling of
patients in drug trials might reveal that different people benefit from some
drugs more than others, and that this is linked to genetics. In fact, you can
probably recognize this yourself: When you get a headache, what do you reach
for? For me, Ibuprofen does the trick, but Acetaminophen (Paracetamol) does
nothing. For my wife, it's the reverse. This is almost definitely linked to
genetics.
The problem is that genetic profiling for drug trials is expensive and, get
this, as of yet unproven to make a difference! Therefore, even if a
pharmaceutical company went to the expense of including genetic profiling in
their drug trial, the FDA would reject the results based on the fact that
genetic profiling is unproven. So, nobody does genetic profiling. So you can't
prove the usefulness of genetic profiling. etc.
Now, you might think that 23andMe has a chance of solving this issue, by
allowing people to take control of their own genetic profiling, but I think
they're actually doing more harm than good. Unless you are highly trained in
genetics and keep up with recent literature, you're not going to know what to
do with that profile. If you bring that profile to a doctor, they will pretty
much have to disregard it without a second thought (see FDA argument above).
Honestly, with the current state of medical research and pharmaceutical
regulation, a 23andMe profile is worth the paper you printed it on. What's
worse, the more doctors get pestered with people bringing in their own
profiles, the less likely they are to pay heed to any of it, and the
distraction that this causes the FDA, worrying about whether they should
regulate 23andMe or not, prevents the root issue from being addressed.
I liken 23andMe to Quest Diagnostics. If you've visited a doctor recently,
you're probably familiar with Quest. But when was the last time you went to
Quest asking for a blood test without being prompted to do so by a physician?
Honestly, you'd probably learn more on your own looking at the results of a
cholesterol test or CBC than by perusing your own genetic profile. Until the
U.S. (or, more likely these days, the E.U.) makes a major push for research
into personalized genetic medicine, 23andMe will be little more than a
novelty.
~~~
moultano
>Now, you might think that 23andMe has a chance of solving this issue, by
allowing people to take control of their own genetic profiling, but I think
they're actually doing more harm than good.
23andMe told me that I'm a carrier for phenylketonuria. That's useful
information to me, and strictly factual. The other slight-increased-risk-of-
this slight-increased-risk-of-that isn't all that useful, but who cares? They
give you the odds. I can't imagine what more you'd expect from them than to
present current research conclusions as unbiasedly as they can.
I think you have a mistaken impression about what their product does.
~~~
jballanc
> 23andMe told me that I'm a carrier for phenylketonuria. That's useful
> information to me, and strictly factual.
Quick! What's your chance of passing this on to your child? If you said that
it depends on the genotype of your spouse, you win! If your spouse is also a
carrier but doesn't have phenylketonuria, what's the probability your child
will be affected? If you said 1/4, you're right!
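That 1/4 figure for two carriers can be checked by enumerating the Punnett
square (an illustrative sketch added here, not part of the original comment):

```python
from itertools import product

# Each carrier parent (genotype Aa) passes on 'A' (normal) or
# 'a' (recessive PKU allele) with equal probability.
outcomes = ["".join(sorted(pair)) for pair in product("Aa", repeat=2)]
# outcomes == ['AA', 'Aa', 'Aa', 'aa']

affected = outcomes.count("aa") / len(outcomes)  # homozygous recessive
carriers = outcomes.count("Aa") / len(outcomes)  # unaffected carriers

print(affected)  # 0.25
print(carriers)  # 0.5
```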
Now, let's say you've met someone and convince them to get a genetic test, and
they turn out to also be a carrier, what do you do? Do you risk the 1/4
chance? One of the really interesting things about the human genome project
was that the scientists involved knew that these sorts of hard questions would
come up, so they really emphasized the human and counseling aspect of the
research, in addition to the hard science. These are the types of people who
are upset that 23andMe has mostly undone what they were attempting to do by
emphasizing a holistic approach to people understanding their own genetics.
Also, I just have to point out how ironic your example is. In fact, this
information is rather useless. Phenylketonuria is a relatively easily managed
disease, and all children born in the U.S. (and many other countries) are
already tested at birth, paid for by the government who did the studies and
decided this was a good test to do. You actually didn't learn anything you
wouldn't have potentially found out anyway (and at no cost to you).
~~~
moultano
>One of the really interesting things about the human genome project was that
the scientists involved knew that these sorts of hard questions would come up,
so they really emphasized the human and counseling aspect of the research, in
addition to the hard science.
How paternalistic of them. These questions are hard because they are personal,
and not the sort of thing that should be regulated.
>Now, let's say you've met someone and convince them to get a genetic test,
and they turn out to also be a carrier, what do you do? Do you risk the 1/4
chance?
That's my choice. Otherwise, it wouldn't have been. Though it isn't that
important for phenylketonuria, it might have been a deal-breaker if I were a
carrier of sickle-cell.
You're dancing around my point here. Some of the information they provide is
iron-clad binary, and can be very useful. Most people may not get more out of
it than slight increases or decreases in their relative risk, but some will
find out things that are life-changing. The last time 23andMe came up, one HN
commentator said that he found out that he was likely to be lactose-intolerant
from it, so he changed his diet and it changed his life. He had lived with the
symptoms for so long that he just assumed that was how life was supposed to
be.
Here's what Sergei Brin got out of it:
<http://too.blogspot.com/2008/09/lrrk2.html>
I don't understand your motivation for wanting to forcibly withhold this
information from people.
------
roder
I'm a 23andme customer (post-DNA day) and after seeing the DNA mixup[1] and
watching the YouTube video released by the subcommittee on hearings and
oversight[2], I am becoming increasingly wary of consumer genetic testing.
I am glad to read that 23andme support regulation, because ultimately that is
what should be required. Much like the regulation that HIPAA provides, DNA
information should be federally protected and regulated.
[1] [http://www.switched.com/2010/06/08/23andmes-dna-mixup-
leaves...](http://www.switched.com/2010/06/08/23andmes-dna-mixup-
leaves-96-customers-with-wrong-test-results/) [2] <http://bit.ly/9MrBpe>
~~~
jamesbritt
I'd prefer to see Consumer Reports handle this instead of anything like the
FDA.
I'm an adult; I can decide for myself what to make of the information I
obtain.
~~~
timr
_"I'd prefer to see Consumer Reports handle this instead of anything like the
FDA. I'm an adult; I can decide for myself what to make of the information I
obtain."_
I realize that this isn't going to be a popular opinion around these parts,
but no, you can't. Smart as you may be, you're utterly ignorant when it comes
to this stuff, and you haven't got a chance of beginning to understand the
intricacies that go into interpreting the data that these companies are giving
you. More importantly, you don't have _time_ to understand.
I have a Ph.D. in Biochemistry, and for the most part, I couldn't tell you
whether the data in a 23andMe report is meaningful or utter garbage. I perhaps
have enough knowledge to go out and _find_ the necessary papers, read and
interpret them in light of the literature on genomic analysis, and evaluate
risks...but I wouldn't begin to have the time to do it properly for all of the
data in a given report. No offense, but you haven't got a prayer. These
companies could be fabricating results whole-cloth, and you'd have no way of
knowing it.
Most importantly, because there's no regulation of the methods used by these
companies, neither you nor I have any way of knowing what methods they're
using, whether the techniques are precise or accurate, or even if the labwork
is done by skilled technicians in a sterile environment. Without these
assurances, even if you _could_ understand the data, you would have no
guarantee that the data was even gathered correctly.
There are some domains where expertise matters more than anything else, and no
amount of brainpower makes up for the instincts provided by years of training
and experience. This is one such area.
~~~
moultano
And for some reason you think this is different from any other field? I don't
have a prayer when I walk into a car dealership of figuring out whether a car
I'm thinking of buying will function properly. There are too many parts for me
to conceivably understand. It's very complicated.
Thankfully, there are many experts and expert organizations that I trust to
make this determination for me. I defer to their authority.
How is this any different? The market seems to handle this just fine.
~~~
jballanc
> And for some reason you think this is different from any other field?
Yes, actually. If we must really flog a tortured analogy: This is not like
going to your local mechanic friend and asking for car buying advice. It's
more like this guy named Karl Benz comes and tells you he's got a great new
invention called the internal combustion engine and you should definitely buy
it. So you go to your friend the steam engine mechanic and ask him for advice.
Sure, he understands the principles, maybe, but this is something completely
new!
If you must appeal to authority, consider you have two experts in this thread
telling you that even they think the topic is too complex to draw useful
conclusions. If you don't trust random people on the internet (not that I
blame you), then go to the source. From the article:
_There are valid scientific reasons for different estimates from different
companies, such as: companies employ slightly different statistical models for
making risk estimates; companies establish different criteria for the
inclusion of associations in their reports; new associations are being
discovered at a faster rate than companies’ development cycles; companies may
test for an imperfectly overlapping set of genetic variants for reasons
including the ability of different genotyping technologies to assay certain
variants._
To be clear, this is not a case of Car and Driver favoring German engineering
but JD Powers always skewing towards American manufacturers (completely
contrived example, BTW). In this case the statistical models _are_ the
science, not just some interpretation of the science. This field is young. Too
young. This money and effort would be better spent on basic research.
> The market seems to handle this just fine.
No, the _market_ gives us homeopathy and snake-oil salesmen. Humans,
especially when confronted with areas in which they are not knowledgeable, can
be surprisingly irrational!
~~~
moultano
>To be clear, this is not a case of Car and Driver favoring German engineering
but JD Powers always skewing towards American manufacturers (completely
contrived example, BTW). In this case the statistical models are the science,
not just some interpretation of the science.
Statistical models that are _far far_ more suspect drive decisions in areas of
all our lives that have far more material effect than this.
>This field is young. Too young. This money and effort would be better spent
on basic research.
Whose money? Mine? They aren't a non-profit.
>Humans, especially when confronted with areas in which they are not
knowledgeable, can be surprisingly irrational!
Isn't that their right?
------
bkrausz
I definitely treated my 23andme profile as a piece of entertainment, and don't
think people should take it too seriously. I found someone who may be a
distant cousin, and a few interesting traits, so I'd consider it a slightly
more informed (and similarly priced) palm reading.
I'm really glad I approached it like that, because I was one of the people who
were in the DNA mixup. They told me I was a Tay-Sachs[1] carrier, which has a
potentially vital impact on my relationship decisions (and was also a bit
concerning given that neither of my parents are carriers). I'm really glad
they caught and fixed it, and that I didn't take it seriously enough to lose
sleep over the results.
[1] - <http://en.wikipedia.org/wiki/Tay%E2%80%93Sachs_disease>
------
carbocation
23andme uses GWAS SNPs, about 1 million of them. You have a 3 billion base
pair haploid genome, so they sample ~1/3000 base pairs. What does this mean?
First, most of the SNPs are noncoding, found not in exons but in introns and
intergenic space. Still, you can't dismiss these; some are causal for major
gene expression changes (my lab has articles set to be published next week on
this topic; more details then).
Second, the fact that you're only covering 1/3000 nucleotides tells you that
there's no way you can fully specify someone's genetic risk with this data,
ever. There is a debate in the community right now about whether common
variants (which are on 23andme type chips) or rare variants contribute most of
the genetic variability. I think it's clear that common variants are winning
this game — but this is on a population level! On an individual level, I don't
think I'd ever be comfortable telling someone their risk of X with just GWAS
data. For the population, their rare/private mutation in Gene Y has little
impact, perhaps, but try telling that to the 5 people with the Gene Y
mutation who will die by age 20. (Extreme hypothetical to try to drive home
the broader point.)
Let's also not forget that phenotype is a computed expression of
genotype+environment. If your genetic risk score from 23andme says you are at
high risk of heart disease, yet your grandparents on both sides are 100 and
healthy and your LDL-C is 60, should you really be concerned?
~~~
khafra
If you know somebody's medical history and 23andMe results, can you say more
things about them at any given level of confidence than you could with just
zis medical history?
~~~
carbocation
Not really, at least not for cholesterol; not right now. [1] Hopefully in the
next few years, yes. But genetic data, since our knowledge is limited, does
not increase risk discrimination beyond family history. It does, modestly,
increase risk classification.
_Edit_ \- Link didn't work initially. I was not trying to just snarkily link
to pubmed - I had a specific paper in mind. Sorry!
[1] <http://www.ncbi.nlm.nih.gov/pubmed/18354102>
------
jacquesm
23andme engage in infotainment with an additional helping of candidate
diseases thrown in for each and every one of their customers. GPs are already
amongst the most overworked people on the planet; the last thing they need is
a herd of people waving print-outs they don't understand in support of yet
another round of imaginary diseases.
It's the perfect product for the hypochondriac.
| {
"pile_set_name": "HackerNews"
} |
Lossless compression of English messages using GPT-2 - kleiba
http://textsynth.org/sms.html
======
cs702
...by the one and only Fabrice Bellard: "gpt2tc is a small program using the
GPT-2 language model to complete and compress (English) texts. It has no
external dependency, requires no GPU and is quite fast...The compression
ratios are much higher than conventional compressors at the expense of speed
and of a much larger decompressor. See the documentation to get results on
text files from well known compression data sets."
A natural question I've pondered from time to time is whether Fabrice is
really a time traveler from a more advanced civilization in the future, sent
back in time to show us, mere mortals, what humankind will be capable of in
the future.
If this sounds far-fetched, consider that he has created FFMPEG, QEMU, LibBF,
SoftFP, BPG, TinyEMU, a software implementation of 4G/LTE, a PC emulator in
Javascript, the TCC compiler, TinyGL, LZEXE, and a tiny program for computing
the biggest known prime number.
And that's just a partial list of his successful projects, which now of course
also include software for lossless compression with Transformer neural
networks.
Any of these projects, on its own, would be considered a notable achievement
for an ordinary human being.
Source: [https://bellard.org](https://bellard.org)
\--
Copied and edited some text from my post a year ago:
[https://news.ycombinator.com/item?id=19591308](https://news.ycombinator.com/item?id=19591308)
\-- I never cease to be amazed by the guy.
~~~
londons_explore
This particular project is noteworthy mostly for its completeness and 'it just
works' functionality. Tens of researchers before him have used arithmetic
coding on the outputs of various neural network models to do lossless
compression of text or images.
Bellards contributions are a packaged tool (as opposed to PoC code) and demo
webpage, and the idea of using CJK characters rather than outputting binary
data (in todays world of JSON, binary data has fallen out of fashion).
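The CJK trick is essentially a base conversion: render the compressed bitstream
as characters from a large Unicode block instead of raw bytes. Bellard's actual
mapping isn't specified here; this sketch just shows the round trip under that
assumption:

```python
CJK_BASE, CJK_COUNT = 0x4E00, 20000  # a run of CJK Unified Ideographs

def pack(data: bytes) -> str:
    """Render the bytes as a base-20000 number written in CJK digits."""
    n = int.from_bytes(data, "big")
    out = []
    while True:
        n, d = divmod(n, CJK_COUNT)
        out.append(chr(CJK_BASE + d))
        if n == 0:
            break
    return "".join(reversed(out))

def unpack(text: str, length: int) -> bytes:
    """Invert pack(); `length` restores any leading zero bytes."""
    n = 0
    for ch in text:
        n = n * CJK_COUNT + (ord(ch) - CJK_BASE)
    return n.to_bytes(length, "big")

data = b"hello"
assert unpack(pack(data), len(data)) == data
```

Each character carries log2(20000) ≈ 14.3 bits, so the "text" stays copy-
pasteable while packing far more than one byte per character.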
------
goodside
Not to diminish what a cool idea this is, but isn’t it cheating to not count
the size of the GPT2 parameters as part of the final compression ratio?
Assuming the decompressor already has GPT2 weights is analogous to assuming it
has a massive fixed dictionary of English words and phrases and doing code
substitution — it’s likely the pragmatic answer in some scenario, but it’s not
a fair basis for comparison. Real-world compressors use dictionary coders, but
they build the dictionary specifically for the data when it’s compressed and
then count that dictionary in the compressed size. For competitions like the
Hutter Compression Prize (1GB of English Wikipedia) the reported size includes
the complete binary of the decompressor program too.
GPT2 model weights require over 5GB of storage, so you’d need a corpus orders
of magnitude larger for it to be even close to competitive by that standard.
And it appears it would lose anyway — the OP claims ~15% ratio even with
“cheating”, and the current Hutter Prize winner for 1GB of enwiki is ~11%
without “cheating”.
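The arithmetic behind that objection is simple: amortize the decompressor's
model over the corpus (illustrative numbers only, using the ~5 GB weights and
~15% payload ratio mentioned above):

```python
MODEL_BYTES = 5 * 1024**3  # rough size of the GPT-2 weights
GB = 1024**3

def honest_ratio(corpus_bytes: float, payload_ratio: float = 0.15) -> float:
    """Compression ratio when the model counts toward the output size."""
    return (payload_ratio * corpus_bytes + MODEL_BYTES) / corpus_bytes

print(round(honest_ratio(1 * GB), 2))     # 5.15 — worse than storing raw text
print(round(honest_ratio(1000 * GB), 3))  # 0.155 — only amortized at huge scale
```

On a 1 GB benchmark the "compressed" output is over five times the input; you
would need a corpus orders of magnitude larger before the 5 GB fixed cost
washes out.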
~~~
Jaxkr
Static dictionaries or models in compression algorithms are not “cheating”.
Brotli, for example, achieves amazing results with its [static
dictionary]([https://gist.github.com/klauspost/2900d5ba6f9b65d69c8e](https://gist.github.com/klauspost/2900d5ba6f9b65d69c8e)).
However, I agree with you on the real-world uselessness of a GPT-based
compression algorithm.
~~~
goodside
That’s why I put “cheating” in quotes — it’s pragmatic, but it complicates the
comparison into something that can’t be measured in a single number. I grant
you that typical benchmarks ignore the static dictionary in comparing Brotli to
other compressors, but they also ignore the size of the binary itself. This is
because both are assumed to be small and highly general, and GPT2 violates
both assumptions. Brotli’s dictionary is 122 KB and covers many natural and
programming languages, whereas GPT2 weights are 5 GB and only cover English.
No real-world static dictionary is even a thousandth of that size.
Large static dictionaries exploit a loophole that would make comparisons
meaningless if carried to the extreme — you could trivially include the entire
benchmark corpus in the decompressor itself and claim your compressed file
size is 0 bytes. That’s why the Hutter Prize rules are what they are.
------
matthewfcarlson
Just for kicks and giggles, I threw in some rather obscure words to see what
would happen. It's been compressing for a few minutes and showing no sign of
progress. Cool project!
~~~
jkhdigital
For anyone who doesn't get why this would happen: GPT-2 basically outputs a
probability distribution for its guess of the next word, and then the encoder
uses these distributions to perform arithmetic coding adaptively. If the next
word in the source text is not actually present anywhere in the output
distribution, it cannot encode it.
~~~
londons_explore
I may be wrong, but I thought GPT2 could also output partial words/syllables
(for unknown words), or individual letters if they don't make a syllable.
The simple way to achieve that is to have an encoding dictionary of words, but
then add to the end of the dictionary "sh", etc., and then add to the end of
that "a", "b", "c", etc. When tokenizing words, prefer to use a whole word,
but if you can't do that, split to syllables, and failing that, individual
letters. That has the benefit that any ascii string can go through the system.
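A minimal sketch of that longest-match-first idea (the tiny vocabulary here is hypothetical; GPT-2's actual tokenizer is byte-pair encoding with a byte-level fallback, which is what guarantees any string can be tokenized):

```python
def greedy_tokenize(text, vocab):
    # Longest-match-first: whole words, then fragments, then single letters.
    tokens, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"untokenizable character: {text[i]!r}")
    return tokens

# Hypothetical vocabulary: one whole word, one fragment, some letter fallbacks.
vocab = {"ship", "sh", "i", "p", "x"}
assert greedy_tokenize("ship", vocab) == ["ship"]
assert greedy_tokenize("shix", vocab) == ["sh", "i", "x"]
```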
~~~
jkhdigital
Yes, this is why I said "basically". The fact that GPT-2 tokens are not
necessarily prefix-free can be a problem for arithmetic coding, but I've found
that "greedy" parsing almost never fails in practice.
So yes, there are ways to work around this but it seems like the simplest
explanation for why unusual words break the encoder.
------
speedgoose
I don't understand why it shows Chinese characters. Assuming utf-8, English
characters are a lot more compact than Chinese characters. So we can't really
compare.
Otherwise it's a good idea and it works, but it's super slow, only working for
English text, and the system requirements are huge. I like it.
~~~
sp332
It's counting characters, so it is comparable.
This is useful for applications that limit the number of characters, e.g.
Twitter.
~~~
m4rtink
Yep, as far as I can tell, you can cram about twice as much information into
the same number of Japanese characters as you would into Latin characters.
I wonder if Chinese is even more info-dense, as it does not have the syllabic
hiragana/katakana characters?
~~~
dheera
Modern Chinese is typically more dense than modern Japanese (which is
partially phonetic), and ancient formal Chinese is even more compact than
modern Chinese.
However it's worth noting that Chinese characters are analogous to entire
words in English, and are composed of components much like English characters
are composed of letters.
For example "thanks" is spelled "t h a n k s"
"謝" is made up of "言 身 寸"
(Of course, the components in Chinese have less correlation to their
pronunciation, but the main point I'm making here is that there is a LOT of
overlap in the common components used to assemble the entire Chinese lexicon.)
It is really not a fair comparison to compare languages in terms of their
number of characters needed to represent something.
Better measures would be the fastest time (in seconds) needed to use speech to
convey a concept intelligibly to an average native speaker, or the square
centimeters of paper needed to convey an idea given the same level of
eyesight.
~~~
m4rtink
Indeed, what I meant was basically how much information you could cram into a
message in digital medium that is character limited, but not really limited in
what characters you can use in it. Like SMS messages or Twitter messages when
still limited to 140 characters.
------
minimaxir
A neat trick I found while working with GPT-2 is that byte-pair encoding is,
in itself a compression method. With Huggingface Transformers,
encoding/decoding this way is very fast.
I've implemented this approach in my aitextgen package
([https://github.com/minimaxir/aitextgen/blob/master/aitextgen...](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/TokenDataset.py#L238))
to encode massive input datasets as a uint16 Numpy array; when gzipped on
disk, it's about 1/10th of the original data set size.
However, the technique in this submission gets about compression to 1/10 w/o
the gzipping. Hmm.
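A stdlib-only sketch of the same trick: token ids packed as 16-bit integers and then gzipped. The toy word-level vocabulary stands in for a real BPE vocab (GPT-2's 50,257 entries still fit in a uint16):

```python
import gzip
from array import array

# Hypothetical toy vocabulary standing in for a real BPE vocab.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
inverse = {i: w for w, i in vocab.items()}

text = "the cat sat on the mat " * 500
ids = array("H", (vocab[w] for w in text.split()))  # "H" = unsigned 16-bit
packed = gzip.compress(ids.tobytes())

# Round-trip: compressed bytes -> uint16 ids -> words.
out = array("H")
out.frombytes(gzip.decompress(packed))
restored = " ".join(inverse[i] for i in out)

assert restored == text.strip()
assert len(packed) < len(text.encode("utf-8"))  # far smaller than raw text
```

The fixed-width id array is already half the size of the spelled-out words, and gzip then squeezes out the repetition among the ids themselves.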
~~~
jkhdigital
This is really just a way to show how good GPT-2 is at predicting text. If you
know anything about information theory, you'll know that the entropy of the
information source places a hard limit on how much it can be compressed. If
GPT-2 is really good at predicting English text, then the entropy of its
output should be very very close to the entropy of natural English text. Thus,
using GPT-2 predictions as an adaptive source encoder will achieve compression
ratios that approach the information content (entropy) of English text.
------
starpilot
I compressed "I am going to work outside today," then put the compressed
output in Google Translate. Google translated the Chinese characters back to
English as "raccoon."
~~~
dhosek
I think the Chinese text that comes out confuses Google translate. I took the
whole first sentence of Hamlet's soliloquy which compressed to 䮛趁䌆뺜㞵蹧泔됛姞音逎贊
and plugged that into Google Translate. It came back with "Commendation." The
reverse translation is 表彰
~~~
james412
It's not Chinese text, it's an arithmetic-coded stream of bits mapped so the
bits fall within the range of some codepoints. It's basically a variant of
base64 except for Unicode.
(Side note: aren't these codepoints very expensive to encode in UTF-8? It
seems there must be a lower-valued range more suited to it)
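A sketch of that bit-packing idea: treat the compressed bytes as one big integer and write it out in base 20,992 using the CJK Unified Ideographs block (U+4E00..U+9FFF). This illustrates the general technique, not necessarily the exact mapping gpt2tc uses:

```python
BASE, OFFSET = 20992, 0x4E00  # CJK Unified Ideographs: U+4E00..U+9FFF

def to_cjk(data: bytes) -> str:
    n = int.from_bytes(b"\x01" + data, "big")  # sentinel byte preserves leading zeros
    out = []
    while n:
        n, digit = divmod(n, BASE)
        out.append(chr(OFFSET + digit))
    return "".join(reversed(out))

def from_cjk(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * BASE + (ord(ch) - OFFSET)
    return n.to_bytes((n.bit_length() + 7) // 8, "big")[1:]  # drop sentinel

blob = b"\x00\xffarbitrary bits"
assert from_cjk(to_cjk(blob)) == blob
```

Each character carries log2(20992) ≈ 14.4 bits, more than double base64's 6 bits per character; in UTF-8 each such character costs 3 bytes, though, so this only pays off when characters, not bytes, are the scarce resource.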
~~~
toast0
The page for base32768 has some efficiency charts for different binary to text
encodings on top of different UTF encodings, as well as how many bytes you can
use them to stuff in a tweet. Depends on where you're going to house the data,
I guess.
[https://github.com/qntm/base32768](https://github.com/qntm/base32768)
~~~
infogulch
In addition to being 94% efficient in UTF-16 (!), this reveals an additional
reason why one might want to optimize for number of characters: fitting as
many bytes as possible into a _tweet_, which is bounded in the number of
characters, not bytes.
------
fla
Try swapping a few characters in the compressed string before decompressing
and get a totally unrelated, but somewhat plausible, sentence.
~~~
VMG
Try swapping a few characters in the compressed string before decompressing and get a totally unrelated, but somewhat plausible, sentence. -->
䔹䧹焫놉勏㦿顱㦽膑裚躈葊
Swapping last two:
䔹䧹焫놉勏㦿顱㦽膑裚葊躈 -->
Try swapping a few characters in the compressed string before decompressing and get a totally unrelated, but somewhat applied tlh
Swapping first two:
䧹䔹焫놉勏㦿顱㦽膑裚躈葊 -->
Sexy Shania Twain acting as a sprite for sexy Hogan's Alley demo dude
my site
my favorite animal's name is camelid 2 my favorite artist is david maile my favorite movie's are
Pretty wild!
~~~
jkhdigital
It's just adaptive arithmetic coding, with the distribution provided by GPT-2
instead of some other statistical analysis of the source. He uses CJK simply
to make the output printable, but it's really just random bits. I mean, it's a
neat idea, but certainly not novel.
------
dmarchand90
I'm really impressed that this seems largely written by scratch in c. "This
demo has no external dependency."
------
vessenes
I am guessing that Fabrice is planning on some sort of commercialization here;
this is a re-issue of something originally on his website.
A fun game to play is to see how many characters a name takes: it’s an
indication of your importance to the Internet.
In answer to the why Chinese, it seems to me to be easier to read and more
compact to display than hexlified bytes.
~~~
dhosek
My last name compressed to 3 characters. I tried my wife's last name and it
was 3 characters, then I decided to add the accent to it that normally gets
dropped in an English-language context and it compressed to 2. Adding first
names, I was 4 characters and she was 5 with and without the accent. William
Shatner went to 6 characters. Barack Obama went to 2. William Shakespeare also
to 2.
~~~
vessenes
Right, I guess your last name reflects the importance of all Hoseks worldwide,
albeit via some chunking of the word, so it has to compete with the importance
of other "Hos"-prefixed words like hospitals and so on.
------
hint23
FYI, the corresponding standalone Linux command line version is available at
[https://bellard.org/nncp/gpt2tc.html](https://bellard.org/nncp/gpt2tc.html) .
It also does text completion and file compression.
------
lxe
> using the probability of the next word computed by the GPT-2 language model
Can the same effect be achieved by looking at actual probability of the next
word from a large corpus of existing text (a-la markov chains)?
~~~
duskwuff
Less effectively. GPT-2 and a Markov chain are both predictive models; GPT-2
just happens to be a much more complex (and, in most cases, more accurate)
model for English text, so fewer bits are required on average to encode the
delta between its predictions and the actual text.
------
jkhdigital
Paste encrypted bits (mapped to the CJK range he uses) in the "decompress" box
and you've got format-transforming encryption.
------
maest
I'm not at all familiar with arithmetic encoding (or adaptive versions
thereof), but, after reading some guides, it seems to me that the novel thing
here is using GPT2 to somehow generate a character probability distribution?
The theory being that GPT2 should have a distribution closely matching
"reality" and thus minimizing the output size?
------
aapeli
So if you end up being famous and talked about a lot on Wikipedia, your name
will compress better?
The impact of bias in training data is interesting in general here. What's the
impact of Wikipedia's article biases? Wikipedia is probably one of the main
corpora used.
------
nmca
This guy should enter the Hutter Prize -
[http://prize.hutter1.net/](http://prize.hutter1.net/)
This won't win, but it seems he cares and has some talent :)
------
d_burfoot
A fun game is to compress some text, then look up some random Chinese words,
cut-and-paste them into the compressed output, and then decompress again.
~~~
nmstoker
Yes or even just swap the compressed character order and it still results in
interesting somewhat similar texts.
------
aquajet
Title should be renamed to "Language Models are Lossless Compressors"
~~~
jkhdigital
Exactly, along with a link to some basic information theory Wikipedia
articles.
------
Vvector
Looks like the work is done server-side. And we've hit a bottleneck
------
knolax
You boomers need to know that 먓띑뒢끟 are precomposed hangul characters. It's not
hard to just say CJK.
| {
"pile_set_name": "HackerNews"
} |
Voting is a Sham Mathematically Speaking - eibrahim
http://haacked.com/archive/2012/11/27/condorcet-paradox.aspx?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+haacked+%28you%27ve+been+HAACKED%29
======
maratd
This is off. Voting is not a system for selecting the best candidate. Voting,
in whatever forms it exists, is a system to avoid violence and conflict.
Each side feels it had a fair shot, regardless of outcome. The purpose of
voting is to leave you with that feeling, to avoid unpleasant behavior from
the losing party. The best person for the job almost never gets it. That's why
we don't vote people in when hiring somebody at a company.
~~~
diego
Yes, and in addition it's also a system to ensure that elected officials and
their parties have accountability. If you win an election and then "betray"
your voters, you won't be reelected. If you can't be reelected, you could
damage the chances of your party. In essence, democracy in its current form is
not so much about choosing the right/best candidate. It's more about making
sure the winner cannot become a despot.
~~~
ontheotherhand
_"If you win an election and then "betray" your voters, you won't be
reelected."_
That ain't accountability, that's a joke. If you pay me $5000 to do a job, and
I don't lift a finger, and your "punishment" is that we won't do that a second
time, then I have free money and you're a fool. It doesn't hurt the party one
bit either; that's the whole point of corporatism, you can swap out
individuals while the "brand" rolls on, saying "whoops, bad apple" every 5
seconds, with people forgetting after 2.
_"It's more about making sure the winner cannot become a despot."_
Correction: it replaces a single despotic individual with a tag team of people
who basically can do whatever they want - within boundaries, sure, they at
least have to be somewhat slick about it, and know how to make a puppy face,
too; but certainly not within boundaries defined by the actual will of the
people who handed their power as sovereigns over to their representatives.
Having despotic entities control you is not one iota better than despotic
humans, not in the long run.
Despotism is marked by the control going only down, accountability going only
up -- period. Not by angry men on podiums necessarily, and not by bloodshed.
(Not that there isn't plenty bloodshed, but that's besides the point) If you
seriously see a huge difference or improvement there, you've fallen for it I'm
afraid.
~~~
maratd
You know, it's pretty easy to poke holes in something. It's another thing
entirely to come up with something better.
There hasn't been a single political system that hasn't been corrupted.
~~~
ontheotherhand
_"You know, it's pretty easy to poke holes in something. It's another thing
entirely to come up with something better."_
I have no problems with coming up with something better. More like 3 a day
before breakfast; I'd just have problems making people actually go along with
whatever I would come up with. But you know what? If people are so fucked that
even I can't magically solve it, that doesn't mean I can't say they're fucked.
It just means they're gonna pout and roll their eyes, none of which is news or
unexpected.
_"There hasn't been a single political system that hasn't been corrupted."_
What's your point? That therefore criticism isn't allowed? That naive belief
in cynical manipulation is not an issue?
Also, was I talking about a "system"? No, I was talking about specific
circumstances, an actual situation, and individuals and their responsibilities.
But of course, it's easier to just throw some mud into a completely different
direction, not hitting anything, and then deluding oneself into having dealt
with the issue just nicely, than to actually address any of it.
~~~
maratd
> I have no problems with coming up with something better. More like 3 a day
> before breakfast; I'd just have problems making people actually go along
> with whatever I would come up with.
Perhaps because you don't actually share your "something better"?
Two posts in, lots of words, still no alternatives.
------
mtgx
Approval Voting mostly solves the "strategic voting" part that almost forces
you to choose the "most likely to win" candidate, or if you hate that one, the
one closest to him, while eliminating the spoiler effect, and giving 3rd party
candidates a much higher chance of winning than with current traditional
voting systems.
<http://www.electology.org/approval-voting>
<http://en.wikipedia.org/wiki/Approval_voting>
~~~
DennisP
Plus it doesn't run into trouble with Arrow's theorem, since it's not a "rank-
order voting system," unlike plurality, instant runoff, and various others.
Range voting has the same advantage. In computer simulations measuring how
well the election result matches voter preferences, either range or approval
is as much an improvement over plurality as plurality is over picking someone
at random (or, if you like, monarchy).
<http://rangevoting.org/BayRegsFig.html>
~~~
gus_massa
I'm not sure about the definitions, but if Arrow's theorem doesn't apply to
the approval voting system, then I think it must not be applicable to the
"majority rules" criterion either.
In these two systems the idea is that you get very little information from the
voters (best candidate / a set of candidates) and don't know all the
information about the order of preference and the relative strength. So I
don't understand why having less information is better (theoretically).
~~~
Empact
> I don't understand why having less information is better (theoreticaly).
A ranked-choice ballot only encodes how the candidates are ordered against one
another, whereas approval and score votes also encode the candidates'
positions within the voter's range of subjective preferences. That is, if we
have 3 candidates (A, B, C) and a few voters, each of whom has a range
from love to hate for each candidate, like so:
Love Hate
|-A--B-------------------C-|
|-A-------------B-----C----|
|-------------------A-B-C--|
|-A-B---C------------------|
Under ranked choice voting, every one of these voters' ballots would look the
same:
1)A, 2)B, 3)C
Ranked choice voting encodes the ordering of the preferences, but the
intensity of those preferences is lost when the ballot is cast. Whereas under
approval and score voting, every one of these voters represents their
preferences differently, because they're reflecting their personal response to
each candidate:
Approval | Disapproval
|-A--B-----|-------------C-|
|-A--------|----B-----C----|
|----------|--------A-B-C--|
|-A-B---C--|---------------|
Of course, some information is lost in the fact that we only have 2 values
approval/disapproval to encode positional preferences. But I would argue this
information is already more meaningful than a fully-expressed ranked ballot.
And if necessary, score voting can capture more of that information by
offering > 2 levels to divide the candidates into.
~~~
gus_massa
OK, this method collects some information that ranked choice voting
ignores. But I still don't understand why Arrow's theorem doesn't apply.
If in a hypothetical population everyone's love/hate for each candidate is
equally spaced, then in that population it is possible to apply Arrow's
theorem and prove that for that population this method doesn't work. But the
method should be useful for every population, even the pathological ones.
------
gabemart
I found this article quite frustrating.
>Condorcet formalized the idea that group preferences are also non-transitive.
If people prefer Hanselman to me. And they prefer me to Guthrie. It does not
necessarily mean they will prefer Hanselman to Guthrie. It could be that
Guthrie would pull a surprise upset when faced head to head with Hanselman.
I found this by far the most interesting assertion, but the examples under
"Historical Examples" don't demonstrate this phenomenon at all.
For instance, the author asserts that the Nader spoiler effect demonstrates
nontransitive preference relationships. But from my reading, it wasn't the
case that that group as a whole preferred (Gore over Nader) and (Nader over
Bush) but (Bush over Gore). It was simply that due to the structure of the
election, they happened to elect Bush. While this ties into the author's point
about the "unfairness" of elections, it doesn't demonstrate nontransitive
relationships in group preferences.
Could someone post an example of a group preference configuration in which the
group prefers (A over B) and (B over C) but (C over A)?
I understand the concept of nontransitive relationships in general, but in the
specific domain of fitness for office, I can't work out how this would come to
be.
~~~
Dove
_Could someone post an example of a group preference configuration in which
the group prefers (A over B) and (B over C) but (C over A)?_
Sure, that's easy to construct.
Peter's preferences: A, B, C
Paul's: B, C, A
Mary's: C, A, B
The group prefers A over B, by a 2-1 vote. Likewise B over C, and C over A.
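Dove's three ballots can be checked mechanically; each pairwise margin below comes out positive, so the majority preference really does cycle A > B > C > A:

```python
ballots = [("A", "B", "C"),   # Peter
           ("B", "C", "A"),   # Paul
           ("C", "A", "B")]   # Mary

def margin(x, y):
    # Positive if the group prefers x to y in a head-to-head vote.
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins - (len(ballots) - wins)

assert margin("A", "B") > 0  # A beats B, 2-1
assert margin("B", "C") > 0  # B beats C, 2-1
assert margin("C", "A") > 0  # C beats A, 2-1 -- a cycle
```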
------
saraid216
> Voting is a method that a group of people use to pick the “best choice” out
> of a set of candidates. It’s pretty obvious, right?
And like many other pieces of "common sense", this isn't correct.
Wikipedia says, "Voting is a method for a group such as a meeting or an
electorate to make a decision or express an opinion—often following
discussions, debates, or election campaigns. Democracies elect holders of high
office by voting." This is _very_ different from "picking the best choice".
I realize that the American public has been indoctrinated for the past few
decades that voting is the only way you make yourself heard, but this isn't
true and never has been. I recently learned about Wellstone Action (
<http://en.wikipedia.org/wiki/Wellstone_Action> ); I encourage everyone to
look into enrolling. (I haven't done so myself yet. I probably will at some
point, though.)
> On one hand, this seems to be an endorsement of the two-party political
> system we have in the United States.
Actually, what it's an endorsement of is all of our other voting systems where
the choice is between APPROVE and REJECT. You have to endorse the existence of
political parties in the first place before you can endorse a two-party
system, and Arrow's theorem goes nowhere near that.
~~~
bo1024
> _Wikipedia says, "Voting is a method for a group such as a meeting or an
> electorate to make a decision or express an opinion—often following
> discussions, debates, or election campaigns. Democracies elect holders of
> high office by voting." This is very different from "picking the best
> choice"._
I don't see how they are different. The author never said that candidates have
to be people. Substitute the word "alternatives" if you prefer.
~~~
saraid216
> I don't see how they are different.
"I think this is the way we should proceed" is qualitatively different from "I
think this is the best choice".
> The author never said that candidates have to be people. Substitute the word
> "alternatives" if you prefer.
Substitute it in place of what? Where did I require that the candidates must
be people?
------
wam
Learning about Arrow's theorem definitely changed the way I think about
elections in the US. It also changed the way I think about election news
coverage. I used to be an ardent "horse race news" hater. I still am, in terms
of how utterly it dominates election news, but now I see some utility in it as
well.
Arrow and these others have focused how I look at the game-theoretic
underpinnings of elections and the importance of being up to speed on exactly
how candidates and interested parties are crafting strategies around the
complexities built into the game. When people conflate the "message" of the
candidate with the strategy (which is always) I still get irritated. I have a
tendency toward partisanship and that kind of thing clouds my judgment. But
the day-in day-out workings of the campaigns and PACs are more interesting to
me now, because they shed light on what's fundamentally "broken" (from my
point of view) in the underlying system, as opposed to what I simply find
distasteful or disappointing.
Math!
~~~
saraid216
Social choice theory is One Of Those Things which everyone (myself included)
needs to spend more time learning about.
------
basseq
> A voting system can only, at times, choose the most preferred of the options
> given. But it doesn't necessarily present us with the best candidates to
> choose from in the first place.
Reminds me of HHGTTG: "Anyone who is capable of getting themselves made
President should on no account be allowed to do the job."
------
Tloewald
The article confuses two issues, one illustrated by Arrow's Theorem which is
more relevant to parliamentary procedures (where any set of more than two
choices has to be resolved as a series of binary choices, and the voting
population is small and its preferences well understood) and first past the
post electoral systems which are completely hopeless, especially when tiered,
as in the US.
Most of the article is essentially discussing an example of Arrow's Theorem
where if you know people's preferences and can present them with binary
options in an order of your choosing you can obtain any outcome except the
least popular option. This is very artificial and not a real flaw of
preferential and proportional electoral systems where (a) individual
preferences are not known and (b) the entire vote is done in one step, not in
a carefully chosen series of binary options. Great for gaming a committee,
lousy for elections.
As others have observed, the chief purpose of voting is allowing government
transitions without violence and with the appearance of procedural fairness,
but the fact remains voting works just fine when the population has a clear
cut preference ("throw the bastards out").
Well, modulo corrupt redistricting.
Americans who want to talk about voting really need to understand that there
are other voting systems than the horse and buggy system used in the US and
UK.
------
aprescott
_In this case, Hanselman is the clear winner with three votes, whereas the
other two candidates each have two votes. This is how our elections are held
today._
This is dependent on the exact election taking place. With the US presidential
elections, my understanding is that a plurality of electoral college votes is
not enough to win, you need an actual majority. In the event of a simple
plurality win with no majority, the result is decided by the House of
Representatives (which may itself be tied).
~~~
rjzzleep
Is it? The actual candidates already get preselected.
Even if you're right, which is likely, it doesn't matter, because any majority
was previously generated by plurality.
------
mmphosis
Mixed Member Proportional (MMP) Representation solves some of the problems.
[http://www.youtube.com/watch?v=QT0I-sdoSXU&feature=relmf...](http://www.youtube.com/watch?v=QT0I-sdoSXU&feature=relmfu)
The problem with MMP is when the parties choose the ranking of their list of
representatives. I think it would be even better if rather than use a party
generated list, instead the representatives are determined by people's votes.
------
streptomycin
Also, there's this: <http://papers.nber.org/papers/w15220>
------
Noughmad
I find it interesting that in all those discussions about voting systems,
which are mostly focused on USA president elections, nobody mentions two-round
voting, also known as run-off voting.
This is what we have in Slovenia for electing our president. In the first
round, there are many candidates, and each voter can vote for one. If any
candidate gets at least 50% of votes, he automatically wins.
If, on the other hand, there is no majority winner, the two best candidates
compete head-to-head in the second round.
Such a system allows you to always vote for your favourite candidate in the
first round, and if your candidate doesn't make it into the second round, you
can vote for the fallback one.
Details: <https://en.wikipedia.org/wiki/Two-round_system>
~~~
kscaldef
I don't believe this satisfies the Condercet criterion either. Consider these
rankings of preferences:
20% A ...
20% B ...
15% C ...
15% D C ...
15% E C ...
15% F C ...
In a two-round run-off, one of A or B will be elected, despite the fact that
60% of voters prefer C over either A or B.
~~~
im3w1l
And in the real world, people would second-guess this, and enough people would
tactically vote for C that it wouldn't be a problem.
"But then they can't vote for their prefered candidate which was the whole
point"
Well, _some_ people can. D, E, F could still get a few percentage points. More
importantly, I don't think we would see convergence to a 2-party system.
Unless I am missing something, it looks like at least 3 parties could be
sustained.
------
gradstudent
Preferential voting solves all these problems. You vote by ranking the
candidates in order of preference. If your top candidate does not win, your
vote goes to the next candidate down the line until eventually it ends up with
one of the final two candidates.
~~~
Pinckney
Preferential voting does not satisfy the Condorcet criterion.
[http://en.wikipedia.org/wiki/Instant-
runoff_voting#Voting_sy...](http://en.wikipedia.org/wiki/Instant-
runoff_voting#Voting_system_criteria)
~~~
bradbeattie
To demonstrate this, consider the following.
80 people: A, C, B
50 people: B, C, A
35 people: C, B, A
IRV eliminates C (as it has the fewest first-place votes) and elects B. But
voters on the whole prefer C over B (115 to 50). This is the failure that
Pinckney refers to.
------
hcarvalhoalves
If you look at Brazil, which has multiple parties and plurality voting, the
problems are pretty clear.
In this year's elections, the candidate with 28% of the votes was elected
mayor in my city.
------
nikatwork
I've always thought New Zealand's mixed-member proportional (MMP) voting
system [1] is the least bad solution currently in use.
I'm not from NZ so I'd be interested to hear what the locals think.
[1]
[http://en.wikipedia.org/wiki/New_Zealand_voting_system_refer...](http://en.wikipedia.org/wiki/New_Zealand_voting_system_referendum,_2011)
~~~
lmkg
Not a local, but... Proportional voting systems suffer from the problem that
voting "power" is not proportional to representation.
Consider a parliament with 100 members and 3 parties. Suppose the breakdown
is: A has 49 members, B has 48 members, and C has 3 members. Guess what... A,
B, and C all have equal voting power! Any two parties are enough to reach a
majority of 51 votes, and any one party is not. Despite A having, in theory,
over 16 times the representation of C, it does not have any voting advantage.
Keep in mind that any voting system based on parties will tend to have very
partisan voting blocs. Representatives in the US are more independent and
likely to break with the party because they are elected in geographically
isolated elections. Representatives elected directly by a party generally have
about as much independence as the Electors in the Electoral College.
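The equal-power claim can be verified with a Banzhaf-style count of the coalitions in which each party is pivotal (a sketch using the 49/48/3 example above and a 51-seat majority quota):

```python
from itertools import combinations

seats = {"A": 49, "B": 48, "C": 3}
QUOTA = 51  # majority of 100 seats

def is_winning(coalition):
    return sum(seats[p] for p in coalition) >= QUOTA

def swings(party):
    # Coalitions of the *other* parties that this party turns from losing to winning.
    others = [p for p in seats if p != party]
    return sum(1
               for r in range(len(others) + 1)
               for combo in combinations(others, r)
               if not is_winning(combo) and is_winning(combo + (party,)))

# Every party is pivotal in exactly the same number of coalitions.
assert swings("A") == swings("B") == swings("C") == 2
```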
~~~
NickNameNick
I'm from NZ...
To form a government, the party with the most votes, or a coalition of parties
which collectively holds a majority petition the governor general.
For a single party this is quite straightforward.
To form a coalition, the member parties agree on a "confidence and supply
agreement". This is basically a statement that in the event of a vote of no-
confidence, all of the coalition's members will support the coalition, and also
a broad agreement on the budget. Getting an agreement on confidence usually
involves a certain amount of horse trading over ministerial and vice-
ministerial positions. Likewise, the agreement on supply will probably involve
some intense budget and joint-policy negotiations.
If you had a parliament of 101 seats, split into an opposition of 50 seats,
and a government of 51, itself made up of a large party (48 seats) and a small
party (3 seats), what you will probably see is that the small party has only
the tiniest influence on the coalition agreement. They probably traded
everything else to get their senior member a ministerial position.
------
fluxon
Wasn't this issue addressed rather well in a recent hackernews-linked item
which mathematically showed both that voting is not a sham, but that the
Electoral College system is more fair than it has been represented? (sorry
can't find the link!)
------
stretchwithme
Winner-take-all elections, no matter how they operate, leave many people
without the representation they prefer. Proportional representation is much
less likely to do this.
Proportional representation can used in the executive branch too. Switzerland
does it.
------
bo1024
This is a very nice summary of/intro to the classic/standard mathematical
approach to voting and Arrow's Theorem.
------
frozenport
This is why we have a 2 party system :-)
------
jQueryIsAwesome
Some of you are forgetting something: even if you had some form of
"statistical fairness" (whatever that may be), you still have the biggest
problem of most democracies: uneducated voters. People who think an atheist
shouldn't be president, people who would rather reinforce their biases than
have deep discussions about the nation's issues, people who were never taught
to do critical thinking... and who make no exceptions for their government,
their parents, their religion, or the law.
~~~
bluedanieru
That's not really in scope for choosing a voting system that best represents
the people. But yes, point taken.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Mongodb vs Redis for Django? - fjabre
Hi,<p>We're slowly realizing that the benefits of using a non-relational db for our upcoming web app are too big to ignore in terms of scalability. We're running a typical Django/Postgres setup but are looking at alternatives. We're discovering that Postgres is becoming quite the bottleneck in terms of users/server that we can handle.<p>My question is this: Are there any particular nosql solutions recommended for a Django app like Mongo or Redis? Also, are there any good cloud based services that are setup to do just this other than Amazon's Simple DB?<p>Thanks
======
knuckle_cake
While I don't have enough Django experience to give suggestions on that front,
you can use BigTable if you want to move to Google App Engine, though the
drawbacks may outweigh the benefits when dealing with an established app.
That said, I did recently make this choice for a Ruby/Sinatra app I'm working
on, and ended up going with MongoDB due to conveniences like MongoMapper more
than anything else (I wanted to enforce a partial schema without having to
resort to silliness like customField1, customField2, etc.) I'm pleased with
this choice so far, though plan to look at Redis again in the future when I
need something lighter in weight for storage.
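The "partial schema" idea mentioned above (a few required, typed fields plus arbitrary extra ones) can be sketched in a few lines of plain Python. This is only an illustration of the pattern, not MongoMapper's or MongoEngine's actual API:

```python
# Required fields are typed; anything else is stored as-is. This is the
# middle ground between a rigid SQL schema and a fully schemaless store.
REQUIRED_FIELDS = {"name": str, "email": str}

def validate(doc):
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(doc.get(field), expected_type):
            raise ValueError(f"{field!r} must be a {expected_type.__name__}")
    return doc  # extra fields pass through untouched

# A document carrying an ad-hoc extra field validates fine...
validate({"name": "Ada", "email": "ada@example.com", "favorite_color": "green"})
```

...while a document missing or mistyping a required field raises, with no need for `customField1`-style placeholders.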
------
hcm
I'd say it really depends on what you're storing and how you're using it.
Redis is great for situations where you need a very large volume of reads and
writes, and have a fairly simple data model. See
<http://simonwillison.net/static/2010/redis-tutorial/> for more information
and some use cases.
MongoDB allows for data to be structured and queried in more complex ways, and
touts itself as more of an alternative to an RDBMS than Redis does. If you're
looking to use it with Django, check out MongoEngine at
<http://github.com/hmarr/mongoengine>
------
mark_l_watson
From a Ruby perspective, but this may still be useful: Redis is very nice for
data that fits in memory (disk persistence is for recovery, not for realtime
access) and support for counters, sets, etc. is cool.
That said, I really like MongoDB for many reasons: interactive shell, great
Ruby support (and Scala and Clojure, etc.), very easy to set up and use, and
some replication support (not as good as Cassandra, but I will never need that
kind of scalability).
I think that the Python support for MongoDB is very good.
Faceted Search (2009) [pdf] - pmoriarty
http://disi.unitn.it/~bernardi/Courses/DL/faceted_search.pdf
======
pmoriarty
Can anyone recommend any open-source faceted search tools?
Bonus for:
- being:
- easy to install (via yum, apt-get, brew, etc)
- accessible from the command line / shell
- accessible as a library
- actively developed
- not requiring (though allowing) an external database
- allowing search through regular expressions
- emacs integration
~~~
jonstewart
Both ElasticSearch and Solr are services, and both are built from Apache
Lucene, a set of Java libraries providing indexed search, with facets and many
other features. Xapian is a C++ library with similar functionality, although
it doesn't seem to have the same level of popularity as Lucene.
------
graycat
Right, as one can see on page 9 of the book, traditional library cataloging
techniques, e.g., the Dewey Decimal System, have a tough time knowing just
where to catalog a book, say, _History of Nineteenth Century European Military
Technology_ , that is, in history, Europe, European history, military history,
history of technology, military technology, European technology, etc.?
So, _facets_ are a generalization of the Dewey system that is supposed to
provide better options for such cataloging challenges. Okay.
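The idea is easier to see in code: in a faceted catalog, the military-history book above is simply tagged along several independent dimensions, and the search engine counts matching values per facet so the user can drill down. A toy sketch with invented field names, not any particular tool's API:

```python
from collections import Counter

# Toy corpus: each document is tagged along several independent facets.
books = [
    {"title": "19th C. European Military Technology",
     "region": "Europe", "topic": "military", "era": "19th century"},
    {"title": "Medieval European Agriculture",
     "region": "Europe", "topic": "agriculture", "era": "medieval"},
    {"title": "19th C. American Railroads",
     "region": "Americas", "topic": "technology", "era": "19th century"},
]

def facet_counts(docs, facets=("region", "topic", "era")):
    """The core faceted-search operation: count remaining values per facet
    so the UI can offer drill-down filters with result counts."""
    return {f: Counter(d[f] for d in docs) for f in facets}

# Drill down on one facet, then recount the others:
europe = [d for d in books if d["region"] == "Europe"]
print(facet_counts(europe))
```

No single shelf location is ever chosen; the book lives at the intersection of all its facet values.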
In a sense Google's YouTube has a similar problem: Often, maybe usually, at
the end of playing a video clip, there is a display of related video clips.
So, if play a video of Heifetz playing the Beethoven violin concerto with von
Karajan (assuming there is such), then what to recommend next, anything by
Heifetz, Beethoven, violin, von Karajan, or any concerto, any violin concerto,
any violin music, or just something related _artistically_ , determined
however, any music from near year 1800, etc.?
Right, there needs to be a better way.
Okay, been working on that. Got some ideas and the code written. Loading some
initial data now, and intend to go live ASAP.
------
irickt
A current example of faceted search:
[https://news.ycombinator.com/item?id=8834611](https://news.ycombinator.com/item?id=8834611)
------
gedrap
I was disappointed by the content... Most of it is somewhat obvious and
something I just skipped. I expected to find something valuable at least at
the end, some insights for example in the front end concerns section (e.g.
ideas for dynamic ranking, something I am currently working on). But again,
nothing really valuable.
So to put it briefly, Faceted Search for Dummies in 100 pages (which probably
could be halved without losing anything).
------
tbarbugli
I found it a bit weird that the "What Are Facets?" section does not actually
give a formal definition of what a facet is.
~~~
tbarbugli
but thanks for sharing :)
------
PaulHoule
This is good stuff -- the author was in charge of faceted search at LinkedIn.
~~~
hnriot
That's not where he learned about faceted search, but rather Endeca where he
was a developer.
Man Billed $1,200 for Reading Email on a Plane - carlchenet
http://www.businessinsider.com/1200-for-reading-email-on-a-plane-2014-11
======
steego
This isn't greed. It's incompetence. It sounds like they've outfitted
their planes with high-end internet connections that are typically reserved
for private jets.
Don't misunderstand me, this is different from the system you use on domestic
flights that use ground base stations. Singapore airlines fly everywhere and
that sort of ground base station system would never work for them, so getting
a connection over international waters isn't cheap.
This is like offering business class passengers a 1960 Petrus as the house red
wine.
Winning A/B results were not translating into improved user acquisition - pretzel
http://blog.sumall.com/journal/optimizely-got-me-fired.html
======
pmiller2
The red flag here for me was that Optimizely encourages you to stop the test
as soon as it "reaches significance." You shouldn't do that. What you should
do is precalculate a sample size based on the statistical power you need,
which involves determining your tolerance for the probability of making an
error and on the minimum effect size you need to detect. Then, you run the
test to completion and crunch the numbers afterward. This helps prevent the
scenario where your page tests 18% better than itself by minimizing the
probability that your "results" are just a consequence of a streak of positive
results in one branch of the test.
I was also disturbed that the effect size was not taken into account in the
sample size selection. You need to know this before you do any type of statistical
test. Otherwise, you are likely to get "positive" results that just don't mean
anything.
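The precalculation described above is a standard power analysis for two proportions. A rough sketch using the usual normal-approximation formula (the function and parameter names are mine):

```python
import math
from statistics import NormalDist

def sample_size_per_branch(base_rate, min_relative_lift, alpha=0.05, power=0.8):
    """Visitors needed in each branch to detect `min_relative_lift` over
    `base_rate` with a two-sided test at the given alpha and power."""
    p1 = base_rate
    p2 = base_rate * (1 + min_relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# e.g. 5% baseline conversion, smallest lift we care about is 10% relative:
print(sample_size_per_branch(0.05, 0.10))  # on the order of 30k visitors per branch
```

Run the test until both branches reach that size, then evaluate once.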
OTOH, I wasn't too concerned that the test was a one-tailed test. Honestly, in
a website A/B test, all I really am concerned about is whether my new page is
better than the old page. A one-tailed test tells you that. It might be
interesting to run two-tailed tests just so you can get an idea what not to
do, but for this use I think a one-tailed test is fine. It's not like you're
testing drugs, where finding any effect, either positive or negative, can be
valuable.
I should also note that I only really know enough about statistics to not
shoot myself in the foot in a big, obvious way. You should get a real stats
person to work on this stuff if your livelihood depends on it.
~~~
dsiroker
Hi pmiller, Dan from Optimizely here. Thanks for your thoughtful response.
This is a really important issue for us, so I wanted to set the record
straight on a couple of points:
#1 - “Optimizely encourages you to stop the test as soon as it reaches
‘statistical significance.’” - This actually isn’t true. We recommend you
calculate your sample size before you start your test using a statistical
significance calculator and waiting until you reach that sample size before
stopping your test. We wrote a detailed article about how long to run a test,
here: [https://help.optimizely.com/hc/en-
us/articles/200133789-How-...](https://help.optimizely.com/hc/en-
us/articles/200133789-How-long-to-run-a-test)
We also have a sample size calculator you can use, here:
[https://www.optimizely.com/resources/sample-size-
calculator](https://www.optimizely.com/resources/sample-size-calculator)
#2 - Optimizely uses a one-tailed test, rather than a 2-tailed test. - This is
a point the article makes and it came up in our customer community a few weeks
ago. One of our statisticians wrote a detailed reply, and here’s the TL;DR:
\- Optimizely actually uses two 1-tailed tests, not one.
\- There is no mathematical difference between a 2-tailed test at 95%
confidence and two 1-tailed tests at 97.5% confidence.
\- There is a difference in the way you describe error, and we believe we
define error in a way that is most natural within the context of A/B testing.
\- You can achieve the same result as a 2-tailed test at 95% confidence in
Optimizely by requiring the Chance to Beat Baseline to exceed 97.5%.
\- We’re working on some exciting enhancements to our methodologies to make
results even easier to interpret and more meaningfully actionable for those
with no formal Statistics background. Stay tuned!
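The claimed equivalence between two 1-tailed tests at 97.5% and one 2-tailed test at 95% is easy to check numerically; a small sketch with a made-up z statistic:

```python
from statistics import NormalDist

norm = NormalDist()
z = 2.1  # observed z statistic from some A/B comparison (made-up value)

p_two_tailed = 2 * (1 - norm.cdf(abs(z)))
p_b_beats_a = 1 - norm.cdf(z)  # the two one-tailed p-values
p_a_beats_b = norm.cdf(z)

# Rejecting when either one-tailed p-value is below 0.025 is the same
# decision rule as rejecting when the two-tailed p-value is below 0.05.
assert (min(p_b_beats_a, p_a_beats_b) < 0.025) == (p_two_tailed < 0.05)
```

The difference is purely in how the error is described, not in which results get flagged.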
Here’s the full response if you’re interested in reading more:
[http://community.optimizely.com/t5/Strategy-Culture/Let-s-
ta...](http://community.optimizely.com/t5/Strategy-Culture/Let-s-talk-about-
Single-Tailed-vs-Double-Tailed/m-p/4278#M114)
Overall I think it’s great that we’re having this conversation in a public
forum because it draws attention to the fact that statistics matter in
interpreting test results accurately. All too often, I see people running A/B
tests without thinking about how to ensure their results are statistically
valid.
Dan
~~~
pmiller2
Thanks for replying. I agree with all the points you mention your statistician
covered, but you should make sure your users know what kind of test you're
using. The only reason I say this is because this article gives me the
impression that you were using a single one-tailed test (which, as I said in
my post, is a perfectly acceptable thing to do in the context of web site A/B
testing).
But, as far as "Optimizely encourages you to stop the test as soon as it
reaches 'statistical significance,'" I'm not saying your user documentation or
anything encourages people to stop tests early. I'm saying (and this is based
only on the article as I've never used Optimizely) that your platform is
psychologically encouraging users to stop tests early. E.g. from the article:
Most A/B testing tools recommend terminating tests as soon as they show significance, even though that significance may very well be due to short-term bias. A little green indicator will pop up, as it does in Optimizely, and the marketer will turn the test off.
<image with a green check mark saying "Variation 1 is beating Variation 2 by 18.1%">
But most tests should run longer and in many cases it’s likely that the results would be less impressive if they did. Again, this is a great example of the default settings in these platforms being used to increase excitement and keep the users coming back for more.
I am aware of literature in experimental design that talks about criteria for
stopping an experiment before its designed conclusion. Such things are useful
in, say, medical research, where if you see a very strong positive or negative
result early on, you want to have that safety valve to either get the
drug/treatment to market more quickly or to avoid hurting people
unnecessarily.
Unless you've built that analysis into when you display your "success message"
that "Variation 1 is beating Variation 2 by 18.1%," I'd argue that you're
doing users a disservice. When I see that message, I want to celebrate,
declare victory, and stop the test; and that's not what you should encourage
people to do unless it's statistically sound to do so.
The other thing in the article that lead me to this position is that you
display "conversion rate over time" as a time series graph. Again, if I see
that and I notice one variation is outperforming the other, what I want to do
is declare victory and stop the test. That might not be
mathematically/statistically warranted.
IMO, as a provider of statistical software, I think you'd do your users a
service to not display anything about a running experiment by default until
it's either finished or you can mathematically say it's safe to stop the
trial. Some people will want their pretty graphs and such, so give them a way
to see them, but make them expend some effort to do so. Same thing with
prematurely ended experiments; don't provide any conclusions based on an
incomplete trial. Give users the ability to download the raw data from a
prematurely ended experiment, but don't make it easy or the default.
------
antr
Note on SumAll
All users who use SumAll should be wary of their service. We tried them out
and we then found out that they used our social media accounts to spam our
followers and users with their advertising. We contacted them asking for
answers and we never heard from them. Our suggestion: Avoid SumAll.
~~~
JacobSumAll
Hey Antr, Jacob from SumAll here. Sorry to hear you had a bad experience with
us. The tweets you're talking about that "spam" your accounts were most likely
the performance tweets that you are free to toggle on and off. Here's how you
can do that:
[https://support.sumall.com/customer/portal/articles/1378662-...](https://support.sumall.com/customer/portal/articles/1378662-disable-
performance-or-thank-you-tweet)
Best, Jacob
~~~
pluma
As the tweets contain both SumAll-related hash tags and Links to SumAll, this
is definitely marketing that should be opt-in, not opt-out. Unless the user of
your service is explicitly made aware of these automated tweets in clear terms
when they sign up, this is a bit shady and dishonest to say the least.
~~~
spacefight
Even if it's in the terms - do it opt-in.
------
josefresco
This article comes off as a bit boastful and somewhat of an advertisement for
the company...
"What threw a wrench into the works was that SumAll isn’t your typical
company. We’re a group of incredibly technical people, with many data analysts
and statisticians on staff. We have to be, as our company specializes in
aggregating and analyzing business data. Flashy, impressive numbers aren’t
enough to convince us that the lifts we were seeing were real unless we
examined them under the cold, hard light of our key business metrics."
I was expecting some admission of how their business is actually
different/unusual, not just "incredibly technical". Secondly, I was expecting
to hear that these "technical" people monkeyed with the A/B testing (or simply
over-thought it) which got them in to trouble .. but no, just a statement
about how "flashy" numbers don't appeal to them.
I think the article would be much better without some of that background.
~~~
falsestprophet
They are incredible as in literally not credible.
------
jere
>We decided to test two identical versions of our homepage against each
other... we saw that the new variation, which was identical to the first, saw
an 18.1% improvement. Even more troubling was that there was a “100%”
probability of this result being accurate.
Wow. Cool explanation of one-tailed, two tailed tests. Somehow I have never
run across that. Here's a link with more detail (I think it's the one intended
in the article, but a different one was used):
[http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests...](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests.htm)
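The "identical page beats itself" effect is reproducible in simulation: if you check an A/A test for significance every few hundred visitors and stop at the first green light, far more than the nominal 5% of tests "win". A rough sketch with arbitrary parameters:

```python
import random
from statistics import NormalDist

def aa_test_with_peeking(n_visitors=10_000, rate=0.05, peek_every=200, seed=0):
    """One A/A test on two identical branches; return True if a pooled
    two-proportion z-test ever crosses 95% significance at a peek."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(0.975)
    conv_a = conv_b = 0
    for i in range(1, n_visitors + 1):
        conv_a += rng.random() < rate  # both branches draw from the same rate
        conv_b += rng.random() < rate
        if i % peek_every == 0:
            pooled = (conv_a + conv_b) / (2 * i)
            se = (pooled * (1 - pooled) * (2 / i)) ** 0.5
            if se and abs(conv_a - conv_b) / i / se > crit:
                return True  # a marketer watching the dashboard stops here
    return False

false_alarms = sum(aa_test_with_peeking(seed=s) for s in range(100))
print(f"{false_alarms} of 100 identical A/A tests 'reached significance'")
```

Precommitting to a sample size and evaluating only once brings the false-alarm rate back down to the nominal 5%.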
------
raverbashing
Oh great, another misuse of A/B testing
Here's the thing, stop A/Bing every little thing (and/or "just because") and
you'll get more significant results.
Do you think the true success of something is due to A/B testing? A/B testing
is optimizing, not architecting.
~~~
seanflyon
Indeed. A/B testing will get you stuck on local optimums.
------
ssharp
It seems like I see these articles pop up on a regular basis over at Inbound
or GrowthHackers.
I think the problem is two-sided: one on the part of the tester and one on the
part of the tools. The tools' "statistically significant" winners MUST be taken
with a grain of salt.
On the user side, you simply cannot trust the tools. To avoid these pitfalls,
I'd recommend a few key things. One, know your conversion rates. If you're new
to a site and don't know patterns, run A/A tests, run small A/B tests, dig
into your analytics. Before you run a serious A/B test, you'd better know
historical conversion rates and recent conversion rates. If you know your
variances, it's even better, but you could probably heuristically understand
your rate fluctuations just by looking at analytics and doing A/A tests. Two,
run your tests for long after you get a "winning" result. Three, have the
traffic. If you don't have enough traffic, your ability to run A/B tests is
greatly reduced and you become more prone to making mistakes because you're
probably an ambitious person and want to keep making improvements! The nice
thing here is that if you don't have enough traffic to run tests, you're
probably better off doing other stuff anyway.
On the tools side (and I speak from using VWO, not Optimizely, so things could
be different), but VWO tags are on all my pages. VWO knows what my goals are.
Even if I'm not running active tests on pages, why can't they collect data
anyway and get a better idea of what my typical conversion rates are? That
way, that data can be included and considered before they tell me I have a
"winner". Maybe this is nitpicky, but I keep seeing people who are actively
involved in A/B testing write articles like this, and I have to think the
tools could do a better job in not steering intermediate-level users down the
wrong path, let alone novice users.
------
pocp2
What he did in that article is more commonly known as an "A/A test"
Optimizely actually has a decent article on it:
[https://help.optimizely.com/hc/en-
us/articles/200040355-Run-...](https://help.optimizely.com/hc/en-
us/articles/200040355-Run-and-interpret-an-A-A-test)
------
jmount
I just checked in one possible R calculation of two-sided significance under a
binomial model under the simple null hypothesis A and B have the same common
rate (and that that rate is exactly what was observed, a simplifying
assumption) here
[http://winvector.github.io/rateTest/rateTestExample.html](http://winvector.github.io/rateTest/rateTestExample.html)
. The long and short is you get slightly different significances under what
model you assume, but in all cases you should consider it easy to calculate an
exact significance subject to your assumptions. In this case it says
differences this large would only be seen in about 1.8% to 2% of the time (a
two-sided test). So the result isn't that likely under the null-hypothesis
(and then you make a leap of faith that maybe the rates are different). I've
written a lot of these topics at the Win-Vector blog [http://www.win-
vector.com/blog/2014/05/a-clear-picture-of-po...](http://www.win-
vector.com/blog/2014/05/a-clear-picture-of-power-and-significance-in-ab-
tests/) .
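For small counts, the kind of exact calculation described above can be done in a few lines of stdlib Python. This is a sketch of the same simplifying assumption (both branches share the pooled observed rate), not the author's actual R code:

```python
from math import comb

def two_sided_exact_p(conv_a, n_a, conv_b, n_b):
    """P(observing a rate gap at least this large) under the simple null
    that both branches convert at the pooled observed rate. O(n_a * n_b),
    so only practical for small samples."""
    p0 = (conv_a + conv_b) / (n_a + n_b)
    def pmf(k, n):  # binomial probability of k successes in n trials
        return comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
    gap = abs(conv_a / n_a - conv_b / n_b)
    return sum(
        pmf(i, n_a) * pmf(j, n_b)
        for i in range(n_a + 1)
        for j in range(n_b + 1)
        if abs(i / n_a - j / n_b) >= gap
    )

print(two_sided_exact_p(8, 20, 3, 20))
```

As noted above, the significance you get depends on the modeling assumptions baked into a calculation like this, and it makes no correction for multiple looks or early stopping.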
They said they ran an A/A test (a very good idea), but the numbers seem
slightly implausible under the two tests are identical assumption (which
again, doesn't immediately imply the two tests are in fact different).
The important thing to remember is your exact significances/probabilities are
a function of the unknown true rates, your data, and your modeling
assumptions. The usual advice is to control the undesirable dependence on
modeling assumptions by using only "brand name tests." I actually prefer using
ad-hoc tests, but discussion what is assumed in them (one-sided/two-sided,
pooled data for null, and so on). You definitely can't assume away a thumb on
the scale.
Also this calculation is not compensating for any multiple trial or early
stopping effect. It (rightly or wrongly) assumes this is the only experiment
run and it was stopped without looking at the rates.
This may look like a lot of code, but the code doesn't change over different
data.
~~~
davnola
What do you mean by "brand name tests"?
------
thoughtpalette
I was looking for a much more personal article from the headline.
------
hvass
I would be curious to know what percentage of teams with statisticians / data
people actually use tools like Optimizely? A lot of people seem to be building
their own frameworks that use a lot of different algorithms (two-armed
bandits, etc.). From my understanding, Optimizely is really aimed at marketers
without much statistical knowledge.
Of course, if you're a startup, building an A/B testing tool is your last
priority, so you would use an existing solution.
Are there much more advanced 'out-of-the-box' tools for testing out there
besides the usual suspects, i.e. Optimizely, Monetate, VWO, etc.?
------
kareemm
This title used to read "How Optimizely (Almost) Got Me Fired", which is the
actual title of the article.
It seems a mod (?) changed it to "Winning A/B results were not translating
into improved user acquisition".
I've seen a descriptive title left by the submitter change back to the less
descriptive original by a mod. But I'm curious why a mod would editorialize
certain titles and change them away from their original, but undo the
editorializing of others and change them to the less descriptive originals.
~~~
dshacker
I feel that the second title is better, as it talks about the kind of testing
they are using, instead of being a click bait of "HOW DID IT GET YOU FIRED?".
~~~
kareemm
My question is why mods change some headlines away from the originals to be
more descriptive (good) and why they change back to the originals even though
they are less descriptive (bad).
FWIW the change to this headline seems like the right decision to me.
~~~
dang
The guideline is to use the original title _unless it is misleading or
linkbait_ [1]. It's astonishing how often that qualifier gets dropped from
these discussions. It's pretty critical, and makes the reason for most title
changes pretty obvious.
1\.
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
~~~
kareemm
Thanks for the response. I'd humbly submit that there are occasions where the
guidelines should be ignored in service of a more descriptive (non-linkbaity)
title.
I can't find the submission but one recent example that comes to mind is a
presentation on radar detectors that was fascinating. I clicked because the
submitter described the article; the original title was (IIRC) the model
number of the radar gun.
Later a mod changed the HN post back to the model number, which had zero
relevance to anybody not in the radar gun industry.
------
tieTYT
> The kicker with one-tailed tests is that they only measure – to continue
> with the example above – whether the new drug is better than the old one.
> They don’t measure whether the new drug is the same as the old drug, or if
> the old drug is actually better than the new one. _They only look for
> indications that the new drug is better..._
I don't understand this paragraph. They only look for indications that the
drug is better... than what?
------
dk8996
Do any of these tools show you a distribution of variable your trying to
optimize? I am just thinking that some product features might be polarizing
but if you measure, the mean it might give you different results than
expected. I am thinking that's where the two-tailed comes in.
------
hawkice
Perhaps the most troubling element is that optimizely seems comfortable
claiming 100% certainty in anything. That requires (in Bayesian terminology)
infinite evidence, or equivalently (in frequentist terminology) if they have
finite data, an infinite gap between mean performances.
------
dmourati
Peculiar use of the word bug in this context:
"They make it easy to catch the A/B testing bug..."
~~~
rrrx3
meaning "fever" \- generally cured by more cowbell, but in this case only
"curable" by more A/B testing
------
dsugarman
this is all fine and good, but if your goal is to see what works best
between X new versions of a page and you are rigorous in creating variants,
Optimizely is a great tool for figuring out the best converting variant.
~~~
pdpi
Except, apparently, they aren't actually that good at _that_. If an A/A test
can yield a "100%" chance of an 18% uplift, what gives you any degree of
certainty that other tests won't have equally skewed results?
~~~
vitamen
Run an A/A/B (or A/A/B/B) test, decide on traffic levels before you start the
test, and let it run until you reach those levels before you peek.
------
fvdessen
In my experience Optimizely does everything they can to mislead their users
into overestimating their gains.
Optimizely is best suited at creating exciting graphs and numbers that will
impress the management, which I guess is a more lucrative business than
providing real insight.
------
claar
The headline isn't really what this article is about, particularly the
disparaging of Optimizely. Might I suggest "The dangers of naive A/B testing"
or "Buyer beware -- A/B methodologies dissected" or "Don't Blindly Trust A/B
Test Results".
------
michaelhoffman
Where's the part where he "(almost)" got fired?
~~~
markolschesky
Maybe that's the headline that did best in an A/B test.
Ask HN: What native features would you like to see in a browser - ThomPete
Hi all,<p>Leaving the HTML/CSS rendering alone for a while, if you could decide, what native features would you like to see in your browser that isn't already there?
======
networked
— Trails! They are a branching history feature implemented in the TrailBlazer
browser [1] and later in the Trailblazer add-on for Chrome [2]. A recent
article from Mozilla [3] describes the concept well enough. (Mozilla is
experimenting with trails in their Servo-based browser research project.)
— Attaching persistent notes to parts of the page.
— Bookmarks that let you add a comment.
[1]
[https://www-s.acm.illinois.edu/macwarriors/projects/trailbla...](https://www-s.acm.illinois.edu/macwarriors/projects/trailblazer/)
[2] [http://www.trailblazer.io/](http://www.trailblazer.io/)
[3] [https://medium.freecodecamp.org/lossless-web-navigation-
with...](https://medium.freecodecamp.org/lossless-web-navigation-with-
trails-9cd48c0abb56)
~~~
severine
Thanks a lot for the trails links.
I've come to desire that all desktop apps had a dedicated persistent
scratchpad attached, it feels great to see your ideas and links, go you! and
trails!
------
27182818284
* Native ad blocking and script blocking (I think Brave is doing a good job at this right now with their UI/UX where you can raise and lower shields)
* Better reading modes
* Better selection modes. There have been a lot of tools/extensions made over the years to help you do things like extract every image on the page or copy out a single column of values from a table and it might be time to think of making some of these native features.
* Great screenshot ability: Firefox has this worked out pretty well now enabling you take the picture of the entire page natively rather than having to install a 3rd party extension
------
cphoover
I would like a unified standard for defining voice interfaces. A unified
standard across the web would allow for easily transferring from one voice
interface to another seamlessly. It could also allow devices like Alexa, and
Google home to do more than just search the web to answer your question.
------
tropo
I want resource usage confirmation.
Limit each site to a megabyte. If it wants more, it can wait until I approve a
doubling. Each doubling needs my approval. For example, upon hitting 128 MiB,
the site should freeze up until I give the OK for going up to 256 MiB.
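The approve-each-doubling idea sketched above is essentially an exponential quota with a human in the loop; a toy model (class and method names invented for illustration):

```python
class QuotaGate:
    """A site starts with a 1 MiB budget; exceeding it requires explicit
    approval of each doubling, otherwise the allocation is refused."""

    def __init__(self, limit=1 << 20):
        self.limit = limit
        self.used = 0

    def request(self, nbytes, approve):
        # `approve(new_limit)` stands in for the browser prompting the user.
        while self.used + nbytes > self.limit:
            if not approve(self.limit * 2):
                raise MemoryError("allocation denied at quota")
            self.limit *= 2
        self.used += nbytes
```

Because the budget only ever doubles, even a hungry site needs just a handful of prompts, while a runaway one gets frozen at a limit the user chose.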
Limit each site to a single thread. Let me approve more or just force the site
to live within a limit.
Limit each site to running in the foreground. (only the visible tab of the
currently focused window runs) If I really want something to run in the
background, let me indicate that with a right-click menu on the tab.
------
leejoramo
Better long-term caching of shared resources such as jQuery and web
fonts. I use Decentraleyes for this in Firefox, but I would love to see this
built in to the browser.
[https://decentraleyes.org](https://decentraleyes.org)
Many of these could even be pre-bundled with the browser.
------
kitsunesoba
Built in per-site custom stylesheets, much like Stylish (sans spyware). The
only browser I’ve seen that does this is the now barely maintained OmniWeb,
which is a shame. It’s such a basic feature that one shouldn’t need an add on
for it.
------
LUmBULtERA
In addition to what others have said, I'd love it if a dark-mode type feature
was included natively. Not just dark theme, but something that can darken
websites like the Dark Reader extension for Chrome and Firefox.
------
mabynogy
A lightweight browser doing only a reader mode, or a design like Wikipedia's.
Such a browser wouldn't use any existing engine.
------
sethammons
better history. I want full content search of things I've looked at, including
images. "Google" for my personal history.
------
Rjevski
Built-in ad/cancer blocking.
------
billconan
access to low level gpu apis, vulkan and cuda for example.
Woman Drank Herself to Death with Coca-Cola - mikecane
http://news.discovery.com/human/health/deadly-coca-cola-habit-130212.htm#mkcpgn=rssnws1
======
kellishaver
> Crerar said the family had not considered her Coke habit dangerous because
> the drink did not carry any health warnings.
What!? Are people really that dense? 2.2 gallons a day is, for the average
person, a ridiculous amount of any beverage, even water. Assuming it's not
diet Coke (which has its own set of problems) that's 1760kcal/day, 475g/day of
sugar, on top of the 400mg/day of caffeine. It seems like a no-brainer that
this would slowly kill you. It would just be a question of what got you first
- the caffeine or the type II diabetes.
------
mikecane
I know this sounds like an item more suited to Reddit, but given the Mountain
Dew and Red Bull diets of some, this might be relevant.
Fog Creek's Intern Hiring Process - dodger
http://behindthescenesrecruiter.com/post/82005145232/the-single-most-sure-fire-hiring-decision-you-will-ever
======
crazypyro
As someone who just went through the internship process with a few different
companies, I find this fascinating. This is pretty much what I expected going
into the experience (multiple interviews, at least 1-2 coding
questions/examples to do, a test maybe). Out of the few companies I
interviewed from, I had nothing as intense as this. The majority of them
didn't even test coding/theory knowledge at all. They were just simple
interviews that lasted 2-5 hours. The hardest part of any interview was a
freaking mental acuity standardized test I took at the company who I'll be
working for that wasn't hard, so take the term "hardest" lightly. Good news is
I accepted an offer at that smaller engineering company! A good portion of my
interviews were for engineering companies because of the employers my
university attracts, so that could also have affected the technical parts of
the interview.
I'm not sure how I feel about how many interviews and how long this process
is. I know some of my fellow students would be completely blindsided by such a
long process unless it was clearly laid out. The compensation seems nice from
the companies that hire around here (I go to a predominately STEM university
in the Mid-West and all the companies I interviewed with came to our career
fair in February, which is pretty late in the process). I'll make just over
half that much monthly, but it'll be June-December and in STL. The highest
I've heard from my classmates is 7k/month, but that was from Exxon Mobil and
there was very little technical parts of the interview. He did have to take a
hair test for drugs though. Ideally, I believe most of the larger
corporations, like Boeing, Monsanto, etc, (like the article said) start
interviews after the fall career fair.
Another side note about compensation: Seems to be pretty wide spread between
13-30/hr (without adding in housing) at companies around the Midwest. I don't
exactly have the greatest academic credentials though (3.0 gpa), so some of
the more selective companies may pay more, especially for graduating seniors.
Exxon Mobil being the highest, Boeing right in the middle of that range, and
a local ISP looking for a non-coding cs major on the low end for the curious.
edit: Just adding in details as I get time.
FORGOT THE MOST ANNOYING THING
I was given the offer on Friday and he needed an answer on Monday, else he was
going to extend the offer to other candidates. This was pretty obnoxious to
me, but I ended up taking the offer because I was interested in it more than
my other potential offers, but seriously, recruiters, a weekend is not enough
time to get back to you with an offer, especially when other companies are
asking you to keep them notified with enough time that they can either speed
things up or not waste time on a candidate.
~~~
scrumper
> MOST ANNOYING THING
A weekend is _plenty_. You're an intern: there are many, many more of you to
choose from. As you pointed out, nobody wants to waste time. The person that
offered you the placement wants someone who wants to be there, not someone
looking for a backup offer.
~~~
asafira
I am going to respectfully disagree. Just because the company can do it
doesn't mean a weekend is _plenty_. You are given a weekend to decide where
you might spend months of your life, potentially in a place you've never been.
On top of that, who knows if that weekend was going to be extremely busy for
you? Just because you take one week to decide on an offer doesn't mean you
don't want to be there. It's also not at all an industry standard to give such
a short timespan for the decision; if anything, it's a reflection of how
little the company will care about the intern when he/she is there.
All in all, a weekend is certainly not "plenty". I sympathize with crazypyro.
~~~
scrumper
Thank you for the respectful disagreement (vs. 'smug'.) You make good points
and I have some sympathy too - I understand that it's not easy being forced to
make a quick and major decision. That being said, you don't always get to set
the pace, and being confident in making decisions on the basis of imperfect or
incomplete information is a valuable life skill. I can think of no better time
to make a low-risk, snap decision about where to spend a few months than in
the middle of college.
With regards to the decision around where you might spend months of your life,
it's not like the location of the internship was a secret before crazypyro
interviewed. The question about whether the company will treat the intern well
is more nuanced, and you'd have to go with your gut.
Given the competitive nature of the market for CS interns and the quick
decisions needed, the Secretary Problem might offer a good solution for
crazypyro and others in that situation.
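The Secretary Problem mentioned above has a well-known approximate strategy: observe the first n/e candidates without committing, then accept the first one who beats them all. A minimal sketch (the sample scores and the fallback of taking the last candidate are illustrative assumptions, not part of the classic statement):

```python
import math

def secretary_choice(scores):
    """Classic 1/e stopping rule: skip the first ~n/e candidates,
    then accept the first one better than everyone skipped."""
    n = len(scores)
    k = max(1, round(n / math.e))   # size of the observation phase
    benchmark = max(scores[:k])     # best candidate seen while observing
    for s in scores[k:]:
        if s > benchmark:
            return s
    return scores[-1]               # nobody beat the benchmark; take the last

print(secretary_choice([3, 7, 1, 8, 2, 9, 4, 5]))  # picks 8, not the best (9)
```

On that sample list the rule settles for 8 even though a 9 appears later — the strategy only finds the overall best about 37% of the time, which is roughly the trade-off a candidate with an exploding offer faces.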
------
dominotw
Can't they at least hire one person who didn't luck out by being born into a
rich/middle-class American family and go to Ivy League universities?
What is such complicated product that Fog Creek makes that it needs graduates
from top 10 universities? Serious question.
~~~
HerokuMan
Jews that went to Ivy League schools tend to hire other jews that go to Ivy
League schools
~~~
ProAm
Oh go be a racist-troll somewhere else.
------
ultimoo
As someone who did a summer internship last year at an amazing SF company, the
pay-scale at fogcreek sounds pretty competitive (read amazing).
$6,000 a month comes to a shade less than $40/hour. Bear in mind that most
full time students work only in summers so although considerable federal tax
is deducted from this amount, the intern is likely to receive most of it back
when filing taxes next year.
Also, catered lunches plus an apartment in NYC plus two amazing events in
twice a week (which likely include dinner) means that the only money that
needs to be spent is a handful of weekday dinners plus weekend fun, and I
haven't even gotten to the thousand dollar signing bonus yet!
Being in college, I knew about 10-12 others who interned last summer in the
Bay Area. With most companies in the SF Bay Area you're looking at about $27
to $34 at most large companies in the south bay and $35 to $40 in SF. Plus an
hourly pay scale means that interns don't get paid on holidays like 4th of
July, Labor Day, or when they get sick (didn't know anyone who got paid
monthly instead of hourly in the Bay Area). I haven't heard of housing
benefits in the south bay much and heard of only one company in SF that threw
in free housing.
(Sorry about a long comment focusing only on the financial aspects of an
internship program but it is an important factor that debt-ridden students
take into account).
~~~
yen223
$6000 a month is more than what most senior software engineers earn here,
_before_ considering currency conversions. You guys are lucky man.
------
sbuccini
On behalf of a student who just finished up the internship search:
Companies/recruiters, please note the advice put forth here.
A couple of points I'd like to touch on:
* Be sure to provide your interns with a ton of guidance, and promote this during your recruitment process. Many of my fellow students are turned off by the bigger companies since they feel like they won't be able to make an impact. As a smaller company, this is your ace in the hole. Use it to your advantage.
* Personally, exploding offers leave a bad taste in my mouth. Everyone knows how long the recruitment process takes, and you should give the intern the common courtesy to make an informed decision. The last thing you want is a disgruntled intern on your payroll for a few months.
* You should consider internships as an investment. Build a relationship with your intern, and it will pay numerous dividends in the long run. They might return for a full-time position or they may refer a friend that they respect. A good way to support your intern during the school year is to sponsor a hackathon or an interview workshop at their school. This gets you face-to-face with some of the most motivated hackers at any school, where you can begin the courting process.
Just some quick thoughts from the student's side of the table.
------
LukeWalsh
> If you don’t know where to begin here’s a good rule: only target colleges
> that admit less than 30% of applicants. That will give you a head start on
> being selective, especially if you have limited spots available in your
> program.
I personally think this is silly. If you want to be selective just focus on
applicants who actually build things. If you look at collegiate hackathons at
places like university of michigan, UIUC, or Purdue it's clear that there is a
lot of talent in the midwest. Just because someone wasn't born on a coast or
with a connection to an ivy league school doesn't mean they don't make a cut
for selectiveness.
~~~
sadfnjksdf
I never thought of Fog Creek that way before. In fact, I've always gotten the
impression they were down-to-earth. But, that one shot of a spreadsheet in
this post listing Brown, Rutgers, Princeton, Yale, etc. changed my mind.
The other turnoff in this was the weeding out of candidates based on resumes.
We hired an excellent employee out of a batch of horrid resumes- what a great
hire, though.
~~~
dlp211
I'm glad that you put Rutgers with the likes of Princeton et al, but it is the
state university of NJ. So not everyone came from a prestigious school.
~~~
barry-cotter
Rutgers is one of the seven members of the ivy league. I'm guessing it's
pretty selective. If it's not at least eliteish like UC Berkeley or U Michigan
something went badly wrong.
~~~
dlp211
I hate to burst your bubble, and I am glad that you believe that Rutgers is a
part of the Ivy League[1], but I assure you it isn't. Rutgers admits nearly
61% of applicants in, and based on a cursory google search, UMich accepts
about 37% and UC Berkeley accepts 18%.
Rutgers is The State University of NJ[2]. It is a very old institution (8th
oldest), and that may be where the confusion comes from, since all the other
Ivy's came from that time period.
[1]
[http://en.wikipedia.org/wiki/Ivy_League#Members](http://en.wikipedia.org/wiki/Ivy_League#Members)
[2] [http://www.rutgers.edu/](http://www.rutgers.edu/)
~~~
barry-cotter
I sit corrected.
------
inconshreveable
As a former Fog Creek intern (2010), I can tell you that Fog Creek's
internship program is one of the best built out programs I've seen in the
industry. It rivals and exceeds those of software firms with 10-100x
resources. The talent they attract is top-notch too.
------
covi
I have to say the pay is by no means "spoiling". It is nowhere near the top
tier pay (for interns) seen in the industry.
~~~
1a2a3a4a
It's not that far off the top tier pay for tech companies. Glassdoor compiled
their list for 2014, and it's not too inaccurate [1]. Speaking from personal
experience, the numbers for SWE undergrads for some of the companies on the
list this year:
Palantir - 7,500 - 1,200 for housing if you choose
Facebook - 6,200 + free housing
Salesforce - Varies per year, 34.50/hr for rising junior, housing.
Cisco - 22/hr
Quora and Dropbox are both missing from this list but they both have higher
salaries than Palantir, but not by too much.
[1] [http://www.glassdoor.com/blog/25-highest-paying-companies-
in...](http://www.glassdoor.com/blog/25-highest-paying-companies-
interns-2014-interns-earn-7000-month/)
~~~
shubb
Wow... this sounds kind of irrational. I mean, these are close to senior
salaries annualized - Sales force is around 70K, while a senior gets about
100K across most of the US.
Are 4 interns really more useful than 3 seniors? Really?
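The annualization above checks out with a quick sketch, assuming a standard 2,080-hour work year (40 h × 52 weeks — an assumption, since interns typically only work a summer):

```python
HOURS_PER_YEAR = 40 * 52  # 2080 hours: standard full-time assumption

def annualize(hourly_rate):
    """Project an hourly intern rate onto a full-year salary."""
    return hourly_rate * HOURS_PER_YEAR

# The $34.50/hr Salesforce figure quoted upthread:
print(annualize(34.50))  # about $71.8k/year, i.e. "around 70K"
```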
~~~
nickbarnwell
"Get 'em while they're young" is as valid for recruiting as it is brand
preferences ;)
Those interns will turn into salaried FTEs whose first three year's annual
compensation – amortised signing bonus, stock grants, and performance bonus
included – will be ~150k. Compared to new graduate FTEs, interns are
positively cheap!
The ~6.5k, housing inclusive, perks out the wazoo also all come from highly
profitable, competitive companies falling over each other to recruit from a
highly constrained pool. There are only so many Stanford, MIT, and CMU
graduates a year, and an even smaller number of hackathon winners, open source
contributors, inveterate interns, etc. For many, this is the last time they'll
ever openly be on the job market.
------
jonheller
There was a whole movie about interns at Fog Creek.
[https://www.youtube.com/watch?v=0NRL7YsXjSg](https://www.youtube.com/watch?v=0NRL7YsXjSg)
I admit it could have been edited a bit better (read: more interesting), but
it was still fun to get a bit more of an inside view of a process like this.
------
bcaine
This sounds like a great program, I just wish it was offered year-round. Even
though I think Northeastern University and Waterloo are the only schools with
a completely integrated, well defined Co-op program, it seems like its a
growing trend.
I'd assume having year round interns and a continuous recruitment process
would be less disruptive to the team's work velocity and give you a bit bigger
reach for students too.
Plus, I'm a bit jealous of some of the summer-only internships at a lot of
interesting companies. Can't complain about graduating with 18+ months of
interesting work experience pretty much guaranteed though.
~~~
CocaKoala
I didn't attend the Rochester Institute of Technology, but friends of mine who
go there tell me that co-ops are a mandatory part of the CS program there.
~~~
acchow
Waterloo's "co-op" system is quite different. The whole undergraduate co-op
program lasts about 5 years, and you alternate between 4 months in school and
4 months working throughout (i.e., you don't get summers "off"). This allows
the students to try many different companies of varying size and culture.
------
sergiotapia
All of this sounds extremely exhausting for a simple internship. About 30
times more effort than I've ever had to put in to land a job as a freelancer.
I'll take my standard $50/hour rate and avoid these rat-races. 400 applicants
and only 8 hires!? YIKES. Are these fellas going to the moon?
------
mathattack
Remember that this is New York City. $6000 is great money to begin with. Add
$2000/month minimum for rent. (And imagine digging up a security deposit
too...) This is investment banking money for a software firm with a much more
respectable work-life balance.
(I have no connection to the firm, though I have read pretty much everything
that Joel has written)
------
sscalia
Am I the only one flabbergasted by the comp #'s thrown around in the article
and in these threads?
------
asselinpaul
Does anyone know how much one would make in a Finance Internship at a hedge-
fund, prop firm and investment bank?
~~~
S4M
I think an internship in a top tier bank in London pays about 3000
pounds/month for a summer analyst and 5000 pounds/month. My data are outdated
though, maybe the salaries have gone down after the crisis, but I doubt it and
would rather think they decreased the number of interns.
~~~
robotcookies
I've heard the hours are much longer though for that field. Correct me if I'm
wrong.
| {
"pile_set_name": "HackerNews"
} |
Isn't it Byronic? Don Juan at 200 - gruseom
https://www.the-tls.co.uk/articles/public/isnt-it-byronic-don-juan/
======
gruseom
Here’s one passage I remember after many years: Byron explaining how
convenient it was for schoolboys that the editors of the classics had
thoughtfully collected all the obscene bits in one place. They were too
prudish to leave them in the text, but too scholarly to delete them
altogether.
Juan was taught from out the best edition,
Expurgated by learnéd men, who place
Judiciously, from out the schoolboy's vision,
The grosser parts; but, fearful to deface
Too much their modest bard by this omission,
And pitying sore his mutilated case,
They only add them all in an appendix,
Which saves, in fact, the trouble of an index;
For there we have them all "at one fell swoop,"
Instead of being scatter'd through the Pages;
They stand forth marshall'd in a handsome troop,
To meet the ingenuous youth of future ages,
Till some less rigid editor shall stoop
To call them back into their separate cages,
Instead of standing staring all together,
Like garden gods—and not so decent either.
~~~
billman
This makes me now want to read the book, after sitting on my shelf for the
better part of 15 years. Thanks!
------
throwaway3627
Only Cantos I and II were available in 1819.
XVI and unfinished XVII were available in 1823 and 1824 respectively.
| {
"pile_set_name": "HackerNews"
} |
My fully optimized life allows me ample time to optimize yours - geoah
https://www.mcsweeneys.net/articles/my-fully-optimized-life-allows-me-ample-time-to-optimize-yours
======
cgrusden
I like to see how other people try to "optimize" their lives. Unfortunately
there is no silver bullet, but definitely some takeaways from this lifestyle I
like:
* Multiple blender pitchers (to not have to keep rewashing one)
* The multi-photo frame on the desk (I would probably put cars/places to travel or that I have already traveled)
* Dedicated 3pm time to exercise
Most of the actual activities of this "fully optimized life" are completely
subjective. This optimized life description is really just a routine and
sticking to it. If everyone actually stuck to a routine, they would also have
ample time, but most everyone allows distractions to de-rail them.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How did the Erlang articles disappear on HN yesterday? - socratees
Were we censored or did we flag the articles by ourselves, or did the admin remove all the Erlang entries from appearing on the front page? How did we get rid of the Erlang articles? I'm just curious about it.
======
jacquesm
I guess enough people flagged it to take it away.
There is a lower limit where stuff will get auto killed if enough people
flagged it.
What bothers me most about these silly flooding tactics is that you've
potentially burned a lot of good content about Erlang from ever appearing on
HN.
Erlang is a neat concept, and I think that those that flooded the 'new' page
with Erlang stuff have done more damage than good.
What you could have simply done is to flag the articles you thought had no
place on HN instead of trying to monopolize the discussion by flooding.
~~~
daleharvey
I think its pretty certain the articles were manually removed, they all went
at the same time along with every new submission almost immediately.
I dont disagree with having them deaded though, although my site was one of
the ones that someone submitted, and I actually think it was submitted because
it was useful.
~~~
jacquesm
Imagine a series of counters as attributes to the articles submitted. If
enough people decide 'enough Erlang, let's flag that stuff down' then you can
imagine a flurry of activity by a limited number of users (say 10 or 15 or so)
that would remove all the articles within a minute or so, which is when the
_last_ person to be able to do so clicks 'flag' for the relevant articles.
Don't attribute to 'divine' intervention what you could easily achieve with
the tools at hand.
~~~
mbrubeck
<http://news.ycombinator.com/item?id=686303>
_pg: "If a story has enough flags, that alone will kill it, without moderator
intervention. I just added a point threshold to prevent this happening to
stories that have received a significant number of votes."_
On the other hand this change doesn't seem to be in the latest news.arc from
arc3.tar, whether it was removed, not included in that distribution yet, or
I'm just looking in the wrong place.
~~~
jacquesm
What's distributed is not necessarily what is running on the site. I think
there may be some secret sauce, if only to make it a bit harder to game the
system.
| {
"pile_set_name": "HackerNews"
} |
Techniques for Distributed TensorFlow - jamesblonde
https://www.oreilly.com/ideas/distributed-tensorflow
======
hopsworks
Disclaimer: developer of Hops. This blog basically argues that systems like
Horovod (Ring AllReduce) are architecturally superior to Parameter Server
models (like TensorFlowOnSpark).
| {
"pile_set_name": "HackerNews"
} |
Travel Writer Booted Off United Flight for Taking Picture of His Seat - rosser
http://upgrd.com/matthew/thrown-off-a-united-airlines-flight-for-taking-pictures.html
======
randomdrake
>"I want you to understand why I was taking pictures. _I hope you didn't think
I was a terrorist._ Here is my business card [offering her one]. I write about
United Airlines on an almost-daily basis and the folks at United in Chicago
are even aware of my blog."
(emphasis added)
Reacting to a command given by a flight attendant with anything along the
lines of "Whoa, sorry. I hope you don't think I'm a terrorist," is a terrible
decision. Even if you're not, saying it will immediately make them wonder if
you are.
Don't think of a pink elephant. How's that mental image of a pink elephant
looking?
Terrorist. Bomb. Threat. 9/11. These are things you don't go saying on
airplanes, these days, unless you're expecting some sort of discomfort from
anyone within earshot. This is especially true for someone in charge of
ensuring the safety of people aboard the flight.
I don't think the traveler would have been removed from the plane if they
would have just complied and not mentioned such a charged word.
~~~
ryguytilidie
I mean, this is the problem right here. You're claiming that because a plane
ran into a building 12 years ago, it actually makes sense for an American
citizen to kick another American citizen off a plane while lying about it. How
about we get back to being rational sensible human beings who understand
context and can have a thought process beyond "If I hear the word terrorist,
that person is a terrorist and I need to call homeland security"
/roboticthinking
I get that it is a bad idea to say terrorist on a plane, but if after 12
years, we still are unable to understand context here, people should be
getting fired, especially if they need to lie to justify their bizarre, made
up fears.
~~~
randomdrake
>You're claiming that because a plane ran into a building 12 years ago, it
actually makes sense for an American citizen to kick another American citizen
off a plane while lying about it.
No, I'm not. I'm claiming that there are words that shouldn't be said on a
plane anymore. A building exploding didn't do that. Decisions from
organizations and individuals created the catalyst for that change. But,
that's a completely different discussion.
It's common knowledge whether it's okay or not. I agree: we as a society,
should strive to be more open and accepting about using words. But,
unfortunately, we're not all there yet.
The title is sensationalist and misleading. The photo taking clearly wasn't
the problem because the author wasn't removed from the plane after taking the
photo, they were removed after they said something they shouldn't have.
"Travel Author Irritated After Being Kicked Off a Plane for Claiming 'Not a
Terrorist'" is hardly a story, is it?
~~~
Amadou
Except that, according to the author, he was accused of refusing to stop
taking pictures. Giving him the benefit of the doubt, it seems reasonable for
him to critise the airline based on their words, even if he suspects their
words are not truthful.
~~~
ryguytilidie
Not even just according to the author. According to the official reason the
airline gave him. I can certainly understand why one might suspect this isn't
the real reason, but calling the guy sensationalist for repeating EXACTLY what
he was told is pretty insane imo.
------
ryguytilidie
The fact that we got attacked by a few Saudi nationals 12 years ago makes one
American lie to another American because she's scared an American passenger on
their plane is a terrorist because he said the word terrorist. Is it really
debatable whether we did exactly what the terrorists hoped we would do at this
point? I don't want to say they won, because its not a game and no one wins,
but they certainly accomplished some objectives here if this is the way people
are allowed to act.
~~~
rdtsc
Or in this case apparently it turns flying attendants into power tripping
liars.
------
codenerdz
Same article 16 hours and 200+ comments ago
<http://news.ycombinator.com/item?id=5256051>
------
plaguuuuuu
I give this blog post Autism/10
Flight attendant's perspective was that she is routinely telling passengers to
quit with the happy snaps, as required by company policy. Flight attendant is
tired and very busy with getting the plane together for takeoff. The photo guy
from before is suddenly angrily motioning her over and saying a bunch of crazy
shit about not being a terrorist, and trying to make her take/keep his
business card. flight attendant freaks out and runs to her boss.
Crew boss sees really freaked out flight attendant and assumes the passenger
needs to go. Once that decision is made, crazy travel guy probably doesn't
have any hope of reversing it.
The real issue is that the guy grossly messed up his interaction with the
flight attendant and paid the price for it. If you misbehave on a flight
you're gonna get kicked off - yes, airlines have a duty to have fair rules for
customers to follow. But passengers have a duty to act relatively normally in
their standard human interactions with staff.
Trying to air one's grievances with an airline's cabin policies with a flight
attendant is ridiculous anyway, hence my rating out of 10.
------
JulianWasTaken
This sounds dubious. It's obviously one sided, which is OK, but what exactly is
the motivation of a flight attendant to flat-out lie about continuing to take
pictures? And why wait to do it while continuing to serve other passengers?
------
chrisbennet
Even if the travel writer had done something horrible this is going to give
United a big black mark. I guess he'll record his future interactions with
cabin steward/stewardess'.
~~~
nthj
I'm just curious who already had a good feeling about United. I mean, I fly
Delta/United/America when I have to, but I've been pretty annoyed by all 3 for
years.
I don't really see anything changing from this article.
~~~
acheron
I'm reasonably happy with United out of the big airlines. They almost always
have seats with extra leg room, and they have a hub at the airport that's 10
minutes from my house.
I mean, they're still an airline, and I've had problems, but overall they
haven't been bad to me.
------
JulianMorrison
You poked the bear and, because you're white and rich, you got growled at
instead of bitten. Pardon me if I do not feel sympathy for this "elite status"
privileged whine.
~~~
Amadou
Social change (without revolution) doesn't come until enough of the already
empowered embrace it. So it may seem trivial to those of us that have seen far
worse, but the alternative is for the white and rich to never hear about the
problems that have affected one of their own. Since most people only recognize
a problem when they are personally at risk, this sort of "whine" is a
necessary part of the process.
------
cup
Without any information from the airline hostess, captain or company this
paints a very one sided and incomplete picture. The authors account might be
accurate however I'm inclined to think that it may have more to do with the
fact that he uttered the word 'terrorist' rather than any of his other
actions.
I mean most seasoned travellers, let alone someone in the industry, should
know by now that when you're in an airport you jump through all the hoops
regardless of how stupid they may appear simply because airports and airlines
hold power over you. For better or worse free speech does not exist in this
environment and I wonder whether the author should have just apoligised and
swallowed his pride rather than try to make a point or even apologise. Some
times you just need to bite your tongue.
Edit: I'm curious about why people disagree with me.
| {
"pile_set_name": "HackerNews"
} |
Hey everyone, do you have some spare time to do my survey? - hkuhl
https://www.surveymonkey.de/r/D5QLLXP
======
hkuhl
I hope it's okay to post this here, but I'm running a Developer Happiness
Survey at the moment and will publish the results in an index. If you have a
few minutes spare, it would be so helpful to have your input! Thanks heaps,
and let me know if this shouldn't be here.
| {
"pile_set_name": "HackerNews"
} |
AI-generated fake content could unleash a virtual arms race - kristintynski
https://venturebeat.com/2019/11/11/ai-generated-fake-content-could-unleash-a-virtual-arms-race/
======
echelon
These deep fake articles are becoming a meme. They mostly seem alarmist, and
yet they're not authored by people actually in the industry.
Deep fakes automate what deep pockets and state actors could already do with
Photoshop and other professional tools. The world isn't going to become a
scary place because the barrier to entry got lower and the technology has been
democratized. People are smart. Fakes will be detectable through entropy
measures, corroboration, common sense, etc.
FWIW, I've been working on real time voice to voice style transfer.
[https://drive.google.com/open?id=1zRvJEGJjTpKvvzel-J0agh3fKB...](https://drive.google.com/open?id=1zRvJEGJjTpKvvzel-J0agh3fKBn9aqGy)
There are already a few other (non-real time) players in this field.
I'm hoping to spin this up as a small social app or filter and sell it so I
can fund my capital-intensive film making startup.
I think this tech _should_ be widely available. Not only will it make people
think and question more, but it'll be fun too.
It's also amusing (and terrifying) to see all the anti-1st Amendment
legislation aimed at combating deep fakes. The truth is that there is nothing
to fear except our freedoms being taken away.
~~~
ipython
I don’t think you should dismiss these concerns so quickly. It sounds like you
have experience in this field. Perhaps that would make it easier for you to
spot potential fakes? What about your grandma? How would she fare?
And besides, the end game isn’t to fool everyone into believing a fake. No,
the more insidious goal is to flood the zone with enough dis- and
misinformation to overload our ability to filter it. It’s like gaslighting at
scale- at some point you just stop being able to process information because
it’s so voluminous and of dubious quality that you stop believing any of it.
~~~
bransonf
> What about your grandma? How would she fare?
Grandma’s still falling for the phone and mail scams. No amount of legislation
is going to fix the reality of technological illiteracy among the oldest
adults.
Deepfakes might fool some of today’s adults who don’t quite understand, but we
are raising a generation that has turned into a meme: “Everything you read on
the internet is true” -Abraham Lincoln
I think the real silver lining here is that the internet is an alternate
reality. Many of us refuse to believe that, but social media has created
manufactured people. The only solution is to bring people back to the real
world. The people are real here. Their opinion, no matter how controversial,
comes from a real mouth, and the face you see is the one they were born with.
If anyone forms their worldview based entirely on things they read on the
internet, they probably would be just as susceptible to our real world forms
of propaganda/gaslighting/ whatever you want to call it.
~~~
skybrian
This essentially means the web is too difficult for some users and they need
something else, like maybe an app store. Maybe some company will win big by
providing a safer (or apparently safer) alternative?
Previous examples: Gmail had a better spam filter. Apple and Google did a
better (though not perfect) job of protecting users from arbitrary code
execution, as did the web itself, way back when.
This doesn't happen all that often, but if it succeeds, power users will scoff
at how nerfed the new thing is.
I'm reminded of an old story [1] about an early game for children:
> I found myself unable to reconcile the idea of a virtual world, where kids
> would run around, play with objects, and chat with each other without
> someone saying or doing something that might upset another. Even in 1996, we
> knew that text-filters are no good at solving this kind of problem, so I
> asked for a clarification: "I’m confused. What standard should we use to
> decide if a message would be a problem for Disney?"
> The response was one I will never forget: "Disney’s standard is quite clear:
> No kid will be harassed, even if they don’t know they are being harassed."
But maybe text filters will be better if you throw enough machine learning at
the problem?
[1] [http://habitatchronicles.com/2007/03/the-untold-history-
of-t...](http://habitatchronicles.com/2007/03/the-untold-history-of-toontowns-
speedchat-or-blockchattm-from-disney-finally-arrives/)
~~~
bostik
> _This essentially means the web is too difficult for some users and they
> need something else_
I think you are on the right track, but not going all the way. The bigger
issue here is that media literacy is _incredibly_ hard. You need a wide body
of knowledge, essentially an educated[ß] mind, and an almost unhealthy
skepticism against absolutely everything you read, see or hear.
As a short cut, a good first approximation is to be a cynic. Assume everyone
is pushing their own agenda, and that even at best you can only see half of
it.
(If you are asking yourself what agenda _I_ am pushing with this post, well
done. You're off to a good start.)
ß: The ability to question information, conduct research, cross-check the
results of research, and have the mental agility to identify your own biases -
these are not natural tendencies, but learned traits. We can lump them all
under the "educated" label, even if that's not the optimal term.
~~~
skybrian
Yes, it is hard. But I think it's not just education, but epistemic humility.
We have no direct knowledge of what's going on in other parts of the world.
The past is often not recorded accurately, the future often unpredictable. So
our default assumption should often be that we don't know what's going on.
Highly educated people in the grip of an ideology can dream up conclusions far
beyond the limited and unreliable evidence we get from media consumption. They
are often rewarded for this.
And one of these ideologies is the myth of rugged individualism (or competent
adulthood), the idea that each person can and should figure out what's going
on by themselves. It's obviously not true of children and the elderly, but
most of us outsource a lot of our thinking. Living in modern civilization
inherently means having a lot of trust and dependency on others.
The ideals of media literacy are simply unrealistic for most people. It's not
clear what the alternative is, though.
------
blunte
This pretty much describes the end of the internet as we know it. Even before
AI generated "content", the internet has become lower signal-to-noise as time
has moved forward.
It is already the case that for many everyday searches I do, I am forced to be
very creative in my search phrase in hopes of filtering out the garbage sites
that manage to dominate the first results page.
Watching less tech-savvy people use computers (such as elder family) is
enlightening and frightening. They either cannot tell real content from fake
content, or worse they are satisfied with what they get from obviously
suspicious sites.
Maybe my concerns of polluted websites are less relevant considering the
general population is getting more of their "information" from within Facebook
rather than even going to search engines (of which they use the default for
their browser!).
~~~
seibelj
New companies and technologies will be invented to solve this problem. Every
problem has a solution. You are falling into the same trap that has caught
humans since the dawn of man. The printing press, the car, the internet, and
now “deep fakes” will cause hand wringing but will not destroy us. Just give
it time.
~~~
glenstein
>The printing press, the car, the internet
These all came with real tradeoffs and we've just accepted them. The printing
press and the internet, in their own ways, sped up the world and shortened
attention spans. Cars changed cities. The benefits have been there, but we've
engaged with or ignored the harms posed by changes in different ways, and the
same unconscious trade is going to happen again.
------
Abishek_Muthian
Considering video, audio are accepted as evidence in most courts without any
independent verification; I'm seriously worried about the implications of deep
fake on justice.
There is an urgent need gap[1] on detection of deep fakes.
[1]:[https://needgap.com/problems/21-deep-fake-video-detection-
fa...](https://needgap.com/problems/21-deep-fake-video-detection-fakenews-
machinelearning)
~~~
bostik
Risky Business did a really good interview on the subject early last year[0].
Law profession is already aware of the potential problems.
Me? I welcome the future where audio and video evidence are just another piece
of evidence.
0: [https://risky.biz/RB489/](https://risky.biz/RB489/)
------
lordgrenville
We've had fake photographs for decades and it hasn't seemed to make a big
difference in politics. But I think that's because in the past you had
gatekeepers, like the editors and factcheckers of "respectable" publications,
who would ascertain the legitimacy of a picture before using it. They'd make
mistakes sometimes, but got it right 99% of the time.
Now news spreads horizontally through social media and group chats. It's
common to see, say, a clip purportedly of police brutality right now in
country X, which is actually 7 years old and from country Y. Someone will
correct it, someone will dispute the correction, whatever - the damage is
done. So I don't think deepfakes will move the needle much. The real damage is
the end of gatekeeping, and that's already happened.
~~~
QuantumGood
We haven't had high-velocity media for decades, and information is easier to
make extremely false and get believers than photographs. You can't create a
complete narrative through photos alone. You need associated information.
------
YarickR2
Well, this probably means the end of unsigned content; every line of text,
every article, etc. should be / will be signed by a living person's key, or it
will be heavily penalized in search engine output; governments will run
keystores with citizens' keys, and content signatures will be checked against
such keystores to ensure content authenticity (or lack thereof). Time to
reopen GPG, I guess.
------
joe_the_user
I was experimenting with this stuff and you can too here [1]. It's kind of
impressive but not convincing. The main impression it gives is it doesn't know
what subjects affect which objects, what one kind of relation implies about
another relation and so-forth. Still, it gives a sequence of words with a
consistent "feel" which is impressive.
However, I would still only find its text convincing for producing ... a
marketing blog, since such things just seem like a contentless stream of
buzzwords to begin with. If anything, it gives a certain idea of how marketing
speech requires something like a stream of words with a certain feeling, but
not real logic.
[1] [https://talktotransformer.com/](https://talktotransformer.com/)
~~~
jeffshek
I built [https://writeup.ai](https://writeup.ai) to help with that, but while
it helps, it still feels like it's missing "something" at times.
~~~
joe_the_user
The thing is that I think language over a longer term is about actually
communicating a structure to world - in a way that requires knowledge of the
world. It is just that over a shorter period, a good portion of language isn't
about this communication but about just certain coloring of communication.
Which is to say that I think this lacks more than it seems at first blush.
------
achow
OTOH: I'm pretty excited that these technologies are maturing so that they can
be harnessed for empowering common people, or workers in enterprises to make
their content beautiful, simple & into effective stories.
One example: Pentagon's slide decks.
[https://archive.org/details/MilitaryIndustrialPowerpointComp...](https://archive.org/details/MilitaryIndustrialPowerpointComplex)
------
QuantumGood
The effects of an ever-higher velocity of fake news isn't clear, but there is
no "solution".
Real news not believed, fake news believed has been an unsolved problem for a
long time. For example, the history of medical advances show doctors not
believing exceptionally solid science in many cases.
There are a number of quotes about progress along the lines of "First they say
it's impossible, then they fight it, then they say they believed it all
along".
This is a people problem and a media velocity problem going back to the famous
quote "A lie travels around the globe while the truth is putting on its
shoes."
You can't stop people from believing a lie after it has been released.
Removing the lie doesn't help. "Reputable" sources not repeating the lie
doesn't help.
------
this_was_posted
We shouldn't talk too much about our skepticism on this becoming problematic.
Otherwise believable skeptic text can be generated by malicious actors through
AI once it does become problematic so that they can drown out real concerns
with virtual trolls.
------
shams93
This has been true long before ai. Writing and journalism have always been
weaponized. The opposite could be true in that it's easier to recognize
automated fake news than well crafted hand done human deception.
------
jon_akimbo
People very concerned about this should spend some time reading ${opposing
political group} social media. As you'll discover, people will believe what
suits them. Veracity is of remarkably little interest to a remarkably high
percentage of the population. Most people, and this is not an exaggeration,
would sooner kill/die than change their mind. And if that's true, then
consider the mental acrobatics individuals are willing to go through before
they even reach that point.
------
zahrc
I have personally yet to be convinced by AI-generated media content (read:
articles, videos, photos). Maybe it's the bias of knowing that they are AI-
generated, but to me it's equivalent to buying a cheap knockoff iPhone from
China: it'll work if you don't really think about it, or do not know the
difference.
We have to top-up education and teach media-awareness in school, while giving
badly researched and generally toxic content the cold shoulder.
------
hertzdog
I try to take a different direction. Let's suppose some AI-generated content
is better than human-created content (IMHO we are quite there). Let's go
further: maybe in the future we will trust again only some "trusted sources"
(newspapers? HN?) while everything else will be not taken into account because
the quality will be low (like some comments saying the source is not in the
industry...).
~~~
account73466
>> maybe in the future we will trust again only some "trusted sources"
(newspapers? HN?) while everything else will be not taken into account because
the quality will be low (like some comments saying the source is not in the
industry...)
Do you realize that current conversational NNs are better at making comments
than you?
~~~
hertzdog
Yes. That’s the point :)
------
greggman2
I often wonder if Ranker, Thrillist, Collider, Vulture are all AI based. The
seem to show up in every search
------
nightnight
All tech demos without strong use cases yet. Machine-generated content,
spinning content, etc. are black hat tactics employed for decades in order to
game Google. Works (just look at what crap ranks high) but the foundation for
new huge industries? No.
------
100011
I am going to take the contrary opinion here. AI-generated fake content will
inflate away the informational value transmitted by whatever it is trying to
fake. It's like 'deep fakes': they'll just destroy trust in video.
------
seddin
I might be wrong, but on some social networks as Reddit, many comments or
shared links seem too weird, like if they were not real, and some posts that
get resposted always end up with the same comments or similar words.
------
r0h1t4sh
Looks like this would be the new form of spam we will have to fight.
------
daxfohl
How do we know this article was not generated by a bot?
------
unityByFreedom
Doubtful. It's easier to photoshop fake content and we haven't seen that get
out of control.
------
EGreg
Wow that AI-generated blog text actually made sense! The best I have ever
seen. How did they do it?
------
HocusLocus
muching virtual popcorn
| {
"pile_set_name": "HackerNews"
} |
A data visualization curriculum using Vega-Lite and Altair - Anon84
https://github.com/uwdata/visualization-curriculum
======
randyzwitch
This is a really comprehensive tutorial, one of better uses of Jupyter
Notebooks
| {
"pile_set_name": "HackerNews"
} |
Show HN: Pass a URL, get summarized content - meeper16
http://54.86.121.4/recommend/getSummary.html
======
peter_l_downs
Not summary, just sentence ranking and extraction. Still cool but not anything
new. Sweet side project though! For anyone wondering how this was done, I have
a similar project up at [http://bookshrink.com](http://bookshrink.com) (source
code at
[https://github.com/peterldowns/bookshrink](https://github.com/peterldowns/bookshrink)),
although I don't fetch article text.
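For anyone curious what "sentence ranking and extraction" looks like in practice, here is a minimal, illustrative sketch (not bookshrink's actual code): score each sentence by the document-wide frequency of its words, then keep the top-ranked sentences in original order.

```python
# Toy extractive "summarizer": rank sentences by average word frequency.
# All names and the scoring heuristic here are illustrative assumptions.
import re
from collections import Counter

def top_sentences(text, n=2):
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(s):
        toks = re.findall(r'[a-z]+', s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n]
    # Return the winners in their original document order.
    return [s for s in sentences if s in ranked]

doc = ("Cats sleep a lot. Cats chase mice. "
       "The weather was mild yesterday.")
print(top_sentences(doc, n=1))  # -> ['Cats chase mice.']
```

It is "summarization" only in the loose sense being criticized above: no sentence is ever rewritten, just selected.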
~~~
mck-
I did a similar hack a while back, which summarizes a piece of text in a
single sentence. I'm that lazy.
[https://github.com/mck-/oneliner](https://github.com/mck-/oneliner)
~~~
gravypod
I'm writing a book right now. Would you mind if I used your program to make
the title and the chapter titles?
------
despinozist
You should render the output as actual JSONAPI
([http://jsonapi.org](http://jsonapi.org)):
{
"links": {
"self": "...",
"prev": "...",
"next": "..."
},
"data": [],
"included": []
}
So that we can discover the API beyond the form. Use
[http://www.iana.org/assignments/link-relations/link-
relation...](http://www.iana.org/assignments/link-relations/link-
relations.xhtml) as the starting point for link relations ideas.
~~~
wyldfire
I was pretty quick to knee jerk ask myself "Why is this any better than any
other schema?" (I was not convinced that "API discovery" was, by itself, a
good enough case).
Then I read the very practical first sentence of the jsonapi page: "If you've
ever argued with your team about the way your JSON responses should be
formatted, JSON API can be your anti-bikeshedding tool." That alone is
probably huge. May not mean much for individual projects, but it's good enough
for me to bookmark for the future.
~~~
fishnchips
Can't help but think of [https://xkcd.com/927/](https://xkcd.com/927/) ;)
Not sure if standards like this can prevent bikeshedding. You can always
bikeshed about the need to stick to any particular standard. One
counterexample to what I'm saying may be one standard Go language formatting
with gofmt but that was introduced very early on and became a part of the
culture. Too late for that with JSON APIs.
------
ComputerGuru
I'm not sure I am seeking the same wow-factor results from the service that
everyone else is raving about.
I submitted this link [0] which was on the HN homepage a couple of days ago
and the results that I got back were more either the least important bits or
in some ways implying the _opposite_ of the article, so either the writing was
really bad or the algorithm needs some work.
Submitting a "simpler" less-ranty article [1] was even less successful,
leading to paraphrases of less-important sentences as the results.
Then I submitted the BBC article from this morning about Philae [3] and
received much, much better results. I think it works best on articles that
have single sentences that clearly sum up the gist of the post as a single,
hard fact and doesn't work with anything that works towards logical
conclusions or tries to build an argument. Which makes sense, because this
isn't an AI and can't actually deduce anything.
0: [https://neosmart.net/blog/2016/on-the-growing-intentional-
us...](https://neosmart.net/blog/2016/on-the-growing-intentional-uselessness-
of-google-search-results/)
1: [https://neosmart.net/blog/2016/when-is-the-2016-retina-
macbo...](https://neosmart.net/blog/2016/when-is-the-2016-retina-macbook-pro-
coming-out/)
3: [http://www.bbc.com/news/science-
environment-35559503](http://www.bbc.com/news/science-environment-35559503)
~~~
detaro
> _I 'm not sure I am seeking the same wow-factor results from the service
> that everyone else is raving about._
Um... where is someone raving about the result? Most of the comments seem
neutral to negative to me?
~~~
lpage
> _Most of the comments seem neutral to negative to me?_
Shameless plug, thanks to HackerMoods [1] I can quantify that statement: 0.85
neutral, 0.08 positive, 0.07 negative. The average Show HN is 0.17 positive
and 0.04 negative, so your assessment is in line with the numbers.
[1]:
[https://news.ycombinator.com/item?id=11188633](https://news.ycombinator.com/item?id=11188633)
~~~
Shamiq
I'm color blind, and the charts you use are unintelligible to me.
~~~
lpage
Sorry about that. Design isn't my wheelhouse but I updated it to what google
tells me is a colorblind friendly palette. I would definitely appreciate it if
you could take a look and let me know how it is.
------
xlayn
I would risk to say it works based on assigning information weight to words,
number of non repeating and the way they are related and then filter top down.
I did try it with a link I particularly like
[http://multivax.com/last_question.html](http://multivax.com/last_question.html)
with the following response.
{"1":"nor could anyone for the day had long since passed zee prime knew when
any man had any part of the making of a universal ac","2":"zee prime's
mentality was guided into the dim sea of galaxies and one in particular
enlarged into stars","3":"he gave no further thought to dee sub wun whose body
might be waiting on a galaxy a trillion light-years away or on the star next
to zee prime's own","4":"the universal ac said man's original star has gone
nova","5":"the universal ac interrupted zee prime's wandering thoughts not
with words but with guidance"}}
------
ShinyCyril
Hmm didn't have much luck with: [https://mikeanthonywild.com/stopping-
blocking-threads-in-pyt...](https://mikeanthonywild.com/stopping-blocking-
threads-in-python-using-gevent-sort-of.html)
{
"1":"betterthreads provides an enhanced replacement for the an enhanced replacement for the python this isn't actually a true thread instead it uses gevent to",
"2":"the widely-accepted solution is to set a timeout on our blocking functions so we can periodically check a which we set from the main thread to indicate we want the child thread to stop",
"3":"if the thread is still alive the when the *timeout* argument is not present or ``none`` the operation will block until the thread terminates",
"4":"`runtimeerror` if an attempt is made to join the current thread as that would cause a deadlock",
"5":"`join` a thread before it has been started and attempts to do so raises the same exception"
}
That said, I think to summarise that particular would require a certain level
of domain expertise, something which a general bot couldn't provide.
------
phdsummary
The summary of that guy's Phd summary [http://jxyzabc.blogspot.com/2016/02/my-
phd-abridged.html](http://jxyzabc.blogspot.com/2016/02/my-phd-abridged.html)
{"1":"for various reasons i also spend a lot of weekends in new york and make
more friends with people working on data and journalism","2":"my friend jean-
baptiste who reads it asks why my blog is so good but my paper drafts are so
bad","3":"at the beginning of this year i start telling people that i wish i
had more female friends since i realize that there are many fewer women around
me than before","4":"to keep myself from thinking about my uncertain future
all the time i start a cybersecurity accelerator cybersecurity factory with my
friend frank wang with the goal of helping research-minded people start
companies","5":"i am too lazy to make many friends so i spend my free time
reading cooking doing yoga and running"}
------
Animats
Summarization used to be a feature in Microsoft Word through Word 2007, and it
did a decent job. That feature was taken out in Word 2010.[1]
[1] [https://support.office.com/en-US/article/Automatically-
summa...](https://support.office.com/en-US/article/Automatically-summarize-a-
document-B43F20AE-EC4B-41CC-B40A-753EED6D7424)
~~~
skewart
I didn't know that. Any idea why it was taken out?
~~~
lallysingh
I suspect that office has to garbage collect features once in a while.
Otherwise the maintenance cost would be (more?)horrible.
------
fiatjaf
[http://52.90.112.133/recommend/app/getSummary?query=http%3A%...](http://52.90.112.133/recommend/app/getSummary?query=http%3A%2F%2Ftomwoods.com%2Fpodcast%2Fep-597-can-
the-private-sector-protect-against-crime-this-case-study-will-blow-your-
mind%2F&getSummary=getSummary)
------
_RPM
An array would be a better choice of structure for the sentences instead of
hard coding the indexes.."1"...
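To illustrate the point: the service returns an object keyed by string indexes, where a JSON array would carry the same ordering implicitly. A quick sketch (the `summarized_text` field name is taken from the responses quoted elsewhere in the thread; the rest is assumed):

```python
import json

sentences = ["first ranked sentence", "second ranked sentence"]

# Shape the service returns: string keys "1", "2", ... encode the order.
as_object = {"summarized_text": {str(i + 1): s for i, s in enumerate(sentences)}}

# Array-based shape: order is implicit, no hard-coded indexes needed.
as_array = {"summarized_text": sentences}

print(json.dumps(as_object))
print(json.dumps(as_array))
```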
~~~
brudgers
I see your point and don't disagree.
Thinking about why someone might mike the choice to use text, text is more in
keeping with *nix philosophy. Not that I'm saying it's better, but grep is
pretty light weight and a lot of people use the command line and/or languages
other than Javascript. YMMV.
~~~
_RPM
I'm not sure I understand. JSON is text. JSON provides an array as part of the
grammar.
------
microcolonel
Good quality summaries, but it seems it caches pages based on their base URL,
and throws away the query parameters.
Some blogs use query parameters to distinguish between articles, so it makes
it kinda useless if you want to do more than one article.
------
andreygrehov
This is an off-topic, but I'd like to mention it.
I absolutely love the fact that the OP did not get a domain name for this
demo. This is an interesting "technique" I haven't seen for quite a while.
People tend to own and re-new tenths of domain names, which are just sitting
there for an "just in case" moment. This is a great example of how things can
really be simplified - spin up an instance, make a demo, shut the instance
down.
~~~
developer2
Good luck with the link still being usable in a month or a year. There's a
reason we use domain names for sharing. Not only because they are friendly to
read and remember, but also because IPs are typically far more transient than
domain names.
The IPs behind my projects have changed dozens of times over the years (new
server, changing hosting provider, adding a load balancer, etc.). A simple DNS
change allows the same domain name to follow the project.
I'm actually surprised HN permits links to IP addresses. While links posted
here are not guaranteed to point to the same content in the future anyway, it
is more likely that an IP address will change before the project is taken down
entirely. Search engine posterity and all.
------
shloub
« URL's must start with "http://" » «
[https://medium.com/@darrenrovell/all-journalists-need-to-
be-...](https://medium.com/@darrenrovell/all-journalists-need-to-be-data-
driven-6dfc73e420d5#.mz8vd1myq) »
------
hluska
This is an exact duplicate (even posted by the same person) of a link
submitted 11 hours ago.
[https://news.ycombinator.com/item?id=11190008](https://news.ycombinator.com/item?id=11190008)
Edit - it doesn't work for me either, or maybe it is just very slow?
~~~
meeper16
Yes, it is. I sent it out too late last night and thought more people might
want to see this in the morning.
~~~
gus_massa
I think this it's ok here. From the FAQ:
[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)
> _Are reposts ok?_
> _If a story has had significant attention in the last year or so, we kill
> reposts as duplicates. If not, a_ small _number of reposts is ok._
> _Please don 't delete and repost the same story, though. Accounts that do
> that eventually lose submission privileges._
------
mohaps
Any details about the backend/implementation?
shameless plugs for two similar projects(open sourced both) I did a while back
1) Algorithmic Summarizer:
[https://github.com/mohaps/tldrzr](https://github.com/mohaps/tldrzr) 2)
Readability Clone / Article Body Extractor with summary, significant image and
text :
[https://github.com/mohaps/xtractor](https://github.com/mohaps/xtractor)
Both are deployed on heroku and the urls are in the github readme files.
------
h1fra
Huum, not quite sure what I was expecting, but the results were not great :(
But I could see the use of this kind of service.
Also does not work with accentuated char.
------
an_ko
I'd like more details. How does it work?
------
LinkPlug
What is it built with? (Stack, Foss etc)
------
Mark_B
Fun with Lorem Ipsum:
[http://52.90.112.133/recommend/app/getSummary?query=http%3A%...](http://52.90.112.133/recommend/app/getSummary?query=http%3A%2F%2Fwww.lipsum.com%2Ffeed%2Fhtml&getSummary=getSummary)
~~~
meeper16
It seems to be multi-lingual
------
jack9
[http://www.slashdot.org](http://www.slashdot.org) and
[https://news.ycombinator.com](https://news.ycombinator.com)
{"summarized_text": {}}
------
franze
if you submit
[http://54.86.121.4/recommend/getSummary.html](http://54.86.121.4/recommend/getSummary.html)
to
[http://54.86.121.4/recommend/getSummary.html](http://54.86.121.4/recommend/getSummary.html)
you get
{"summarized_text": {"1":"insert any block of text or single url url's must start with [email protected]"}}
which is of course completely wrong
------
tuananh
urgh: empty
[http://54.86.121.4/recommend/app/getSummary?query=http%3A%2F...](http://54.86.121.4/recommend/app/getSummary?query=http%3A%2F%2Fbongdaso.com%2FThua-
Man-
City%252c-Klopp-v%25C4%2583ng-t%25E1%25BB%25A5c-lo%25E1%25BA%25A1n-x%25E1%25BA%25A1-_Art_160672.aspx&getSummary=getSummary)
------
LinkPlug
What are some alternatives to this?
~~~
jbeda
Check out [https://algorithmia.com/](https://algorithmia.com/). Stuff like
this plus a bunch more. Real business model so you can have more confidence in
it.
------
gkumartvm
Wordpress sites urls are not working !!
------
orliesaurus
Not really working as expected :(
------
dang
Url changed from
[http://52.90.112.133/recommend/getSummary.html](http://52.90.112.133/recommend/getSummary.html)
by submitter's request.
The puzzle that started complexity theory. - gnosis
http://cs.nyu.edu/shasha/outofmind/mccarthypuzzle.html
======
diiq
I guess I would have failed to create complexity theory with my solution ---
but everyone involved would have understood it, no long calculations required:
give the guard a lock and the spy a key. It's still a sort of one-way function
(easier to make a lock from a key than a key for a lock), but it doesn't
require squaring 100 digit numbers in the dead of night before deciding
whether to shoot a man.
~~~
Gupie
For what its worth, my solution was to give the guard a set of sealed
envelopes. Each envelope having a password written on the outside and a
password written on a piece of paper inside. The spy would give the guard the
word that is written on one of the envelopes, the guard would then open the
envelope and ask the spy for the password contained inside.
~~~
motxilo
How do I know that you, the creator of the password pairs, are not an enemy's
spy?
~~~
Gupie
How do I know that you, the selector of the 100 digit number, are not an enemy
spy?
~~~
motxilo
Every "good" spy chooses his X, and sends the corresponding Y to the guards.
No need to involve a 3rd person in between like in your solution.
------
Fargren
The "hint" is the solution. That would be annoying for someone who hadn't
figured it out.
~~~
redwood
Doesn't anyone else find the solution a bit problematic: sure great concept
but unless the guards have a black box tool which is secured and can churn out
Y from X, they will have to know the function to prove that Y was congruent
with an X...and if they know the function, so too would the enemy. On the
other hand if they have a secure tool that does this fine, but this seems to
simplify the problem considerably b/c the key is that the guard doesn't need
to know anything at all: the tool does all the work.
~~~
shub
Let f(x) = y. The guards have f and y. A spy gives a guard an x, and the guard
computes f(x) and checks it against his list of ys. If it's on the list, the
spy can pass. The enemy has f and y too, but it doesn't help them! f(x) is
trivial to compute but the inverse is hard, so the enemy knows exactly the
answer they want the guard to get and no idea how to make him get it!
One-way functions are very cool and form the basis of public-key cryptography,
although it's quite a bit more complicated than this example.
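The scheme described here can be sketched in a few lines. This is a toy illustration using modular squaring as the one-way function (as the puzzle's hint suggests); the tiny modulus and the variable names are my own assumptions, and real security would need a roughly 100-digit n.

```python
# Toy sketch of the guard protocol with f(x) = x^2 mod n as the
# one-way function. n is far too small here to be secure; it only
# illustrates the mechanics.
n = 101 * 103  # guard and enemy both know n (a product of two primes)

def f(x):
    return (x * x) % n

# The spy picks a secret x at home; only y = f(x) is given to the guards.
secret_x = 4242
y_list = {f(secret_x)}

def guard_check(x):
    """Guard computes f(x) and checks it against the registered y's."""
    return f(x) in y_list

print(guard_check(4242))  # -> True: the spy passes
print(guard_check(1234))  # -> False: the enemy, knowing y, still can't invert f
```

As the thread notes below, in this form the x is a one-time password: once spoken to a guard it should be crossed off and replaced.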
~~~
colanderman
What I don't get is: if the guards aren't to be trusted, how can the spies
safely tell them x?
I suppose this is why public-key cryptography was invented ;)
~~~
nandemo
They can throw _x_ away after using it. Alternatively, they can cross the _y_
off their list after matching it (making it a one-time password).
The idea is that the guards aren't malicious, just stupid.
------
anonymoushn
The solution doesn't seem to work on its face. If the spy simply gives the
guard Y, the scheme is the same as having a bunch of fixed passwords. If the
spy gives the guard X, X becomes public to the enemy. Some machine that lets
the spy input X (with the input hidden to the guard) and displays Y to the
guard would work, assuming the guard doesn't snoop around to discover X.
~~~
blahedo
Well, in this formulation, they're one-time passwords; but once the spy is
back in the country they could get a fresh one. Of course the more complete
solution would be public-key encryption, but they didn't know that yet in 1958
---this idea of a one-way function is basically a precursor to that.
~~~
eru
And nowadays we would probably use the discrete logarithm problem or factoring
integers as the one-way functions of choice.
~~~
cdavidcash
The suggested solution (modular squaring) already reduces to factoring.
[http://en.wikipedia.org/wiki/One-
way_function#Modular_squari...](http://en.wikipedia.org/wiki/One-
way_function#Modular_squaring_and_square_roots)
(And we'd use a cryptographic hash function anyway.)
~~~
eru
Thanks!
------
thret
In real life, similar problems were solved with a shibboleth.
<http://en.wikipedia.org/wiki/Shibboleth> Americans used 'lollapalooza' in
WW2.
------
kanak
Somewhat related, here's the letter that Kurt Godel wrote to John von Neumann
where he describes a problem very similar to the P vs NP problem:
[http://blog.computationalcomplexity.org/2006/04/kurt-
gdel-19...](http://blog.computationalcomplexity.org/2006/04/kurt-
gdel-1906-1978.html)
------
mcknz
original text:
_Out of their Minds: The Lives and Discoveries of 15 Great Computer
Scientists_
<http://books.google.com/books?id=-0tDZX3z-8UC&lpg=PA79>
Ask HN: What are your general thoughts on ActionScript? - lakeeffect
Overall : What do you know or think of ActionScript?
======
taitems
Making the transition from JavaScript to Actionscript 2.0, an vice versa, is
incredibly easy. Once you have your head wrapped around any form of
ECMAScript, it's really not too difficult at all. The help files have provided
more assistance than Kirupa or FlashDen or any of those sites ever have.
AS3.0? That's another kettle of fish entirely.
------
lakeeffect
<http://en.wikipedia.org/wiki/ActionScript>
~~~
lakeeffect
Does anyone know of a site or application that provides wikipeidia stlye data
written in ActionScript?
Pinspire.com - daveambrose
http://www.pinspire.com/hot
======
brandoncordell
I know imitation is supposed to be the sincerest form of flattery™ but this is
ridiculous. This is just a direct copy of Pinterest. I really hope this is a
joke or someone's development practice.
------
robwgibbons
I'm not one to rain on anyone's parade, and great artists steal, etc etc, but
isn't this a complete ripoff of Pinterest?
~~~
grizzlylazer
Yea I have to second you on that as I was completely fooled for a second...how
is this different from Pinterest?
~~~
dlf
It's European ;-)
But no. As far as I can tell it's Pinterest. Maybe they're hoping to get
acquired?
~~~
brandoncordell
acquired... or sued?
------
gf3
That's nuts, I can't believe they even copied the name.
DriveAssist - The Software That May Save Your Life While You're Driving - fungnyitfen
http://yaplc.blogspot.com/2008/10/driveassist-software-that-may-safe-your.html
======
bdfh42
My phone comes with an "off" button - you just have to press that before
periods of time when you don't need any interruptions - works in and out of
the car - and no idiot at the office can decide that their call is an
"emergency" and thus bypass the process.
Seriously - when does this "nanny" idiocy stop?
PrettyPing - colinprince
http://denilson.sa.nom.br/prettyping/
======
mrmondo
Note that the link to curl on the website is incorrect, you need to curl the
raw file to avoid downloading the 302 redirection message:
[https://raw.githubusercontent.com/denilsonsa/prettyping/mast...](https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping)
PR submitted
[https://github.com/denilsonsa/prettyping/pull/3](https://github.com/denilsonsa/prettyping/pull/3)
~~~
ShaneOG
Alternatively just add -L to curl's command line options.
------
AlexeyMK
OS X users, looks like it's on homebrew:
[http://brewformulas.org/Prettyping](http://brewformulas.org/Prettyping). Just
tried it, worked for me.
brew install prettyping
------
leni536
Smart use of unicode block elements (2581-2588 if I'm not mistaken), nice
trick with the background for that double graph.
Edit: Is there a "block-width" space in unicode? Like it's nice if one can
assume a monospace font, but it would be nice to draw unicode-art using these
characters and a space with the same width:
Edit2: Hey, HN deleted my characters, I meant 2591-2593 (25%, 50%, 75%
shading) and 2588 (full block). What is missing is the 0%.
~~~
anon4
Look at U+2000-2008:
M M -- en quad
M M -- em quad
M M -- en space
M M -- em space
M M -- three-per-em space
M M -- four-per-em space
M M -- six-per-em space
M M -- figure space
M M -- punctuation space
I think you want either 2001 - em quad or 2007 - figure space
~~~
leni536
I tried them out; they don't seem to work with the fonts I tried. I skimmed
the unicode standard for "block characters" and I didn't see any constraint
on the width of the block characters.
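Mapping values onto those eight block elements (U+2581 through U+2588) is simple once you scale to the maximum; a minimal illustrative sketch, not prettyping's actual awk code:

```python
# Map a list of ping latencies (ms) to Unicode block elements
# U+2581 (lower one-eighth block) through U+2588 (full block).
BLOCKS = [chr(0x2581 + i) for i in range(8)]  # ▁▂▃▄▅▆▇█

def sparkline(values):
    """Render values as a one-line bar graph, scaled to the max value."""
    peak = max(values) or 1
    # Scale each value into one of the 8 block heights (index 0..7).
    return "".join(BLOCKS[min(7, int(v / peak * 7))] for v in values)

print(sparkline([12, 15, 14, 80, 13, 12, 200, 14]))
```

A value of 0 maps to the one-eighth block, so a true 0% glyph is indeed missing from the range, as noted above.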
------
vog
This project looks great, but I find the comparison point "How easy to
install?" to be very misleading. It says that prettyping is easier to install
than the alternatives, but all mentioned alternatives are all readily
available as packages for your distro.
Also, I prefer installing via the package manager because of the integrity
check. To do the same with the "curl" approach, I have to download the code,
import the developer's GPG key, download the signature and run GPG ... Oh
wait, there is no signature file for Prettyping. Not even the Git tag "v1.0.0"
is signed. So I have to download from multiple sources, or email the author
and ask for the expected hash value.
This process is much easier if prettyping was included in the distros. So the
other tools are actually better off with regard to "How easy to install?"
I wish the project site would be more honest in that regard, or at least add
another comparison point "How easy to install _safely_?"
~~~
denilsonsa
I'm the author. Sorry about the lack of signature, I'm not well versed in GPG.
Also, I suppose the man-in-the-middle issue is mitigated by downloading
directly from GitHub over https. Unless there is something else I'm missing
(very likely, feel free to enlighten me).
Sure, I'd love to have it packaged on several distributions (I know Arch Linux
already has it; and also brew on Mac OS X), but I can't do it myself. I hope
users from other distros find it useful and contribute packages to their
distros.
Still, I wrote that comparison with good faith and based on my own experience.
For instance, I once wanted to run it on a university computer that only gave
me normal user access. I couldn't install anything outside my home directory,
and I couldn't rely on package management.
"How easy to install?" could be renamed to "How easy to install from
scratch?", because everything is essentially trivial to install using a
package manager.
~~~
vog
_> Also, I suppose the man-in-the-middle issue is mitigated by downloading
directly from GitHub over https. Unless there is something else I'm missing
(very likely, feel free to enlighten me)._
There is no substitute for end-to-end encryption, from you, the author, to me,
the user. The only generally accepted relaxation is end-to-encryption from the
maintainer (e.g. Debian maintainer) to the user - which is what you have in
the distros.
Compared to those best practices, the "HTTPS from GitHub" has the following
flaws:
1) You have to trust GitHub. If GitHub is hacked, or starts to behave like
SourceForge, you are doomed and nobody will notice.
2) Unless all of your users do certificate-pinning, a compromised CA (or a
malicious CA) may be used to issue an alternative SSL certificate for GitHub,
which is then used to deliver malware.
It may seem implausible that anyone would go that long way to attack your
prettyping project directly. However, it is very attractive to attack GitHub
as a whole and to manipulate all hosted programs systematically.
_> Sure, I'd love to have it packaged on several distributions (...), but I
can't do it myself. I hope users from other distros find it useful and
contribute packages to their distros._
Maybe it helps to ask them. I know that Debian has a mailing list for that.
Sure, you still need to find volunteers if you can't do the packaging on your
own. But maybe there are people willing to do that, who just need a little
more motivation.
_> "How easy to install from scratch?"_
Agreed, that would be a much better wording.
------
atmosx
There's a redirect and 'curl' complaints about it. To allow redirects:
curl -L -O
[https://github.com/denilsonsa/prettyping/raw/master/prettypi...](https://github.com/denilsonsa/prettyping/raw/master/prettyping)
------
raimue
As listed in the comparison, a similar tool would be noping, which is packaged
in many distributions already (Debian/Ubuntu: oping,
ArchLinux/MacPorts/Homebrew: liboping).
[http://noping.cc/](http://noping.cc/)
------
denilsonsa
Hey, I'm the author of prettyping here! I'm a bit busy these days, but I'll
take a look at the comments here and the pull requests at GitHub. In fact, I
prefer using pull requests and issues on GitHub.
------
owenversteeg
Hm, looks really cool, but I'm running into issues with it and cw (color
wrapper - [http://cwrapper.sourceforge.net](http://cwrapper.sourceforge.net))
[edit] Fixed - to fix yours edit /usr/local/lib/cw/ping and comment everything
but these lines:
#!/usr/local/bin/cw
path /bin:/usr/bin:/sbin:/usr/sbin:<env>
usepty
------
oakwhiz
Pretty cool - it reminds me of the Cisco IOS ping command.
------
gcb0
the irony is that, just because the original was a boring stream of text,
everyone gets to create spiffy, non-extensible versions.
------
runholm
I am colorblind.
~~~
zeeZ
I am nearsighted.
The subject here is prettyping and not us, though. Try: "prettyping's color
scheme is not compatible with my specific type of color blindness and I would
like to suggest the author add additional color options". Sounds less
egocentric IMO.
~~~
denilsonsa
Indeed, feel free to suggest alternative color schemes. Also, prettyping
already has a --nocolor option.
EDIT: On a second thought, prettyping uses the standard 16 terminal colors, so
any user can customize the color scheme in the terminal itself.
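The remapping works because programs like prettyping emit only the standard ANSI color slots, not fixed RGB values; the terminal decides what each slot looks like. A small illustrative sketch (not prettyping's own code):

```python
# The 16 standard terminal colors are selected with ANSI SGR escape codes:
# 30-37 for the normal foreground colors, 90-97 for the bright variants.
# The terminal emulator decides what each slot actually renders as, which
# is why a user can remap the palette without the program changing at all.
NAMES = ["black", "red", "green", "yellow", "blue", "magenta", "cyan", "white"]

def colored(text, slot, bright=False):
    """Wrap text in an SGR color code for one of the 16 standard slots."""
    code = (90 if bright else 30) + slot
    return f"\033[{code}m{text}\033[0m"  # \033[0m resets attributes

for i, name in enumerate(NAMES):
    print(colored(name, i), colored(name + " (bright)", i, bright=True))
```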
------
ademarre
It took me a moment to realize the name derivation was pretty + ping. My eyes
first grabbed onto "typing", then "pretty", and for an instant considered if
it might be a portmanteau of those. I didn't catch on until actually reading
the first sentence on the page.
~~~
david-given
_Pretty Ping_ is the name of a minor character from Barry Hughart's utterly
excellent book _Bridge of Birds_.
[https://www.goodreads.com/work/quotes/958087-bridge-of-
birds...](https://www.goodreads.com/work/quotes/958087-bridge-of-birds-a-
novel-of-an-ancient-china-that-never-was)
------
seletskiy
Beautiful colored unicode output. But why on Earth is it implemented in
bash/awk? It's completely unmaintainable and unfriendly to contributors. Just
look at how GitHub syntax colouring gives up on line 46 of the prettyping
script.
I mean that it doesn't sound like the right tool for the job, and the
argument that it can just be curl'ed and executed doesn't sound like a good
one.
curl'ing binaries is not the way systems should be configured, while packages
are. And if software is packaged, then it doesn't actually matter (from an
installation-usability standpoint) whether it uses bash/awk or a more
convenient language (python, golang, whatever). But it will make a huge
difference for maintaining and further developing the software.
Maza – Like Pi-hole but local and using your operating system - tanrax
https://github.com/tanrax/maza-ad-blocking
======
hnarn
I've been using [https://nextdns.io/](https://nextdns.io/) for a while and I
really like it. You can do DNS over HTTPS through Firefox (sadly not on an OS
level in Windows for example, but that's fine -- I'm sure OS level support
works better on Linux), and it supports a lot of user-level customization. You
can add and remove entire blocklists, you can black/white-list specific
domains, see logs of your blocks, some analytics, create your own redirects
etc. and it doesn't cost you a thing. The main website does a pretty good job
of explaining the selling points.
You can use it as-is but if you want user-specific configuration you'll get a
custom URL that looks something like
"[https://dns.nextdns.io/c8g88a"](https://dns.nextdns.io/c8g88a"), and
whatever comes in that way will use your settings and will be logged as per
your configuration (of course, you can disable logging).
~~~
darkteflon
I’ve just looked into this - it looks excellent. Can I ask: is this an all-
round superior solution to running your own pi-hole?
I set up dual redundant pi-holes on raspberry pi 4s on my home network but
switching all devices to NextDNS would give me access to filtered DNS even
when away from home, plus save me the trouble of running two raspis (including
two Ubuntu instances) just for that purpose.
Could anyone knowledgeable in such things suggest any downsides to a wholesale
switch?
~~~
jlkuester7
I recently spent a bunch of time comparing NextDNS vs PiHole. The reality is
their feature sets are pretty close, but I eventually settled on NextDNS and
here were some of my takeaways:
NextDNS Pros:
* Can use NextDNS on any network (thanks to their apps or just regular DNS-over-HTTP/TLS).
* (Could get similar functionality on PiHole with a remote hosted PiHole + VPN, but much more complex to setup)
* NextDNS allows for multiple different configuration setups per account (so you can fine-tune your blocking/filtering differently for different devices).
* (PiHole AFAIK only supports a single configuration)
* NextDNS IMHO had the superior UI. With more powerful config options.
* In reality with some extra manual config/coding you could probably get PiHole to do most of what is in the config for NextDNS, but it would take some work.
PiHole Pros:
* PiHole is open source.
* The NextDNS server code is closed-source, but they do have an open-source CLI client.
* PiHole is self-hosted (much better from a privacy perspective).
* But you do get all the downsides of being responsible for hosting something as central as a DNS server yourself...
~~~
donclark
Another PiHole pro is that it can work for every device in your house (if you
set it up that way).
~~~
woadwarrior01
You could also set up PiVPN[1] on the same Raspberry Pi running Pi-hole with
Wireguard and set up all your mobile devices to automatically connect back home
when they're off the home wifi. I've had this setup running for a couple of
months now and couldn't be happier with it.
[1]: [https://github.com/pivpn/pivpn](https://github.com/pivpn/pivpn)
~~~
doctoboggan
I am using pihole and WireGuard. How did you set it up so that you
automatically connect back home when you are off your home network?
~~~
woadwarrior01
The WireGuard apps for iOS and OSX have a configuration section titled “On-
demand activation” that lets you do this. On the iOS app, I have it set to
activate on cellular connection and WiFi connections to routers if the SSID !=
my home router’s SSID. Likewise on OSX, except for the cellular option.
~~~
doctoboggan
Awesome, thank you. I am not sure how I missed that previously.
------
swinglock
Who is this for, what's the point?
If you're using a computer on which installing this software is an
alternative, you can install a web browser with an ad blocker, which performs
much better than DNS based filters.
If you're not using such a computer, Pi-Hole provides DNS filtering and this
software doesn't.
What's the use-case between these two that isn't already covered?
~~~
huhtenberg
Just for the sake of argument - to block trackers that are built into other
software, eg. chat clients and some such.
~~~
lonelappde
Pi-hole already does that. You can run pi-hole on your local OS with Docker.
It's 5 minutes to install.
~~~
brigandish
Aside from competition being a good thing, Docker itself introduces attack
vectors.
~~~
swinglock
Surely not more so than curling scripts from the web and executing them as
root, which is the exact install procedure described for this program.
~~~
mega_tux
IMHO, it's way easier to check the script content before sudoing and validate
its security than it is to validate the Docker ecosystem.
------
bestouff
Or if you already run dsnmasq you can:
\- uncomment this in your dnsmasq.conf:
addn-hosts=/etc/banner_add_hosts
\- put this in a file in /etc/cron.daily:
wget -O /etc/banner_add_hosts 'https://pgl.yoyo.org/adservers/serverlist.php?showintro=0&mimetype=plaintext'
~~~
leeoniya
yep, i do this on my edge OPNSense appliance, except with
[https://github.com/StevenBlack/hosts](https://github.com/StevenBlack/hosts)
------
lonelappde
Oh, this is a wrapper for running dnsmasq. It's lighter weight than pihole but
less user firendly.
Not sure why the readme tries to obscure that.
[https://github.com/tanrax/maza-ad-
blocking/blob/master/maza](https://github.com/tanrax/maza-ad-
blocking/blob/master/maza)
~~~
XelNika
> Not sure why the readme tries to obscure that.
I don't think it does, dnsmasq is optional. It does configure dnsmasq
regardless, but that configuration only applies if you install and enable
dnsmasq. As far as I can see, the script does none of that nor does it change
/etc/resolv.conf. The readme is very clear about needing dnsmasq for wildcard
blocking.
The script also modifies the host file which will apply regardless.
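Hosts-file blocking itself is a simple technique: each unwanted domain is pointed at a non-routable address. A rough sketch of the idea — the domains, markers, and helper are illustrative, not Maza's actual list or code:

```python
# Blocking via the hosts file: point each unwanted domain at 0.0.0.0,
# a non-routable address, so lookups resolve but connections go nowhere.
BLOCKED = ["ads.example.com", "tracker.example.net"]  # illustrative domains

def hosts_entries(domains, sink="0.0.0.0"):
    """Format one hosts-file line per blocked domain."""
    return "\n".join(f"{sink} {d}" for d in domains)

# Tools typically append between markers so the block can be removed cleanly:
block = "# BEGIN blocklist\n" + hosts_entries(BLOCKED) + "\n# END blocklist"
print(block)
```

The same entries apply system-wide, which is why this works even without dnsmasq installed (wildcards excepted, as the readme notes).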
------
4nof
I found there is a docker container of pihole which means it can run on
anything including Windows! I tried it and it works in a docker container on
windows just fine! pihole docker steps: (prereq: install docker
[https://www.docker.com/products/docker-
desktop](https://www.docker.com/products/docker-desktop))
1.setup your docker-compose.yml file with the one listed on pihole page
[https://hub.docker.com/r/pihole/pihole/](https://hub.docker.com/r/pihole/pihole/)
(starts with version: '3').
2\. save and do "docker-compose up -d"
3\. do "docker ps" and ensure your pihole is running.
4\. Go to network settings and set your DNS to 127.0.0.1 and ::1 like this:
[https://mayakron.altervista.org/wikibase/show.php?id=Acrylic...](https://mayakron.altervista.org/wikibase/show.php?id=AcrylicWindows10Configuration)
5\. if the docker container is ever stopped, you will need to reverse the
setup step 4 to get back internet.
Hope that helps all you windows users who want a DNS blocker pihole on your
machines!
~~~
jdc0589
I've been doing this for the past year or so.
couldn't run pihole network wide because too many shady "deal /discount" sites
my girlfriend uses kept breaking, so this was my alternative.
------
vezycash
I've been using adguard's dns to block ads on my phone* because pi-hole isn't
an option for me at the moment.
Also set it on a colleague's phone and he's thanked me several times for it.
* (dns.adguard.com
private DNS in network settings on android pie)
~~~
politelemon
Similar to that I've been using NextDNS - in addition to the adblock you also
get custom whitelist/blacklist, analytics... and also supports DNS-over-TLS
(works well with Android's Private DNS feature) and DNS-over-Https
See: [https://nextdns.io/](https://nextdns.io/)
~~~
k__
What can the analytics tell me?
~~~
hnarn
I've been using nextdns and I like it: for one thing, it can tell you the
amount of blocked DNS queries, but it's also very helpful for troubleshooting
since you can see the log of what was blocked, when, and why (which
blocklist). You can then completely disable the blocklist, or whitelist
specific entries if you prefer. It's a level of customization that I don't
believe other DNS adblockers provide since many of them are designed to "just
work".
------
stfwn
Fwiw, you can run Pi-hole locally just fine. But using the hosts file like
Maza does may be a little bit faster than running a DNS-server.
------
tuananh
the one reason i use pihole is to block ads network-wide. this kinda defeats
that purpose.
~~~
nxpnsv
yes, but you have pihole for that... this is if you don't need or want to
issue a network wide block
~~~
tuananh
i couldn't think of a use case for this? can you explain what you would use
this for? if you already have pihole?
~~~
Normal_gaussian
For use on a laptop that you take into other networks (coffee shops, friends
houses, work / client businesses).
For use on a desktop in a network you do not control (e.g. many devs have
complete local control over their own machine)
~~~
XelNika
> For use on a laptop that you take into other networks
I VPN to my home (and by extension my Pi-hole server) when on that kind of
network. A local ad-blocker doesn't prevent MITM or malicious DNS servers.
Maza won't help if DHCP is handing out the IP for a server that claims
google.com is a CNAME to hereisyourvirus.xyz or if the router is transparently
redirecting DNS traffic so you don't even know what DNS server you are
hitting. Which means you have to use DoH or DoT as well.
------
xtf
Network Wide > Pi-hole
Browser > Ublock
Local System > hosts-file
Android (root) > Adaway (does hosts-file)
~~~
antman
Android non root > Intra looks like vpn but its a DNS use with NextDNS
------
fuzzy2
On Windows, a large hosts file may lead to noticeably slower name resolution
performance. Maybe it's less of a problem on Linux/macOS...?
~~~
jeroenhd
I learned this the hard way a few years back. The lookup performance was good
enough, but every time I woke the computer up from sleep or rebooted it, it
would spend ten minutes maxing out one or two cores trying to process a hosts
file blocking all known malware/spyware/adware domains.
This took me ages to find the cause of, I had to use a lot of highly-escalated
debuggers and such to figure out what the "system" process was trying to do
that was costing so much time. Once I cleared out the hosts file, the problem
was resolved.
------
achairapart
I'm looking for a simple tool to setup and switch to DNS over HTTPS at the OS
level (MacOS, in this case), with no success.
With it, I would simply switch to one of the many pi-holed/filtered DOH
services[0] out there, or even roll my own on a cheap VPS.
On iOS there is DNSCloak which is excellent, Android 9+ has built-in support
(Private DNS).
[0]: like pi-dns.com or blahdns.com
~~~
ddrt
Out of ignorance, how does DNS Cloak differ/compare to NextDNS?
~~~
achairapart
NextDNS is a commercial solution, there will be more limits to the free plan
when it will be out of beta. DNSCloak is just a tool that let you choose
different DNS resolvers, even your very own.
------
mcovey
For anyone running OpenWRT, you can install the adblock package to accomplish
roughly the same thing as Pi-hole does. I don't believe it supports some
advanced features like DoH/DoT or DNS resolution (e.g. a1b2c3.example.com ->
ad-server-that-should-be-blocked.com), but it does the basics - custom host
file sources, additional blacklist rules, whitelisting, and quick
enable/disable for troubleshooting.
It also has an option to force all DNS traffic (port 53, so again it won't
catch DoH/DoT) to go through the router. Occasionally I forget I've done this
and tried `dig foo.bar @1.1.1.1` and gotten confused until I remember that my
router is forcing that DNS lookup to go through it first, and then through the
router's configured DNS resolver.
~~~
touristtam
You can use dnsmasq on OpenWRT and other packages that avoid the need for an
additional pi-hole.
------
petre
I'm using this whenever I have a working server lying around. Unbound works
great.
[https://github.com/gbxyz/unbound-block-
hosts](https://github.com/gbxyz/unbound-block-hosts)
------
dmclamb
I use pihole for my entire home network as primary DNS and opendns for
secondary (long time user of opendns, since before Cisco bought it). I also
have VPN setup for remote access (esp. for mobile). I use ublock origin at the
browser level.
These are layers of protection from undesired content (ads, malware, porn,
etc.). If one fails, hopefully the next layer will provide desired protection.
I have kids approaching teen years. There is no magic bullet, and we still
monitor and limit their screen time.
How would you improve this setup? Just curious.
~~~
justanotherhn
Are you trying to shield your teenage kids from seeing porn by accident or
actively seeking it out? If it's the latter, you've already lost - presumably
they have 4G.
~~~
Tempest1981
Or at least one friend whose parents aren't tech savvy, and aren't home.
------
p2t2p
I'm using simple
[https://github.com/StevenBlack/hosts](https://github.com/StevenBlack/hosts).
Puts everything into hosts file.
------
steveharman
I wonder why the pi-hole team doesn't also offer a paid tier (that they host),
to help those who can't or don't want to roll their own?
It could help fund future development and maintenance costs.
~~~
lonelappde
Maybe they already have a full-time job?
Anyway, it's free software. Anyone in the world can do that if they want. You
can do that.
Also, it's poorly scoped. Pihole is just an app. Any owncloud provider can
more efficiently host it along with a bundle of every other app people want to
"own" but not run locally.
~~~
GordonS
While this is true, I'd put much more trust in the PiHole team than I would
some random corp - by the very nature of what they've built, and how they
licensed it, I'd expect them to be privacy centric. By paying for such a
service, I'd also feel like I was contributing to the ongoing maintenance of
PiHole by the core team.
I think the GP's suggestion is a fantastic one!
------
StreamBright
I just started to write this in Rust a few months back. Thanks for this
project it is fixing most of my problems with Pi-hole.
------
throwaway4787
Can someone explain how the use case differs from simply using a well-curated
hosts file? (like Steven Black's)
~~~
rovr138
There are some issues with them being too big and using a lot of resources.
You can even find comments about it in this thread.
------
1_player
Great work! One suggestion: please make blocklists configurable.
~~~
tanrax
It is not difficult; I'll make a note to implement it.
~~~
IngvarLynn
That was my thought exactly when I decided to upgrade the very much analogous
script [https://raw.githubusercontent.com/notracking/hosts-
blocklist...](https://raw.githubusercontent.com/notracking/hosts-blocklists-
scripts/master/notracking_update). The end result sort of works, but I deeply
regret not using a sane language for the task. Result:
[https://gist.github.com/ingvar-
lynn/f0b84d5f750bd2e555d3f1de...](https://gist.github.com/ingvar-
lynn/f0b84d5f750bd2e555d3f1ded6ef159e)
------
wp381640
I have a docker-compose.yml locally with:
dnsmasq -> pihole -> stubby
The first dnsmasq is for local .test domains for dev. Works well for when i'm
not on one of my networks.
~~~
XelNika
Why not configure your local .test domains in your Pi-hole? That's also
dnsmasq, you can use the same configuration options.
~~~
rovr138
> Works well for when i'm not on one of my networks.
On the go is the key here.
~~~
XelNika
What do you mean? There's nothing preventing him from running Pi-hole and
stubby locally in Docker. That was how I interpreted his comment.
------
amelius
The point of Pi-Hole is that you can't hack it that easily compared to
software installed on your local computer.
~~~
alpaca128
How is it supposed to be harder to hack? I thought the main point is to have
the blocking enabled in the whole network, including devices like smartphones.
~~~
amelius
Because the Pi-Hole doesn't run untrusted code, like a personal computer does
(e.g. Javascript, installed applications, etc.). Same holds for smartphones.
~~~
jlgaddis
I'd consider the web-based administration interface to be "untrusted code" --
and there was just a remote code execution vulnerability (due to _very_
insufficient input validation of MAC addresses) discussed here yesterday [0].
[0]:
[https://news.ycombinator.com/item?id=22714661](https://news.ycombinator.com/item?id=22714661)
I've been tracking everything about myself - ericnakagawa
http://aprilzero.com/#
======
PStamatiou
Anand is my roommate. He's been doing this non-stop for the last 2-3 months
(but thinking about it for the last 9), including while traveling
internationally for the first month after he quit his job. The last month has
been nightly design critiques after I got home from work :D
~~~
kevinoconnor7
His handle sounds familiar. Did he use to run a design company called Dragon
-- or something along those lines? If so, I've been impressed by his design work
for a long time. Really interesting to see how his work has evolved.
~~~
markgarity
Dragon Interactive - yep.
------
chdir
Slightly off topic: your site design is awesome. Would you share what
libraries/frameworks/skills/time-resources are needed for something like this?
Just curious. For me, the graphics & layout are far more interesting.
~~~
aprilzero
I'm working on a detailed blog post about that. Some highlights:
• Running Django on Heroku
• Coffeescript, jQuery
• SASS
• A lot of webkit transitions & some animations
• A souped up version of pjax for loading pages
• Getting the data from APIs from Moves, Runkeeper, Withings, Foursquare, Github, Instagram, etc.
• The run maps are a set of coordinates passed to Mapbox to make the map tiles & Leaflet for creating the SVG line.
• D3 has really nice geo stuff, I use their mercator projection to convert lat/longs to points on the map of the world
~~~
thoughtpalette
Love the spinning animations! Reminds me of the "tech" look and feel from
early 2000's. Think Winamp skins.
------
fasteo
I miss some important stats:
\- Your height. To calculate your BMI and contrast it with your Body Fat %. For
a runner, your BF% is high, but I cannot see whether it is because of a lack of
muscle ("skinny fat") or an excess of body fat. As you are not logging any
weight-training sessions, my guess is the former, but I am sure you are not
logging all this data to end up guessing :)
\- Triglycerides: I find this much more important than LDL/HDL. It is a proxy
for excess carbs (either you are eating too many of them, or you are exercising
too little). Remember, triglycerides are produced in the liver from any excess
carbohydrates that have not been used for energy. They have nothing to do with
dietary fats.
\- Total cholesterol. To be able to calculate the TC to HDL ratio.
\- LDL/HDL ratio. With your current stats it is at 1.5 (average risk), but it
should be handy to see it in the dashboard.
My suggestions:
\- Do some weight training. If your goal is to be healthy, this is key. A
couple of 30 mins heavy sessions per week will do it. No need to become a gym
rat.
\- Eat better.
\- I see that you are running outdoors, but your D3 levels are mid-low. I
guess you are running either too early in the morning or too late in the
evening. Try to get some running with the sun right above your head (just
bring more water with you)
Congrats on this herculean effort.
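The ratios mentioned above are simple to compute once the missing stats are logged; a sketch using hypothetical numbers (these are not Anand's actual values):

```python
# Hypothetical lipid-panel numbers in mg/dL -- NOT Anand's actual values.
total_cholesterol = 180
hdl = 60
ldl = 90

tc_hdl_ratio = total_cholesterol / hdl   # often kept below ~5
ldl_hdl_ratio = ldl / hdl                # 1.5 was cited above as average risk

# BMI is why height is the missing stat: weight (kg) over height (m) squared.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(tc_hdl_ratio, ldl_hdl_ratio, round(bmi(70, 1.75), 1))
```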
~~~
karlb
_> Do some weight training. If you goal is to be healthy, this is key._
Ignorant and sincere question: Why?
~~~
fasteo
Muscle mass is a metabolic master regulator:
\- It allows fast glucose clearance from blood via both insulin and non-
insulin glucose transport.
\- It drives bone density by pure mechanical tension. More muscle = stronger
bones/tendons to support them. The usual hip fracture/high mortality we see in
elderly people follows the loss of muscle mass->loss of bone strength->bone
breaks->fall pathway, not the more intuitive fall->bone break.
\- It serves as "organ reserve". In case of injury or disease, your muscle
mass will literally keep you alive. There are some interesting studies about
muscle mass on admission to the ICU and mortality/morbidity. This is the
extreme case, but you get the picture.
\- Not per-se, but the neurological effort you put in your weight training
sessions drive the secretion of Brain-derived neurotrophic factor (BDNF). BDNF
improves existing neurons signaling and promotes the creation of new ones. As
a side note, I have seen a _huge_ improvement in my - properly diagnosed -
ADHD child after putting him in a functional "lift heavy shit" exercise
program.
~~~
xiaoma
Hmm. I don't find this terribly convincing.
Running is well documented in its role in improving bone density:
[http://healthfully.org/highinterestmedical/id33.html](http://healthfully.org/highinterestmedical/id33.html)
Unlike weight-lifting there are actual studies showing running promoting
neurogenesis (the increase of brain cells) and improving performance:
[https://www.google.com/webhp?sourceid=chrome-
instant&ion=1&e...](https://www.google.com/webhp?sourceid=chrome-
instant&ion=1&espv=2&ie=UTF-8#q=running%20neurogenesis)
Finally, muscle mass is far from enough to be an effective metabolic
regulator. While I have yet to meet anyone who runs 100 miles a week and is
overweight, it's not uncommon to find that someone who benches 500lbs still
carries a gut. I myself have gained a great deal of both fat and muscle since
my school years when I was a runner.
I think weight-lifting does some great things depending on one's aesthetic
goals, and it's probably the most time efficient way to increase bone density.
It's hardly the optimal exercise for general health, though. There are many
aspects of health, ranging from neurogenesis to heart health to immune system
function to maintaining telomere length that cardio most helps.
~~~
fasteo
You are implying something that I didn't mean.
This is not about weight lifting vs running. Anand is already running, and I
suggested he add some weight training to gain some lean mass, as his BF
level (19%) is a little high, possibly due to a lack of muscle mass.
For the record, I run - or bike - at least twice per week.
~~~
xiaoma
That doesn't make much sense. As a competitive runner in school, I was under a
5% body fat percentage without really lifting. Now I do lift and I'm at about
a 23% body fat percentage (and nearly 3x the arm strength I once had). Runners
tend to have a significant lower body fat percentage than lifters, even at a
professional level.
More likely is that the OP just isn't doing enough. Running 25 miles a week is
enough to bring about significant benefits in health and fitness along with
moderate weight control benefits. 25 miles a _month_ is just a waste. Going up
from 1-2x per week to 3-4x makes a huge difference.
Most likely is that it's a dietary issue. While living in Asia, I knew many,
many non-exercising people at healthy weight levels just because they didn't
overeat like Americans tend to. The OP probably doesn't eat like them.
~~~
fasteo
Uhmm, looking at all the comments to my initial comment, I think this has gone
off track:
\- I am not against running, but I consider weight lifting a necessary
addition to it.
\- I am not talking about lowering body fat or aesthetics. I am talking about
health. A lower body fat is healthier up to a point. A single-digit body fat
level is just as unhealthy as a 30% body fat level.
\- In the same sense, this is not about how much calories muscles burn as this
is irrelevant to health. My point is about the role muscle has in maintaining
homeostasis in our metabolism.
\- My point is/was to help Anand: My sweet spot for body fat level is 13-14%.
This is where I feel and perform the best. Anand is at 19% and I believe it is
because of a lack of muscle mass; that's why I recommended him some weight lifting.
~~~
xiaoma
> _A lower body fat is healthier up to a point. A single-digit body fat level
> is just as unhealthy as a 30% body fat level._
[citation needed]
> _Anand is at 19% and I believe it is because of a lack of lean mass; that's
> why I recommended him some weight lifting._
My point was that this belief doesn't make sense. Some people who lift and
have a lot of muscle mass are lean, but many others aren't. A billion people
who don't body-build are leaner than the OP. An objective observation of
people (or even countries of people) who are or aren't fat doesn't generate
very convincing evidence for the theory that people are fat "because of a lack
of lean mass". It's because of their diets.
On the contrary it tends to be exactly those groups most interested in weight
training who are the fattest — e.g. Americans and, to a lesser extent,
Anglophones in general.
[http://www.nationmaster.com/country-
info/stats/Health/Obesit...](http://www.nationmaster.com/country-
info/stats/Health/Obesity)
------
hunvreus
Beautiful website indeed; there are a lot of carefully crafted details,
especially for navigation.
I genuinely wonder, though, what to make of it. I can't seem to see what people
do with all this data; what does one get from knowing how many steps, runs,
calories, subway stops and hours of sleep were accounted for in a day, every
day?
I can see how one could be rigorous enough with his training to see value in
some of it; similarly, I could see myself trying to improve my sleep patterns.
But really, so far, people I've met use this as yet another distraction.
I have yet to meet anybody who has been leveraging the data they collect; most
(all?) of the people I know who eat healthy, exercise and sleep well do so
without relying on devices. Now, once we're able to track real health-related
data continuously, we may be able to detect illnesses or problems as soon as
they arise and effectively create a feedback loop. But from where I stand, as
of today, these things are just gimmicks.
~~~
hluska
I have enjoyed lifting weights for years, but in October 2013, I took the
plunge, bought a bench, some bars and some plates, and started seriously
lifting.
When I started seriously lifting, I collected some bench marks. I collected my
one rep max, six rep max and ten rep max in four different exercises. Then, I
collected data on my pulse rate after doing a ten rep set in various
exercises.
When I work out, I collect what exercises I performed, how many sets/reps, and
what kinds of weight I lifted. And, I intended to check my benchmarks once a
month, but in practice it has worked out to be closer to every six weeks.
Roughly nine months later and collecting that data has proven very beneficial.
For example, with strength training it is too easy to get into a routine and
then keep banging out that same routine every day. I always know what I did
the last time I worked a muscle group, so I feel intense pressure from myself
to either move a few more pounds, bang out another set, or add a few more reps
to a set. And, I get to track how various lifestyle changes interact with
strength training.
For example, in December, I broke my right thumb cross country skiing and had
to take some time off lifting. Weirdly, the time off actually increased my
bench and shoulder presses because I was using my left (non-dominant) side
significantly more often. Balancing my right and left sides made me
significantly stronger.
Or, in May, the snow was gone so I started jogging again. Jogging improved
some aspects of my lifting - for example, my heart rate after a set has
dropped since I added in jogging. But, it has also hurt other aspects - for
example, my gains in strength are actually slowing. Incidentally, monitoring
my jogging showed me that my tendency to settle into a routine carries across
into other forms of exercise. I realized that I was running the same route
every single time in roughly the exact same amount of time. My body got used
to a level of effort and then stopped getting better.
Just because the 'people you have met' use this as a distraction does not mean
that everyone will. And, just because all the people you know eat well and
exercise regularly, it doesn't mean that everyone does. Some people find that
the simple act of tracking their performance keeps them motivated to
continuously improve. Others have goals beyond 'be healthy' and need to
monitor their progress if they have any hope of reaching their goals.
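The benchmark tracking described above needs nothing fancy. A minimal sketch in Python (exercise names, dates and weights here are hypothetical, not hluska's actual numbers):

```python
from dataclasses import dataclass

@dataclass
class SetEntry:
    date: str
    exercise: str
    reps: int
    weight_lbs: float

log: list[SetEntry] = []

def record(date, exercise, reps, weight_lbs):
    """Append one logged set to the workout history."""
    log.append(SetEntry(date, exercise, reps, weight_lbs))

def best(exercise, reps):
    """Heaviest weight ever logged for a given rep count, e.g. a 10-rep max."""
    matches = [e.weight_lbs for e in log
               if e.exercise == exercise and e.reps == reps]
    return max(matches, default=None)

record("2013-10-01", "bench", 10, 135.0)  # initial benchmark
record("2014-07-01", "bench", 10, 165.0)  # ~9 months later
print(best("bench", 10))  # 165.0 -> progress since the October benchmark
```

Comparing `best(...)` at each check-in against the original benchmark gives exactly the "move a few more pounds" pressure described above.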
~~~
xiaoma
>"Incidentally, monitoring my jogging showed me that my tendency to settle
into a routine carries across into other forms of exercise. I realized that I
was running the same route every single time in roughly the exact same amount
of time. My body got used to a level of effort and then stopped getting
better."
This is the _trifecta of slow_ — consistently low mileage, no hills and running
at the same speed every workout.
------
Jemaclus
I like it. My big beef with it so far is that it looks like most of this stuff
is input manually by Anand. (The 1200+ commits suggest that it's manual and
not automatic.) I'm not anal enough to spend that kinda time tracking things.
I have a Fitbit, a scale that i step on every day, Strava to track my runs,
etc, but those are all things you just put on (or push a button) and forget
about.
Things like climbing (which I also do) don't have automatic trackers, and
tracking food intake is just too cumbersome these days for me to even try and
keep up with that.
If there were better ways to automate these things and better APIs available
to pull these things in automatically, I'd totally build something like this.
I just don't have the time, inclination, or the energy to manually add the
climbs, the calories, every food item, and myriad other things into the
system.
So I'll say this: it's beautiful and full of very, very cool info. I just
wouldn't do it myself unless I could generate all of that data. A handful of
commits to build the site, and then let it update itself automatically via
APIs. Granted, this means my site would be a bit less interesting, since the
most interesting things on here are things you can't automatically track...
but I'm working on plenty of other interesting things, and this just doesn't
rate high enough on my list to do.
I'm jealous, though. Very well done.
~~~
mirashii
For what it's worth, having worked with Anand, I don't think 1200+ commits
suggests it's input manually (you can see on the about page a list of some of
the APIs it's pulling data from), I think it more suggests that Anand likes to
commit a lot when he's building something. And for good reason, it's been
really cool to check out historical revisions and see how a design changed at
every step.
------
ChuckMcM
If the cops ever ask you "Where were you last week on Tuesday at 8AM?" you'll
have a solid answer for them :-). My question is 104 days and no journal
entries? Is the author reflecting on this information or just logging it?
I ask because I have a lot of unformed questions and thoughts about what is
known as the 'quantified self' movement. Given the technological memory of all
these things, what insights or changes do you draw/make?
~~~
PStamatiou
Yeah he's (my roommate) been working on building the site and hasn't gotten
around to any journal entries yet. He just launched this yesterday. I've been
egging him on to put more time into the blog component though, so expect some
posts about how he built it and why he wants to log everything..
------
arondeparon
I think the site is absolutely beautiful.
What I am wondering, though is: how are the vitamin/mineral stats on
[http://aprilzero.com/sport/](http://aprilzero.com/sport/) generated? Is there
a way to self-measure these stats without blood tests?
~~~
aprilzero
There's no way that I know of besides blood tests. It's a bit painful but not
too expensive and in my opinion well worth it.
The blood levels are coming from a standard blood test, available at any
doctor's office. I've been getting them about once a month.
You need to fast for at least 8 hours prior to get accurate results, and it
takes about 2 vials of blood. I'm waiting for some sort of device to give you
realtime values with just a prick of blood or constant monitoring.
------
cmdrfred
Wow. This site discourages me, as I feel that I may never make something so
pretty.
~~~
philfreo
Don't let it... Anand is a rare breed :)
------
gress
This is an astonishingly beautiful website, and clearly shows the technical
and design skill of Anand.
However, I'm genuinely not sure what the purpose of this dashboard is other
than as a résumé piece. What questions does it answer? How is it better than
doing specific investigations using R?
~~~
gress
Seriously, how is this downvoted? It's a sincere question. I enjoyed looking
at the site, and I loved the interface, but I didn't get any insight into the
data. Can someone enlighten me?
------
ArikBe
Nicholas Felton has been doing something similar for a couple of years, but he
creates a printed journal:
[http://feltron.com/ar12_02.html](http://feltron.com/ar12_02.html)
I would be interested in a turnkey solution with modular components that would
allow people to quickly "snap" together a site like this.
~~~
djtriptych
Check out Felton's other site - daytum.com. It's an app and website that helps
people get started, though the fancy visualizations aren't modularized just
yet.
~~~
hboon
And [http://www.reporter-app.com](http://www.reporter-app.com), an iOS app,
also by Felton.
BTW, does anyone know why it is Nicholas Felton but feltron.com (with and
without the "r")?
~~~
rismay
His friends gave him the nickname "Feltron"
------
nathan_f77
The design is absolutely incredible. I started using TicTrac [1] a little
while ago, but it's not great. I hate that I have to set up and arrange
everything myself. I really want to just wire up my accounts, and let a
professional designer show me the information in a beautiful way. Other
dashboards like Geckoboard [2] and TicTrac only let you dump a bunch of boxes
on a page. The sports page on AprilZero is an amazing example of a cohesive
design, where everything is laid out in a far more useful way.
For the last month, I've been tracking what I eat with MyFitnessPal, and have
been tracking my weight every morning with a Withings wifi scale. It's
extremely powerful when the data is collected effortlessly, and for the first
time in my life, I'm on track to really change some unhealthy habits. Entering
food in MFP is still a PITA, but I've managed to keep it up so far.
It's been one of my dream projects to design a personal dashboard like this,
especially in the style of the Iron Man movies. This website has exceeded
everything I imagined. I hope it becomes open source one day, and that I can
contribute a ton of new integrations and sections. Or if not, please let me
pay to use this service!
[1]: [https://tictrac.com](https://tictrac.com)
[2]: [https://www.geckoboard.com](https://www.geckoboard.com)
~~~
ezl
seconded. Anand, I would pay for this as a service. Please let me.
------
ipince
Kind of random, but why is there so much time (entire days) spent in hotels
when traveling? Is that due to Foursquare (or whatever) not allowing you to
check-in to other places or did you really stay in the hotel the entire time?
If so, doing what?
It's a genuine question--I basically NEVER stay in hotels beyond the required
sleep time, so I'm curious as to how other people do things.
~~~
aprilzero
The data is accurate. Probably mostly sleeping, working or eating. There is a
bit of a bias towards those being the most noticeable places since you spend a
lot of time there, while other stuff you do may only take 10-15 minutes and
adds up to relatively little. You may be surprised by how much actual time you
spend at home or a hotel even though it feels like you've gone out and done a
lot of stuff during the day — by percentage of total time in a day it may not
be that much.
Also many places have a lot of nice facilities that might still count as being
at the location, like restaurants, bars, rooftops, pools, beaches, gyms, etc.
~~~
aprilzero
Just looked through it again and I think the lack of time-zone handling is
causing what you're talking about.
All of this stuff is fixed on pacific time, even when halfway across the
world. So in Asia, the middle of the day will show boring sleep at the hotel,
and the actual activity gets split up at the beginning and ends of the
timeline.
Not ideal but I haven't figured out a good solution for that yet.
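One common approach to the problem described here (not necessarily how the site works) is to store every timestamp in UTC and attach the venue's time zone only at render time. A rough sketch in Python, using a made-up check-in time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

# Store each event in UTC...
checkin = datetime(2014, 3, 29, 3, 0, tzinfo=timezone.utc)

# ...then render it in whichever zone makes sense. Pinned to Pacific
# time, a midday Tokyo check-in shows up as the previous evening:
pacific = checkin.astimezone(ZoneInfo("America/Los_Angeles"))
# Rendered in the venue's own zone, it lands where it belongs:
tokyo = checkin.astimezone(ZoneInfo("Asia/Tokyo"))

print(pacific.strftime("%b %d %H:%M"))  # Mar 28 20:00
print(tokyo.strftime("%b %d %H:%M"))    # Mar 29 12:00
```

With the venue zone stored per check-in, the timeline can show Asia activity at the local hours it actually happened instead of splitting it around a Pacific midnight.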
------
tlrobinson
Honestly, the only reason I use Foursquare and various other personal tracking
things is I hope someday to be able to export the data into a nice
visualization like this.
~~~
LunaSea
But in the meantime you give up any sense of privacy.
~~~
tlrobinson
Meh, at least I know exactly what I'm sharing, versus most people who have no
idea a variety of corporations and three-letter agencies could get basically
the same information from their cell phone, toll road transponders, license
plate readers, credit card transactions, etc, etc, etc.
~~~
dhruvasagar
Just so you know, all the information that's stolen from people without their
knowledge is also stolen from you, so the sense of knowing exactly what you
share is honestly just an illusion.
~~~
tlrobinson
I'm under no illusion. Just saying sharing on Foursquare doesn't make much
difference.
------
hawkharris
Personal health recording systems like this one are most useful for reporting
symptoms to health care providers. In the event of a flu or a running injury,
I like being able to tell my doctor exactly when, where and how the problem
started.
It's also smart to record the data yourself instead of sharing it with a
health tracking app. With due respect to those projects, I draw a line at
sharing specific and private health information. I've arrived at this personal
stance after weighing the benefits of information sharing against the risks of
my data being leaked, mishandled or mined.
~~~
k-mcgrady
>> "Personal health recording systems like this one are most useful for
reporting symptoms to health care providers. In the event of a flu or a
running injury, I like being able to tell my doctor exactly when, where and
how the problem started."
I agree but recently I read that doctors tend to completely discount this type
of data provided by a patient as they can't verify it's accuracy (did the
patient collect the data correctly) and it would be risky to base their
diagnosis on it.
Even if that is the case I think it can be very useful for people with chronic
conditions. They can find out ways to minimise their pain through this kind of
tracking/trial and error which a doctor would never have the time to do.
------
rkayg
This site is gorgeous. There is so much attention to detail. I don't quite
understand the barely visible half curves right above the transport row for a
particular explorer day.
~~~
Evan-Purkhiser
I think the half curves just represent some form of travel. If you look at
March 29th [1] there's a large curve for his 8 hour flight, and some small
gray ones for walking to his gate
[1]
[http://aprilzero.com/explorer/march-2014/29/](http://aprilzero.com/explorer/march-2014/29/)
------
danoprey
Looks like HN took it down. Will have to come back later as the screenshot
looked awesome.
------
thallukrish
If Anand had not done this whole thing artistically, there is no way it would
have elicited interest. What if everyone on the planet did this? Then this
whole thing becomes terribly boring and meaningless no matter how it looks. I
am pretty sure many got attracted by the design rather than the content
itself!
------
fuzzythinker
Very beautiful animations and visualizations. What tools did you use to build
them?
~~~
Nemcue
There is an "about this site" link at the bottom, where he lists some of the
third party services used.
In general he has a few global objects which seem to contain everything for
each section; ajax requests, animations (which are webkit only as far as I can
tell) etc. Most of it is done via jQuery.
~~~
Excavator
> animations (which are webkit only as far as I can tell)
So that's why people were calling it pretty. It does actually look good with
all that prefix nonsense fixed.
Seems odd to use prefixes for things that were unprefixed 2 years ago in
Firefox.
[https://hacks.mozilla.org/2012/07/aurora-16-is-
out/](https://hacks.mozilla.org/2012/07/aurora-16-is-out/)
~~~
LocalPCGuy
One of my pet peeves is when people are so "in" the webkit world they don't
bother to even list the unprefixed version. I too looked at it in Firefox, and
things actually look a bit broken. My guess is adding the unprefixed version
would probably fix the majority of the errors I see.
I won't go so far as to say stop using prefixes, but ALWAYS include the
unprefixed version last in the CSS stack. It's so easy with Sass also.
~~~
Excavator
Insofar as I could tell, doing a simple s/-webkit-//g got things working,
except for the gradients due to them still being in the old format.
------
dominotw
Can someone tell me what is the point of this self-obsession with tracking?
Why do I care to document where I went or which rock I climbed? Has narcissism
finally become socially acceptable?
~~~
criswell
I think it's nice to look back at. I don't think it's too much different from
looking back at a photo album.
~~~
sejeneoske
It's nice to have metrics to measure your progress with an exercise routine
(walking, running, strength training), weight loss effort (lbs lost, fat %),
etc. Although there are always narcissists, I think many people just like
quantifying their progress. Just like receiving grades to measure your
understanding when you were in school, these metrics allow you to assess
whether you are moving in the right direction, and if so, to feel a sense of
accomplishment. Wanting to be fit and healthy does not equal narcissism.
------
josyulakrishna
This has to be the most beautiful website i've seen.
~~~
jackweirdy
Agreed; such a satisfying font and colour scheme. Really well done.
------
sgarbi
I'm on a tablet now, what libraries is he using?
~~~
tangue
jQuery + d3. Surprisingly it works quite well on my old iPad 1 (graceful
degradation for the animations). JavaScript-heavy websites usually crash
Safari.
------
XorNot
So, a criticism of the stats: the health page bars for electrolyte levels are
poorly conceived - they give the impression that "higher is better" (there are
no numbers on them) - not whether the value is within the relevant
"normal" range (which itself should be adjusted for age/demographics as
well).
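XorNot's suggestion reduces to normalizing each value against its reference range rather than plotting raw magnitude. A sketch (the range numbers below are illustrative only, not medical guidance):

```python
def range_position(value, low, high):
    """Where a lab value sits within its reference range, as 0..1.
    Results outside [0, 1] mean the value is out of range and can be
    flagged, instead of implying that 'higher is better'."""
    return (value - low) / (high - low)

# Hypothetical reference range for serum sodium: 135-145 mmol/L
print(range_position(141, 135, 145))      # 0.6 -> marker 60% along the band
print(range_position(150, 135, 145) > 1)  # True -> out of range, flag it
```

A bar drawn from this position, with the normal band shaded, communicates "in range / out of range" at a glance where a raw-magnitude bar cannot.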
------
ejain
There are several existing services that aggregate and visualize fitness and
health data, for people who are too lazy to build their own site :-) I'll plug
mine here: [https://zenobase.com/](https://zenobase.com/)
~~~
johnpc
What other aggregators are out there? No offense but I'm struggling to figure
out how to import my fitbit and moves app to zenobase.
~~~
ejain
See
[https://zenobase.uservoice.com/knowledgebase/articles/360890...](https://zenobase.uservoice.com/knowledgebase/articles/360890-how-
does-zenobase-compare-to-other-services) for a list of services that aggregate
health and fitness data.
The screencasts at
[https://www.youtube.com/user/zenobase](https://www.youtube.com/user/zenobase)
should give you an idea of what you can do with Zenobase; if all you need is a
nice dashboard showing recent data, there are simpler solutions like TicTrac.
------
afaqurk
Anand, you beautiful, brilliant bastard. That site looks awesome beyond
compare. Awesome job.
------
tabrischen
I love the design and feel of the site. What would you say are the most
important insights from tracking your activities that lead to any significant
changes in your lifestyle?
------
brenfrow
I would be really interested to see how your stats change as affected by
different diets. I'm interested in something like Paleo vs. veganism.
------
platz
I love the site and design. Displaying some values to nine decimal places is
perhaps a bit more for eye candy than for information, though.
------
sgy
Quantified-self raises huge privacy concerns, and will make it easier to "rule
the world".
------
johnpc
What tech are you using to track all this? A fitbit? What apps/wearables are
you using?
------
vova_feldman
This is insane! Amazing work dude. Btw. love the UI/UX.
------
ing33k
For some reason this page reminds me of dcurtis's home page from some time ago.
------
kevinwang
This is beautiful.
------
kayoone
It's beautiful. However, even the rather static-looking "sports" page produces
some decent CPU load and makes the fans in my rMBP spin up. Maybe that could
be optimized :)
------
liotier
"Everything" ? Unless you publish a daily graph of your sperm count, it is not
'everything' !
------
jowag
So many personal details but no mention of age? Grand example of SF's
ageism.
~~~
abritishguy
It says he is 24.
~~~
bennettfeely
Actually, it says he is 24.2827266, and counting
~~~
dpweb
Amazing site, I think I like the age counter most of all!
------
pdknsk
Reminds me of a guy who was on Bloomberg West last year. He does it to become
more productive, and find out what makes him less.
[http://www.bloomberg.com/video/using-sensors-to-track-
your-e...](http://www.bloomberg.com/video/using-sensors-to-track-your-entire-
life-67q3ZGiERROz9vnEjniD_A.html)
PS. The website is well done, but in all fairness, similar websites were made
in Flash more than 10 years ago.
------
tarere
" great website" "fantastic" "fabulous"...
??!!!
So nobody feels that tracking everything you do every second and logging it in
real time, forever, on a server is terribly frightening?!!!!
In 2005 I predicted everyone who ever logged on to Facebook would regret it one
day and pay a huge price for it. This is more than real now. Still you don't
stop, and now you're sending "them" your heartbeats, your weight, what you
eat, etc. in real time.
Did you just forget about LIFE? Is this the next American way of life? So you
think totalitarianism is Iran or Syria? Pouarrrk!!!
You guys are totally out of your minds. Seriously.
~~~
afro88
It's only frightening if it's not deliberate / you don't know it's happening.
This is beautiful, voluntary and insightful.
~~~
tarere
"It's only frightening if it's not deliberate / you don't know it's
happening."
Oh my god, how old are you all? Are you the next generation of this world?
Don't you understand that you behave like products and not humans!? This is
possibly the end of the world.
~~~
afro88
Pretty sure your comment was tongue in cheek, but as a parallel to the older
generations and this guy - consider that an autobiography is the author
voluntarily revealing details about their life to a potentially massive
audience. This is the same sort of thing, but without the narrative. It's also
a work of art.
------
taway98765
Nice PR campaign to prepare the floor for a new generation of wearables /
tracking & monitoring devices .. don't get me wrong, I like(LOVE) the
technology, I just don't like the idea of becoming a self-sponsored spy pawn
on me and everyone around .. focus on privacy, local/self hosted services
first, hardened leak-free hw, cloud data encryption(with keys not leaving your
devices) by default + 1000 of other privacy-related challenges that are being
largely ignored .. and since this is not in the fin. interest of hw
manuf./main sw houses .. the world is becoming a modern, more efficient,
better organised version of orwel's 1984 and it looks like regardless of the
amount of information confirming this disturbing development, people naively
trade 15min of fame for privacy again and again(and it - will - turn against
you even if you are protesting in front of the "right" embassy - well,
activism of any kind is considered a threat these days so better stick to
those kitty pictures and comments about the newest season of <insert your
favorite tv-series> )
~~~
jackweirdy
Paragraphs!
Ask HN: How do you come up with a price for an API? - chirau
======
elorant
1\. Check your competition, or anything remotely similar.
2\. Ask potential clients for various prices to see which one sounds more
affordable.
3\. If both 1 & 2 fail you'll have to improvise. Assume the worst-case scenario
for adoption (i.e., less than 1% of your intended clientele) and the minimum
amount of money you need to be barely viable, and then come up with a
number. Be as pessimistic as your comfort zone allows.
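The improvisation in step 3 boils down to simple arithmetic. A sketch in Python with made-up numbers:

```python
def floor_price(monthly_costs, addressable_users, adoption_rate=0.01):
    """Lowest monthly price per user at which the API breaks even,
    under a pessimistic adoption assumption (step 3 above)."""
    paying_users = addressable_users * adoption_rate
    return monthly_costs / paying_users

# Hypothetical: $3,000/month in costs, 10,000 potential users, 1% adoption
print(floor_price(3000, 10_000))  # 30.0 -> charge at least $30/user/month
```

Anything your competition (step 1) or potential clients (step 2) suggest below that floor means cutting costs or finding a bigger market, not a lower price.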
(A Few) Ops Lessons We All Learn the Hard Way - ryukafalz
https://www.netmeister.org/blog/ops-lessons.html
======
rhabarba
> Serverless isn't.
This!
Ask HN: How does Google Voice search get input w/o flash? - tibbon
In Google search now you can use your audio input to search. My question is how are they technically doing that in HTML/Javascript?<p>I've been told that HTML5 doesn't have an audio input object, but that's what it seems they are doing here. Any tips of how I'd implement similar?
======
Khao
They have this feature implemented inside of Google Chrome (or maybe it is
Webkit, I am unsure) and I take it that you're using Chrome to test this
feature. As far as I know, it's an experimental API that they have added to
the HTML5 specs. In the video in that blog post they say that you need to have
the latest version of Chrome to use it :
[http://googleblog.blogspot.com/2011/06/knocking-down-
barrier...](http://googleblog.blogspot.com/2011/06/knocking-down-barriers-to-
knowledge.html)
------
dstein
Yeah this is a Google Chrome specific feature. Chrome records your voice,
uploads an mp3 to a Google server and returns the text. It is about the least
efficient way to accomplish the task. Ridiculous really. Our operating systems
(even Windows95) have had speech features forever, but it's implemented in a
very clunky way. Instead there should be a standardized speech-to-text input,
or JavaScript API where I can use my operating system's built-in speech
features.
~~~
wmf
The built-in speech recognition in your OS isn't as good as Google's (and it
may not even be there — think Linux or Chrome OS).
~~~
dstein
My quick experiments with the Chrome speech input says otherwise. It is both
less accurate, and less useful than the built-in speech-to-text in MacOS.
There exist speech systems for Windows and open-source ones for Linux that
are "good enough".
The point isn't really about accuracy, it's about usability. The way this is
implemented in Chrome does not make it possible to use voice commands to do
operations in a web browser. That's what we need. We don't just need a voice
input for Google search.
Application Idea - Local History - rodh257
http://cejest.com/2011/12/10/application-idea-local-history/
======
teyc
I believe there is one start up who is doing this.
What the new video compression strategy from Netflix means for Apple and Amazon - ca98am79
https://donmelton.com/2015/12/21/what-the-new-video-compression-strategy-from-netflix-means-for-apple-and-amazon/
======
SwellJoe
I wonder at his premise that consumers are choosing things based on wanting
larger file sizes and higher bit rates. Most of my friends literally could not
tell you anything about their mp3 or movie collection in terms of bit rate.
Half of them you'd need to explain what "bit rate" means, before even asking
the question. I only think about it for my DJ music collection (and VBR is
fine in that context, I'm just ruling out CBR stuff below ~192kpbs, because
sometimes it sounds a little harsh on the high end over the big speakers);
never worry about it for video or music streaming. If it's HD and looks/sounds
OK, I don't think about it at all. Netflix and Amazon both have acceptable
quality, so I don't think about it, I just consume it.
I think the success of Spotify and Pandora and Rhapsody are proof that
consumers don't care about quality. I don't know exactly what bit rate they're
streaming at, but, it sounds pretty bad on mobile, so I assume it's something
quite low. But, even though I recognize the crappiness of it, sometimes I
listen to them in the car (my truck has a crap stereo, anyway, so no big deal
there).
In short: Cool article, but the suggestion that consumers will stop it because
they want bigger files seems weird.
~~~
swang
I think you're missing the point. Consumers don't care, until some listicle
website or advertising firm points out that Amazon/Apple short you on your
mp3s by not even offering 128kbps all the way through.
Or imagine Amazon goes with pure VBR, then Apple makes an ad claiming their
sound quality is "better" because their bitrate never dips below 128kbps. It's
bullshit, but how is an average consumer suppose to figure this out? They'll
probably err on the side of caution and buy the CBR version since, "it can't
be any worse than the VBR one, but I don't lose bits and it's the same price!"
The whole article was talking about streaming vs. downloading. Streaming is
_fine_ and Netflix will probably get away with their compression, but will
Amazon/Apple be able to do that with downloads? He doesn't think so. People
are fine with Spotify/Pandora because there is no perceived ownership of the
songs they're streaming. People who actually buy and download audio or video,
they have money in the game so they want "the best" and any loss of that is
viewed as Amazon/Apple screwing them over.
~~~
SwellJoe
Has that ever happened? I mean, have there been consumer revolts over bit rate
that have cost Apple or Amazon customers? Pono and Tidal don't seem to be
killing the existing players, but perhaps I'm just not up to speed on the
state of the industry.
~~~
swang
There is no actual "revolt" because both Apple and Amazon make sure it doesn't
happen by using CBR. You're right in that consumers don't care about it all
that much.
But let's say Amazon decides to go with VBR to save space and download speed,
now there's an easy way for Apple to attack Amazon. "We never dip below a
certain quality. Amazon does. We care. Amazon doesn't"
Maybe that ad/slogan works, or maybe it doesn't. But if you're Amazon would
you be willing to risk some weird consumer backlash over it? And if you're
Apple it is an easy point to attack, and if it doesn't work, no harm no foul
(and then secretly also switch to VBR and announce it at the next Apple
conference!)
~~~
SwellJoe
So...um...Apple and Amazon have been shipping VBR files for _years_.
~~~
swang
I am not sure if there's confusion or what but this is literally discussed in
the article. So I am not sure what you're arguing exactly. The article
discusses how the VBR files have been encoded with minimum bitrate constraints
in the fear that someone will make a big deal out of it if it dips "too low"
~~~
SwellJoe
VBR with a mininum bit rate != CBR.
And, to be clear, the article is making a guess that Amazon or Apple are
imposing a minimum bit rate to ensure some lower bound on file size. I don't
think there is really any solid evidence that Amazon or Apple are making
decisions based on trying to make file sizes bigger to convince consumers
they're getting "more value". I took exception to the premise, which is why I
commented above. I don't believe Apple and Amazon are making decisions based
on trying to increase file sizes, and I find it weird that the article suggests
they are. I believe they are trying to maximize audio quality at smaller file
sizes. Evidence seems to indicate that is what is happening.
And, we've come full circle to the point of my initial comment. I don't think
the argument he's making about maintaining large file sizes is backed by
evidence or a particularly good understanding of consumer behavior/preference.
I do think his guesses about the Netflix algorithm are interesting, but his
digression into consumer behavior is less so, IMHO.
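The distinction drawn above, CBR versus VBR with a bitrate floor, can be sketched as a check over per-segment bitrates. The classifier and its numbers below are purely illustrative, not any store's actual logic:

```javascript
// Classify a sequence of per-segment bitrates (in kbps).
// CBR: every segment at the same rate.
// Constrained VBR: rates vary but never drop below an agreed floor.
// Unconstrained VBR: anything goes.
function classifyRateControl(bitrates, floorKbps) {
  if (bitrates.every(b => b === bitrates[0])) return "CBR";
  if (bitrates.every(b => b >= floorKbps)) return "constrained VBR";
  return "unconstrained VBR";
}

console.log(classifyRateControl([256, 256, 256], 192)); // "CBR"
console.log(classifyRateControl([256, 224, 320], 192)); // "constrained VBR"
console.log(classifyRateControl([256, 96, 320], 192));  // "unconstrained VBR"
```

A store worried about the "dips too low" perception only has to publish and enforce the floor; nothing about that requires full CBR.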
------
shmerl
I'm waiting for something like Daala + Opus starting being used by huge
services like Netflix. Youtube already uses Opus.
Apple? It will take them another 50 years to start using free codecs.
UPDATE: A post on IETF blog about standardized free video codec effort:
[https://www.ietf.org/blog/2015/09/aiming-for-a-
standardized-...](https://www.ietf.org/blog/2015/09/aiming-for-a-standardized-
high-quality-royalty-free-video-codec-to-remove-friction-for-video-over-the-
internet/)
~~~
gillianseed
I'm somewhat confused, is this a different effort than the royalty free codec
which is being developed by Google, Amazon, Netflix, Microsoft, Mozilla,
Cisco, Intel under the name 'Alliance for Open Media' ?
~~~
shmerl
Regarding the video codec, it's the same thing, except one in IETF is the
actual engineering group, and that AOM is more of an administrative one that
synchronizes all the bureaucratic stuff probably (legal as well I guess).
UPDATE: See here:
[http://xiphmont.livejournal.com/67752.html](http://xiphmont.livejournal.com/67752.html)
~~~
gillianseed
Ah, thanks, makes sense.
------
hanklazard
"But I suspect that was a problem.
You see, it would probably be difficult to sell those VBR files — some of
which were quite a bit lower than 256 Kbps and a few even lower than 128 Kbps
— because customers might perceive a loss of value."
The vast majority of customers do not care about the actual Kbps, as long as
the sound quality remains above certain standards. Just market the different
quality levels at different prices and most people would never think twice
about it (and most would choose the cheapest version).
~~~
ohitsdom
If enough "audiophiles" repeat the claim that Apple or Amazon has worse audio
quality, it could seriously hurt their brand's reputation.
~~~
dexterdog
Isn't Apple + Beats enough to convince people that Apple is not about audio
quality?
~~~
eli
Isn't the popularity of Beats (not to mention default iPhone headphones) proof
that Good Enough is fine for most people
------
mrdrozdov
Major question about this comment.
> They all have the same server farms. Owned by Amazon, no doubt. And there
> aren’t any technical hurdles. It’s just more computation.
Does Apple use Amazon's servers? I thought Apple ran its own hardware/data
centers. I've definitely heard war stories of Apple towing trucks full of
racks into the desert so they could bump their capacity for cheap.
~~~
olau
Apple's contemplating building a server farm not terribly far from where I
live, so no, I don't think they're using Amazon. I think that comment is
supposed to mean that they all have access to big clusters, not that it's the
same clusters.
~~~
JustSomeNobody
All they need is a half dozen powermac supercomputers.
But seriously, could they not be using Amazon until they finish building out
their own farm?
------
ksec
There used to be a group of people who cared about bitrate. They wanted 64kbps
audio that sounded better than MP3 at 128Kbps, which to this day still isn't
possible. Be it AAC, HE-AAC, Vorbis, or the new Opus, despite the hype every
time a new codec arrived.
There used to be a group of people who wanted a codec that matched lossless
quality at 256Kbps to 320Kbps. Personally I think MPC (Musepack)
accomplished it. And it is patent-free as well, since it is based on MP2. But
the codec never caught on in the hardware world. Meanwhile AAC does about just as
well @256kbps despite being more complex.
That was in the Napster -> iTunes download era. Then time passed, and both groups of
people lost interest, mainly for the same reason: both groups wanted to store
as many music downloads as possible. The 1st group didn't mind a little quality
loss, the 2nd group wanted near-perfect quality @256Kbps. However, HDD prices
dropped to a point where the 1st group didn't mind storing files at ~256kbps and the 2nd
group would simply store them as lossless FLAC.
Then we come to the age of streaming, which doesn't necessarily mean only
Apple Music, Spotify, and the like; the largest music streaming service is probably
YouTube. People don't download anymore, they just click and play on YouTube.
It turns out, I think, that we have reached the stage of "good enough", whether it is
audio or video. With video, we can get a huge improvement if we smooth out the
noise / grain details. Our broadband speeds continue to improve, and we will have
G.Fast & VDSL2, the next generation of DSL broadband tech. Most kids or
youngsters of this generation don't care about audio / video quality as much;
they would rather have instant and easy access.
------
azinman2
Ok everyone is stuck on the bitrate argument and consumer choice. However I'm
more focused on if this plan is even possible.
Could Netflix really want to throw so much money at transcoding so many ways?
Are there various tricks to do this at a reasonable cost? Like grab random 10
seconds across 15 points in a movie and try just that? Work with top n most
popular first? Sort by existing biggest movies?
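One cheap way to implement the "grab random 10 seconds across 15 points" idea is to jitter one sample within each of n equal strides of the runtime. A minimal sketch; the function name and the jittered-stride choice are mine, not anything Netflix has described:

```javascript
// Pick n sample offsets (in seconds) spread across a title, for running
// cheap trial encodes instead of transcoding the whole movie each time.
function sampleOffsets(durationSeconds, n, clipSeconds) {
  const stride = (durationSeconds - clipSeconds) / n;
  const offsets = [];
  for (let i = 0; i < n; i++) {
    // one jittered sample per stride, so clips aren't perfectly periodic
    offsets.push(Math.floor(i * stride + Math.random() * stride));
  }
  return offsets;
}

// e.g. fifteen 10-second clips from a two-hour movie
console.log(sampleOffsets(7200, 15, 10));
```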
~~~
lnanek2
Doing the transcodes is pretty much a drop in the bucket for Netflix. Even
transcoding the same movie 1000x at full length is pretty meaningless to them
since it is a fixed cost against their library size.
What they really care about is things that are multiplied by the number of
users they have. Saving 5% of bandwidth on 100k users is very meaningful to
them. So doing extra transcodes to figure out what to send is very valuable.
They've come and given talks at Cloud dev meetups and just an unexpected bump
in bandwidth or delay in server response time causing traffic to back up is
enough to knock over their servers and they do things no one else would even
think of, like having their clients upload code to their servers to batch
requests together in the optimal format for the clients, just to reduce
bandwidth and load on their network.
------
kevin_thibedeau
> I would bet money that Amazon ran into this same conundrum with the
> unconstrained VBR mode of the LAME MP3 encoder which they use.
Lame has always had a way to set minimum and maximum bounds on the VBR bit
rates. I would bet that Amazon has at least one employee who knows this. I
used to use this a lot with a hardware player that couldn't handle VBR above
224 kbps.
------
KaiserPro
Why would apple et al follow?
Unless there is a noticeable effect on quality or streaming ease, consumers
won't care.
You have to remember that most people can't/won't tell the difference between
Blu-ray and DVD, let alone a bitrate change. More importantly, their TVs have lots of
silly effects that actively fuck with the picture (sharpening, aspect ratio
stretching, oversaturation, active motion, and other horrid "enhancements").
The only reason netflix et al is a thing is because of the content, not the
platform. (just look at how shit iTunes is to use) You can make the most
wonderful interface in the world, but if there is no content, there is no
point.
Streaming does cost money, but that's not the main cost of business. Most cost
comes from licensing the content in the first place. (Then paying all your
staff to do fancy things)
Seriously, bandwidth is pretty cheap compared to the cost of buying the license
to broadcast a top-rated movie. (A high-ranking movie is easily a few
$million; a custom TV series is anywhere between 1 and 30+ million for a
season.)
~~~
johngunderman
The article seems to focus too much on the consumer side of the bandwidth
equation. I think the real win for Netflix is the aggregate egress bandwidth
savings from their DCs. If Netflix can halve their bandwidth (as the article
seems to claim) without any appreciable loss in quality, they've just saved
substantially in the infrastructure and peering contracts needed to deliver
their content. I have no idea how much money Netflix currently spends on
bandwidth/CDN, but I'd guess it's certainly in the 100s of millions. I can
imagine that Amazon and Apple would be very interested in emulating those
savings.
------
sbouafif
It's completely normal for Netflix to work on that and end-consumers won't see
a difference.
That's exactly what's been done by illegal release groups (pirated content)
which are very picky when it comes to time to release (encoding/sharing) and
do their best to encode fast enough while having a good viewable quality.
When it comes to encoding, even for a large library like Netflix's, the time to
encode is always less than the time/bandwidth saved while sharing/streaming.
As of now, Netflix 1080p raw content (not transcoded) is delivered at a
bitrate of 5200kbps/5900kbps with no differences between the content (animated
or live action - here I compared BoJack Horseman to The Ridiculous 6). While
many (or even all) high quality release groups encode animated bluray at
around 3000Kbps (1080p) (around 5500kbps for very high quality) while live
action is encoded at around 11.0Mbps. The same difference is applied when the
content is capped from TV and then encoded.
~~~
paulmd
This also varies by the target audience.
Stuff that is released via publically-tracked torrent (i.e. for mass-market
audiences) often targets around 1.5-2 GB for a feature-length movie in 1080p.
In contrast, releases aimed at Usenet (the technical crowd) are often 6-7 GB
and sometimes as large as 11-13 GB for the same movie.
On the other hand the situation is much more equitable for audio. Lossless
audio torrents are pretty common even in the torrent world, and due to the
typically greater number of torrents available the overall selection of
lossless files is probably at least as large as on Usenet.
I would assume that private trackers tend towards higher-quality releases.
------
grandalf
As more video moves to higher resolutions (such as 4K) the amount of bandwidth
wasted by inefficient encoding increases exponentially.
While consumers only care if the quality is "acceptable", it's pretty easy to
tell the difference between a crisp 4K picture and a 1080p picture, and also
easier to see encoding or bitrate artifacts.
So I think this is probably an attempt to improve the margins a bit on content
delivery costs without sacrificing quality.
Netflix has also embraced 4K content with its original series, so it is in a
unique position to leverage the shift to higher resolution content for maximum
profitability.
~~~
lern_too_spel
I see it more as an attempt to offer better quality to people with crappy ISPs
as they expand to more countries. Previously, they would have seen a low-res
Bojack, but now that Netflix can decide that a full HD encode of Bojack can
fit in a smaller bitrate, those people will see a high-res Bojack. It might
even be related to the T-Mobile announcement that allows people to watch
Netflix without eating into their data caps as long as the bitrate is capped.
------
discreditable
People get really excited about this revolutionary "quality based" encoding
and to me it just sounds like -crf in x264. For trying to hit a constant
bitrate, I've heard of people performing a -crf pass, then taking the average
bitrate of the result and using that for a constant quality encode.
With this method, you let the encoder figure out the bitrate for the quality
target you want to hit, then you can use that bitrate in a constant quality
encode if you like.
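The recipe above (run a `-crf` pass, then reuse its average bitrate as a two-pass target) reduces to one computation once the CRF output's size and duration are known. A sketch; the function name is mine:

```javascript
// Average bitrate (kbps) of an encoded file: total bits over duration.
// Feed this back in as the target rate for an ABR/two-pass encode that
// should land near the quality of the original CRF pass.
function abrTargetFromCrfPass(fileSizeBytes, durationSeconds) {
  return Math.round((fileSizeBytes * 8) / durationSeconds / 1000);
}

// e.g. a 90 MB CRF output of a 10-minute clip
console.log(abrTargetFromCrfPass(90 * 1024 * 1024, 600)); // 1258 (kbps)
```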
------
jordache
author's point is weak. users of netflix is not concerned with knowing what
bitrate the video is. It just needs to have an acceptable quality.
~~~
sangnoir
> users of netflix is not concerned with knowing what bitrate the video is
That's part of the author's point: if netflix can halve their bandwidth
without users noticing, they will earn massive savings. Apple & Amazon will
also want in on the savings (being competitors and all)
~~~
jordache
That must have been a nuanced point. He put much more emphasis on the
strategy of differentiation via bitrate.
------
JustSomeNobody
Consumers don't understand bitrate for videos. They only care if they can
watch 1080p on their 1080p TV. It doesn't occur to them that resolution is
only a minor player in digital quality.
------
HappyTypist
Let's assume that consumers even know about or care about bit rate. Apple and
Amazon could offer two downloads, VBR and CBR.
------
nickpsecurity
Really neat stuff. Can't wait to see it in FOSS software so we can save some
space in our video collections. :)
~~~
brigade
OSS encoders (x264, Theora, VP9, Daala, Vorbis) already tend to have a
constant quality rate control mode that they'd very much prefer you use unless
you have an actual reason to need a specific bitrate. I'm always surprised at
how many people try to reinvent it via abr...
Netflix (and streaming services in general) on the other hand needs known
bitrates with known constraints so their bandwidth estimation can work
correctly without hiccups. Your personal video collection does not.
~~~
nickpsecurity
So, it only affects bandwidth and not storage space?
~~~
brigade
No - the point is that true constant quality has (almost) no constraints on
local bitrate, so one section of a movie might be twenty times the bitrate of
another section. Online streaming services continually estimate the current
available bandwidth, then choose from a selection of pre-encoded streams to
download. For this to work well, the selection must know the maximum local
bitrate of each stream to match the estimation. If this local maximum isn't
known or constrained, you get buffering because you're attempting to download
a stream that's actually currently twenty times more than the available
bandwidth.
Whereas your personal video collection probably isn't being streamed over any
link slower than several hundred MBit/s, which is more than enough for
anything short of intermediate codec bitrates, plus significant buffering
doesn't count against any data caps.
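The selection step described above (match the bandwidth estimate against each variant's known peak bitrate) might look like the sketch below; the bitrate ladder and names are invented for illustration:

```javascript
// Pick the best pre-encoded variant whose peak (local max) bitrate fits
// under the current bandwidth estimate; fall back to the smallest
// variant rather than stalling when nothing fits.
function pickStream(variants, estimatedKbps) {
  const fits = variants.filter(v => v.peakKbps <= estimatedKbps);
  const pool = fits.length
    ? fits
    : [variants.reduce((a, b) => (a.peakKbps < b.peakKbps ? a : b))];
  return pool.reduce((a, b) => (a.peakKbps > b.peakKbps ? a : b));
}

const ladder = [
  { name: "480p",  peakKbps: 1750 },
  { name: "720p",  peakKbps: 3600 },
  { name: "1080p", peakKbps: 5800 },
];
console.log(pickStream(ladder, 4000).name); // "720p"
console.log(pickStream(ladder, 1000).name); // "480p" (fallback)
```

This is why true constant-quality streams are awkward for streaming services: without a known peak per variant, the filter step above has nothing reliable to compare against.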
~~~
nickpsecurity
Makes sense. Thanks for the detailed explanation.
------
thecosas
Love the closing sentence from this article:
_I don’t know. It’s hard to predict because consumers… well… we’re fucking
stupid._
------
reiichiroh
Are they just using H265 HEVC?
| {
"pile_set_name": "HackerNews"
} |
Announcing the C++ FAQ - fafner
http://isocpp.org/blog/2014/03/faq
======
xjh
[http://yosefk.com/c++fqa/](http://yosefk.com/c++fqa/)
~~~
fafner
The FQA is silly and out of date.
| {
"pile_set_name": "HackerNews"
} |
Office UI Fabric - BobNisco
https://github.com/OfficeDev/Office-UI-Fabric
======
mbesto
> Does not support IE8. [0]
Thank you MS, this is well-needed ammo.
[0] [https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/gh...](https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/ghdocs/GETTINGSTARTED.md#supported-browsers)
~~~
gtk40
But supports old Safari on Windows and no Firefox on Android?
~~~
acdha
Safari on Windows isn't terribly surprising since it's probably based on a set
of core features they need and there's a long list of things which are in old
Safari but not IE8:
[http://caniuse.com/#compare=ie+8,safari+5.1](http://caniuse.com/#compare=ie+8,safari+5.1)
Firefox on Android seems like it might just be something which they don't see
enough demand for to make it an officially supported browser. It'd be
interesting to see whether it actually works on office365.com.
------
jbrantly
For those looking for examples of what it looks like:
[https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/gh...](https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/ghdocs/FEATURES.md) [https://github.com/OfficeDev/Office-
UI-Fabric/blob/master/gh...](https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/ghdocs/COMPONENTS.md)
~~~
jasonkester
Hmm... Not quite the example I was hoping for. There are some cool looking
components in there, so naturally I wanted to dig in to the source and see how
complex the markup needs to be to pull that off.
Turns out it's actually pretty simple. Just include this IMG tag:
[https://camo.githubusercontent.com/6f327f2c8f7c225358d52bec9...](https://camo.githubusercontent.com/6f327f2c8f7c225358d52bec9155dd5d50cfaa08/687474703a2f2f6f6475782e617a75726577656273697465732e6e65742f6769746875622f696d672f506572736f6e61436172642e706e67)
------
untog
The key part:
_Fabric solves many of the same problems that other front-end frameworks do,
in a way that is specific to Microsoft. We have our own design language and
interaction patterns that all Microsoft apps share._
This is specifically designed for people to make add-ons to Office 365 that
look like they belong as part of the software. While I don't doubt you could
use it standalone, I don't see MS advocating that you do.
~~~
jbigelow76
I don't agree, it seems to be a subtle attempt to spread the Office brand by
means of trying to make its styling more prevalent. The first line of the
release seems to point at using the UI outside of Office as well as with
add-ons:
_Office UI Fabric is a responsive, mobile-first, front-end framework for
developers, designed to make it simple to quickly create web experiences using
the Office Design Language._
~~~
oblio
Well, this is the best kind of promotion: we get quality stuff for free, they
get free exposure and also seem hip, unlike Microsoft circa 2010.
~~~
jbigelow76
Yeah I didn't mean it as a dig when I mentioned the spreading of the Office
brand. I will finally be able to build an app that doesn't linger in default
Bootstrap styling forever :)
------
tajen
How legal is it? Ok it's MIT license, but if I use a UI design, do I infringe
on Microsoft's intellectual property? Is UI design copyrightable? I have the
same question for another UI framework, which by default comes with the
creator's design guidelines: Is it enough to change the color of the header to
avoid brand confusion and be safe from infringement?
From what I can gather, UI design patents actually exist. However Apple won
against Samsung but lost a case against Microsoft, which demonstrates that
it's still important to patent UI functionality (such as the bounced scroll)
in addition to the graphical elements.
[http://patents.stackexchange.com/questions/4020/protecting-a...](http://patents.stackexchange.com/questions/4020/protecting-
a-user-interface-design-patent-and-or-copyright)
Any further answer is welcome.
~~~
icebraining
IANAL, but in some jurisdictions, there's something called promissory estoppel
- essentially, if you promise something that is expected to lead people to act
in a certain way, you can't then sue them later for doing so. Microsoft
themselves have successfully used that defense against Motorola Mobility
(though that case was relative to the price of the licenses, not whether they
could use it at all).
------
paulojreis
Looks good and seems well built.
However - like Bootstrap - it has this kind of mark-up that I'm starting to
strongly dislike:
<div class="ms-Grid-col ms-u-sm6 ms-u-md8 ms-u-lg10">Second</div>
I get that this - like Bootstrap - is nice to get a quick start and start
deploying but, as things grow, it gets harder and harder to maintain.
I'm not all for a semantics panacea but this is hard to read and, I imagine,
harder for the browser to parse. Nowadays, I'd rather be very dumb with CSS
(just one class) and let SASS handle the complexity.
In this case, I'd create a class with an adequate and meaningful name and, in
SASS, do the composition they're doing in the class attribute - @extend the
needed column definitions per media-query. I like the idea of having the
class/style composition duty done at SASS compile time and not by the browser
at runtime.
~~~
zodiakzz
Erm, you are simply misinformed. I use Bootstrap and I never use any
presentational classes, these are just provided for convenience (although a
huge amount of people abuse them).
Bootstrap provides LESS mixins like .make-row(), .make-*column() etc. to keep
your CSS semantic.
~~~
paulojreis
I am not misinformed regarding Bootstrap; unfortunately I have extensive
experience with it. :) Anyway, I should have remarked the fact that mixins do
exist and semantic class names are totally possible with Bootstrap.
I was just pointing out the kind of mark-up which appears in the example I
quoted and, typically, in Bootstrap-powered stuff. It's not a problem with the
framework, of course, but - as you said - people abuse the pre-made classes.
Regarding Bootstrap in particular, I think most people just import the
compiled stylesheet (so, no mixins & other assorted goodies).
------
ckluis
This looks like it could benefit from a parent site explaining/showcasing all
the features. What I could see so far looks like a big step up for many LOB
applications.
~~~
jbigelow76
Agreed. At first I thought it was some kind of UI/scripting add-on for Office
extensions, it took a re-read to realize it was more akin to Bootstrap and
Foundation.
------
gapchuboy
Naming hell again by Microsoft.
Why fabric?
Windows Server AppFabric [https://msdn.microsoft.com/en-
us/library/Ff384253(v=Azure.10...](https://msdn.microsoft.com/en-
us/library/Ff384253\(v=Azure.10\).aspx)
Azure has App fabric and fabric controller.
~~~
SmellyGeekBoy
Maybe a play on Google's "Material Design" ?
------
donutdan4114
Built with LESS...
What's the current state of SASS vs. LESS? It seems like a lot more CSS
frameworks are using SASS and it has more plugins, tools, mixins, etc. But I
haven't kept tabs on it in a while.
~~~
joshuacc
For a long time, Bootstrap was the flagship Less project, but they've recently
switched to Sass. Some of the increasing momentum in Sass is probably due to
libsass, a C-based Sass compiler that can be used without depending on Ruby.
(And is also _much_ faster.)
Just FYI: Neither Sass nor Less should be written in all-caps. See the
respective websites.
~~~
bennylope
libsass is - understandably - a few versions behind the Ruby implementation.
Most of the project teams I've been on working with Sass have opted for the
Ruby implementation as a consequence.
~~~
Flenser
The ruby version is on feature freeze until libsass catches up.
~~~
Flenser
citation now I'm on desktop:
> In fact, Ruby Sass will not release any new features until LibSass catches
> up. Once it does, there will be feature parity between the two moving
> forward. Soon, we’ll have the blazing speed of LibSass with all the features
> of Ruby Sass!
[http://sassbreak.com/ruby-sass-libsass-
differences/](http://sassbreak.com/ruby-sass-libsass-differences/)
------
nailer
I posted earlier in this thread that this looks like the first time you can
use Segoe UI legally in a web app: that's wrong. Fabric CSS doesn't actually
include the webfonts.
[https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/gh...](https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/ghdocs/FEATURES.md#typography)
~~~
gordjw
Seems to be licensed on a "per core" basis from Monotype.
Perhaps I'm missing something, but that makes no practical sense to me.
Typography.com's model of per site licensing is much more understandable.
------
rw2
Why is there no demo website? No front-end framework should be without a
component (listing each component) and a demo section. This is shoddily done
compared to Google's Material Design Lite.
------
aaronbrethorst
The fourth and fifth results on Google for "fabric" are:
Fabric - Twitter's Mobile Development Platform
https://get.fabric.io/
With Fabric, you'll never have to worry about
tedious configurations or juggling different
accounts. We let you get right into coding and
building the next big app.
Welcome to Fabric! — Fabric documentation
www.fabfile.org/
Fabric is a Python (2.5-2.7) library and
command-line tool for streamlining the use
of SSH for application deployment or
systems administration tasks. It provides ...
~~~
dragonwriter
> The fourth and fifth SERPs on Google for "fabric" are
A nitpick, perhaps, but SERP is "search engine results page" [0] -- a page of
results from a search engine. Those are the fourth and fifth results -- all on
the first SERP -- not the fourth and fifth SERP.
[0] see, e.g.,
[https://en.wikipedia.org/wiki/Search_engine_results_page](https://en.wikipedia.org/wiki/Search_engine_results_page)
~~~
aaronbrethorst
doh, thanks for the correction!
------
urs2102
This would definitely benefit from having a link to a demo or at least to a
webpage implementing the components rather than asking users to download and
then go through a process to try out samples to view them.
On the positive side, it's good to see no support for IE8.
~~~
nightski
If you go to the "Features" link there is a screenshot of every component.
------
metaphorical
Any demo link?
~~~
nailer
[https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/gh...](https://github.com/OfficeDev/Office-UI-
Fabric/blob/master/ghdocs/FEATURES.md)
------
toolz
I don't do much front-end work, so maybe I'm just missing something, but are
all of these frameworks really that much different? If this just for people
who haven't learned one of the other frameworks or is there a compelling
reason to switch?
~~~
vukers
I think all the front-end frameworks are pretty similar, but if you are
building Office add-ins, or some other application that lives in that
ecosystem, then there is value in adhering to consistent UI elements.
I think there may be additional insights on their announcement post:
[https://blogs.office.com/2015/08/31/introducing-office-ui-
fa...](https://blogs.office.com/2015/08/31/introducing-office-ui-fabric-your-
key-to-designing-add-ins-for-office/)
------
CephalopodMD
So this is M$'s response to Bootstrap which implements M$'s response to
Material Design? Not bad! also RTL font support is nice.
~~~
daok
Do you really need to use the dollar sign? That looks so childish.
~~~
jbigelow76
Without the dollar sign the potential for error is obviously far greater, for
instance somebody might have thought he was referring to Martin Scorsese.
~~~
jevgeni
Or Multiple Sclerosis.
------
brokentone
I'm having a little trouble figuring out exactly what this does, but while I
do... I'm recalling the wonderful history Microsoft has with web dev --
Frontpage, IE 5.5, IE 6...
~~~
vonkow
Don't talk smack about IE 5.5, unless you think AJAX was a bad idea.
| {
"pile_set_name": "HackerNews"
} |
The void of undefined in JavaScript - Nassfyr
http://shapeshed.com/the-void-of-undefined-in-javascript/
======
ars
The correct solution for this problem of undefined is to do nothing!
If someone redefined undefined and it causes a problem - too bad! Some
problems are just too stupid to worry about.
~~~
CodeCube
indeed ... I'm surprised no one has mentioned "wat" yet :P
[https://www.destroyallsoftware.com/talks/wat](https://www.destroyallsoftware.com/talks/wat)
~~~
tripzilch
Absolutely brilliant/hilarious video.
------
drostie
> _Let's say someone is using your library within that function and you
> reference undefined. You get the string "oops". Oops indeed._
That is incorrect. It might be correct if the word 'library' was replaced by
'code snippet' to indicate a copy-and-paste-of-your-code issue. But if you've
created a _library_ then your functions are over in some other file, where
`typeof undefined === "undefined"`.
The proper attitude here is the same as the Python attitude towards not having
`private` attributes: "If some other programmers want to do something crazy
with my code, that's their prerogative. If it blows up in their faces, that's
their problem."
~~~
kaoD
Even if it was in the same file, as long as it's in a different function it'll
work.
If `undefined` is an argument name, it's only bound inside that function and
nowhere else.
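Both points (a shadowed `undefined` is confined to the function that declares it, and library authors exploit exactly that) fit in a few lines. Note that while ES5 made the global `undefined` non-writable, shadowing it locally like this still works:

```javascript
// A parameter named `undefined` shadows the real one only inside this
// function's scope; callers and other functions are unaffected.
function oops(undefined) {
  return typeof undefined; // "string" when called as oops("oops")
}

// The classic defensive idiom (seen in older jQuery builds): an IIFE
// declares `undefined` as a parameter and never passes an argument, so
// inside it `undefined` is guaranteed pristine whatever outer code does.
const inside = (function (undefined) {
  return typeof undefined;
})();

console.log(oops("oops"));      // "string"
console.log(inside);            // "undefined"
console.log(typeof undefined);  // "undefined" -- outer scope untouched
```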
------
marijn
I find the cult of '_oh my god you can redefine undefined_' hilarious.
Yes, you can. You can also redefine _Array_, _Object_, and so on. If you're
writing a script to intentionally disrupt a system, _for(;;);_ will also
do. This doesn't happen in sane environments, so it really isn't a problem.
The Crockford quote (characteristically dogmatic, to the effect of '_void
means something different in JS than in Java, so AVOID VOID!_') is also a
kicker.
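For what it's worth, the `void` operator Crockford warns against is the one construct that always yields the real undefined, no matter what has been shadowed or (pre-ES5) reassigned:

```javascript
// `void expr` evaluates expr and yields the genuine undefined value.
// Minifiers emit `void 0` for exactly this reason: it is short and
// immune to any shadowing games played with the name `undefined`.
function shadowed(undefined) {
  return [typeof undefined, typeof void 0];
}

console.log(shadowed("oops"));     // [ 'string', 'undefined' ]
console.log(void 0 === undefined); // true (in a sane scope)
```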
------
lubomir
> […] if you throw your scripts out there on the web you've got to expect that
> somewhere, at some time someone is going to do it […]
And then it will be that person's problem. Their code would be wrong, not
mine. By not making sure my code works when undefined is broken, I would be
helping them to realize they have a possible bug in their codebase and that
they need to fix it.
~~~
PommeDeTerre
This is one among many serious, and unjustifiable, flaws with JavaScript. It's
the kind of issue that should never even arise with anyone's code in the first
place, regardless of who wrote the code, because the language and its
implementations should not allow it to happen.
And, yes, we know that other languages have flaws, too. But aside from perhaps
PHP, the flaws in other languages are almost never as outright stupid as they
are with JavaScript.
~~~
coldtea
> _And, yes, we know that other languages have flaws, too. But aside from
> perhaps PHP, the flaws in other languages are almost never as outright
> stupid as they are with JavaScript._
Citation needed. There are tons of languages with huge fucking flaws to blow
your code and kick your dog.
PHP and Javascript are relatively harmless (if a little brain damaged). At
least you don't get buffer overflows using them.
You think C++ has a better design, for example? People forgot how bad Python
used to be, pre 2.4?
~~~
pgcsmd
No, the OP was correct - other languages have flaws but nowhere near the level
of Javascript. You pick on C++ which is a design by committee monstrosity but
it is nowhere near as braindead as Javascript. I mean, C++ allows you to
include other code! C++ doesn't allow you to redefine the constants of the
language. Actually, C++ _has_ constants!
Nope, as much as I like to rail against the sins of C++, it is a paragon of
design virtue next to JS. I've been programming for three decades now and JS
is the worst language I have ever seen. I use JS a lot in my day job and it
has parts I really like (object constants, for example) but really, as a piece
of language design it is really the pits.
~~~
aboodman
#define ?
~~~
PommeDeTerre
That's not really a C++ construct, as much as it is a hold-over from C, kept
around to retain compatibility with existing code.
If you're writing new C++ code, you're in no way forced to use it. You can use
constants or inline functions to achieve the same result in almost all cases.
------
benaiah
Isn't this whole issue just a minor version of the problem Ruby has with
monkeypatching? The fact that a Ruby guy can define `method_missing` to allow
for bare strings shows how Ruby is cool (though you should never do that), but
the fact that you can redefine `undefined` in JS shows how JS is stupid
(despite the fact that you should never do that). I don't understand the
dichotomy.
~~~
chrisrhoden
People don't typically inject frequently changing, unvetted advertising code
into their ruby runtimes. Generally, the most frequently that the ruby code in
your runtime changes is each deploy.
~~~
benaiah
Granted, but that's an incidental problem, not one arising from JS being a bad
language. Also, I highly doubt there is much advertising code that changes the
value of "undefined" \- certainly not any I've encountered.
This really boils down to "my code won't act the same way if I give unfettered
access to my environment to unvetted code" which is true in almost any
situation. If you're having problems because "undefined" is being redefined,
you have bigger and more fundamental problems. There are a lot of bad things
about JS, but this is not one of them - its overly nitpicky and completely
unfair. This same capability (of being able to redefine almost anything) is
lauded as part of Ruby, but when it could theoretically cause an easy-to-avoid
problem in JS, it's just more proof that JS is a shitty language.
Sorry if I'm coming across as combative - I don't mean to. I just think this
whole snide criticism of JS for every little thing is silly and unhelpful, and
has more to do with a superiority complex than actual technical issues.
~~~
gwright
In general you are correct, redefinition in Ruby can be abused, but it isn't
so easy to abuse 'nil' in Ruby as it is to abuse 'undefined' in Javascript.
In Ruby, nil, is a keyword so you can't assign to it nor can you use it as a
method parameter.
------
mkohlmyr
I tend to read these sorts of articles as "don't do stupid things". Who in
their right mind would name an argument undefined? (or use a library by
someone who would do so) The operative sentence in the article to me is "can
be avoid if you understand how it works". On an unrelated note I'm not sure if
that's a typo or a pun.
------
ChrisAntaki
Has anyone ever seen "undefined" be redefined, on one of their professional
projects?
~~~
arethuza
A quick search of github gave this line:
var undefined= undefined;
Which has a comment explaining that it is an optimisation based on the idea
that "defined variables are faster than not-defined ones" \- I have no idea if
that is true or not.
[https://github.com/Searle/mothello/blob/c31fc57bedd666e9da34...](https://github.com/Searle/mothello/blob/c31fc57bedd666e9da34c12d2f5068af80964899/v2/core.js)
I wonder if this counts as redefining undefined though!
The comment also suggests that jQuery does this, which seems to be true as
explained in this link:
[http://stackoverflow.com/questions/7141106/undefined-variabl...](http://stackoverflow.com/questions/7141106/undefined-variable-in-jquery-code)
"undefined in the jQuery code is actually an undefined parameter of a function
wrapping the whole code"
Presumably if you have your own local undefined that is guaranteed to be
undefined then you are safe from someone else setting it to be something
silly.
Edit: The jQuery sources are all wrapped in:
(function( window, undefined ) {
...
})(window);
~~~
moron4hire
If you have to guard against someone doing something silly, then you are never
safe. The problem is not the language at that point, it's the environment.
~~~
MaulingMonkey
To err is human -- you're never safe. Even if we say people are the root of
the problem, fixing their fundamentally imperfect nature is currently beyond
the capabilities of science, whereas doing something like enforcing constants
at a language level is not. Education will reduce, but not eliminate, the
chance of making such a mistake.
------
wldlyinaccurate
jQuery does it right:
(function( window, undefined ) {
// ...
})( window );
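As a small sketch of why that wrapper works: `undefined` is an ordinary identifier rather than a keyword, so in non-strict code it can be shadowed by a local binding, and a never-assigned parameter restores a trustworthy value (function names below are invented for illustration):

```javascript
// Sketch: `undefined` is an ordinary identifier, not a keyword, so in
// non-strict code a local binding can shadow it.

function brokenCheck(x) {
  var undefined = 42;      // legal in sloppy mode: shadows the real value
  return x === undefined;  // now compares against 42, not "no value"
}

// The jQuery-style wrapper: a parameter named `undefined` that is never
// passed an argument, so inside it reliably holds the real undefined.
var safeCheck = (function (undefined) {
  return function (x) { return x === undefined; };
})();

console.log(brokenCheck(void 0)); // false -- the shadowed binding lies
console.log(safeCheck(void 0));   // true
console.log(safeCheck(42));       // false
```

(Since ES5 the _global_ `undefined` property is read-only, but local shadowing like the above still works, which is why the wrapper idiom stuck around.)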
------
scrabble
There are a lot of potential pitfalls in coding JavaScript. Redefining
undefined is only one of many.
One of the nice things about JavaScript is that it gives you the ability to
accomplish things many different ways, but if someone uses that to shoot
themselves in the foot then that is something they need to correct.
------
lelf
> _If you have done more than one day's programming in any language you will
> realise that this is an important building block for programmers._
I guess my _any_ languages are different. Never realized it's important in
haskell (except it is ⊥).
~~~
thesz
Exactly. There is no void in hardware, for example. Also, there's no bottom
either.
------
aphelion
The worst thing about undefined is not its mutability but that it exists in
the first place. Trying to retrieve a non-existent attribute should throw an
error by default, not fail silently.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Language/framework recommendation for a once-upon-a-time coder - customer
Dear HN:<p>I used to code a few years ago (Bachelor's in CS) and then moved to Technology Management. I want to come back to my roots again and "develop some web apps" (just downloaded Aptana) - I know it sounds silly. What's the cleanest, easiest-to-learn language/framework I should start with? TIA.
======
Scott_MacGregor
PHP/Zend Framework is not the easiest but you might want to consider it.
Depending on what you're planning, it is scalable.
I understand Ruby has some scalability issues for large enterprise-class
applications unless you are willing to throw a ton of computer hardware at it.
| {
"pile_set_name": "HackerNews"
} |
LA to use Open Source for transportation management - arnieswap
https://www.tfir.io/2019/08/29/city-led-open-mobility-foundation-uses-open-source-to-manage-transportation/
======
oehtXRwMkIs
Good to hear, though I wish any sort of public money required public code
nationwide.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Why Can't I Downvote Submissions? - jerhewet
I've been a registered user since 2010, but I haven't been active for the past six years (give or take).<p>I've recently had copious amounts of free time to catch up on everything, and when I logged back into HN I discovered my privileges on the site had been rather severely curtailed.<p>My best guess is I've been pigeon-holed as a "troll" because some of my comments diverge from the hive-mind that used to -- and possibly still does -- make up the majority of the contributors to HN.<p>Just because I don't buy into the HN group-think doesn't mean I'm a troll, or that my opinion (read as: downvote) has no merit.<p>I haven't really wanted to downvote anything I've seen in the past nine months, but I feel a recent posting merits my downvote... but that's apparently not an option that's available to me.<p>Dunno. Maybe things have changed around here. But if I'm a long-time verified user of this site I would hope that my opinion -- even if it's a negative one -- would carry <i>some</i> kind of weight.
======
breakerbox
I think you need 500 karma or so.
~~~
jerhewet
Ah. I'm sitting at 297 right now, so that does make sense... and thanks for
clearing that up for me!
~~~
Minenash
As a relatively new person, I didn't even know anyone could downvote
~~~
eindiran
Users with >500 karma can downvote comments, not stories. No one can downvote
stories.
------
kstenerud
You can't downvote submissions; only comments.
| {
"pile_set_name": "HackerNews"
} |
Secrets of the Little Blue Box: The Best Account of Telephone Hackers (1971) - linhir
http://www.lospadres.info/thorg/lbb.html
======
Osiris
Wow, that was nostalgic. I remember when I was a kid reading about the blue
boxes. I was fascinated, but by that time the computer age had started and I
spent my time on Commodores and early IBM PCs. It all reminds me on how much
time people spend today working on jailbreaking, rooting, and otherwise
hacking their phones, consoles, and computers.
| {
"pile_set_name": "HackerNews"
} |
Don't Drown in Documentation - rbanffy
https://dev.to/grappleshark/enough-with-documentation
======
daly
Funniest bit of satirical writing I've seen in years.
| {
"pile_set_name": "HackerNews"
} |
When GitHub kills Open Source - rsaarelm
http://t-machine.org/index.php/2012/01/13/2012-the-year-of-uncollaborative-development-or-when-github-kills-open-source/
======
pilif
For every contributor to an open source project in the old days, there might
be fifty failed forks on github, sure. But for every five failed forks, there
will be one that thrives and whose commits get accepted back.
What you are seeing is both an explosion in contributions and a permanent log
of every failed contribution ever. This greatly affects your perception.
Back in the old days it was infinitely harder to provide and apply a useful
patch, so it wasn't done in nearly the same frequency. Contributions were
limited to a small circle of people motivated and skilled enough to climb the
huge hurdle.
Nowadays, creating and submitting a patch is trivial, so the hurdle is much
smaller. Hence you will get many, many more people to try and contribute,
which, because of how github works is also visible to the public for all
eternity.
At least in my case, none of my patches I sent in in the old days and which
were not accepted are visible anywhere. Heck, most of the time you'd have a
really hard time at even finding the patches that were accepted.
Github is far from killing open source. Quite to the contrary. But as
visibility increases and hurdles get torn down, you might have to adjust your
perception of reality.
~~~
turbulence
I think you have to read the article again, because you are talking about
something quite different.
~~~
pilif
Not necessarily. If a project doesn't merge a proposed patch, the patch could
simply be deemed inappropriate for the projects chosen direction.
So if a fork doesn't get merged upstream, I see this as a failed fork.
If an upstream stops working on their project and stops accepting patches, the
outlined problems can happen, but just look at any random sourceforge project
not updated since 2006. In the old days, there was practically no way for
other contributors to get back on track, but it wasn't logged for eternity
either - the project just died.
Today, Github at least provides a chance to get back on track, but, again,
your perception might be altered by the fact that on Github you don't just see
the successful reboots, but also all the failed ones?
------
jaggederest
As someone who has dealt with this, it's not as big a deal as you might think.
Most future forks are based on older forks, so all the person at the end of
the line has to do is fast forward onto the end of the branch. One FF merge,
push, end of story.
When you have bifurcations, you can do an octopus merge - git is _really_ good
at resolving these things. Very little human effort is needed except where
multiple revisions change the exact same line in different ways.
In addition to this, most patches that people submit are quite small. Even if
you have 200 people submitting patches, the odds are that most of them fall
into two categories: people fixing the same bug, and people working on
completely different sections of code. Neither is a substantial problem to
merge.
I think I can count on one hand the number of times I had to do any nontrivial
merge work on patches from contributors... And you're pretty delighted to do
it - it means they fixed something that _really_ matters.
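A minimal sketch of the fast-forward and octopus merges described above, using throwaway local repositories in place of the GitHub forks (all repo and user names are invented for the example):

```shell
# Sketch: an "octopus" merge pulling several forks together at once,
# demonstrated with throwaway local repos standing in for GitHub forks.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q -b master upstream
(cd upstream
 git config user.email a@example.com && git config user.name A
 echo base > file.txt && git add . && git commit -qm base)

# Three forks, each adding its own file on top of the same base.
for fork in B C D; do
  git clone -q upstream "$fork"
  (cd "$fork"
   git config user.email a@example.com && git config user.name "$fork"
   echo "$fork change" > "$fork.txt" && git add . && git commit -qm "$fork")
done

# B becomes the de facto maintainer: add the other forks as remotes and
# merge every head in one go. Git picks the octopus strategy automatically
# when more than two heads are merged; it only balks at real conflicts.
cd B
git remote add C ../C && git remote add D ../D
git fetch -q C && git fetch -q D
git merge -q -m "merge C and D" C/master D/master
ls    # all three forks' files now coexist
```

Because each fork touched different files, the octopus merge goes through untouched; overlapping edits to the same lines are exactly the case that still needs a human.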
~~~
adamrg
(as the author of the original post)
In theory, yes. And I never used to worry about this. But over time, in
practice, it's been a bigger problem than I think you're giving credit to.
e.g. ...
"Even if you have 200 people submitting patches, the odds are that most of
them fall into two categories: people fixing the same bug, and people working
on completely different sections of code. Neither is a substantial problem to
merge."
IME ... in practice, this is a HUGE problem. Because every one of those
developers fixes the bug in a slightly different way.
The longer time goes on without the original Author fixing it, the worse it
gets. And the cost to them - or anyone! - of sifting through the "100
variations on bug fix #123" becomes greater and greater.
Usually, you want to cherrypick individual lines and characters from 5-10 of
the best "solutions" to the bug.
If you'd avoided the "100 alternative fixes", then those "improved" solutions
would have been built on the "basic" solutions - and merging would be easy.
But because you've got to this massively-forked scenario, all of the patches
have been written independently and incompatibly.
~~~
moe
_But because you've got to this massively-forked scenario_
That may be true for the 0.0001% of projects that have so many contributors
that they need dedicated release managers and such anyway.
The remaining 99.9999% of projects are just grateful for github making
contribution so easy that they're now receiving patches at all.
------
pixelcort
It's not that hard to do this:
git remote add other_user_who_forked git://github.com/other_user_who_forked/project_name.git
from within your checkout of your fork. In one of the projects on GitHub that
I've forked, we are merging between each other without the original project
owner even being involved.
~~~
ge0rg
And this does not even need GitHub support. I'm using remote repositories to
maintain projects with >10 contributors without much effort.
------
yummyfajitas
I don't get it.
_If A disappears with merges pending … then B/C/D find they have 3 distinct
codebases, and no way within GitHub to do a simple cross-merge.
Now, the situation is not lost – if B, C, and D get in contact (somehow) and
negotiate which one of them is going to become “the primary SubAuthor”
(somehow), and they issue manual patches to each other’s code (surprisingly
tricky to do on GitHub)..._
If B, C and D get in contact via, I dunno, github messages, and pick a primary
subauthor, it's very easy to issue manual patches. If I'm B:
git remote add C ...
git remote add D ...
git pull C master
git push github master
I agree github might not have a button for this, but I'm pretty sure most
github users are comfortable with the git command line.
------
xdissent
I really did feel the same way as the author for a long time, but I haven't
yet seen any of my fears manifest in practice. The vast majority of forks die
without fanfare after serving some singular purpose. People who want to
contribute code do so more easily than ever. People who fight over ownership
of open source projects are just jerks like they've always been. There may be
more of them, or more of them are more visible now that we all use Github, but
I consider this a trivial downside of an otherwise remarkable ecosystem.
~~~
jaggederest
I've seen smooth transitions between _de facto_ ownership of projects a ton,
but never a bitter divide where both ends are actively maintained.
One pathological example is delayed_job, which has changed 'leadership' a few
times over the years. It's still pretty easy to look at the 'network' graph
and choose the endpoint you want... or just use the published gem.
~~~
xdissent
I've been the leader of a project I assumed from another guy that he assumed
from yet another guy and then the original guy even assumed leadership back
after a while. No one missed a beat. If you're involved in this community, you
are most likely capable of tracking down the "correct" fork.
Unfortunately, the "published gem" part is the one that has given me the most
trouble historically. But now that pretty much everyone uses bundler for gems
this should be a nonissue - you can even specify a branch of a fork that you'd
like to build.
------
acdha
The author has this completely backward: this is fundamentally a social
problem - the difference is that with GitHub it's actually visible. Anyone old
enough to remember the pre-DVCS era should remember chasing down patches in
bug trackers, blog posts, etc. and maintaining local forks — with the
requisite terror-inducing periodic gigantic merges. Now we've lost all of the
manual labor in that process and made it easy for anyone who wants to do
things the right way to do so – it's still possible to waste your
collaborators' time if you really want to but before it was almost a
requirement of the process.
As a minor point of craft, this also illustrates an area where more training
is needed: the problems described are most common when someone makes a fork
and keeps every single commit in a single branch. Using feature branches – and
it'd be awesome if Github started encouraging that with the fork & edit model
– makes most of the listed problems far more manageable.
------
babarock
When you hear Linus speak about his workflow when working on the kernel, he
always mentions his "Web of Trust" concept. I think the problem is not
inherent to github, but rather to the idea we have when we foolishly think of
the possibilities combining git and a social network.
The truth is, programming is still very much about people, and you need to
trust the people in order to pull their code. Trusting the people goes beyond
trusting the code. If you give me great code, then disappear or decide to make
an unmergeable fork, it will harm my project as described in the article.
On the other hand, if I get to know the people behind the pull requests, learn
to talk to them and get them to be more involved in the project, then the
risks exposed can be easily circumvented.
------
iamwil
Actually, if you fork from the main branch, you can still pull commits from
other collaborators--though I don't know if you can send pull requests to
other people, haven't tried. But it is doable. There is nothing to stop you
from making another remote branch that tracks another person's repo and share
code that way.
------
alexchamberlain
I don't agree that GitHub is killing open source. However, the author has a
point. It is hard(er) to merge into other forks, which is a shame since git is
so good at this.
I'm not criticising GitHub, their software is great, but in the next iteration,
they should consider addressing this.
------
DasIch
Most Open Source projects die. That's why CPAN, PyPI, etc. are able to have
such a huge number of packages; a significant part, if not most, has no
documentation, tests, or support, or is simply dead, which in practice is more
or less the same thing.
"In the old days" you didn't notice it as much because those projects just
disappeared, but with Github they don't; in fact, they're all over the place.
I'm not sure if this is a problem, or one big enough to be worth caring about,
but in any case Github isn't the problem.
It would be nice if authors could "archive" or "abandon" repositories which
could be filtered out on searches by default and be displayed less dominantly
on profiles.
------
6ren
To be fair, the usual consequence for a project that loses its Author is to
die.
It seems that github could facilitate the migration of an "ownerless" project
to a designated fork - including facilitating the selection of who has the
designated fork. Just support for the informal process outlined in the
submission.
It's interesting that linus deliberately avoided having a "designated fork" in
git, but instead made them all equal, and you just pull from who you trust. Of
course, in his case, _his_ fork was the socially designated one, so this was
not a problem he experienced or had to solve.
------
obtu
The commit graph (gitk --all if you are using plain decentralised git, GitHub's
network graph for the convenient everyone-github-knows online version) makes
it quite obvious which author is good at reviewing and integrating patches.
With a little bit of side-channel communication, a deficient maintainer is
easy to replace.
Also, someone who is late at merging patches won't have a lot of difficulty
catching up. If they did no divergent work at all, it's just a matter of
picking the best integrator and fast-forwarding.
------
AdrianRossouw
I don't think projects faltering out due to the bus factor [1] not being taken
into account is github's fault.
set up a team repository and give multiple people commit access? Team/project
accounts should probably be more of a standard feature of open source projects
on github, once things get beyond a certain point.
[1] <http://en.wikipedia.org/wiki/Bus_factor>
------
riosatiy
Wow, no offense dude, but this was a really crappy post. This problem has
existed for all eternity, just as other people are stating: there are a lot
more projects, more people contributing, and more transparency about them than
before. And choosing that title. It just seems you are writing one of those
"Look at me! I am writing something controversial"-articles
~~~
chimeracoder
Welcome to HackerNews! Since you're a new user (green name), a friendly
explanation of why you seem to be getting downvoted: At HN, we try and
encourage respectful discourse, even when we dislike or disagree with what is
being said. If you'll look at the other articles, other people seem to agree
with you that the article is poorly written and that the title is sensational,
but they aren't being downvoted because the way they phrase those complaints
comes across as less insulting or _ad hominem_.
~~~
riosatiy
Excuse me, I will use better phrasing next time. Thank you for the friendly
explanation.
------
potomak
It's a little bit extreme but I understand your point of view. Anyway I think
GitHub helps open source more than it kills it.
------
aliguori
I think the author is missing something that makes Open Source work that few
people appreciate. It's a fundamentally lossy development model. A certain
number of patches/features end up in /dev/null for any Open Source project.
You can think of each "fork" as a new start-up trying out a new idea. But
instead of reinventing the entire world, they get to start with a functioning
product. The vast majority of these start-ups will fail but the ability to
experiment (and fail) with forking is fundamentally what makes Open Source
development better (at least IMHO) than proprietary development.
A lot of people look toward Open Source development thinking that there's a
lot of wasted development and that that's a problem worth solving, but that's
like the government trying to make 100% of businesses successful.
------
robot
This has nothing to do with github. Github is what it is as the name suggests,
it's a convenient hosting platform for git projects. The fix/merge issues are
between developers and has been around since open source first started. It's
people issues, not github.
------
powertower
Here is a question:
How do you handle merging someone else's patch into your dual-licensed project?
The GitHub hosted code is GPL, but your other code license is for commercial
projects (you charge a fee for the NON-GPL license).
Obviously, the patch is based on the GPL project, but you don't have copyright
on that patch (to be able to merge it into the non-GPL codebase).
Do you ask the contributor to give you the copyright?
What if it's a simple bug fix that's only a few characters?
What if the contributor says no?
Is there a way to make this happen smoothly?
~~~
desas
1. Yes.
2. See [http://www.softwarefreedom.org/resources/2007/originality-re...](http://www.softwarefreedom.org/resources/2007/originality-requirements.html)
3. If it can only be written one way then it's not copyrightable (IANAL).
4. Canonical and others require that you fax/email a signed document to them, e.g. <http://www.oracle.com/technetwork/community/oca-486395.html>
------
cykod
This reminds me of the famous Churchill quote: "It has been said that
democracy is the worst form of government except all the others that have been
tried."
Most of the article is true, but GitHub is also leagues above anything else
out there, and certainly leagues above the mailing list with hand-crafted
patches by de-demonizing forks and turning projects more into a meritocracy.
There is certainly room for improvement, but I think it's a step in the right
direction.
------
wavetossed
The very fact that both forks are available on github means that you can check
out both forks, then merge changes locally. After that, you can use the merged
code to create a new github project that is not a github fork of the original
ones.
If you really have a tangled web of failed forks, this is the way to fix it by
starting afresh with a merger of the best forks.
~~~
turbulence
From your comment I see you have not gone through the "fun" of merging 3+
projects with varying degrees of change.
------
astrodust
What would go a long way towards fixing this is having an organization plan
that's free, but only allows public repositories. That way the code could be
entrusted to more than a single individual as it is now. Commit rights are one
thing, but having ultimate control over the repository is usually limited to
one person.
------
timkeller
Ha! If anything GitHub is doing more to keep Open Source development alive and
healthy than any other company.
------
omarqureshi
I fail to see a better alternative unfortunately.
The only way that you can stop this is by having multiple maintainers for a
project so that projects don't just die if the main maintainer is hit by a
bus.
And yes, this crappy, albeit well known situation is not just specific to
Github.
------
keeran
Was this written (conceived) before the new pull requests mech & UI was
introduced?
How can the dead end of an ignored patch submission be better than what we
have now?
------
av500
reading that makes me wonder how open source projects existed at all _before_
GitHub...
GitHub exposes the once private forks that people had lying around on their
HDDs, so I count that as a plus.
As for developing open source in a collaborative way, that goes much further
than just a git infrastructure, there's mailing lists, patch reviews, roadmap
discussion etc... exactly like in ye olde days
~~~
adamrg
(as author of original post)
Agreed. But GitHub did a lot more than just that - across the board it removed
the barriers to collaboration (I used to run a few projects on SourceForge,
and contribute to others; the ease of GitHub was like a breath of fresh air).
It got people excited and feeling free and able to collaborate.
...and so (I suspect) we're today _less tolerant_ of unexpected barriers to
collaboration. GitHub gets you hooked, then makes it extremely difficult to
manage the "handover" part of a project (something that SF - for all its
failings - handled pretty well).
The projects that die this way may well never have existed without GitHub in
the first place - but that's not an excuse to just kill them off under a
burden of maintenance crud.
~~~
hunvreus
I think your post fail to recognize a few things. I am not going to point out
the various other argument, however I think you'd need to acknowledge the fact
that first, there are much more people contributing to OSS nowadays than
during the "SF's days". Moreover, with the acceleration of online
collaboration in all its forms, we are overwhelmed with new trends that are
sometimes hard to interpret. I genuinely believe that these trends tend to
self-regulate over time and that users, in the end, learn to better
leverage the tools they are introduced to.
------
6ren
google cache:
[http://webcache.googleusercontent.com/search?q=cache:http://...](http://webcache.googleusercontent.com/search?q=cache:http://t-machine.org/index.php/2012/01/13/2012-the-year-of-uncollaborative-development-or-when-github-kills-open-source/&strip=1)
------
biafra
What does this have to do with git_hub_? Isn't this a "problem" with git?
I don't think it is a problem at all because without git (or hg, bazaar etc.)
or github we wouldn't even have such a thriving open source and open
development community. Collaboration didn't get harder with DVCS it got
easier.
~~~
darb
His issues sound more like issues with open source project governance. Github
has just made it easier to contribute, and thus it is becoming clear that good
open source projects have good governance. The owner of a project on github is
not the only one who can merge pull requests, they can add multiple
collaborators on a project...
If anything it does highlight the need for more people to form collectives
around projects and use the organisation tools to own the projects...
------
ighost
I'm tired of this alarmist tone.
| {
"pile_set_name": "HackerNews"
} |
Emulation of Unix V6 on a PDP-11 with an emulated teletype - beefhash
https://pavel-krivanek.github.io/pdp11/
======
kps
NICE DEMONSTRATION OF WHY KEN DIDN'T SPELL CREAT() WITH AN 'E'.
~~~
ajross
Yeah, though this feels artificially slow to me. Even the earliest Teletype
machines could manage 10+ cps, and by the mid-70's time frame (v6 was released
in 1975) much faster devices were available (and of course video terminals
were starting to arrive too).
I'm sure someone used a PDP-11 with a terminal this slow, but it's unlikely to
have been the typical developer experience.
~~~
davidgould
I used a PDP-11/70 with ASR-33 TTYs as late as 1978. They were still common
because while slow and noisy they were much cheaper than the DECwriter, and
could also read and punch paper tape. Since mass storage was very expensive
(10MB for $20,000) and since floppies were not yet common, paper tape was the
USB stick of the time.
------
tyingq
The pidp-11 project is also cool. A miniature and functional PDP-11 replica.
[https://obsolescence.wixsite.com/obsolescence/pidp-11](https://obsolescence.wixsite.com/obsolescence/pidp-11)
------
scroot
Ok but let's be honest: Unix is at its core still a teletype emulator no
matter where you use it. It's the central metaphor for the system.
~~~
Koshkin
Well, UNIX "at its core" is the kernel; what you are talking about is what is
known as 'shell.'
~~~
johnlorentzson
UNIX and everything surrounding it is built around the shell though.
------
fortran77
This is very nice. Brought back old memories of my first programming Fortran
and BASIC+ on RSTS/E on a PDP-11
~~~
flyinghamster
You, too? My high school had an 11/34 back in the day, with a couple VT100s,
several Visual 200 terminals (cheap junk), one DECwriter II for the console,
and another one plus a DECwriter III in the lab. It also ran RSTS/E, and in
the summer after my sophomore year they offered a short introductory course. I
was hooked. We didn't have any Fortran courses, though, just BASIC+ and COBOL.
I'd take even a DECwriter II over a Teletype, but the III made the II look
downright slow, with a 4x faster printhead, the ability to seek quickly, and
the ability to print in both directions to avoid wasted motion.
------
phoe-krk
The point where I burst into giggles was when I realized that scrolling the
page while output was still going (e.g. from ls /bin) overwrote previous lines
with new letters.
That's some dedication to accuracy that's found there.
------
davidgould
Don't try to use ^S and ^Q for flow control, it doesn't work and the ^Q will
quit Firefox. As a tab hoarder, I hate quitting the browser.
------
Koshkin
Just remember to type CHDIR instead of CD.
------
saagarjha
Did teletypes not have rollover? The most annoying part of this was waiting
half a second after each keypress…
~~~
DonHopkins
Teletypes have metal rods and springs instead of rollover!
I love how the mouse wheel (back in reality) scrolls the paper up and down and
it overprints.
~~~
saagarjha
But typewriters (at least the one I have tried–I think it was a Selectric?)
have the same thing and can support rollover…
~~~
kps
The Selectric didn't really do rollover, but it had a mechanism that felt like
it. Each key lever had a small tab that entered a trough of ball bearings that
had just enough slack for one tab. If you pressed a second key, it would
displace the balls and descend when the previous key withdrew.
------
pfdietz
If it doesn't smell like a teletype it isn't a true emulation.
------
twknotes
backspace not invented yet? How can you write a program with this thing!
~~~
kps
Backspace _had_ been invented; the problem is that on a printing terminal you
end up with an illegible mess. So early Unix defaulted to erase '#' kill '@'.
You can still see artifacts of that choice — ‘#’ being popular for things that
start a line, like comments and C preprocessor commands, and ‘@’ being the
only ASCII punctuation with no function in any common Unix tool.
| {
"pile_set_name": "HackerNews"
} |
Email Blacklist Check - kevwedotse
https://kevwe.se/blackcheck/
======
kevwedotse
Blackcheck (BETA) helps Mailserver Admins to avoid being blacklisted.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Foursquare somehow surpasses Loopt - tempatempatempa
I have been researching various mistakes and such that startups make, and in particular I have been looking at location-aware startups. I was recently looking at Google Trends for foursquare and loopt and noticed this: http://www.google.com/trends?q=foursquare+,+loopt&ctab=0&geo=all&date=all&sort=1 which implies that sometime around the beginning of this year foursquare must have made some sort of significant change, but I don't have a clue as to what. Do any of you guys know what I might be missing in understanding this? Thank you!
======
sabj
I think that you have to remember also that Foursquare, in 2004 on that graph,
is not about Foursquare... it's about foursquare, I suppose, you know - the
game you play with chalk and a playground ball. So I think that that trends
graph is a little bit noisy.
To me, it's a question of 4sq taking off and Loopt failing to do so, more than
foursquare surpassing them when it was a clear neck-and-neck competition.
If we're looking at trends as a buzz-o-meter, it's the kind of situation where
Loopt is not able to leverage its initial boom of interest to transcend its
beginnings.
The seemingly 'obvious' answer is to ascribe the disparity to circumstances
beyond the startups themselves -- 2009/10 sees a significantly greater
penetration of location enabled phones, the effect of Facebook destroying our
notions of privacy has sunk in more (joking on that one), etc. I don't know if
that's the whole deal, but I think there have to be some macro effects
involved beyond just, well, people really like gaming elements and Crowley is
the one and only king of location.
Quick .02 : ) I think Foursquare has done a good job, but haven't followed
Loopt very well to know where they may have stumbled (or merely been unlucky).
~~~
cicloid
Loopt was a service too US centric. At least in Mexico, the current trendy
option is Foursquare. As for Gowalla (my favorite one), it didn't do so well in
the beginning.
Maybe, what the trend is showing is more adoption from outside the US.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Should we be concerned about benzene exposure from the BP Oil Spill? - notcrazyyet
I recently met someone claiming to have expertise in Organic Chemistry and Meteorology tell me that benzene levels in the region surrounding the BP oil spill are astoundingly high and will cause life threatening illnesses in the coming months ahead. In particular, if a hurricane were to disperse the toxic gases arising from the oil spill to more remote regions, we would see unprecedented exposure-related deaths. I immediately dismissed him when he started ranting about FEMA prison camps, methane deposit "c4", Haliburton, NWO, and other crackpot theories, but the basics of what he said makes sense to me.<p>Although I run the risk of contaminating the content here, I respect the HN community for its critical thinking skills and general depth of knowledge in the sciences. I also believe this topic is important enough to warrant a discussion.<p>Are benzene levels as dangerous as this guy says it is both right now and in the event of a hurricane ("kill millions" so to speak)? What about dangers related to methane, which is combustible and also a very potent greenhouse gas?
======
cperciva
My understanding (as a chemist's son, but not a chemist) is that yes, there is
benzene being released; and yes, in the _immediate_ area above the spill,
there might be high enough concentrations to cause toxicity... but that a
hurricane spreading the gas over millions of cubic miles of atmosphere would
dilute it to harmless levels.
------
Clepensky
New Scientist had an article on this. They seemed to think the release of the
oil at the depth it is at would make the concern over chemical like benzene a
non issue.
| {
"pile_set_name": "HackerNews"
} |
The Future of Developing Firefox Add-Ons - bobajeff
https://blog.mozilla.org/addons/2015/08/21/the-future-of-developing-firefox-add-ons/
======
sonnyp
[https://news.ycombinator.com/item?id=10097630](https://news.ycombinator.com/item?id=10097630)
| {
"pile_set_name": "HackerNews"
} |
Now, 'standing room' on airlines - newacc
http://business.rediff.com/slide-show/2009/jul/16/slide-show-1-airline-plans-standing-room-for-more-passengers.htm
======
jonursenbach
Logically speaking, I don't understand how this would work. You're sitting on
a stool when the plane takes off; you're going to fall backwards and onto the
floor of the plane. You can't expect someone to hold onto a bar during
something like that, like you can with trains or buses. And don't even think
about holding onto that during any sort of plane turbulence. You're all going
to fall into each other.
~~~
kiddo
What if there was a thin wall that you leaned against during takeoff, with a
half seat attached to it? Then on landings you turned and faced the back of
the plane and leaned on the half-seat facing the back of the plane?
------
icey
Man, rediff.com is an irritating site.
| {
"pile_set_name": "HackerNews"
} |
Neanderthal 'artwork' found in Gibraltar cave - Turukawa
http://www.bbc.co.uk/news/science-environment-28967746
======
NatTurner
Some researchers said "the artifacts may not have been made by Neanderthals
but by modern humans." Until the truth of that is known, it is too soon to
rewrite human history. However, in 2001 in South Africa, at a site called
Blombos Cave, 70,000-year-old writing and art was found on "two pieces of
ochre rock decorated with geometric patterns." The patterns could in no way be
considered accidental, or anything other than deliberate. Maybe the rewrite
should have already begun.
[http://a.disquscdn.com/uploads/mediaembed/images/1270/3256/o...](http://a.disquscdn.com/uploads/mediaembed/images/1270/3256/original.jpg)
Full article
[http://www.accessexcellence.org/WN/SU/caveart.php](http://www.accessexcellence.org/WN/SU/caveart.php)
| {
"pile_set_name": "HackerNews"
} |
Git Town – A high-level command line interface for Git - tnorthcutt
http://www.git-town.com/
======
git-pull
Those who think a wrapper is going to help their development are in for
something when things break and they don't know how to operate things the way
they're meant to be.
git is an especially poor choice for wrappers. You're hiding the concepts of
staged and unstaged information, branches, tags, remotes, submodules.
Regardless of VCS, you're setting yourself up for failure when you buy into a
third-party tool's workflow rather than knowing what the hell you're doing.
Pick up git as you go along. Rather than a tool doing who knows what behind
the scene. If you really goof things when you're starting, don't be afraid to
git reset --hard <ref> / git commit --amend + force push, as long as you know
where you're at in history.
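For anyone nervous about trying that, the reset/amend dance can be rehearsed in a disposable repo first. A sketch with made-up commit messages (and note that on a shared branch, `git push --force-with-lease` is the safer form of a force push):

```shell
set -e
# Disposable sandbox: make a bad commit, throw it away with reset --hard,
# then fix the remaining commit's message with --amend.
repo=$(mktemp -d); cd "$repo"; git init -q
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit -q --allow-empty -m "good work"
g commit -q --allow-empty -m "oops, broken"
git reset -q --hard HEAD~1              # drop the bad commit entirely
g commit -q --allow-empty --amend -m "good work, reworded"
git log --format=%s                     # prints: good work, reworded
```

The `g` wrapper only pins a throwaway identity for the sandbox; in a real repo you'd have `user.name`/`user.email` configured already.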
~~~
kevingoslar
Git Town doesn't replace Git, nor does it try to shield you from learning how
Git works. It shows the Git commands it runs for you, as well as their output.
When using it, one should make sure to understand what it is doing.
The thing is, Git is awesome, but intentionally designed as a low-level and
generic tool. Using it correctly for particular workflows (like Git Flow or
Github Flow) requires running many Git commands for each operation, and is
highly repetitive.
Good developers engineer repetition away. Great developers share what they
build. Hence Git Town.
~~~
git-pull
> Good developers engineer repetition away. Great developers share what they
> build. Hence Git Town.
As someone who has engineered repetition away and shares what he builds, I
agree, and admire your gumption.
> intentionally designed as a low-level and generic tool.
git is high level. and opinionated. It has branches and tags baked right in.
Compare to SVN or CVS where the support is second class.
> requires running many Git commands for each operation, and is highly
> repetitive.
I run lots of git commands by hand, and can be pretty verbose in commit
messages. I (sort of) try to follow this: [https://chris.beams.io/posts/git-
commit/](https://chris.beams.io/posts/git-commit/)
However, to speed things up, I will sometimes at shell prompt use `ctrl-r` and
search history a bit, then `ctrl-e` to start scrolling in a line brought back
up if I want to both 1. see what I committed last, and 2. get a head start on
writing the commit message.
I also find the staging workflow git has (another thing I personally consider
high-level, purposeful, opinionated to git, and use regularly) to be very
convenient. I can type `git status`, `git diff`, `git diff --cached` to see
what's staged and unstaged. I can use `git reset` to unstage a file. Overall,
I get more granularity on which files I want to add to that commit. This comes
in really handy when reverting, merging and rebasing.
So in my workflow, I don't want to give up control of these things.
Apparently, while I don't use these features, `git bisect` and `git blame`
also benefit from being thoughtful with commits.
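That staged-vs-unstaged inspection loop looks roughly like this in a throwaway repo (file names invented for the demo):

```shell
set -e
# Sandbox walk-through of the staged/unstaged inspection described above.
repo=$(mktemp -d); cd "$repo"; git init -q
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
printf 'one\n' > a.txt
git add a.txt
g commit -q -m "initial"
printf 'two\n' >> a.txt
git add a.txt                    # stage the edit to a.txt
printf 'x\n' > b.txt             # leave b.txt untracked
git diff --cached --name-only    # staged changes: a.txt
git status --short               # "M  a.txt" plus "?? b.txt"
git reset -q -- a.txt            # unstage; the edit stays in the worktree
git diff --name-only             # now an unstaged change: a.txt
```

The point of the demo is the last two commands: `git reset -- <path>` only moves the change out of the index, so nothing in the working tree is lost.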
> It shows the Git commands it runs for you, as well as their output.
I am glad to hear that.
> nor does it try to shield you from learning how Git works
This is what irks me. I view git as high level and opinionated already, and
have no way of knowing how it would effect someone learning git. I developed
my own habits w/ VCS a long time ago.
That said, leave it up to the people who want to try your project.
(I followed you and starred your repository.)
~~~
crdoconnor
>This is what irks me. I view git as high level and opinionated already
However high level you think it is, it has no opinion on workflows and there's
a need for a tool that will automate and enforce git workflows.
I'm not sure if this tool the answer, but there is a need for some sort of
tool like this.
I wrote a hacky 'git sync' script at an old company and it achieved what
sending a bunch of developers on a course about git did not (it sped up the
workflow and cut down on git errors).
~~~
git-pull
> it has no opinion on workflows
Oh really? Staged/Unstaged + Commit + Push to remote+branch. Branches (I
suppose you could chuck everything in master), and opt-in or out of tagging.
Maybe users will keep their own remote repositories ("forks")? Even then, it's
still pulling in code with the same history that's going to get reconciled via
a merge or rebase. Whether it's "forked" to their own repo or in a branch of
the "main" repo, it's all the same in the end.
> there's a need for a tool that will automate and enforce git workflows
There's easy, light-weight branching baked right into git.
They scale locally, remotely, and also work with different user's remotes.
You can also merge branches into branches. You can pull --rebase them as well.
> there's a need for a tool that will automate and enforce git workflows.
_Beyond_ branches and remotes?
> I wrote a hacky 'git sync' script at an old company and it achieved what
> sending a bunch of developers on a course about git did not (it sped up the
> workflow and cut down on git errors).
Is checking out branches and git add/status/diff/commit/push so
time-consuming that not only would you need to create a shortcut, but other
devs would opt in to it?
I use shortcuts for various things in my shell. I have a .gitconfig in my dot-
config files ([https://github.com/tony/.dot-
config](https://github.com/tony/.dot-config)). Personal tweaks for coloring
and editor settings, a global gitignore. I'm the kind of a guy who picks up
shell plugins for fun to try them, but I know that pushing a tool on top of a
VCS on colleagues won't go over well.
What did `git sync` do?
~~~
crdoconnor
>there's a need for a tool that will automate and enforce git workflows.
Beyond branches and remotes?
Yeah, because most branching and merging in a team setting follows a policy.
That branch/merge strategy (and naming) is based upon a whole host of things
including testing strategies, release schedules, issue tracker used, code
review policies, how much you need bisect, etc.
Git is entirely indifferent to those workflows and is as happy to let you
follow it as it is to let you commit and push directly to the master branch
with a commit message of "fixed shit".
>Is checking out branches and git add/status/diff/commit/push so
time-consuming that not only would you need to create a shortcut, but other
devs would opt in to it?
Yeah, when you add stashing, changing to the correct branches, rebasing and
pushing, changing back and unstashing, it actually does get tedious,
especially since I needed to run it about 20 times a day.
I actually didn't even create the script for them originally, I created it for
me and they just started using it.
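For what it's worth, a helper matching that description (stash, update, rebase, push, unstash) might look like the sketch below. This is purely a guess at the shape of such a script; the original 'git sync' isn't shown, so every detail here is hypothetical:

```shell
# Hypothetical reconstruction of a 'git sync' helper: park uncommitted
# work, rebase the current branch onto its remote counterpart, push,
# then restore the parked work. Not the original script.
git_sync() {
  branch=$(git rev-parse --abbrev-ref HEAD)
  git stash push -q -m git-sync-autostash || true   # nothing to stash is fine
  git fetch -q origin
  git rebase -q "origin/$branch" || { git rebase --abort; return 1; }
  git push -q origin "$branch"
  git stash pop -q 2>/dev/null || true              # restore parked work, if any
}
```

Even a sketch like this shows why such scripts catch on: it collapses six error-prone steps into one command, while aborting the rebase cleanly if conflicts appear.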
------
jph
Git Town looks thorough to me. It includes well-written source code in Go,
plenty of edge-case error checking, good messages, and excellent feature
tests. Kudos!
If you're interested in branch aliases, here are some that may be helpful that
I use at GitAlias.com.
topic-start = "!f(){ branch=$1; git checkout master; git fetch; git rebase;
git checkout -b "$branch" master; };f"
topic-pull = "!f(){ branch=$(git branch-name); git checkout master; git pull;
git checkout "$branch"; git rebase master; };f"
topic-push = "!f(){ branch=$(git branch-name); git push --set-upstream origin
"$branch"; };f"
topic-finish = "!f(){ branch=$(git branch-name); git checkout master; git
branch --delete "$branch"; git push origin ":$branch"; };f"
branch-name = rev-parse --abbrev-ref HEAD
~~~
jwilk
You should backslash-escape your inner double-quotes.
~~~
jph
Thanks for the advice! Done.
------
dahart
> squash-merge the password-reset branch into the master branch (this makes it
> look like a single, clean commit, without the convoluted merge history and
> the many intermediate commits on your branch)
Is this what most people do? And is this something you can turn off with Git
Town? I don't like to to squash-merge, I spend time making sure my commits are
as much logical and self-contained units as they can be in my branches, and I
want to preserve the ability to revert and/or bisect them later.
~~~
MBlume
It's a trade-off. Many devs don't know how to do that, don't care to do that,
will never learn to do that, and for them squash merge is a good option.
~~~
dahart
For sure. I'm not suggesting anyone else shouldn't; on the contrary just
asking if Git Town goes both ways, and whether squash merge is more common in
practice?
I would have assumed that a regular (not squashed) merge is more common, and
easier to do, because it's the default behavior of "git merge". It takes extra
git commands and/or extra non-default arguments to git merge to get a squash
merge. My GitHub also doesn't default to squash merge, IIRC... Don't you have
to choose squash merge or be told to use it, if you don't otherwise know or
care?
------
rojoca
[https://github.com/Originate/git-
town/blob/master/documentat...](https://github.com/Originate/git-
town/blob/master/documentation/commands/sync.md)
I think it would be good if the docs had the git commands that are run for a
git-town command.
~~~
kevingoslar
Good suggestion, will add them! Git Town uses Cucumber as living
documentation: [https://github.com/Originate/git-
town/blob/master/features/g...](https://github.com/Originate/git-
town/blob/master/features/git-town-
sync/current_branch/feature_branch/no_conflict/with_tracking_branch.feature)
------
sigi45
A hell of a lot of work around a few git commands. Screencast, website, promo.
I prefer aliases I configure myself so I understand them; most of my
colleagues don't even bother with that detail of git commands at all and use a
UI.
~~~
superlopuh
I think that's sort of the point: instead of having aliases, this is a low-
effort way for even people who prefer GUI clients (like me) to have an
easy-to-use/install unified command-line workflow. I'm very tempted.
------
gt_
I am new to programming (less than 1 year) and the insignificance of this
project is obvious to me. This looks very well done, but my understanding is
that a user friendly wrapper for such a ubiquitous programming tech with
already widespread GUIs and pluins is comparable to reinventing a wheel. It's
a little frustrating how many projects like this appear to get so much
attention and end up on HN, because it makes for a disorienting maze of
distractions for newer programmers. I love all the productivity, excitement,
possibility but it's still peculiar and debatably problematic.
My best guess is this was a personal project that solved some person(s)
problems, and for some reason related to networking or self-promotion, it got
the decoration of a full release treatment. What else could cause this?
I know there are zillions of these every day but this seems like one we all
can see through. Can anyone share some insight here?
Should I be contributing to the heap of projects like these to further my own
career?
~~~
Normal_gaussian
> the insignificance of this project is obvious to me
> comparable to reinventing a wheel
> many projects like this appear to get so much attention and end up on HN
> Can anyone share some insight here?
First the HN audience, its core is hackers and startups. These people have
certain problems in common, and they are always on the lookout for ways to
eliminate them. The hackers build things and the startup people do a lot of
management and they are often one and the same.
Secondly good version control is hard to use across a project without swamping
new arrivals or accidentally breaking something. Git isn't good enough, but it
is what we have.
So like good hackers we take the first, see the second and try and produce
something better. This is how we end up with lots of similar looking projects.
Because they are solving real problems being faced by HN users they get
upvoted until the comments discover some fatal flaw (leaky? prevents key
conflict resolution?).
This author reckons he's solved it, so he gives it the full treatment because
_it is worth a lot to have_ __actually__ _solved it_. If I could resolve git
woes by handing a newbie a ten minute video I would be ecstatic.
Remember, it is important to reinvent the wheel [1] though don't waste time on
these projects unless you can see a way through.
[1]
[https://pbs.twimg.com/media/CMyiLuKUwAA6l-V.jpg](https://pbs.twimg.com/media/CMyiLuKUwAA6l-V.jpg)
~~~
jstimpfle
Can't resist:
[https://www.math.uh.edu/~jmorgan/trinity_talk/square_wheel.h...](https://www.math.uh.edu/~jmorgan/trinity_talk/square_wheel.htm)
------
btym
_For example, correctly merging a finished feature branch requires up to 15
individual Git commands!_
Am I missing something? Does `git merge` imply fourteen other commands?
~~~
stinos
Maybe they include things like stashing/popping current uncommitted changes,
switching to the target branch, pulling source and target branches first,
rebasing the feature branch onto the latest target branch, resolving
conflicts, ...? All of these are things I have had to do at one point or
another to 'just' merge some
feature branch from somebody else into master while I was working on another
branch myself. So if they combine all of that in one command including taking
care of everything which can go wrong I can imagine getting 15 commands.
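Spelled out in a throwaway sandbox, that kind of "simple merge" really does expand into a long command sequence. The branch name and the bare repo standing in for origin are invented for the demo:

```shell
set -e
# Sandbox reconstruction of the many-step "merge a finished feature
# branch" sequence sketched above; a bare repo stands in for origin.
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git clone -q "$work/origin.git" "$work/clone"
cd "$work/clone"
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit -q --allow-empty -m "initial"
git push -q origin HEAD:main
git checkout -q -b feature/login          # hypothetical feature branch
g commit -q --allow-empty -m "add login"
# ...and now the "one merge" takes all of this:
git fetch -q origin
g rebase -q origin/main                   # replay feature on latest main
git checkout -q main                      # (created from origin/main if needed)
g merge -q --no-ff -m "merge feature/login" feature/login
git push -q origin main
git branch -q -d feature/login            # delete the local branch
git push -q origin :feature/login 2>/dev/null || true  # delete remote copy, if any
git log --format=%s -1                    # prints: merge feature/login
```

Add stash/unstash around the whole thing if you had uncommitted work, plus conflict handling at the rebase and merge steps, and the "up to 15 commands" figure stops looking like an exaggeration.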
~~~
falcolas
> So if they combine all of that in one command including taking care of
> everything which can go wrong I can imagine getting 15 commands.
My concern would be: what happens when the automation encounters an edge
case; what kind of unholy mess would you end up with?
And to be fair, with GitHub and GitLab, doing local feature branch merges has
become a very rare event for me in the last 5 years.
~~~
kevingoslar
Git Town covers a ton of edge cases. Just look at their "features" folder. If
something goes wrong, Git Town allows you to cleanly abort, undo what it did
so far, and go back to where it started.
That's a lot safer than the unholy mess that ensues when most people try to
run "git reset --hard" or "git push --force" manually.
~~~
Jare
Edge cases handled properly may be the killer feature of this project, at
least for me. With git, as long as I'm in familiar territory it's fine, but
when something goes off the rails my head's working set explodes with options.
------
rwieruch
I like to keep Git puristic. I have only a few aliases, because I want to
operate on every machine the same way.
Git can be intimidating for newcomers. In the last two years, I noticed the
pattern that I only use a few essential Git commands in order to resolve a
handful of scenarios. I have written them up:
[https://www.robinwieruch.de/git-essential-
commands/](https://www.robinwieruch.de/git-essential-commands/) Maybe it helps
some people to get started.
~~~
charlierudolph
I believe knowing the low level commands is very important. I don't think
anyone should use Git Town without learning everything covered in your
article. Git Town prints every* Git command it runs and what branch it is run
on. That was my first contribution to the project as I wanted to know exactly
what the tool was doing.
* Git Town runs other git commands to inspect the state of things (for example: what is the current branch, are there any uncommitted changes). These are not printed but each one that changes the state (for example: checking out another branch, fetching updates, merging branches) are printed
~~~
rwieruch
I will give it a shot! Thanks for the clarifications :)
------
throwme_1980
Don't bother; learning Git is a transferable skill, and this will be thrown
out as soon as you join a proper development team. Gimmicky at best.
~~~
SmellyGeekBoy
I don't see much utility in this but I certainly don't restrict the toolset my
developers use and would be perfectly fine with them using this on any of our
machines, especially if it made their lives easier.
------
georgecalm
Another great alternative that I use every day is
[https://hub.github.com](https://hub.github.com), especially if you work with
GitHub.
~~~
kevingoslar
Hub is awesome, and orthogonal to what Git Town does. You can use both
together, though.
------
746F7475
So this is for people who don't know how to use aliases (bash or git)?
~~~
kevingoslar
Git Town started out as Git aliases written in Bash. Version 3 was many
hundreds of lines of Bash, pushing it beyond what Bash was designed for. At some
point it got ridiculous, and we got requests for Windows support, as well as
better integration with the Github API. Hence the rewrite in Go.
~~~
746F7475
I still don't see the killer feature here. It just throws around a ton of
commands, most of which are completely unnecessary.
------
partycoder
Over the years, there have been many "friendly interfaces to git", in both UI
or command line form.
They all suffer from the same issue: in the face of conflicts they just fall
back to good old git.
I think these tools are good if you want to do something more productively but
in the end you will still need to know about git.
~~~
qguv
I'm not sure this is trying to prevent anyone from needing to learn the actual
git commands. (Note that the abstraction intentionally leaks by showing the
commands that are run.) It appears to be more of a tool for experienced users
on centralized teams to save some time typing.
------
paulddraper
Slick stuff.
But can you use this in practice and not know what git is doing? Aka is this
really not a leaky abstraction?
I ask sincerely; having known git for years I can't objectively answer this.
~~~
qguv
As an experienced git user, I'd use this on projects with a central repo, if
only because it saves some typing.
------
afshinmeh
Seems interesting, but I don't personally like using these kinds of projects.
Having a wrapper around another technology or tool to make things easier to
use hides many of the important concepts that you have to know as a good
developer. I don't think giant tech companies use these kinds of tools either.
~~~
oblio
Giant tech companies basically use their own version control systems. Facebook
uses something forked from Mercurial, I think, Google has a Perforce derived
one, etc.
They basically take the approach presented here to 11.
------
franzwong
When I saw the name, I thought it was a simcity game with git :P
------
mempko
The command-line interface is what I loved about darcs. Too bad it never got
the mind share because of early performance problems.
------
jsiepkes
Seems like a more lightweight version of the 'arc' cli tool of Phabricator
(which I really like BTW)?
------
jaimex2
Shouldn't this whole thing just be a pull request into git itself?
~~~
roblabla
No. Git tries to be agnostic to your workflow. Also, some of the commands are
tailored for github, which is not the only git host. See gitlab, gogs, gerrit,
etc...
------
romanr
Looks very similar to Git Flow
------
mdekkers
_Git is a great foundation for source code management._
No. Fucking marketing doublespeak. Git is great for source-code management.
Don't start your pitch by trying to redefine and reposition Git. You lost me
right there.
| {
"pile_set_name": "HackerNews"
} |
Larry said to Gaga, ‘Do you ever a/b test your music?’ - youssefsarhan
http://blog.sefsar.com/post/35569040840/we-were-in-a-meeting-with-google-with-gaga-and
======
kadjar
Lady Gaga's entire career has been an A/B test. She started out as a singer
and songwriter who actually poured some meaning into her music, and had no
success. Then she seized the common pop chords, added a heavy back beat,
stripped her songs of any meaning, and started wearing meat dresses. I'd say
that B worked for her.
~~~
mdc
In addition, she's part of a larger A/B test being run by the industry. Music
producers have a nearly unlimited supply of cookie-cutter musicians who they
can produce in different ways to see what works. They're expendable, so when
they fail the producers can move on and the musicians can go back to playing
local clubs or whatever they did to get the producers' attention in the first
place. I've known several musicians who got popular in a local scene, got a
"big break" and released one heavily-produced album that sounded nothing like
their previous work, and then faded back into obscurity. Every now and then
one of them breaks big and the producers can cash in for a few albums.
------
KaoruAoiShiho
Media companies do A/B test their products; they have test audiences, focus
groups, and such. By the time you get to that scale you think like a business,
not like an artist.
------
retrogradeorbit
It seems Gaga herself doesn't even know how the mainstream music industry
operates now. The record labels certainly A/B test the mixes on test audiences
and choose the mix that rates better. That's why a lot of mainstream pop (one
example I know of for sure is Katy Perry) has a different mix engineer on
almost every song.
They actually farm the mixes off to independent mixers who do the work. Then
test the results. If your mixes rate well, you move up the labels artist
hierarchy and will be offered more prominent artists to mix in future. So it
is kind of a natural selection of mixing.
Some artist management by labels involves this at more than just the mixing
stage. Song writing is also in many cases done like this. If you have the
skills of Linda Perry, you'll bubble your way to the top. I'm not sure that
Gaga does this process with song writing (does she write her own material?),
but I'm willing to bet the label certainly does it with mixing. It's quickly
becoming the norm in the industry.
Maybe Gaga knows this and that is why she dodged the topic by answering a
question with a question.
------
camus
>"This is precisely the problem with Google. Soulless." A service like Google
doesn't need a "soul"; it needs strong products, great user support and fast
servers. And don't worry about A/B testing music: producers know what works
and what doesn't, and are using the same gimmicks over and over again, and a
lot of marketing techniques (including A/B-tested marketing).
Is there something fresh and new in Gaga's music? No, it feels overproduced
and overmarketed. I don't think Gaga's music has soul; it is like McDonald's,
pre-pooped food. It has no soul.
~~~
lukev
While I'm not a Gaga fan by any means, I do have to say that her music is a
notch above typical canned pop. Sure, it borrows heavily from pop idioms, but
it's got an edge to it that seems unique to her, and the lyrics address themes
a tad more deep than most pop music does.
Even that's subjective, of course, but at least she writes and arranges all
her own music which isn't typical of factory-produced pop stars.
~~~
retrogradeorbit
> notch above typical canned pop
Sure. And there are at least 1000 notches above that.
| {
"pile_set_name": "HackerNews"
} |
Can Zapping Your Brain Make You Smarter? - RickJWagner
https://daily.jstor.org/can-zapping-your-brain-really-make-you-smarter/
======
earthboundkid
[https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines)
| {
"pile_set_name": "HackerNews"
} |
Introduction to the Samsung Qmage Codec and Remote Attack Surface - janvdberg
https://googleprojectzero.blogspot.com/2020/07/mms-exploit-part-1-introduction-to-qmage.html
======
jdsnape
This is excellent- I’m impressed with the attention to detail and
perseverance. I would have given up well before getting that amount of info
together
| {
"pile_set_name": "HackerNews"
} |
Open Source Search with Lucene & Solr - igrigorik
http://www.igvita.com/2010/10/22/open-source-search-with-lucene-solr/
======
fizx
For anyone who would like to take Solr for a spin, I invite you to check out
nzadrozny's and my startup: <http://websolr.com/>
We are a bootstrapped startup providing managed Solr hosting in the cloud
(currently EC2). We're all about making the operational side of high
performance Solr hosting as one-click easy as possible, so developers can
focus their time on doing cool stuff with it.
We love HN and are frequent commenters/lurkers around here, so we made a
"HN10" coupon which you can use on signup to get a month of our Silver plan
for free.
~~~
thorax
I really like the idea of this service. The difficulty is, I'm not seeing any
"Getting Started with Websolr" guide to understand how difficult it is to get
working with you. Where would that be?
In my ideal world you would have a demo instance or two where we could
connect/query arbitrary test data to understand performance/behavior/etc
before we signed-up to host real data there.
~~~
nzadrozny
Yeah, great points. Thanks for your feedback! Better general documentation is
pretty high on our list right now.
To answer your immediate question: we started as a Heroku add-on, so you might
take a glance at our documentation there (<http://docs.heroku.com/websolr>).
It's targeted at Rails applications using Sunspot, so ymmv. We're working on
creating and compiling similar guides for other platforms as well.
Seems like it's high time for us to do a "review my startup" post… ;)
------
evilhackerdude
Riak Search has been released recently. It’s got Lucene and part of the Solr
HTTP API built-in.
Basically you push json/xml/whatever documents into buckets. The docs will be
indexed, i.e., by field names (json & xml) or simply fulltext. It is pretty
cool because it’s based on Riak Core and thus has the same benefits as Riak
K/V. Lucene runs transparently in the background - afaik you never even have
to touch it.
Read more in their wiki: <https://wiki.basho.com/display/RIAK/Riak+Search>
Especially:
[https://wiki.basho.com/display/RIAK/Riak+Search+-+Indexing+a...](https://wiki.basho.com/display/RIAK/Riak+Search+-+Indexing+and+Querying+Riak+KV+Data)
------
ankimal
We use an Enterprise Search Platform (our biggest software acquisition) minus
the support (another dumb idea). The entire thing is like a Black Box. It
takes days to figure out what "Error: FS error" actually means. For a new
project, we used Solr to maintain a smaller index and have never looked back
since. Anybody about to start building a search index, Lucene/Solr is the way
to go.
~~~
storm
I've been using Solr for some pretty heavy lifting, and it's incredibly
impressive. Rock solid, extremely advanced analysis and search capabilities,
and the performance is amazing if it's on suitable gear. Time invested in
learning it pays off big.
I'm familiar with the enterprise black boxes you're talking about - I probably
know the specific one you're tormented by. I've seen the licensing fees alone
lead large companies to drop rows from their front-end stores to avoid going
into a new pricing tier (takes balls of steel to charge by the record, I must
say), and I've seen competitors fold at least in part due to the expense of
paying for the thing.
A lot of startup folks getting excited about NoSQL seem to have passed over
Lucene/Solr completely, and I think it's worthy of much more consideration
than it gets. It's mature, it's _fast_ , and the people working on it live and
breathe the problem space.
There are undoubtedly devs out there badly needing powerful analysis and
search to execute on their vision, but who will end up suffering with half-
baked solutions for lack of even _hearing_ about Solr, much less giving it a
try.
~~~
ankimal
I feel another issue is that management sometimes feels that paying big bucks
means your rear end is covered. It takes a lot to convince them that this is
free and works great at the same time. What's more, the community is great!
------
dangrover
Haystack for Django is a really nice way to integrate with these systems. You
can use lucene, solr, or whoosh as backends for your search.
~~~
nzadrozny
Sunspot for Ruby is another good Solr client that's popular with Rails
applications.
<http://github.com/outoftime/sunspot/>
While Solr's API is pretty easy to work with directly, there's definitely
something to be said for using a quality client for your platform.
------
akozak
At Creative Commons we use Lucene/Nutch for our educational search prototype
DiscoverEd: <http://wiki.creativecommons.org/DiscoverEd>
It was easy enough to add in our special sauce like a triple-store for
consuming and displaying semantic data (I guess I can say easy since I didn't
do it myself).
~~~
sdesol
I would say it's pretty easy if you are technically inclined. When I
implemented the first iteration of my text search engine using Lucene, I
didn't even know Java but I was able to write my own custom tokenizer and get
it to index and retrieve results from the index in about 6 hours.
I highly recommend you get the book "Lucene in action" as it gives solid
examples that you can build upon.
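For anyone curious what a custom tokenizer boils down to, here's a toy sketch in Python (illustrative only; real Lucene tokenizers are Java classes extending `Tokenizer`, and the regex and behavior below are my own assumptions, not Lucene's defaults):

```python
import re

def tokenize(text):
    """Toy tokenizer: lowercase the input, split on runs of
    non-alphanumeric characters, and drop empty tokens."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

tokens = tokenize("Lucene in Action, 2nd Edition!")
# -> ['lucene', 'in', 'action', '2nd', 'edition']
```

In Lucene proper you'd plug logic like this into an analyzer chain, but the core idea is the same: text in, token stream out.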
------
nkurz
I'm a fan and contributor to Lucy, which is mentioned briefly in the header:
<http://incubator.apache.org/lucy/>
While Lucy did start out as a C port of Lucene (hence the name), it's since
broken any attempts at Lucene compatibility. Instead, it's aiming to be a fast
and flexible standalone C core with bindings to higher level languages. Since
it's growing out of Kinosearch, its best-developed bindings are in Perl, but
support for all the usual suspects (Python, Ruby, etc.) is planned.
Technically, the main difference from Lucene is that it gets cozier with the
machine: the OS is our VM. It's mostly mmap() IO, and we're very conscious of
paging and cache issues. While we're trying to maintain 32-bit back
compatibility, we take full advantage of 64-bit solutions when they offer
themselves. The scripted bindings are also very cool --- you can do things
like make callbacks to scoring methods in your script language to truly
customize your results.
If for some reason you're not finding what you need in Lucene and Solr, check
it out. We just became a full Apache incubator project, and are eager to get
more developers involved. You'll find clean C code, decent documentation, and
a low traffic but very responsive list. If you're using Perl, C or C++, you'll
get a great product from the start. If you're using anything else, you'll have
to help a lot on the bindings, but I think you'll be quite pleased with the
end result.
------
spoondan
Lucene is great but I wish schemas were an optional part of Solr. They add
complexity and take away flexibility. If you have a photo database where you
want searchable metadata describing the subject of the photographs, you can do
this easily and naturally in Lucene. But Solr requires you either (1)
prefigure available metadata or (2) expose field typing details to your users
(so a field for birthday is actually "birthday_d", with the "_d" indicating
it's a date). Both of these are very unattractive to me.
The worst part is that I have no idea what benefits schemas are supposed to
bring me. The documentation vaguely promises that schemas "can drive more
intelligent processing", but I have a feeling I could get that more easily
without schemas. It also tells me that "explicit types eliminate the need for
guessing of types," but only, apparently, by requiring users to _understand
and remember_ them.
~~~
storm
Schemas are an optional part of Solr. Pretty sure that the default schema.xml
has an example of a catch-all field definition, if you use that it will
automatically deal with any key you want to throw at it.
Of course you need to specify one field type (analysis stack) to apply to all,
but I don't know how you expect to avoid that - gonna have to express that
metadata _somewhere_ if you need more complex behavior.
Personally I think the _d, _i approach is ok, suffixes aside - complex field
analysis options w/o a schema.
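For reference, the suffix-typed and catch-all approaches mentioned above look roughly like this in Solr's schema.xml (the field type names here are illustrative, along the lines of the example schema that ships with Solr):

```xml
<!-- suffix-typed dynamic fields: birthday_d is indexed as a date -->
<dynamicField name="*_d" type="date" indexed="true" stored="true"/>
<!-- catch-all: any otherwise-unmatched key is treated as text -->
<dynamicField name="*"   type="text" indexed="true" stored="true"/>
```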
------
cowmixtoo
So has anyone used this combination for realtime and historical log searching
(like what Splunk offers)?
~~~
igrigorik
Yep, take a look at loggly.com - AFAIK, a bunch of ex-Splunk guys. They're
building their system on EC2 + SolrCloud.
~~~
bobf
+1 for loggly -- check out logstash <http://code.google.com/p/logstash/>
~~~
kordless
Be sure to check out Jordan Sissel's Grok as well:
<http://code.google.com/p/semicomplete/wiki/Grok>. It's a field extractor.
~~~
bobf
Definitely. Just about anything Jordan makes is probably worth checking out,
actually.
------
reinhardt
Any experience on how Lucene/Solr stacks up against other search tools such as
Sphinx or Xapian ?
~~~
gtani
Not sure if you're asking about indexing speed/size, precision/recall and the
2 or 3 dozen config options (separator/tokenizers/analyzers, stopword, index
to ASCII or Latin-1, AND/OR search terms), etc.
What I recommend for precision/recall/config options is that your platform
(rails, django, java, PHP) probably has a plugin for Solr and Sphinx. Set up 2-4
indexes using the config options that matter most to you (for me they're AND-
OR of search terms, and stopwords, which i use in lists of 0, 50, 100, 150).
Then do a (sort of) A-B test where you see which records one index picks up
that the other misses. (Most people recommend not using any stopwords if
you're only using one index, but i never got decent results using only one
index)
P.S. Solr is the 800-pound gorilla, has the terrific Manning book, zillions of
docs, etc. Sphinx probably covers most people's needs config-option-wise (at
least for European languages), is lightning fast to index, and runs in a 256M
VPS with no Tomcat/Jetty.
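The A/B comparison described above can be as simple as diffing the sets of record IDs each index returns for the same query. A minimal Python sketch (the IDs are made up for illustration):

```python
# IDs returned by two differently-configured indexes for the same query
results_a = {1, 2, 3, 5}  # e.g. index with stopwords enabled (hypothetical)
results_b = {2, 3, 4}     # e.g. index with no stopwords (hypothetical)

only_a = results_a - results_b  # records index A finds that B misses
only_b = results_b - results_a  # records index B finds that A misses
```

Eyeballing `only_a` and `only_b` over a few representative queries tells you which config options actually change your recall.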
------
known
I prefer <http://aspseek.org>
| {
"pile_set_name": "HackerNews"
} |
Six Flags on the Moon: What Is Their Current Condition? - shawndumas
http://www.hq.nasa.gov/alsj/ApolloFlags-Condition.html
======
DanBC
Is it possible to simulate conditions on Earth with similar flags to see how
long they last?
------
karmakaze
Great juxtaposition with the 'Roller Coaster' post.
| {
"pile_set_name": "HackerNews"
} |
Game Theory and the Startup Valuation Game - siegel
http://onstartups.com/tabid/3339/bid/200/Lessons-from-MIT-Game-Theory-and-The-Startup-Valuation-Game.aspx
======
siegel
Answer can be found at this URL:
[http://onstartups.com/tabid/3339/bid/198/Game-Theory-and-
Sof...](http://onstartups.com/tabid/3339/bid/198/Game-Theory-and-Software-
Startups-Part-II.aspx)
While I'm not sure that I buy the author's advice, I have been thinking about
ways in which founders can think outside the box in attracting funding and
signaling as part of negotiating a funding round - if they want to go the VC
route.
Obviously a huge part of getting the highest valuation and the best terms out
of a VC round rest on how the founders can directly sell investors on the
prospects for their business by talking about the technology, market,
financials/projections, etc...
But negotiating a funding round is still just that - a negotiation.
Professional investors are well-aware of this and are strategic negotiators.
From the founder side, I see much less of the type of strategic negotiation
thinking than I do on the investor side.
Curious if other agree or disagree.
| {
"pile_set_name": "HackerNews"
} |
Show HN: eBay rounding error results in incorrect credits - bdclimber14
About a month ago, I tried to sell a few items on eBay including a Nexus One phone. 2 items were left unpaid since the buyers tried to scam me (fake PayPal payment emails, cancelled PayPal eChecks).<p>Once an item sells, eBay charges a Final Value Fee (FVF) that is a percentage of the total proceeds. However, if the item is never paid for, you can request eBay to credit you the FVF.<p>Yesterday I called eBay to get my credits, which they gave promptly. However, I noticed that the credited amount was off by one cent. I explained to the representative on the phone that I was able to see the credit, but was curious as to why it was $.01 less than the FVF. I asked if they kept a penny, assuming they did, but she assured me that the full amount was refunded in both cases. Again, out of curiosity I pushed the issue and she insisted that the full amounts were credited ($13.48 and $14.55) even though I explained that what I saw on my eBay account was different.<p>I was being incredibly polite and merely inquisitive (it's 2 cents for god's sake) but she rudely stated "I can't help you anymore, if you have more questions, then you need to use the eBay help menu online" and hung up on me.<p>Ouch.<p>$13.48 charged, I was refunded 13.47.
$14.55 charged, I was refunded 14.54.<p>Since we're literally talking pennies here, I don't care enough to call eBay back, but I'm curious as to what caused this.<p>I assume this is a rounding error from taking a percentage of the amount (possibly a floor function being used for credits). This all reminds me of Office Space, and made me wonder how much money eBay makes off of these penny differences, if this indeed happens all the time.<p>Has anyone ever come across this before, or am I an anomaly?
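For what it's worth, the floor hypothesis is easy to illustrate: if the fee is ever held internally with sub-cent precision, charging with round-half-up but crediting with truncation loses a cent. A Python sketch (the 13.475 fee value is invented for illustration; I have no idea how eBay actually computes fees):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

# Hypothetical internal fee carrying sub-cent precision
fee = Decimal("13.475")

# Charge rounds half-up to whole cents...
charged = fee.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)   # 13.48
# ...while the credit truncates (floors) to whole cents
refunded = fee.quantize(Decimal("0.01"), rounding=ROUND_DOWN)     # 13.47
```

Any fee whose exact value falls in the half-cent between the two rounding modes would produce exactly the one-cent discrepancy described above.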
======
staunch
You should ask them if they have a programmer named Michael Bolton.
------
octal
I've never seen this. I wish there was an easy way to recreate this, without
almost getting scammed or abusing the Ebay system.
You could be on to something though. How many people have been wronged?!
~~~
bdclimber14
You're right, it's tough to recreate. I thought about filing a bug report with
eBay. Steps to reproduce:
\- List a high-dollar item like a computer with an artificially high buy it
now price.
\- Allow any type of buyer.
\- Wait until it's bought and you get the first scam correspondence.
\- File a non-paying buyer report.
\- Wait 2 months.
\- Call eBay to request a credit.
| {
"pile_set_name": "HackerNews"
} |
Interview with Doug Crockford, creator of JSON - alavrik
http://www.simple-talk.com/opinion/geek-of-the-week/doug-crockford-geek-of-the-week/
======
docgnome
He also wrote what is, imho, one of the best programming language books,
JavaScript: The Good Parts. (<http://oreilly.com/catalog/9780596517748>) I
found it super handy mostly because he made no claims that JS is the best
thing ever. A lot of it sucks and he admits that. Which I found super
refreshing from the standard "ZOMG X IS TEH BEST EVAR! USE X AND GOLD COINS
WILL FALL FROM THE SKY!" of most programming language books.
------
andreyf
I wonder if he minds being called the "creator of JSON"? Yes, he created the
standard, but I imagine the notation was really decided by Brendan Eich
(author of original JS implementation).
~~~
fierarul
Yeah, I've also wondered about that.
Plus that I see JSON as a minor format, it's not like someone didn't think
before: "let's just wire some lisp over a socket and eval it on the other
side".
The same about the guy that invented Markdown, another minor microformat that
somehow is seen as a great accomplishment in some circles (mostly Reddit).
~~~
jerf
JSON is a carefully-chosen subset of what "eval" will actually include, the
three major differences being that it specifies the delimiters rigidly (JSON
that uses apostrophe-delimited strings is not JSON), it rigidly specifies
Unicode (defaulting to UTF-8), and it doesn't permit anything that looks like
Javascript code. So, comparing it to "wiring over some Lisp and evaling" it is
not accurate, JSON was created for the explicit purpose of _not_ doing that.
Previous approaches typically did.
~~~
eru
Wiring some S-Expressions and parsing them, would be more apt than the eval
comparsion.
~~~
rdtsc
That's what I do. I persist python objects into s-expressions using a
c-extension module then parse and un-persist data back to python on the other
side. Everyone asks me why I don't just switch to JSON. Well I might one day,
I just like S-expression for now and the parser is very small, fast and was
easy to implement. I even have references (like Yaml) to persist arbitrary
object graphs.
~~~
eru
Interesting. If you are working with Python-only, why have you decided against
Python pickling?
~~~
rdtsc
One reason is debuggability. Being able to see exactly what is persisted and
what goes over the wire. Pickles can drag in arbitrarily large object graphs
if you are not careful and they are not completely safe from the security
point of view.
However, the other reason (the original reason peraps) is that we have some C
processes that listen and interpret s-expressions. There were there before
Python, so we already had an s-expression library for them. Then our Python
processes had to talk to the C so I implemented Python object persistence on
top of the existing s-expressions. Now we use it even between Python
processes.
Well perhaps one day we'll just switch to json, yaml or protobufs. But we
haven't decided which one yet.
I do find it interesting that nobody even mentions s-expression these day when
there compare various persistence mechanism. I guess lisp and parantheses have
stopped being cool?
~~~
eru
When the only thing you have is XML, then S-Expressions are an enlightenment.
JSON isn't nearly as moronic, so there's less pain to make you reach for
S-Expressions. I guess JSON is good enough.
We use S-Expressions for logging in XenServer.
| {
"pile_set_name": "HackerNews"
} |
GoPro Evolution: From 35mm Film To America's Fastest-Growing Camera Company - thealexknapp
http://www.forbes.com/sites/ryanmac/2013/03/04/gopro-evolution-from-35mm-film-to-americas-fastest-growing-camera-company/
======
3327
The key to GoPro is content. As the saying goes, "Content is King". Camera
phones just do not create content "on par" with a GoPro, generally speaking.
Think of your average Camera Phone user, and average Gopro user... The average
content generated from the goPro, although - probably not impressive like the
footage you see in the adverts, is still going to be superior to average
content from a phone camera. Naturally the goPro user has acquired the camera
because his average state when using the camera is "something Exciting", and,
the average state of the average phone camera user (when grabbing footage) is
perhaps "I will share this with my friends" or "this looks cool" (whatever you
want to label it).
~~~
alexcroox
Absolutely, I built a passion project a couple of years ago because I found so
many amazing videos I wanted to share. They just all happened to be GoPro
ones! [http://goproheroes.com/gopro-hero3-black-edition-smaller-
lig...](http://goproheroes.com/gopro-hero3-black-edition-smaller-lighter-
and-2x-more-powerful)
It's also worth mentioning they are the only company I know that gives away
everything they sell to one lucky winner every single day!
------
subsystem
Meh, consumer news.
As far as I know the GoPro is based on Ambarella’s platforms. Here are some
specifications, a teardown and a look at their newest platform:
<http://www.youtube.com/watch?v=U1nsYd3lG60>
[http://www.ambarella.com/products/consumer-hybrid-
cameras.ht...](http://www.ambarella.com/products/consumer-hybrid-cameras.html)
[http://www.anandtech.com/show/6652/ambarella-
announces-a9-ca...](http://www.anandtech.com/show/6652/ambarella-
announces-a9-camera-soc-successor-to-the-a7-in-gopro-hero-3-black)
------
faramarz
Not to down play the immense success, but having a prominent Silicon Valley VC
father and a 100k initial investment on his part must have been crucial in
getting the injection mouldings done and ready for volume production.
That was pre-Kickstarter days. Kickstarter has levelled the playing field for
other hardware entrepreneurs in getting early support to pay for the tooling
and moulding process.
------
mikek
The competition for the GoPro isn't cellphones. It's Google Glass.
~~~
rplnt
It's neither probably. Their competition is other "action" cameras within more
convenient packages. Luckily for GoPro they still excel in video quality (in
their category).
~~~
SideburnsOfDoom
Yes. I was thinking particularly of the #2 action camera vendor, Contour (
<http://contour.com/> ) as the GoPro's main competition right now.
Though Google glass and whoever competes with it will get there too,
eventually.
------
nawitus
It'll be interesting to see if they can compete with cameraphones. GoPro does
have a few differentiation strategies. The first is that they'll offer higher
quality video than phones. However, the video quality on phones will get
better all the time, and the difference in quality will become smaller every
year.
Another strategy is that they can compete with price. Consumers can always buy
a case and a strap to house their smartphone in, but if you're filming sports
there's a quite high risk to destroy the phone. The price of GoPro-style video
cameras will go down over time (if they won't constantly add new features in),
but the cost of phones will probably stay high in the future.
Consumers will likely choose the $79 camera instead of risking their $500
phone to film sports.
~~~
Retric
There is always going to be a significant low-light advantage to having a
larger lens and sensor. Add a mounting bracket / wrist strap and improved noise
reduction, and there is only so close a camera phone can get.
~~~
nawitus
Yes, that's true, but when camera phones will be as good as e.g. Canon 5D Mark
iii (and they will, relatively soon even) then that quality will be good
enough for practically everyone. At that point only professionals need better
quality.
~~~
sparky
How does this prediction jibe with the statement that a larger sensor will
always be beneficial, especially in low light?
Will smartphones be able to include much larger sensors in the future?
Is there some new physics that obviates the need for a large sensor?
The 5D Mark III has > 24x the sensor area of an iPhone 5, and most smartphones
are even worse off [0] [1]
[0]
[http://en.wikipedia.org/wiki/List_of_large_sensor_interchang...](http://en.wikipedia.org/wiki/List_of_large_sensor_interchangeable-
lens_video_cameras) [1]
[http://www.chipworks.com/blog/recentteardowns/2012/09/21/app...](http://www.chipworks.com/blog/recentteardowns/2012/09/21/apple-
iphone-5-image-sensors-and-battery/)
~~~
nawitus
>How does this prediction jibe with the statement that a larger sensor will
always be beneficial, especially in low light?
Even though larger sensors will be beneficial, at some point the small sensor
will be good enough for 99% of consumers, though professionals will still
prefer the larger sensor.
>Will smartphones be able to include much larger sensors in the future?
Perhaps, like Nokia's PureView did - however, that phone is slightly larger than
the average smartphone.
>Is there some new physics that obviates the need for a large sensor?
No, but technology will advance so that small sensors will be sufficiently
good for 99% of consumers. There's progress in sensor technology every year.
There's also progress on the processor side, which has been/is apparently a
bottleneck, as new processors in DSLR cameras enable better image quality
(take for example DIGIC processors).
------
pkteison
I'm really impressed by the timeline on the article. Appears to be implemented
with <https://github.com/athletics/infostory> ; anybody know if this was this
custom made just for Forbes, or even just for this article? Seems like a ton
of effort for a small detail, but it really enhanced the article for me.
Edit: Better googling yields this article which talks a little bit about the
timeline, so definitely not just for the article:
[http://www.forbes.com/sites/lewisdvorkin/2012/09/13/inside-f...](http://www.forbes.com/sites/lewisdvorkin/2012/09/13/inside-
forbes-our-journey-from-website-to-platform-a-2-year-interactive-timeline/)
~~~
jellisnyc
Hi, I'm one of the partners at Athletics. We originally developed the timeline
custom for Forbes and have wanted to push this a bit further at some point,
hence the repo. Really glad you liked it.
Where credit's due: The Forbes team developed the GoPro feature using our
toolkit as a starting point. (We did the timeline in Lewis D'Vorkin's post
that you referenced.)
------
morefranco
Awesome post - really interesting to see the evolution and how it was all
started without the help of sites like Kickstarter (seems like that's where
they would have started if the GoPro was about to come out today).
------
farabove
GoPro's success is based on one thing: it does its job very well.
| {
"pile_set_name": "HackerNews"
} |
Google joins .NET Foundation as Samsung brings .NET support to Tizen - ickler8
https://techcrunch.com/2016/11/16/google-signs-on-to-the-net-foundation-and-samsung-brings-net-support-to-tizen/
======
mentat2737
Nice.
Now please make C# a first-class citizen in Android and start to migrate from
Java to C#.
~~~
geodel
Google have not made their own languages Dart/Go official to Android. Why
would they make languages other than Java a priority now?
~~~
patates
Dart doesn't have the adoption yet and has a different plan when it comes to
mobile (
[https://github.com/flutter/flutter](https://github.com/flutter/flutter) ).
I love Go, but it's not really a suitable language to do UI, or anything that
deals with data models.
C#, however, is a perfect replacement for Java, most of the times. I would say
"it's simply superior in every imaginable metric other than cross-platform
implementations of the compiler/VM" but that's just my opinion.
~~~
dom96
Why do you consider Go unsuitable for UI development?
~~~
patates
You can't have generic functions that can wrap data so you end up passing
concrete models or interfaces to views - no generic view-models for you. The
inflexibility of the type system isn't a big deal when you are working on
network applications or tools, but causes serious duplication when you do
anything that passes around concepts internally.
~~~
bsaul
i don't think generic is relevant to GUI. objective c didn't have generics,
and i don't think it mattered in any way when they built cocoa.
now generics is a problem of its own when working with data and algorithms,
but they managed to get along with it in the backend so far, so...
~~~
dagi3d
obviously it is doable, but that does not mean there aren't better solutions
today.
------
oblio
Now we're cooking.
The technical steering groups is currently formed out of: Microsoft, Red Hat
(so input from the main Linux distro), JetBrains (input from the makers of
great tools for developers), Unity (one of the leading game engine makers),
Samsung (one of the leading mobile device makers) and now Google.
.NET should have a bright future. And hopefully this should push a few buttons
over at Oracle HQ so that Java catches up faster to C#.
~~~
skizm
Is Java currently behind C# in any capacity?
Not that a shot in the arm wouldn't be good for Oracle, but Java definitely
still reigns supreme at the moment, despite Oracle's involvement.
[http://www.tiobe.com/tiobe-index/](http://www.tiobe.com/tiobe-index/)
~~~
oblio
Java the language versus C# the language.
~~~
skizm
Same question. What advantages does C# the language have vs Java the language?
Do people perceive Java as playing "catch up" with other languages?
e: I only ask because I've always heard the opposite.
~~~
on_and_off
Pretty much everything that is in Kotlin should have been in Java for a while
now, IMO, and some of these features are already in C#.
rubber_duck
Kotlin fixes some of Java issues but it still can't fix JVM design decisions
such as lack of value types (coming to JVM in what 5 years from now ?) and
messy native interop.
Java has some really fancy JIT compilers designed for servers but .NET is much
more AoT/native interop friendly with value types and reified generics, you
can get a lot closer to C++ like code with C# than with Java (avoiding GC with
structs, controlling memory layouts in collections, etc.)
------
veeragoni
Microsoft joins Linux Foundation and Google joins .NET foundation. what a day
:)
~~~
badloginagain
The thing I find really interesting here is that Microsoft is pulling down the
walls to its garden and building bridges instead. It will be fascinating to
see how this plays out, because this is a tectonic shift in the development
landscape.
Props to Satya Nadella for having the gumption to lean into this strategy. I
was expecting a few token open-sourcish libraries as a giant marketing
campaign, but it looks like they're really committed to the idea.
------
brilliantcode
What an amazing year for Microsoft. It's nothing like Microsoft from 2006 or
1996.
Build 2016 is probably THE defining moment for developers who have previously
shied away due to Microsoft's inherently closed, proprietary nature.
At least for me anyways, AWS seriously needs a killer IDE like Visual Studio's
tight integration with Azure.
------
JBReefer
I like the laptop with a bunch of lovely Microsoft technologies, and then WiX.
Please, please die WiX. Imperative XML + non-deterministic execution order +
the worst error messages of the entire stack. I love C# and the CLR, but damn
WiX sucks.
~~~
chamakits
I haven't done a lot of 'Windows exclusive' development for a while, but
something like 6-8 years ago, I was making an installer for a small company
that up to that moment, they had to send someone over to spend a whole day
installing the software on client's machines.
I was strongly suggested to use WiX. I spent 2 months trying to get something
to work, but I wasn't able to get anything truly useful to run. I remember
explicitly that something as simple as writing to the registry was proving
problematic. To make it worse, documentation was poor, and there wasn't much
of a community around it cause it was brand spanking new.
Two months in, without telling anyone, I decided to ditch it and use NSIS.
That day I had something that actually worked! Within 2 weeks I had something
that was running end to end, installing the software on the machine. The next
month was polishing, and testing/fixing for different versions of Windows.
I have no idea how things may have changed now, but if I were tasked with making a
Windows installer today, I wouldn't even think twice about using anything
other than NSIS.
~~~
ygra
Can NSIS by now roll-back partially failed installations? That's to me the
biggest gripe I have as a user of such installers – whenever something weird
goes wrong you end up with a half-installed application of which you don't
know how to get rid of the pieces.
~~~
flukus
You delete the directory to get rid of the pieces.
~~~
ygra
Assuming it hasn't done a bunch of other stuff yet. While Microsoft recommends
that the install directory is the application bundle and programs should
confine themselves to it, that's hardly what many applications are doing.
------
echelon
As a Java developer using Linux and Mac, I couldn't be happier. I would love
to see C# and .NET on Android. I'd be equally thrilled to use Microsoft tools
(so long as they're on Unix) to develop for it too.
------
shaydoc
.NET is great... C# is a fantastic language. Apple, make it a first class
citizen for iOS also ;)
~~~
adamnemecek
What are some things that C# has over swift
~~~
mvitorino
LINQ and ability to interact with any IL compiled language (F#, VB). Also
async.
~~~
eggy
I prefer F# over C#, and I think it is a better competitor to Swift or Java or
Kotlin.
~~~
xorxornop
The important thing is CLR support. Everything else is just semantics.
(literally)
~~~
mvitorino
Semantics improves expressiveness which provides conciseness, which leads to
less code. Less code is generally less bugs (sure...arguable). But definitely
expressiveness is also often correlated to programmer happiness, which is
important in itself.
------
ocdtrekkie
Tizen supporting .NET is the interesting thing in this article to me. If
Microsoft got so far as getting UWP apps running on Tizen, Microsoft and
Samsung could potentially offer a pretty compelling offering against Android.
~~~
Grazester
Windows Phone had UWP no? Where developers were concerned it didn't offer them
anything compelling enough for that platform it seems. I think it would be
even less so on Tizen even.
~~~
ocdtrekkie
Tizen has other traits that might be more appealing, like the fact that it's
open source. (And arguably, more open source than Android by far.) And if
Samsung chose to start pushing Tizen phones over Android phones... bear in
mind, Samsung is pretty much THE Android manufacturer, everyone else rides
their coattails. Samsung is maybe the only company that can upset the apple
cart as far as Google's concerned.
~~~
Grazester
Without the Google Play Store Samsung's phones without Android are not going
to sell.
~~~
dogma1138
In the west maybe not, though the Samsung store has tons of stuff.
FYI, many if not most Android phones are sold without the Google store today in
emerging markets; if you buy a <$50 phone in Africa you are not getting
Google's app store.
~~~
GFischer
To be honest, everyone sideloads the Play Store anyways. Heck, they sell it
pre-sideloaded here in South America (and with pirated apps if you want them).
------
mr_overalls
What would be required to bring the CLR up to the JVM's legendary level of
engineering?
~~~
KirinDave
It's already there? The CLR is a well maintained and engineered system.
Why do you believe otherwise?
~~~
mr_overalls
At one time, the JVM had superior configurability - many more runtime options
for aggressive garbage collection, profiling, optimization, and debugging.
But maybe you're right - it's been a few years since I last looked at the
comparison.
~~~
KirinDave
> At one time, the JVM had superior configurability - many more runtime
> options for aggressive garbage collection, profiling, optimization, and
> debugging.
I'm not sure that this actually implies it was a more robust and production
ready system. The JVM seems to be on a similar path of reducing somewhat how
much tuning is expected of operators. Certainly we do less of it now on Java 8
(although some of the defaults it sets are truly boneheaded).
------
flinty
So if you were to build a time machine, go back to 2004, and tell someone the
following, which do you think they would believe:
Microsoft joins Linux foundation and has a seat on the board
Google joins .NET foundation
Trump is president of the United States
------
bborud
I've used Java since it was first launched and I've used it as a primary
language since 2003 (it took a while before it was usable for the stuff I was
doing). Although I like Java, I don't trust Oracle. They are not a well-
behaved citizen of the software world. So for the last few years I've been
eager to migrate away from Java.
I really hope Microsoft understand that if we made the move to C# they have a
brilliant opportunity to set the standard for how to behave.
(Meanwhile, I'm in the process of using Go for projects)
------
m3rc
Does this spell Google moving away just a little bit from Java in the future?
~~~
markdoubleyou
Jon Skeet (C# guru who works at Google, for those unfamiliar with C# rock
stars) was interviewed on Software Engineering Daily, and his response to this
question was basically, "uh, no." Google isn't shifting their focus away from
Java/C++ any time in the foreseeable future. (You might see improved support
for .NET Core in Google Compute Engine, though.)
[https://softwareengineeringdaily.com/2016/09/20/cloud-
client...](https://softwareengineeringdaily.com/2016/09/20/cloud-clients-with-
jon-skeet/)
~~~
m3rc
That's an interesting interview, thanks.
------
Zigurd
If Microsoft makes another run at phones, they should use Tizen with .NET. That
would be very much in the spirit of Android. Every Android OEM/ODM would know
how to port it, so it might pull along some 3rd party hardware makers. Most
importantly it would lose all the complexity of being Windows Everywhere while
still running key MS apps.
------
alkonaut
They say Tizen TVs with .NET support will come in 2017. Does that mean older
devices will never support .NET? I couldn't find any information about that.
------
phyushin
Tizen would be OK if you didn't have to use eclipse
~~~
Kipters
Well, now you can use Visual Studio
------
johnnydoe9
I barely understand all this but I'm excited!
| {
"pile_set_name": "HackerNews"
} |
Scheme Project that People will Find Use For? - sicpguy
I'm currently going through SICP (in chapter 4 now) and I'd like to start a 3-4 month project but I'm not sure what project to start building.<p>I know the answer is "scratch your itch" but I find that I don't usually have any itches. Usually, my main motivator is people using the project (ie. my main motivator is customers). I don't have a lot of experience in the LISP world so I'm wondering what project would be useful for the community right now. I'm using Racket btw.<p>Thanks for all the suggestions.
======
lfborjas
Something like wsgi/rack/ring would be cool (actually, ring could be your
inspiration: <https://github.com/mmcgrana/ring> )
Monte carlo methods vs Markov chains - mathola16
http://blog.wolfram.com/2011/06/08/what-shall-we-do-with-the-drunken-sailor-make-him-walk-the-plank/
======
dodo53
Or presumably you could go one further and solve for the fundamental matrix
which gives total probabilities (given unlimited steps) of ending at either
end of the plank ('absorbing' states which you don't come back from). see:
[http://www.math.dartmouth.edu/archive/m20x06/public_html/Lec...](http://www.math.dartmouth.edu/archive/m20x06/public_html/Lecture14.pdf)
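Concretely, here's a sketch of that fundamental-matrix calculation (the chain here is my own reduced two-transient-state reading of the plank problem, not anything from the blog post): with Q the transient-to-transient transitions and R the transient-to-absorbing ones, N = (I − Q)⁻¹ gives expected visit counts and B = N·R the total absorption probabilities.

```python
import numpy as np

# Transient-to-transient transitions Q and transient-to-absorbing R
# (single absorbing "fall" column; the safe exits just leak probability).
Q = np.array([[1/4, 1/4],   # end spot: stay, step inward
              [1/4, 1/2]])  # inner spot: step outward, stay
R = np.array([[1/4],        # end spot falls off the plank
              [0.0]])       # inner spot can't fall directly

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
B = N @ R                         # total probability of ever falling

print(B.ravel())  # ≈ [0.4 0.2]  (exact: 2/5 and 1/5)
```

These match the x = 2/5, y = 1/5 values derived elsewhere in this thread.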
------
scythe
You could also solve it analytically by diagonalizing and setting all of the
eigenvalues less than the maximum eigenvalue to zero.
~~~
a1k0n
You could also solve it analytically with dynamic programming, computing the
probability of falling for each spot on the plank.
~~~
scythe
Well, since you can walk back and forth across rows, you'll have to compute
all of the probabilities for a given row at once by solving the resulting
system of equations. This is generally quite simple.
For example, take a four-column, one-row version, with probability-of-falling
like so:
* 0 0 0 0 *
1 x y y x 1
We have:
x = 0 / 4 + 1 / 4 + y / 4 + x / 4
y = 0 / 4 + x / 4 + y / 4 + y / 4
giving the solutions x = 2/5 and y = 1/5. It is easier, of course, if you
exploit the symmetry of the situation, as here (by writing x y y x instead of
x y z w).
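For what it's worth, a Monte Carlo sanity check of those numbers (my own sketch; the four equally likely per-step outcomes are read directly off the two equations above — from x: safe, fall, step to y, stay at x; from y: safe, step to x, and two ways of staying at y):

```python
import random

def fall_probability(start, n=200_000, seed=1):
    """Estimate P(fall) for the plank by simulating the reduced chain."""
    random.seed(seed)
    falls = 0
    for _ in range(n):
        state = start
        while state in ("x", "y"):
            r = random.randrange(4)  # four equally likely moves
            if state == "x":
                state = ("safe", "fall", "y", "x")[r]
            else:
                state = ("safe", "x", "y", "y")[r]
        falls += state == "fall"
    return falls / n

print(fall_probability("x"))  # ≈ 0.4, agreeing with the exact x = 2/5
```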
------
achompas
How random is RandomChoice[]? For everyday applications I figure it wouldn't
matter, but when taking ~160,000 steps (as with MC methods) we could possibly
observe a non-uniform pdf for the sailor's steps.
~~~
wisty
If it's something like the Mersenne Twister (in Python) then very very random.
The period is something like 2^19937-1, and it passes a lot of tests to see
whether or not it looks random.
Of course, in a big random system there are some states that simply won't be
reached by any random number generator with a finite period. Once you start
looking at combinations and permutations, you can a get staggeringly large
number of states. But in practice, this shouldn't matter for any problem where
Monte Carlo methods make sense - if your answer is very sensitive to whether
or not you have sampled a state that only crops up one in a squillion times,
you shouldn't use Monte Carlo.
~~~
achompas
Thanks for the answer!
------
dvse
With a fair bit of hand waving it's also possible to present the law of large
numbers and the central limit theorem in the same way. Can also look at
Metropolis-Hastings as modifying a suitable random walk to get the steady
state that we want.
------
PaulHoule
here's a nicer 2d drunkard's walk simulation that i put together in scratch in
a few minutes...
<http://scratch.mit.edu/projects/electric_mouse/1002199>
------
guan
Markov chain Monte Carlo!
------
mrvc
Having worked with them for a few years now, can I just say I hate the term
Monte Carlo Method. Although I can understand that calling them "random number
simulations" is not nearly as cool.
~~~
wisty
In Australia, Monte Carlo is a kind of biscuit. If you tell most people you do
Monte Carlo simulations, they will think you design biscuits. Which actually
sounds cooler than "insurance", "safety systems", or "math".
~~~
evgen
In the US a biscuit is a "cookie" and if you tell people that you do Monte
Carlo simulations but not the kind related to biscuits they will smile
politely and slowly back away from you...
Call for support for Lisp in WebAssembly development - patrickmay
http://article.gmane.org/gmane.lisp.steel-bank.devel/19495
======
klodolph
From digging, it looks like the issue here (or one of the issues) is that the
Web Assembly encoding for some Lisp uses cases (multiple value return) is not
very compact. Making the proposed change would presumably reduce the size of
Lisp code once compiled to Web Assembly, it would not really affect the
behavior of the program once compiled to native code.
I can relate to the Web Assembly team's reluctance to add features which are
only really wanted by a small subset of users, especially when they only
affect the binary size. These features, if implemented, may suffer from poor
test coverage. My own preference is that compact binaries are nice but if
you're going to use a high-level language, an increase in binary size is just
expected (say, an order of magnitude) unless the encoding/VM and language were
designed in concert (Java + JVM, or C# + CIL are two examples). Heck, C++
binaries can be enormous.
Then again, I didn't dig deep enough to really understand the nuances of the
argument. Perhaps someone could elaborate.
~~~
kuschku
Then again, if the WebAssembly team makes that argument, we might as well just
use JavaScript + ASM.js
~~~
dietrichepp
ASM.js builds can be quite large, even 10s of MB is not uncommon. Reducing the
binary size isn't just "nice to have", it changes the viability of the
platform.
~~~
kuschku
But wasn’t the argument of the wasm team against the LISP features that binary
size isn’t relevant and just a "nice to have" feature?
~~~
dietrichepp
I don't think anyone is arguing that binary size isn't relevant. It just has
to be weighed against the other parameters we want to optimize, like
implementation complexity.
------
lisper
> some support from other people in the lisp community seems necessary.
A clearer call to action would be helpful here. What exactly should members of
the Lisp community who care about this do?
~~~
pjlegato
Agree. I support having Lisp support in WebAssembly! Now what?
~~~
vmorgulis
My knowledge of LLVM an SBCL is limited but I know a bit emscripten and how it
works. I will look around "multivalues" and "power of two memory access" in
LLVM.
------
vmorgulis
The misunderstandings are related to the use of the word "AST". wasm looks
like an AST but in fact it's a bytecode with stackframes.
------
Sanddancer
The memory allocation feature feels more like it would be part of whatever
memory allocation library is used than something that should be baked into the
language. For example, jemalloc allows for the kind of alignment that is
discussed here and is done at runtime, and doesn't require specific behavior
from the lower levels. Any language is going to need a runtime because you
can't put in every feature that every user will need, and a malloc doesn't
seem like a huge issue, especially with one already written that an
implementation can crib from.
~~~
nabla9
Memory allocation feature asked is not a language feature. It has to be baked
into web assembly if you want to use portable byte masking with pointers.
It's very low level implementation level detail that enables fast execution of
dynamically typed languages. Boxing has high cost and it consumes memory. Type
tags embedded in pointers can be very fast. For that to work, you need objects
that are aligned with power-of-two byte boundaries.
Adding this feature enables efficient execution strategy for scripting and
dynamic languages.
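A minimal sketch of the trick being described (illustrative only — the tag values and 16-byte alignment here are my assumptions, not SBCL's actual scheme): with power-of-two allocation on 16-byte boundaries, the low four bits of every object address are zero, so they can carry a type tag checked with a single mask.

```python
ALIGN = 16              # power-of-two allocation: addresses are multiples of 16
TAG_MASK = ALIGN - 1    # ...so the low 4 bits are free for a type tag

TAG_FIXNUM = 0x1        # hypothetical tag values, for illustration only
TAG_CONS = 0x2

def tag(addr, t):
    assert addr & TAG_MASK == 0, "object must be 16-byte aligned"
    return addr | t                 # smuggle the type into the spare bits

def type_of(ptr):
    return ptr & TAG_MASK           # one AND instruction, no memory read

def untag(ptr):
    return ptr & ~TAG_MASK          # recover the real address

p = tag(0x7f3000, TAG_CONS)
print(hex(type_of(p)), hex(untag(p)))  # 0x2 0x7f3000
```

The point of the feature request is that `untag` stays a plain AND; without guaranteed power-of-two alignment, every dispatch needs extra checks or boxing.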
~~~
Sanddancer
The primary target of WebAssembly is strongly-typed pre-compiled languages,
where the kinds of features you want would just lead to slowdowns and
excessive memory consumption. There is no hardware currently out there that is
a tagged architecture, so expecting them to bend backwards is not a realistic
option.
~~~
nabla9
You don't need tagged architecture if you allocate memory by power-of-two
regions.
------
pmarreck
I support having the Erlang BEAM in WebAssembly! Any takers?
------
zaro
"This masking strategy would in turn require a power-of-two related memory
size, and there has been a lot of resistance to this too."
A more appropriate title will be "WebAssembly team doesn't want to listen my
ideas on how WebAssembly should work".
~~~
junke
From
[https://www.w3.org/community/webassembly](https://www.w3.org/community/webassembly)
:
> The mission of this group is to promote early-stage cross-browser
> collaboration on a new, portable, size- and load-time-efficient format
> suitable for compilation to the web.
See also
[https://www.w3.org/community/council/wiki/Templates/CG_Chart...](https://www.w3.org/community/council/wiki/Templates/CG_Charter#Decision_Process).
This is a collaborative work where people can make suggestions (I cannot judge
if the proposal was fairly evaluated or not).
> WebAssembly team doesn't want to listen my ideas on how WebAssembly should
> work.
Your title implies that the WebAssembly team has the best knowledge and/or
expertise to develop WebAssembly. They are probably expert in their own domain
but are willing to take advice from other contributors.
~~~
zaro
Please read this quote again:
"This masking strategy would in turn require a power-of-two related memory
size, and there has been a lot of resistance to this too."
And try to think about the implications it has on the memory model of the VM
that is going to execute/JIT WebAssembly. A power-of-two memory model isn't
really viable at this level, I think. And you don't need to think a lot about
it to figure out that jumping to 256MB of memory just because your app/page
needs 130MB is a bit of a counter-optimization :)
The resistance is the sensible thing to do in this case :)
------
finchisko
So what is this webassembly about? Allowing programming for web in any
language (java, c, lisp) and compile to webasm, as some kind of runtime env?
~~~
qwertyuiop924
Basically. At least, that's the theory.
------
sandra_saltlake
I don't expect it to use the hardware in a sensible manner.
------
dschiptsov
BTW, language implementations which rely on LLVM for code generation would get
it for free. Well, for much less pain.
BTW2, time to appreciate how LLVM's approach is superior to JVM
madness/religion (and how Golang's is even more clever - do less by doing it
right - essentialist/minimalist asceticism)
~~~
dgellow
Could you elaborate on you BTW2?
~~~
dschiptsov
Modern CPU+OS is a good-enough hardware VM and a target platform. Process
isolation under an OS is a right level of abstraction.
A VM as user-level OS process which tries to do an OS job and reimplement
everything inside a VM is simply ridiculous. Javascript follows the same
madness.
Multi-threading for imperative code is a big mistake, which breaks isolation
and results in lock-hell, context-switching nightmare and layers of
unnecessary complexity which is impossible to reason about.
Golang and Swift guys got it.
~~~
xaduha
How would you explain that?
[https://www.techempower.com/benchmarks/#section=data-r11&hw=...](https://www.techempower.com/benchmarks/#section=data-r11&hw=peak&test=json)
Swift is nowhere to be seen and Go is nowhere near the top.
Once it gets going JVM is a beast.
~~~
dschiptsov
At the cost of wasting almost as many resources as it serves.
Top is about popularity, not quality. Junk-food is also popular.
My analysis was about the first principles, not abstract ones, but grounded in
reality. Those who got the principles right wins in the long run.
Erlang (where VM is _not_ a byte-code interpreter), Golang, Haskell (except
when monads are abused by idiots), etc are designs based on the right
principles. Java was a primitive religion based on superstitions (the fear of
pointers) from the start.
~~~
xaduha
> Top is about popularity, not quality. Junk-food is also popular.
What are you even talking about? This is a performance benchmark.
> Java was a primitive religion based on superstitions (the fear of pointers)
> from the start.
...
~~~
dschiptsov
Performance on a simplified task is the least important metric.
BTW, it will be wonderful to see next to these charts "memory used" and "lines
of code used, including all dependencies" columns. And "length of stack trace
in kilobytes" of course.
Sorry, I didn't read this particular link. I have seen too many of them
before. Principles are above particularities.)
Edit: an illustration - closer to real world example chart from the same site:
[https://www.techempower.com/benchmarks/#section=data-r12&hw=...](https://www.techempower.com/benchmarks/#section=data-r12&hw=peak&test=update)
~~~
xaduha
Whatever you say, chief.
~~~
dschiptsov
Thank you!
Let me illustrate the thesis about necessity of proper abstractions and
principles grounded in reality in another way.
There are way too many cases of a meaningless bloatware in human history,
including writings produced by Hegel, Marx and Engels. Millions of people have
suffered because these graphomaniacs produced 4000+ pages of so-called
[political] philosophy, full of pure abstractions, abstract concepts and
metaphysical design patterns. The shit doesn't fly, except for confusing the
minds of a bunch of lesser idiots, who ruined whole nations afterwards.
On the other hands, there are writings after "down to earth" guys, such as
Buddha or Christ, or to lesser extent, the guys who wrote Upanishads (which
uses rather poetical language) which literally saved, or at least improved,
billions of lives. In the realm of philosophy, guys like Thomas Hobbes and Adam
Smith wrote far fewer pages and described some aspects of reality way better.
Piling up layers upon layers of disconnected from reality crap of wrong
abstractions and dubious abstract principles, praised by brainwashed
followers, especially because they are too bogus and too abstract, is a way to
ruin.
I think it is not too hard to notice rather striking similarities.)
------
dschiptsov
Code generating backend for SBCL, like one in LLVM?
------
CyberDildonics
webasm is a strongly typed AST with manual memory management, it is not meant
to be a direct analog to lisp or a lisp interpreter.
Racing at 127mph in a Tunnel Under LA - awiesenhofer
https://twitter.com/boringcompany/status/1131809805876654080
======
ryzvonusef
[https://www.youtube.com/watch?v=VcMedyfcpvQ](https://www.youtube.com/watch?v=VcMedyfcpvQ)
Youtube video, better quality
------
ryzvonusef
Route:
[https://www.openstreetmap.org/way/633116268#map=18/33.92300/...](https://www.openstreetmap.org/way/633116268#map=18/33.92300/-118.34300)
An Unschooling Manifesto - dangoldin
http://blogs.salon.com/0002007/2009/04/25.html
======
tokenadult
Previously submitted:
<http://news.ycombinator.com/item?id=580209>
I see what URL difference kept the HN duplicate detector from noticing this
duplicate.
Artificial Data Gravity - alexwilliams
http://blog.mccrory.me/2012/02/20/artificial-data-gravity/
======
njyx
This makes total sense for the data storage providers, but it's also clearly
not an equilibrium in the long run since it'll be economical for all of the
data to replicate to multiple locations (where each group of users has
privileged data access rates).
Ask HN: F*ck HostGator. Can anyone suggest a better managed VPS alternative? - vicken
I'm sick of HostGator's constant outages, including today's. Can anyone recommend a solid managed VPS service not related to BlueHost/HostGator?<p>I'm currently paying $51.95/mo for the following and would like to stay in the same price range for similar specs:<p>2.3Ghz (1 core)
1024MB RAM
60GB Disk Space
1000 GB Bandwidth<p>I'd gratefully appreciate any input.
======
michaelchum
DigitalOcean!!! You can't get a better VPS for their price. Super fast SSD
(you feel the difference), almost no outages, extremely easy to setup and you
build the stack you want. Stellar customer service.
[https://www.digitalocean.com/](https://www.digitalocean.com/)
~~~
nitely
DO is unmanaged though.
------
stevejalim
I've had good experiences with Webfaction - they've been quick to respond to
tickets and communicate issues/outages well.
Roughly looking, your setup would be about $20/mth with them, I think.
Direct link:
[https://www.webfaction.com/features](https://www.webfaction.com/features)
Shameless affiliate link, even though I'd recommend them anyway:
[https://www.webfaction.com/features?affiliate=stevejalim](https://www.webfaction.com/features?affiliate=stevejalim)
------
thenomad
I've heard decent things about WiredTree for managed servers. Decent, not
awesome.
Bytemark are awesome, and do offer managed servers, but I don't know how much
they charge for them.
~~~
stevekemp
Bytemark start from £85 for an hour a month of hands-on work, along with the
automation, monitoring & etc:
[http://www.bytemark.co.uk/managed_hosting/transparent_pricin...](http://www.bytemark.co.uk/managed_hosting/transparent_pricing/)
------
gesman
Go for dedicated server for the same price:
[http://c.gg/ovh](http://c.gg/ovh)
That's what I use after I ran away from crappying hostgator.
------
jboss4
You should definitely check out WiredTree or Future Hosting. They are both
fantastic for managed VPS.
[http://www.futurehosting.com](http://www.futurehosting.com)
[http://www.wiredtree.com](http://www.wiredtree.com)
------
Steveism
I think LiquidWeb is certainly worth considering for a managed VPS in this
price range:
[http://www.liquidweb.com/StormServers/vps.html](http://www.liquidweb.com/StormServers/vps.html)
~~~
vicken
LiquidWeb looks very promising and fits right in my price range. Great find.
LW is the top contender so far.
Great suggestions guys, keep em coming!
------
pskittle
[https://www.strikingly.com/s/pricing](https://www.strikingly.com/s/pricing)
------
hardwaresofton
How strongly do you feel about it being managed? What are you hosting?
~~~
vicken
I strongly prefer it being managed so I don't really have to worry about
server maintenance and such.
I don't host anything too crazy. I'm a web designer and currently have about
15 sites I'm hosting for clients, with a handful of them being WordPress
sites, and the rest, simple HTML informational sites.
~~~
stevekemp
Generally "managed" means you share the login details to your host with
somebody, they apply updates, they help work with you to tune your server, and
they let you know of upcoming problems.
Although there are providers who both offer hosts and offer the management you
might find a decent compromise is to pay for them separately.
I remotely manage a lot of servers (40-80) in exchange for an ongoing minimal
fee, and I'm not alone in that I expect.
------
pixeloution
maybe these guys? [http://www.unixy.net/vps-
hosting/](http://www.unixy.net/vps-hosting/)
~~~
cordite
A team I worked with totally ditched these guys due to their managed services
quality
------
godzillabrennus
I use dotblock.com and they rock.
Growing a UX tool - juliushuijnk
https://medium.com/proof-of-concept/growing-free-ux-design-tool-prototype-with-ui-wireframing-and-user-scenarios-f2b0015516ef
======
daleco
When my Symbols are done in Sketch, I can build a mockup very quickly. Have
you considered building a plugin for Sketch?
I met with the InVisionApp sales team a few months ago. Their philosophy is to
work with and augment the current tools (Sketch 3...) instead of replacing
them. I thought that was a good approach.
I'd be worried that the designers will be scared by a command line based tool.
This will be hard to convince people to move away from Sketch 3.
Thanks for trying to improve the designer tools, it's much needed.
------
didgeoridoo
This is so desperately needed it isn't even funny. Is there somewhere I can
sign up for updates on your progress, try out alpha builds, etc?
~~~
choxi
What does everyone currently use for wireframing? I find it easiest to just
draw it out with a pen and paper.
~~~
didgeoridoo
My current process is:
1) Initial sketches in pen & paper.
2) Move into Sketch.app for refinement.
3) Move into Invision for clickthrough interactivity.
4) Move into Principle for animations & transitions.
5) Throughout, use Craft.io to keep track of personas, user stories, etc.
6) Realize that, despite my best efforts, documentation is scattered
everywhere. Things are out of sync. UI is in eighteen different states of
visual done-ness, and nobody knows who made what decision when.
7) Drink heavily.
My ideal workflow is:
1) Pen + paper or whiteboard for rapid ideation and exploration
2) Something exactly like this "True UX" tool for rapidly stringing together
layouts and flows in a testable, iterative, documentable way.
3) Drink heavily. Wait. Maybe this is a personal problem.
~~~
sogen
I use the same process but start with 7)
Have you used craft sync? There’s another tool to sync but forgot the name.
------
aldanor
Looks pretty cool. One thing though... Windows?..
~~~
juliushuijnk
It's a prototype I'm making in Python. You can run it on a Mac. For a web-app
prototype I can re-use much of the code.
First I want to gather feedback and get a feel for the potential. If there is
enough potential, I'd like to get one or more developers involved so we can
build a robust product for the platforms (desktop, mobile, web) that make
sense for the product.
Pledge support to Wikipedia if they do a SOPA blackout - yanowitz
http://www.wikipediablackout.com/
======
kevinalexbrown
I support an HN blackout, a Reddit blackout, and I would even be happy, if
somewhat hesitant, about a Wikipedia blackout.
But I absolutely do not support the conditional donation (excuse me, payment)
to Wikipedia to get it to take a particular political stance, even if that
stance concerns its long term survival. This is worse than "donate to a
politician, hope they vote your way" -- this is "pay Wikipedia money if and
only if it performs a specific action on a specific date and time." That goes
so far against why I love Wikipedia, and why it performs such a unique
service. Small, tight teams, no strings attached donations, unfettered public
input. Those are things worth preserving and fighting for, but not at the cost
of those things themselves.
I would hope Wikipedia returns the money or donates it to some other worthy
cause in the event of a blackout. Culture of an institution is a delicate
thing, and where and, perhaps more importantly, _why_ you get your money can
dramatically shift that culture one way or another. Wikipedia has a great
culture. Is that really worth risking?
~~~
redthrowaway
Agreed. I made my donation to the WMF during their last fundraising round, and
I'm participating in the discussions about a blackout. That's the extent of
the influence I feel comfortable attempting to exert.
The proposal comes from the right place, but it goes against everything
Wikipedia stands for.
------
lell
I pledged. In fact I'll donate $100 if they blackout. Those who oppose
blackouts claim that sites like wikipedia google might lose money, or that
they are essential services like utilities. I'm not sure if the essential
service analogy is valid, but it doesn't matter: correct me if I'm wrong but
the letter of the law of SOPA says that wikipedia.org can be wiped off the
internet as soon as it passes without ANY due process, just because some
people have uploaded images they don't own copyright for. Of course,
apologists will note that shutting wikipedia down won't happen, because the
bill is aimed at stuff like counterfeiters and torrents. To this I can only
say it won't be the first thing that happens. What it does is that it gives
the US government a guillotine around wikipedia's neck that they could pull at
any moment: the legal power and infrastructure for shutting it down. This is a
total affront to the independence of wikipedia as a non-profit organisation
(and to google & facebook as corporations).
By pledging we can reduce the cost of a blackout, make it more economically
viable for them, so they do it and show the world that if wikipedia(google,
facebook) really are essential, then their independence should be protected
from the growing nationalistic forces of the US government.
~~~
studentrob
This has probably been beaten to death but I wouldn't support a Google or
Wikipedia blackout. What if someone were bitten by a snake, snapped a photo of
the snake, and needed to look up the type of snake in order to administer the
right anti-venom? I bet you anything doctors these days are using the web to
do quick checks just as the rest of us do in our day jobs.
On the other hand, homepage placement for Google or something on every page of
Wikipedia for a day would be nice.
FB is non-critical but I wouldn't expect them to go for it. They are off on
their own island of hubris and not about to cooperate with any other
organization, much less with Google who is encroaching on their social
territory.
If twitter did it the entertainment industry + followers would be running
around with their heads cut off
~~~
jarin
What will happen if these sites are taken down permanently and someone needs
to look up the type of snake?
~~~
rcavezza
There is a 0% chance SOPA will lead to Wikipedia or Google being taken down
permanently.
~~~
redthrowaway
No, but it may well lead to increased legal costs for Wikipedia, as well as
forcing them to hire people to ensure no copyrighted material is posted (a
daunting task on a site that size). Those increased costs could seriously
affect their ability to continue to finance themselves through donations.
------
eekfuh
How do I know if I donate through this site that Wikipedia will actually get
the money.
Also the domain registration is private and through GoDaddy too. (less
credibility to me)
(I'd gladly donate to wikipedia, but not through this site)
EDIT: I thought they were taking the donations, my bad. It's a demand progress
site. Odd that they'd still use GoDaddy (even if GoDaddy eventually denounced
SOPA).
~~~
rhizome
GoDaddy has not denounced PIPA, and at any rate the BSA is still on the ball
as well. GoDaddy is playing both sides of the game.
------
Permit
If Wikipedia follows through, I definitely intend to donate more than one
dollar. I hope this can get some traction, as it would really help the fight
against SOPA if they participated.
------
neilk
I don't know if anyone cares, but Wikipedians have been discussing a SOPA
action for some months now.
<http://en.wikipedia.org/wiki/Wikipedia:SOPA_initiative>
The Wikimedia Foundation will support whatever the community decides. And the
community is not waiting for a "money bomb" or whatever. So I don't think
donations are going to matter in the slightest.
Activists' support for a boycott may influence it a little bit, but it's
really going to be a matter of consensus, and then someone in the community
stepping up to the plate to implement something.
The proposals have "triggers" attached, like, "if SOPA is going to a floor
vote, trigger blackout 48 hours beforehand" (that's just an example). Nobody
has yet talked about a trigger in sympathy with a site like Reddit.
In my opinion, while it might make sense for Reddit to go dark when kn0thing
is testifying before a committee, I think there is some risk of weighing in
too early. You can't do this sort of thing twice.
------
acangiano
There is an error around "I'm not in the US". I had to manually run
javascript:go_foreign().
~~~
lell
I had this error too. To "manually run" the javascript, replace the URL in the
URL bar by "javascript:go_foreign()" w/o the quotes and press enter.
~~~
dserodio
This doesn't work for me, "Uncaught ReferenceError: go_foreign is not defined"
~~~
JamesBlair
And there are no contact details, so we can't even tell them that their page
is broken.
edit: Taking a gamble on emailing the registrant.
------
ultrasaurus
I support the blackout, and the company I work at is trying to figure out how
to support it (we aren't consumer facing), but isn't influencing the site
through money the kind of thing Wikipedia wanted to avoid by not allowing
advertisements?
------
brunoqc
The "(I'm not in the US)" link is broken.
It's : "<http://act.demandprogress.orgjavascript:go_foreign()>
Should be : "javascript:go_foreign()"
------
rhizome
How about we pledge $1 for every day, starting now, that they blackout until
both PIPA and SOPA are killed unceremoniously? Why wait, just do it now.
~~~
jarin
As a frequent Wikipedia reader, I support the minimum blackout period
necessary to generate mainstream media coverage and no more.
~~~
rhizome
Don't worry, I'm sure there are enough people like you where it wouldn't take
very long. Leahy's staffers, for instance.
Trump's immigration crackdown could sink US home prices - schintan
https://www.bloomberg.com/news/articles/2017-02-22/why-trump-s-immigration-crackdown-could-sink-u-s-home-prices
======
woliveirajr
Perhaps it won't get near the last mortage bubble, but it would have some
interesting consequences on the economy...
Ask HN: what are the best looking web application user interfaces? - hoodoof
What are sexiest, most stylish AND functional web application user interfaces you know? I'm NOT talking here about garden variety websites. Talking about web applications that are functional, that require significant user interaction.
======
kellros
I would never attribute 'sexy' to a website.
Stylish and functional, I'd suggest you take a look at lifehacker.com .
Functionality wise, it depends on what's appropiate for the type of website.
There's a reason why portal websites are rarely being used (except on say,
company intranets). Stylish I would attribute to a lot of things including
layout, conciseness, typography, certain interactivity, considerate and a few
others including things like support for graceful degradation where
appropiate.
A website in my eyes is something someone visits with predetermined intentions
looking to satisfy themselves within a specific niche. This described behavior
doesn't differ from real world examples such as when you go to a supermarket
to buy food.
------
abozi
Here's a question on Quora, which I found very helpful and it keeps on getting
updated.
[http://www.quora.com/What-startup-homepages-are-most-
simple-...](http://www.quora.com/What-startup-homepages-are-most-simple-clear-
and-effective-and-what-makes-them-so?__snids__=45425424)
------
benologist
Stripe's dashboard is pretty but you've set a pretty hazy bar on what
qualifies.
Don't Ban “Bossy” - atomical
http://www.newyorker.com/online/blogs/comment/2014/03/dont-ban-bossy.html?mbid=gnep&google_editors_picks=true
======
growupkids
That's funny, I actually heard the word used when I was a military cadet in
high school. It was used to teach the difference between being a leader, or
just, well, bossy. Don't be a boss, don't act bossy, and so on.
Bossy people, we were taught, boss people around by using their rank. No one
wants to follow them, they just have to. That's the worst kind of leader, we
were taught. "Don't be bossy" was good feedback. A good leader inspires, sets
the example, is firm but fair, and through their behavior and actions people
will want to follow them.
Not sure what all this implied sexism stuff is, I only heard it used around
men, and it was and still is a damn fine term for the bad ones. What term
would they prefer be used for someone that's acting bossy?
~~~
calibraxis
In Sandberg-style liberal feminism (which preserves inequality except for the
more well-off white women and therefore doesn't help all women), I can imagine
they want to improve subordination to female bosses. So you should be ready to
follow her imperatives like you would Zuckerberg's.
In corporations, boss subordination is so complete that "bossy" only applies
to someone who isn't actually literally a boss. So I can imagine "bossy" is
used to question the legitimacy of female bosses. However, more serious kinds
of feminism directly attack the existence of bosses, since many more women are
at the bottom of hierarchies than the top.
~~~
hga
In alignment with your observation, this campaign is thought by some to be
battlespace preparation for Hillary's 2016 presidential campaign.
~~~
judk
Oh my. That is brilliant submarine marketing.
~~~
hga
I prefer "cover influence operation".
------
skore
(I didn't even know the "Ban Bossy" campaign existed, so I guess my comment is
more about that and goes along with what the article is saying.)
Looking at the videos - to an outsider like myself, they look massively
ridiculous. So there is a societal problem where girls are either not
empowered to lead or are discouraged from leading by others. To stop that
practice, we will get rid of calling them a specific word.
Words only have the meaning that we put into them. "Bossy" can be applied
correctly or incorrectly. How is the word at fault?
I think "only in America" applies here. Instead of understanding that this is
a complex, complicated issue in society, let's find a catchy campaign title
and rail against intangible things. Oh aren't we all happy we have dealt with
the problem in a format that we can easily post to our facebook wall instead
of, you know, doing the hard work of actually figuring out and dealing with
people on a deeper, personal level.
And yes, I get it, the campaign uses a reductive catchphrase to get a foot in
the door and then deliver a more nuanced message. But I think a campaign set
on a weird, possibly destructive premise may do more harm than good. It may
lead people to think they're doing something when they're actually doing
nothing apart from perpetuating a meme.
How about we all just stop and check ourselves before reducing others to
adjectives in general? Grown-ups and children, women and men alike?
Maybe this tendency to grasp for the simple answer, the quick phrase at all
times is the root of the problem and should thus not be utilized as a
solution.
------
orky56
It's funny but "bully" is more associated with males and "bossy" with females.
Both have negative connotations of forcing someone to do something against
their own will. It seems that the reason behind not banning "bossy" is that
females require this opportunity for leadership development. It seems sexist
that females should be allowed to impose their will on others but males
shouldn't in similar situations. I would argue that females already have a leg
up on their male counterparts with the fact that they mature earlier during
adolescence and perhaps use this to their advantage. As the article mentions,
other ways exist to exhibit leadership. Being bossy though is the worst
alignment of incentives: power & peer acceptance thru fear vs respect.
~~~
loomio
Oh yes, this must be why leadership positions in business, government, and all
areas of life are dominated by women. Oh wait...
------
uptown
"Ban Bossy" Spokesperson: Beyoncé
Beyoncé Lyrics:
Bow down bitches, bow bow down bitches
Bow down bitches, bow bow down bitches
H-town vicious, h-h-town vicious
I’m so crown, bow bow down bitches
Beyoncé's husband Jay-Z Lyrics Excerpt:
My nigga, please - you ain't signing no checks like these
My nigga, please - you pushing no wheels like these
My nigga, please - you ain't holding no tecks like these
My nigga, please - you don't pop in vest like these
~~~
someguyonhn
Firstly, I fail to see how the lyrics from a Jay-Z song from 2002 are anything
other than completely irrelevant to the Ban Bossy campaign. But if you're
going to bring it up, we might as well do it right.
1) Pharrell says those lines not Jay-Z. This is the same person who
wrote/sings/produced the Academy-Award nominated "Happy" song from Despicable
Me.
2) The context of lyrics within a song, the intended meaning of the song
itself, and intended audience of a song should obviously be taken into
consideration. On a site like HN, that so often seems to point out the
ridiculous nature of arguments against video games causing violence, citing
lyrics of someone's husband as somehow a statement about.... well I have to be
honest, I can't follow the logic of the point you're trying to make... is
disappointing.
And finally 3) Here are some lyrics from Jay-Z that seem pretty relevant to
your comment:
"...Rap critics that say he's Money, Cash, Hoes I'm from the hood stupid, what
type of facts are those If you grew up with holes in your zapatos You'd
celebrate the minute you was having dough I'm like f-ck critics, you can kiss
my whole a--hole If you don't like my lyrics, you can press fast forward...
...I don't know what you take me as Or understand the intelligence that Jay-Z
has I'm from rags to riches, ni--as I ain't dumb I got 99 problems, but a
b-tch ain't one, hit me"
~~~
uptown
Why'd you censor the lyrics?
"I fail to see how the lyrics from a Jay-Z song from 2002 are anything other
than completely irrelevant to the Ban Bossy campaign."
We're talking about banning words. If I had to guess, I'd bet more people
would support banning "nigga" than they would "bossy". Personally, I don't
think any words should be "banned" because it's simply not possible. Society
may evolve to not use a word, or shun those that do use a word - but a
campaign to "ban" a word does an injustice to the literal meaning of the word
"ban" because it's just not realistic, or possible.
~~~
someguyonhn
To your question: I censor myself on HN because I don't believe Hacker News,
which is often used by children and is a place that seems to wish to be more
welcoming to women and racial minorities, is the right place to have an
environment where swearing or using racially inflammatory language is okay.
Especially when readers don't know my relationship to the subject matter, or
my relationship to the individual I'm addressing.
To your point about relevance, I'm going to point out to you that zero people
are actually advocating banning a word. They're saying "hey let's stop calling
girls who express leadership skills "bossy" because that has negative
consequences", which I think the average person would probably be open to.
They are advocating not using a word in the wrong context. Call kids bossy
when they're being brats, sure, but when someone, particularly a girl, is being
a leader and doing the same things that boys are complimented for, don't call
them bossy.
#WhenSomeonesBeingALeaderDontCallThemBossy is a pretty long hashtag and a
terrible way to quickly market your campaign. #banbossy is memorable, gets to
the point, and can encourage a conversation.
------
jedmeyers
I understand that this is a touchy subject, especially on HN, but come on:
"Avoid editing what you want to say in your head, and try not to worry about
being wrong." This is straight from their Leadership Tips for Girls pdf. They
are encouraging girls to just say whatever comes to mind. What's next -
calling everyone who disagrees "sexist"?
~~~
Tohhou
>What's next - calling everyone who disagrees "sexist"?
If you disagree then you are automatically labeled as one of the obvious
sexist rapist rape apologist pedophile neckbeard supremacist spermjacked nerd
sperglord virgin libertarian losers. They can't possibly be wrong, so if you
disagree you must be one of the hell bound sinners of the most dire nightmare.
>"Avoid editing what you want to say in your head, and try not to worry about
being wrong."
This is very sexist of them. Their implication is that males are stupid and
don't ever censor themselves when they work to be good leaders - that they are
only gain leader status because they say every dumb idea they have, and that
saying stupid things shouldn't have any consequences.
~~~
pigDisgusting
You forgot "creepy stalker", you closed-minded male chauvinist ape.
~~~
Tohhou
That's female chauvinist ape, shitlord!
~~~
pigDisgusting
Well played, Tohhou, well played.
Now if you'll please excuse me, I've just stepped in some of my own doggy doo,
and I need to scrape it off my shoe.
------
gaius
I love the delicious lack of self-awareness with which these things are
delivered. Like people who pay $50,000/year for college to study a subject of
no practical use telling me to "check your privilege".
~~~
someguyonhn
This is a kind of long response. But I hope maybe you'll read through it. Just
saying "check your privilege" is probably not the best starting point for the
conversation, so I'll try to do a better job of explaining. I don't know if
you are misunderstanding what is meant by "privilege" or not, but the
privilege being talked about when someone says "check your privilege" in my
experience are the privileges that come from being part of the, for lack of a
better term, more socially accepted or socially powerful group. Things like
white privilege, male privilege, heterosexual privilege, you get the idea.
So regardless of your level of income or education, you can be, and probably
are, still privileged in the way society sees and treats you.
For example, as a man, I pretty much never have to worry about being told I
got a promotion because I was having sex with the boss, or that I'm only being
angry or "emotional" about something because it's my "time of the month".
Things that women have to deal with all the time.
Another example would be that I'm never worried wanting to have children is
going to be seen as bad for business, and result in me being denied promotions
or other advancement because of it.(Not to mention I'm statistically going to
be getting paid more than women for doing equal work.)
Hopefully you can see how these are the type of privileges men, or another
group of people in a similar situation, may never notice unless it is pointed
out to them. Or they "check their privilege".
In relation to Ban Bossy, an important example seems to be that I've been
conditioned my entire life to aspire to be a leader: team captain, salesman of
the year, best on the basketball court, you get the idea. And not once was I
ever, or will I ever, be discouraged from asserting my leadership skills as
essentially "not knowing my place" because I'm a man, which can be the outcome
when we tell girls and young women to not be bossy or other similar things.
Maybe a good exercise for you, and for all of us, is to listen to what people
are saying when they describe the privileges we have, or to ask them to
explain better because we would like to understand. Depending on our
situation, maybe we'll gain a better understanding of our heterosexual
privilege and being able to love who we want without having to worry that
their gender will result in violence against us or them. Or maybe we'll learn
about our religious privilege, and that we are able to practice a religion
without inciting fear, being called names, profiled, assaulted, or killed
because of the head covering we wear or for being "different".
~~~
gaius
_Things like white privilege, male privilege, heterosexual privilege, you get
the idea._
What you are talking about, if it even exists, is a _rounding error_ compared
to the massive good fortune relative to the entire rest of the human race that
has ever lived, of being born in the West in the late 20th/21st century.
Perhaps _you_ can see that dropping a few hundred grand on a _hobby_ makes the
speaker incredibly more privileged even within this already privileged group,
and gender is absolutely nothing to do with (as the majority of homeless, etc,
happen to be men, where's the white male heterosexual privilege there? Oops
your whole model of the world just imploded, sorry 'bout that).
~~~
someguyonhn
I can't tell if you're trolling or not. I tried to respond to you in what I
believe was a mature and respectful way.
You've responded with a breathtaking amount of immaturity. And completely
ignored any of the points I made. Perhaps one day you'll be more open to
hearing and responding to what I wrote to you. Maybe that day won't come.
Either way I wish you well.
~~~
gaius
_You 've responded with a breathtaking amount of immaturity_
To a post displaying a breathtaking amount of naivety. I didn't ignore your
points, they are, paraphrasing Feynman, "not even wrong". And I am not sure
what "troll" even means these days, it seems to be a catch-all term for
"someone on the Internet who isn't a part of my echo chamber".
Likewise, I wish you well, and I hope that one day _you_ will come around to
what I wrote.
------
wyager
I'm immediately extraordinarily skeptical about anything that suggests solving
a cultural problem by changing language.
That's like trying to solve a math problem by changing the value of pi.
~~~
corin_
There is at least a theoretical logic to banning words like this: if being
called bossy is causing girls to lose leadership skills then maybe stopping
this artificially (even if people still think it without saying it) could lead
to less girls being affected, and therefore in the next generation the stigma
has disappeared. Obviously it's not that simple, and I have no idea to what
extent, if any, this actually works, other than in theory.
~~~
Crito
> _" if being called bossy is causing girls to lose leadership skills..."_
I don't think that this particular word is the root cause. More important is
the reason why people are using it. If you ban that particular word without
addressing why people are using it, then those people will adopt a new word to
mean the exact same thing. Creating euphemism treadmills doesn't fix anything.
~~~
jkestner
Yep. Some people are taking this too literally. (Nerds parsing? No!) The
heightened awareness of how word choice subtly undermines behavior we
presumably want to encourage, is the point. This article suggests that instead
of banning, women embrace the word as a badge they're doing something right (a
la 'nerd'), and undermining the undermining would work too.
------
mildtrepidation
From BanBossy.org:
_When a little boy asserts himself, he 's called a “leader.” Yet when a
little girl does the same, she risks being branded “bossy.” Words like bossy
send a message: don't raise your hand or speak up. By middle school, girls are
less interested in leading than boys—a trend that continues into adulthood.
Together we can encourage girls to lead._
So yes, as others have said here, the goal is not necessarily (or only) to get
rid of the usage of the word. But as is very evident from other responses,
that is not immediately obvious to everyone, in no small part because of the
arguably poor catch phrase being used.
I'm also not thrilled with some of the 'motivational' phrases being thrown
around. "I'm not bossy; I'm _the boss_ " (Beyonce) is not constructive. It's
puerile and is more likely to encourage actual bossy behavior (the negative
kind, as defined well elsewhere in this thread) than to help introduce
equality in the way we encourage leadership attributes in all children.
Not, of course, that equality seems to be emphasized here. Which is a typical
problem and one that's unlikely to help this campaign make a real difference,
as it's immediately exclusive to some degree rather than encouraging
_everyone_ to be confident.
------
iterationx
While feminists were busy telling the world about the dire need to ban the
word “bossy,” the Iraqi parliament was considering the implementation of a new
law that would legalize rape, prohibit women leaving home without the
permission of their husband, and legalize marriage for 9-year-olds.
“If passed, the law will apply to Iraq’s Shia Muslims, the majority of the
population. Provisions include prohibiting Muslim men from marrying non-Muslim
women, legalising rape inside marriage by declaring that a husband has a right
to sex regardless of consent, and prohibiting women from leaving the house
without their husband’s permission,” reports Breitbart.com. The law, which has
been denounced by Human Rights Watch as a violation of the Convention on the
Elimination of All Forms of Discrimination against Women (CEDAW), would also
lower the age of marriage to nine years old for girls and 15 for boys. Despite
the fact that the law represents an egregious assault on women’s rights and
wouldn’t look out of place in the stone age, you probably didn’t hear about it
because self-proclaimed feminists were too busy concentrating on more pressing
atrocities being inflicted upon women – such as people using the word “bossy”.
[http://www.infowars.com/new-iraqi-law-legalizes-rape-
feminis...](http://www.infowars.com/new-iraqi-law-legalizes-rape-feminists-
too-busy-banning-words-to-care/)
~~~
chilldream
I agree with the article, but "There are Starving Kids in Africa" is a stock
bad argument
~~~
chongli
Yep, it's a fallacy too:
[http://en.wikipedia.org/wiki/Fallacy_of_relative_privation](http://en.wikipedia.org/wiki/Fallacy_of_relative_privation)
------
adamnemecek
I saw the video a couple of days ago and it was flabbergasting that someone
thought that this whole thing is going to achieve anything.
~~~
mschuster91
It's just feminists. Western version of the Taliban, if you ask me.
~~~
Ambrosia
yes obviously feminists were the real ones behind 9/11
------
logicallee
Bossy is extremely specific and a terrible style of leadership. I've known
bossy women as well as women who were great leaders. The overlap between
the two is the empty set.
How about you teach real leadership skills to girls who like to lead? Such as
understanding, empathy, reward, etc. Of course the same goes for men, and
bossy men are just as big a problem.
------
cushychicken
You can see how this campaign of proclaiming "bossy" to no longer be gender
neutral has caused me (a heterosexual white male) some serious gender identity
issues, as I was frequently called "bossy" as a child.
Does this mean I'm actually a woman?
------
droopybuns
"The number one reason why girls are not turning into leaders is because
they are occupied with posting selfies on your fucking Facebook, Sandberg!"
-Adam Curry
------
theorique
"Bossy" doesn't refer to a person (male or female) who embodies _good_
qualities of leadership.
Instead, it is used to describe someone who takes charge in a rude and
disrespectful way. Examples include: giving others orders, shouting, emotional
manipulation, tantrums, and so forth. Anybody who behaves this way may be
"leading", in some sense, but they are not being a very good leader.
Conversely, a girl who leads her friends and peers in a kind and empowering
way is _not_ being bossy.
It would make just as much sense have a campaign to "ban douchebag" or "ban
asshole", as these terms are disproportionately applied to men. And those
terms don't apply to _being a leader_ , they apply to _being a rude,
disrespectful leader_.
------
jamesaguilar
My brothers used it on me all the time, but that might not be the typical
experience.
------
SnydenBitchy
Wow, the “discussion” here validates every negative stereotype about the tech
community, you troglodytes who I’m embarrassed to call my peers. I wonder if
it’s it too late, at 31, for me to change careers?
~~~
masterleep
Are there no online communities that you can't complain about?
------
wcummings
I'm impressed by how much people are missing the point. It's just about
raising awareness of how young girls are treated, no one is actually banning
any words.
~~~
dkrich
Then maybe they shouldn't have led with the name "Ban Bossy?"
If you create a marketing campaign and it is misinterpreted by what is
presumably largely your target group (men who don't realize their words are
apparently harming girls during their formative years), the fault is yours, not
your audience's.
------
nsxwolf
Is there any empirical evidence this word harms girls?
~~~
sp332
It's not about the word "bossy". "Ban Bossy" is just the name of the campaign.
------
stefantalpalaru
I bellyfeel banning words is doubleplusgood.
------
tobehonest
I would rather "slut" gone, than bossy.
| {
"pile_set_name": "HackerNews"
} |
AmigaDOS Command Reference - doener
http://wiki.amigaos.net/wiki/AmigaOS_Manual:_AmigaDOS_Command_Reference
======
anexprogrammer
Apart from wondering where this sprang from, if anyone wonders why AmigaDOS
was such an ugly fit with the rest of exec and written in BCPL not C:
They'd contracted a SV company to produce CAOS, to a spec of Carl Sassenrath -
the creator of exec (OO multi-tasking kernel). As deadline got closer and
closer it was clear it wasn't happening, or even close. It was meant to get
resource tracking and some other features to integrate with exec.
Edit: Found the spec and story of CAOS:
[http://www.thule.no/haynie/caos.html](http://www.thule.no/haynie/caos.html)
It got AmigaDOS - a port of Tripos from UK company Metacomco. It got that
because no one else they asked believed they could deliver anything in the
remaining time. That's why there was all the BCPL weirdness with DOS.
~~~
a_thro_away
Thanks for that; that was so difficult for then young, untrained me to get my
head around, as well as reading the Amiga Reference Manuals - it was all so
foreign.
~~~
anexprogrammer
Tripos started on PDP11, but had already been ported to 68k - just needed the
exec glue code. Don't think the story ever properly came out but you get the
impression it was the week or weekend before. :)
The weird data thing was because of the BCPL language - it only understood
words, not bytes.
------
magoon
I hadn't realized how ahead of its time the Amiga was for a personal computer:
TCP/IP, MIDI, SCSI, REXX, (screamtracker) MODs.
~~~
jandrese
I suspect this is a reference for a much later version of the OS. You probably
wouldn't see all of these commands on a machine from the Amiga's heyday.
~~~
a_thro_away
The A2000 was in its heyday, I think; you would see most of those commands
(or their equiv) with an Amiga A2000 with the Amiga LANCE ethernet board A2065
and AS225 TCP/IP option, or maybe even the Amiga UNIX; there was even a DECNet
stack which worked quite well... AREXX was always there, right? There were
many SCSI commands as well to support the A2000 Zorro SCSI board.
"bigroadshow"? It was apparently part of the TCP/IP stack. I guess it just
depended on which system, boards, and options you bought at the time.
~~~
icedchai
ARexx wasn't included with the OS until 2.0. You could buy it as a third-party
add on for 1.3 and earlier.
~~~
ekianjo
2.0 was already in the amiga 500 plus and that was still very early in the
life of the Amiga.
------
cha-cho
It's been quite a while (the ole 2500 is in storage) but it seems like a
person could change directories without the "CD" command. Just type the path
in the CLI, hit return, and you moved to that directory. edit: Likely thinking
of the "implied CD" mentioned in the link.
~~~
bwldrbst
That's true (I've actually got an amiga shell window open in an emulator on
another workspace right now...) - the CD command is only needed in cases of
ambiguity.
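To illustrate the implied CD: if what you type matches no command, the shell treats it as a directory change. The volume and directory names below are made up; the behavior is as described, and note that AmigaDOS uses / for the parent directory where Unix uses ..

```text
1> SYS:Utilities          ; no command by this name, so the shell does a CD
1> /                      ; back up one level
1> Work:Projects          ; jump straight to another volume
```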
I hope you removed the clock battery from your A2500 before storing it. By now
it would have started leaking corrosive gunk, a very common cause of death for
these old machines.
~~~
cha-cho
Nuts. I don't think I removed the battery and it's been in storage for almost
ten years. Then again if it can survive me naively washing the motherboard
with a garden hose and letting it dry in the sun, I think the Amiga gods will
keep the battery intact for me. I hope so anyway.
~~~
bwldrbst
If you have the opportunity, it's worth checking it and cleaning it up.
I didn't know about this problem at the end of the 90s when I stopped using my
A4000 and the battery destroyed the motherboard.
------
csixty4
I'm not sure why this is on the front page of HN. But I'm starting to mess
around with Aros so this might come in handy I guess. Thanks!
~~~
wprapido
many of us HNers were / are amigans. some still own them or at least use an
amiga emulator. the amiga community is still alive and kicking. not to mention
the impact amiga had on computing
~~~
jupiter2
As a non-amigan but a huge fan of alternate OSes, I am constantly impressed by
the enthusiasm and energy you guys still have for this system. It's contagious
in a non-annoying way. I upvote interesting Amigan stuff whenever I come
across it.
As an old-school DOS user, AmigaDOS, which I wasn't familiar with, looks
fascinating. I'll have to see if it's available next time I boot _Icaros
Desktop_.
~~~
wprapido
welcome to the club! AROS does have decent AmigaDOS support
------
Jaruzel
I've been hosting a similar site for years now, but in a more Amiga-Friendly
browser format (basic HTML):
[http://www.jaruzel.com/projects/AmigaDOS-Guide-
Help/index.ht...](http://www.jaruzel.com/projects/AmigaDOS-Guide-
Help/index.html)
There's also a zipfile for download.
------
mortenlarsen
Something is messed up on that page:
COUNTLINES
Binds device drivers to hardware.
CPU
Counts how many lines a file is made of.
------
snvzz
Used to be a nice documentation site, until they decided to cover AmigaOS4,
ditching 3.
------
watmough
I still have a couple boxes of Amiga floppies. Including the source to NewTek
DigiView capture software in 68000 assembler, hilariously enough.
Anyone know if there's a way to read them?
~~~
textfiles
Here and ready to help. [email protected]
~~~
watmough
I'll dig 'em out and see what I have.
| {
"pile_set_name": "HackerNews"
} |