Why didn't JavaScript adopt the OO model adopted by C++/Java when it was designed? - bootload
https://www.quora.com/Why-didnt-JavaScript-adopt-the-object-oriented-model-adopted-by-C++-Java-when-it-was-designed/answer/Brendan-Eich?share=1
======
bootload
_" If JS didn’t make it into Netscape 2, we’d be speaking VBScript."_
source:
[https://twitter.com/BrendanEich/status/793685578491437056](https://twitter.com/BrendanEich/status/793685578491437056)
Facetime for the Mac - eddieplan9
http://www.apple.com/mac/facetime/
======
kgroll
I'm definitely not suggesting that Facetime is, or will be, a failure.
Thinking about it, however, reminded me of a passage from Infinite Jest about
the failure of video chat.
_(1) It turned out that there was something terribly stressful about visual
telephone interfaces that hadn’t been stressful at all about voice-only
interfaces. Videophone consumers seemed suddenly to realize that they’d been
subject to an insidious but wholly marvelous delusion about conventional
voice-only telephony._
...
EDIT: Instead of that wall of text, here's a link to the rest of that passage.
Sorry about that.
[http://stevereads.com/weblog/2010/06/07/iphone-4-facetimeinf...](http://stevereads.com/weblog/2010/06/07/iphone-4-facetimeinfinite-
jest-mashup/)
~~~
commieneko
Facetime, and video-phony in general, fits into a continuum of communication
strategies. When you want to _see_ someone, you video them. When you only want
to hear them, you audio them. When you don't want to hear them you text them.
Then there's the whole time-shifting thingie. What I would like now is the
video equivalent of an email, voice mail, or text message. I _could_ record a
video, and email it, but...
~~~
jbrennan
I agree, FaceMail or something would be incredibly nice, especially as I'm not
always available to answer a video call, but would still like to see what's up
eventually.
I guess it would be used like YouTube, only privately. That is, often you
see an event and video record it for sharing with others. FaceTime would let
you share this live, and leaving a message seems a natural progression.
------
fredleblanc
We just downloaded it and tried it (bringing my total Facetime experiences to
two, both of which occurred about 10 feet from the person on the other end).
The interface is more iOS-y than normal for OS X. The picture quality was
pretty good (the Mac being wired, iPhone 4 being wireless of course).
Simple to use, pretty good stuff.
------
lukifer
Is there a reason they couldn't have just folded this feature into iChat?
~~~
e1ven
I wish they had.
It loses several major features of iChat: screen sharing, replaceable
backgrounds, multi-person chat...
~~~
johns
Which are all features that would ruin FaceTime.
~~~
e1ven
How so? Honestly? Why would it ruin Facetime, but not ruin iChat?
~~~
johns
I bet there are already more non-geek users of FaceTime than iChat because
it's so approachable. Adding features like those caters to the wrong audience.
The beauty of FaceTime is its simplicity.
(This sounds like an Apple fanboy thing to say, but I'm really not.)
~~~
slantyyz
I would also say using iChat dilutes the FaceTime brand.
~~~
derefr
In the event video, the FaceTime icon had replaced the iChat icon's
traditional position on the dock.
------
neovive
Facetime could become a strong competitor to Skype once a Windows client is
available, as the UI seems very polished and well "integrated". Now if only my
parents had a Mac so I could test it out. They always seem to have issues
getting video chat working on Skype/Windows.
~~~
moe
Not really.
Skype is strong in businesses. In the office-setting people first and foremost
use the text-chat, then the _audio_ calls, then the conference calls. Video
calls come dead last. I wouldn't be surprised if even desktop sharing is used
more than the video feature.
~~~
alphabeat
The desktop sharing of Skype has a long way to go before it can be used
seriously. I can only assume from your comment that you haven't used it. They
may have their audio codec down, and the video codec works for live video, but
not for content. It's the same deal as JPEG for text, for instance.
------
bobx11
called the wife from the mac to her iphone... she didn't know the difference.
later she called me back on facetime to the pc and it just popped up - overall
not bad!
------
philwelch
Accounts are tied to the email address on your Apple ID. Interesting way of
getting around having to create another IM account.
~~~
e1ven
It's also portable for when there is a version of FaceTime for Android, etc.
They had originally claimed it to be an open standard.
~~~
glhaynes
Yeah, has there been any progress on publishing specs?
~~~
mikedanko
According to the presentation, it's supposed to be made a standard. I'm
assuming this would hit the IETF's Audio/Video transport working group, so
that'd be where to keep a lookout.
------
dmpatierno
My favorite feature of FaceTime for the Mac: it stays full screen even when
you tab away to do work on another monitor.
FaceTime is now my preferred video conferencing software.
------
pluies
Why set the minimum OS to Snow Leopard? That sounds like a bit of a
far-fetched requirement for some videoconferencing software.
~~~
g_lined
My guess is that it uses some newer APIs which were introduced in 10.6. This
may be because they wanted to use Grand Central Dispatch (better multi-core
support), a later addition to Core Graphics, or simply an API which gave their
GUI a more iOS feel compared to the GUI elements in 10.5.
------
eli
It would be exciting if I'm proven wrong, but I'm not buying the hype about
video chat.
Even if/when I'm able to video chat anyone from my iPhone without being on
wifi, I still don't imagine it being terribly useful. And the few times I've
tried using video chat in a business setting have not been very fruitful.
~~~
gurraman
Video chat is just one of those nice-to-haves in my opinion.
I work remotely a lot and always choose audio chat over video chat. Video
chatting, in that context, just doesn't add anything for me.
Video chat was great when my girlfriend was living abroad for a couple of
months though!
------
rflrob
"the call rings through on every Mac you own, even if face time isn't running"
Does this sound just a little too intrusive to anyone else?
~~~
ynniv
Have you heard of a telephone? Do you know how they work?
You can turn it off in the preferences.
~~~
Samuel_Michon
You just made me laugh out loud, which I rarely do while sitting at a
computer. I honestly thought rflrob's comment made a lot of sense, until I
read yours.
I have to say though: I absolutely loathe doorbells and ringing phones, they
stress me out. Ideally, my doorbell would have a vibrate mode, making the
floor gently purr.
------
nico
I just wish there was a Facetime API.
------
todd3834
I like the icon. I am really glad to see they didn't go the same direction as
the new iTunes icon.
------
thought_alarm
Judging from this FaceTime app and the new iLife apps, it looks like there
will be all sorts of new iOS-like UI goodies for Cocoa 10.7 developers to use.
------
hasenj
Is this different from yahoo messenger's video calls?
~~~
zacharycohn
iPhone <--> Desktop
~~~
pt
Both have platform limitations at this time:
Facetime: iPhone <--> Mac
Yahoo Messenger: iPhone <--> PC
~~~
contol-m
Nope. Yahoo Messenger video chat works on a Mac as well.
------
eddieplan9
Unfortunately, it mistakenly points to iWork 09 trial download for now.
~~~
philfreo
First 2 times it didn't work, then I got it:
[http://appldnld.apple.com/FaceTime/061-9589.20101020.Mbgt5/F...](http://appldnld.apple.com/FaceTime/061-9589.20101020.Mbgt5/FaceTime.dmg)
------
CharlesPal
Use the bottom link. The top link is pointing to iWork 09
------
ceejayoz
Can't get it to connect. @SteveStreza reports his doesn't work on a wired
connection at all.
Show HN: Run your own OAuth2/OpenID Connect provider - aeneasr
https://github.com/ory-am/hydra#
======
simplify
If you're interested in this sort of thing, Doorkeeper[1] is a robust, open
source OAuth 2 provider that's been around for about 5 years. We use it as a
standalone app, and have many other node.js apps that sign in using it.
[1] [https://github.com/doorkeeper-
gem/doorkeeper](https://github.com/doorkeeper-gem/doorkeeper)
~~~
arekkas
Thanks! However, Doorkeeper is an SDK, right? With Hydra, you simply boot the
docker image and are done.
If you're interested in OAuth2 frameworks, check out
[fosite](https://github.com/ory-am/fosite), which is like Doorkeeper for Go.
~~~
simplify
Doorkeeper is closer to a full-package with customizable features, including a
basic frontend. I'm not too familiar with hydra, but it seems Doorkeeper is
best when you want to get the full OAuth app & user interface running (and
customize later), whereas Hydra is best when you want to get a quick OAuth API
app and build your own frontend. Would you say this is accurate?
~~~
arekkas
Yeah I think that is valid. Hydra can also be put on top of existing
infrastructures. Not sure how well that is possible with Doorkeeper.
~~~
simplify
Doesn't the nature of an OAuth server imply that it can be added to existing
infrastructures? Or is there an issue you foresee with non-Hydra libraries?
~~~
arekkas
No, Hydra works with every existing solution :) You can read more on this
topic in the guide: [https://ory-
am.gitbooks.io/hydra/content/oauth2.html](https://ory-
am.gitbooks.io/hydra/content/oauth2.html)
~~~
simplify
You didn't answer my question. I think you may have misread it.
------
ethernetdan
Also similar: [https://github.com/coreos/dex](https://github.com/coreos/dex)
------
defiancedigital
Ask HN: Is "hydra" the most used open source project name?
~~~
johns
Unicorn
~~~
defiancedigital
hydra vs unicorn dixit github :
Hydra = 1,934 results
([https://github.com/search?utf8=&q=hydra](https://github.com/search?utf8=&q=hydra))
Unicorn = 1,878 results
([https://github.com/search?utf8=&q=unicorn](https://github.com/search?utf8=&q=unicorn))
Winner: Hydra!!!
~~~
neilellis
Hail Hydra!
------
Pyxl101
Nice! Lowering barriers to the use of technologies like these is important.
Would anyone else be interested in hosting Mozilla Persona?
[https://developer.mozilla.org/en-
US/Persona](https://developer.mozilla.org/en-US/Persona)
~~~
scrollaway
Check out Let's Auth:
[https://github.com/letsauth/letsauth.github.io](https://github.com/letsauth/letsauth.github.io)
It's a successor to Mozilla Persona in development.
Details in the readme and on freenode #letsauth (mirrored to
gitter.im/letsauth/letsauth).
~~~
arekkas
why is it written in python? why not something that compiles and runs well on
all platforms?
~~~
scrollaway
From the readme:
> Let's Auth 1.0 will ship as a single, statically compiled binary. Pre-1.0,
> we will use a variety of dynamic languages for prototyping.
~~~
arekkas
nice :)
------
olalonde
How do you integrate this with your existing API? Do you need to proxy
requests through Hydra or do you just need to read and trust Hydra-signed
tokens on every request? Is there any overlap with
[https://getkong.org/](https://getkong.org/)?
~~~
arekkas
Currently Hydra issues opaque tokens but has the capability to switch to JWT
in the future. There is a warden HTTP API endpoint that you can use to inspect
tokens and use Hydra's access control. I will probably add a more common token
info endpoint or an OAuth2 Token Introspection endpoint (
[https://tools.ietf.org/html/rfc7662](https://tools.ietf.org/html/rfc7662) )
later on.
I haven't used kong yet but from my first impression it should be possible to
use hydra together with kong.
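For illustration, here is a minimal sketch in Go of what a resource server's
call to an RFC 7662-style token introspection endpoint could look like. The
endpoint path, client credentials and token below are placeholders, not
Hydra's actual routes or API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// introspect asks an RFC 7662 introspection endpoint whether an opaque
// access token is still active. Endpoint and credentials are placeholders.
func introspect(endpoint, clientID, clientSecret, token string) (bool, error) {
	form := url.Values{"token": {token}}
	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return false, err
	}
	req.SetBasicAuth(clientID, clientSecret)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	// RFC 7662 responses carry an "active" flag plus optional claims.
	var body struct {
		Active bool   `json:"active"`
		Sub    string `json:"sub"`
		Scope  string `json:"scope"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}
	return body.Active, nil
}

func main() {
	// Hypothetical values purely for demonstration.
	ok, err := introspect("https://auth.example.com/oauth2/introspect",
		"my-client", "my-secret", "some-opaque-token")
	fmt.Println(ok, err)
}
```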
~~~
olalonde
Ok, thanks. So let's say I wanted to use Hydra for authenticating requests
made to my REST API: I'd have to make an API call to Hydra on each request,
right? Would be interesting to have some integration examples with popular web
frameworks (e.g. Express.js, Rails, Django, etc.).
Thanks for releasing this by the way, looks really well engineered. I'm sure
you've considered it already, but you could probably sell a hosted version (a
la [https://auth0.com](https://auth0.com)) to make money and finance
development.
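As a rough sketch of the per-request pattern being described here, this is
hypothetical Go middleware that pulls out a bearer token and rejects the
request unless some token check succeeds (for example an introspection call
like the one sketched above, or local validation of a signed token). The
handler names and placeholder check are made up for illustration:

```go
package main

import (
	"net/http"
	"strings"
)

// tokenIsValid stands in for a real check, e.g. a call to an introspection
// endpoint or local verification of a signed token.
func tokenIsValid(token string) bool {
	return token != "" // placeholder logic only
}

// requireToken wraps a handler and rejects requests without a valid
// "Authorization: Bearer <token>" header.
func requireToken(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		auth := r.Header.Get("Authorization")
		token := strings.TrimPrefix(auth, "Bearer ")
		if token == auth || !tokenIsValid(token) {
			http.Error(w, "invalid or missing access token", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello, authorized caller"))
	})
	http.Handle("/api/", requireToken(api))
	http.ListenAndServe(":8080", nil)
}
```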
~~~
arekkas
Depends. If you use JWT, you can cryptographically verify that the token and
the token claims are valid. Right now, Hydra does not issue JWTs but it would
be easy as pie to add that functionality.
Writing an integration guide for this is a very good idea. Hydra's APIs
validate all requests using that technique, but it's not documented.
Auth0.com is pretty cool, they have done some cool projects that help OAuth
developers. However, they are overpriced imho. Hosting hydra is definitely
something I will consider. Thanks! :)
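To illustrate the JWT point above: with signed tokens, a resource server can
check validity locally instead of calling the auth server on every request.
Since Hydra doesn't issue JWTs yet per the comment above, this is a generic
sketch using the dgrijalva/jwt-go library with an HMAC secret; the secret,
issuer and claims are illustrative only:

```go
package main

import (
	"fmt"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

// verify checks a JWT's signature and issuer locally, with no round trip to
// the authorization server. Secret and issuer are illustrative placeholders.
func verify(tokenString string, secret []byte) error {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Refuse tokens signed with an unexpected algorithm.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return secret, nil
	})
	if err != nil {
		return err
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok || !token.Valid {
		return fmt.Errorf("invalid token")
	}
	// Example of checking an individual claim beyond the built-in ones.
	if iss, _ := claims["iss"].(string); iss != "https://issuer.example.com" {
		return fmt.Errorf("unexpected issuer %q", iss)
	}
	return nil
}

func main() {
	secret := []byte("demo-secret")
	// Mint a throwaway token so the example is self-contained.
	tok := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"iss": "https://issuer.example.com",
		"exp": time.Now().Add(time.Hour).Unix(),
	})
	signed, _ := tok.SignedString(secret)
	fmt.Println("token valid:", verify(signed, secret) == nil)
}
```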
------
akbar501
For anyone interested, the Go client library is: [https://github.com/ory-
am/fosite](https://github.com/ory-am/fosite)
------
welder
OAuth is super simple: you only need two endpoints for an OAuth provider. It
only took a few hours to write the WakaTime OAuth provider implementation[1].
No offense and serious question: why would you need a library for this? Isn't
it more trouble to integrate an external OAuth provider with an existing API
than to just write two API endpoints yourself?
[1] [https://wakatime.com/api](https://wakatime.com/api)
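For a sense of what those two endpoints look like at their barest, here is a
deliberately stripped-down Go sketch of an authorization endpoint and a token
endpoint. The paths and response fields are illustrative (not WakaTime's
implementation), and everything the specs actually require - client
authentication, redirect URI validation, scopes, expiry, state checks, PKCE -
is omitted:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"net/http"
)

// In-memory store standing in for a real database. Client registration,
// redirect URI validation, scopes, expiry, PKCE and state handling are all
// deliberately omitted.
var codes = map[string]bool{}

func randomString() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// /oauth/authorize: a real provider authenticates the user and asks for
// consent here; this sketch just issues a code and redirects back.
func authorize(w http.ResponseWriter, r *http.Request) {
	code := randomString()
	codes[code] = true
	redirect := r.URL.Query().Get("redirect_uri")
	http.Redirect(w, r, redirect+"?code="+code+"&state="+r.URL.Query().Get("state"), http.StatusFound)
}

// /oauth/token: exchanges a previously issued code for an access token.
func token(w http.ResponseWriter, r *http.Request) {
	code := r.FormValue("code")
	if !codes[code] {
		http.Error(w, "invalid_grant", http.StatusBadRequest)
		return
	}
	delete(codes, code)
	json.NewEncoder(w).Encode(map[string]string{
		"access_token": randomString(),
		"token_type":   "bearer",
	})
}

func main() {
	http.HandleFunc("/oauth/authorize", authorize)
	http.HandleFunc("/oauth/token", token)
	http.ListenAndServe(":8080", nil)
}
```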
~~~
arekkas
The libraries (SDKs) I used for my first project had security flaws. OAuth2
is super simple to implement, but hard to get right. It's not just two
endpoints; it's multiple specs with ~200 written pages. Some people, for
example, don't even know that
[rfc6819](https://tools.ietf.org/html/rfc6819)
exists. Most SDKs are also very limited or hard to extend (e.g. adding
OpenID Connect).
I believe that adding a docker container to your deployment and creating a
consent token (JWT) is even less work than integrating with an SDK and
implementing the missing parts every time you hit a new edge case. On top
of that, you can be sure that it is backed by an open source community.
------
sakopov
I know it's in the title, but I don't see any OpenID capabilities here. Looks
like an OAuth2 spec implementation. Am I missing something?
~~~
arekkas
OpenID has been deprecated in favor of OpenID Connect:
* [http://openid.net/specs/openid-connect-core-1_0.html](http://openid.net/specs/openid-connect-core-1_0.html)
* [http://openid.net/connect/faq/](http://openid.net/connect/faq/)
------
StavrosK
This looks very nice, but isn't it overkill to use RethinkDB when SQLite would
do (and probably be about as fast)?
------
smw
It'd be really neat to see an amazon lambda serverless version of this.
~~~
arekkas
Integrating that in lambda should not be hard. If you want, create an issue on
GitHub and I will try my best.
------
ClayM
Would this or coreos/dex replace something like Auth0?
~~~
jon-wood
Auth0's big feature that isn't provided by open source platforms at the moment
is being able to request an OAuth token for third party services the user has
authenticated with, so for example you can trade in an auth token that was
issued when you logged in the user for a Facebook token.
~~~
arekkas
Not true. Dex and Hydra both support it, although you need to implement a
little bit more stuff when using Hydra. Read it in the docs: [https://ory-
am.gitbooks.io/hydra/content/connection.html](https://ory-
am.gitbooks.io/hydra/content/connection.html)
~~~
jon-wood
I stand corrected. In that case Auth0 is even more overpriced than I
originally thought.
What if Finland’s great teachers taught in U.S. schools? - wallflower
http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/05/15/what-if-finlands-great-teachers-taught-in-u-s-schools-not-what-you-think/
======
edthrowaway
As someone with a spouse who has taught for over 10 years in an impoverished,
inner-city school, and who has won a very impressive array of grants, awards,
certifications and national-level recognition, I could not agree more strongly
with this editorial.
Americans are under the misapprehension that all of their school woes stem
from poor teachers. But even in poor districts like the one my spouse teaches
in, which have accumulated a whole layer of apathetic teachers, the impact of
both poor and excellent teachers is way overestimated, and the impact of poor
and excellent administrations (both the school principal and the district
leadership) is tremendously underestimated. A good principal, particularly one
with strong school board backing, can almost single-handedly turn an entire
school around (I've seen this happen twice now). An excellent teacher with a
poor principal and negligent school board can do very little other than
provide a strong role model for the most promising students, and (with great
effort) pull a handful more of failing students up to the barely passing level
than their less talented peers.
Primarily, good principals act just like good engineer managers in
corporations. Just as good managers keep engineers isolated from the bullshit
of upper management and adjacent managers, and give good engineers space to do
their jobs, so do good principals let their good teachers do their job, and
intervene when poor teachers fail to meet expectations.
Regarding the other factors mentioned, I do think poverty is a major hurdle,
and the author of this piece rightly underlines its importance in poor school
performance, but poverty is primarily an issue in that high levels of poverty
correlate strongly with lack of parental support and engagement (and not
always due to a lack of care; often it's because these are single-parent
households where that one parent is working all the time). But here an
excellent principal can also make a major impact by rallying formerly
disengaged parents around their kids and their kids' teachers, and supporting
single-parent households where the parent is working multiple jobs.
It makes me sad to see all this rhetoric around teachers in the U.S., not only
because it's depressing for my spouse to be so unappreciated by people outside
the teaching profession, but also because I know it will do little to fix the
main problem: poor school administrations. Nor will it address any of the
other major contributing factors, like poverty and the lack of respect for high
school academic excellence so prevalent in our culture, rich and poor alike.
~~~
jwmerrill
Really insightful comment.
> the impact of both poor and excellent teachers is way overestimated, and the
> impact of poor and excellent administrations (both the school principal and
> the district leadership) is tremendously underestimated.
Why do you think this happens?
I'll speculate: most people form their opinion about what's important in a
school by reflecting on their experience as a student.
Most students attend only one school of each kind (e.g. elementary, junior,
and high school), so they don't have a great frame of reference about the
results of different school administrations.
But everyone experiences many different teachers, and as a student, you really
perceive the differences between the better ones and the worse ones.
I bet if you reflect on your own experience as a student, leaving aside the
context of your spouse's experience as a teacher, you will have much more
vivid memories and opinions about teachers than administrators.
I totally agree that it's likely to be much more effective to try to create
environments that help all teachers do their jobs more effectively than it is
to try to change who we're hiring. I wonder what the best way is to make
people "feel" the difference between good administrators and bad ones like
they "feel" the difference between their favorite teachers and their least
favorite ones.
------
MarcScott
I've worked as a teacher in both the UK and in Papua New Guinea, and from my
perspective, the largest performance indicator of a child's success is the
value their parents place on education.
Maybe this is one of the reasons the Finnish education system works so well.
If teaching is an occupation that is culturally considered in high esteem,
then it probably follows that schools are considered an important aspect of a
child's life. Children are therefore encouraged to do well in schools.
In PNG, students had to pay to go to school. Often a single child was
supported through their education by their extended family. Some villages
could only afford to send a few students to school. Those students worked
exceptionally hard, knowing that it was incumbent upon them to achieve, and
eventually pay back their family from the proceeds of their future careers.
When working in rural schools in the UK I have encountered many students whose
parents, and therefore their children, place little value on education. Often
the attitude comes down to the single phrase "I've managed and I did badly at
school". Regardless of whether the parent's are rich or poor, the children of
these parents often struggle, and achieve below expected results in national
examinations.
If we want to raise standards in our schools (both in the UK and in the USA) I
think the key is in changing cultural attitudes towards education. This means
that we need to stop heaping blame on teachers, administrators, schools and
local authorities for perceived inadequacies. We need to make sure that our
children value the free education they are receiving.
edit - for clarity of country name
~~~
aikah
> the largest performance indicator of a child's success is the value their
> parents place on education.
+100
> This means that we need to stop heaping blame on teachers, administrators,
> schools and local authorities for perceived inadequacies.
Well sometimes authorities are to blame. Look, in my country, being a teacher
used to be like being a lawyer or a doctor.
It used to be a prestigious profession.
Then some politicians and influential thinkers came in and said, "we need to
focus on children, they have special needs, they are always right, and if they
can't learn properly it's the adults' fault". 30 years forward and the
education here is totally broken; teachers are despised both by students and
parents who want instant gratification no matter how dumb their offspring is.
But hey, they can't be wrong, they've been told all their life they are
"special" and always right ...
> Regardless of whether the parents are rich or poor, the children of these
> parents often struggle, and achieve below expected results in national
> examinations.
The big difference is rich people can literally buy a career for their
offspring even if they perform poorly at school. At worst, they'll have a job
at mom and pop's business.
~~~
javert
> The big difference is rich people can literally buy a career for their
> offspring
Details? Where do I go to buy a career? I'm not being sarcastic, I actually
want to know.
~~~
jackvalentine
Mom or Dad to business partner: if you want to make this deal with me, give my
kid a management position somewhere on the project.
Or "if you give my kid an internship then I'll give you the keys to my ski
lodge for this season"
~~~
javert
That is not buying a career. That's making a deal. The guy said:
> rich people can literally buy a career
I'm not being sarcastic here. It used to be possible to purchase a commission
in the military. That is speaking historically.
I wouldn't be surprised if there aren't careers today you can literally
purchase. Franchising comes kind of close, and running a taxi in NYC comes
close. In ancient times, you could purchase the position of tax collector.
Anyway, that person should not have said "literally" unless he meant it.
~~~
jackvalentine
Pedanticism of this nature is literally the key to living a frustrated and
lonely life. You asked for a good faith reply and I gave it to you. You then
decided to play your "trick" and point out that you're actually making a
totally unrelated grammatical dispute with the original poster.
I won't be falling for this again and replying to you further in the future.
I hope it feels good to be "right" all the time.
~~~
javert
I wasn't playing a trick. There are times when using "literally" in the
figurative sense actually makes sense, even though I don't approve. But this
is not one. So I thought maybe the person actually meant it in the non-
figurative sense.
If you want evidence that I'm an honest person, look at my comment history. I
don't go around tricking people and trying to win arguments by deception. In
fact, I frequently call people out for being nasty in various ways, much like
you are doing here.
I can understand why you think I'm trying to trick people and I was worried
that would happen. That's why I talked about historical and quasi-examples of
people buying careers. I didn't want you to think or feel that I was playing a
trick.
~~~
throwawaymsft
Please realize language has ambiguities and is not a program that is compiled.
Deliberately nitpicking the meaning of words from someone who is generously
offering to clarify a statement for you looks like a sign of bad faith. Use a
charitable interpretation and figure out the idea he/she was getting at.
Clearly, money/power/fame/beauty can "buy" things even if there is no currency
changing hands. That is the point the previous poster was making. Wealth is
influence, and influence gets you favors, like a foot into a career.
~~~
kybernetikos
I disagree. aikah made a statement, which Javert wanted more detail on.
jackvalentine claimed to explain what the other poster had said, but it didn't
actually match up. He was probably right about what was meant, but maybe he
wasn't and there's no real reason for the rest of us to assume it's an
accurate clarification of what aikah meant.
If it had been the original poster making the clarification then moaning about
'literally' would have been pedantic, but it was not, and so it was
justified - it was making the point that the interpretation given by
jackvalentine did not actually clarify the statement as made, and that Javert
had assumed something else, more interesting, was being said. At that point the
conversation depressingly quickly devolves into name calling, threats and
patronisation.
> Clearly, money/power/fame/beauty can "buy" things even if there is no
> currency changing hands. That is the point the previous poster was making.
According to you. Javert was actually using a charitable interpretation when
he assumed that the original maker of the statement meant what they had said.
As far as I can tell this entire subthread consists of people uncharitably
failing to spot that Javert was _not_ in fact trying to score points, (or
believes that language is a program to be compiled, or would benefit from a
list of topics to meditate on about the evolution of language) and was merely
asking for more detail, and getting upset that he is skeptical their trivial
'explanations' actually explain what was originally meant.
It's mainly a lot of people freaking out about their hot button topics without
actually spending any brain power on understanding what the other person is
saying and why.
------
danso
> _Most teachers understand that what students learn in school is because the
> whole school has made an effort, not just some individual teachers. In the
> education systems that are high in international rankings, teachers feel
> that they are empowered by their leaders and their fellow teachers._
As a layperson, I agree with the OP...The focus on the quality of teachers --
and firing "bad" ones and hiring just the "good" ones -- has always seemed to
me to be overemphasized, as it makes for a sexy, easily digestible political
debate. Not that good teachers (whatever your definition of "good" is) aren't
worth having, but it's doubtful that they alone can have a significant impact
on student outcomes...in the way we should be doubtful that the well-behaved,
well-equipped cops from a rich crime-free suburb would, when moved to the
Detroit PD, would have a significant impact.
I lived with a teacher and my best friend is a teacher, both are young and
about my age and who work at impoverished schools, and I've been constantly
amazed at how much of their talk is not about how bad the kids are, but how
bad the administration is...over things such as playing favorites (among
teachers) and squabbles over office space and, of course, having to buy their
own supplies and books (some of which is reimbursed at the end of the year).
You can chalk some of this dysfunction up to the educational hot topics of the
day: the power of teachers unions, teacher pay, standardized testing...but the
bottom line is that passionate, effective teachers can be nullified by a weak
system...in the same way that a great programmer may be ineffective in an
engineering environment with poor testing/documentation processes and a
terrible office environment.
~~~
TwoBit
Reminds me of how NFL coaches are so frequently fired when their teams do poorly.
It makes for good press, but the teams usually do the same under the next
coach.
~~~
bennettfeely
FiveThirtyEight examined this in the NHL.
> _Teams that fired their coaches performed exactly the same on average in the
> following season as teams that kept their coaches. Notably, teams that were
> sub-.400 performed 20 percent better on average the following season
> regardless of whether they fired their coach or not._ [...] _Playoff
> performance is no better under new coaches. Non-playoff teams go an average
> of 0.5 playoff rounds the following season, whether they fire their coach or
> not._
[http://fivethirtyeight.com/features/what-predicts-if-an-
nhl-...](http://fivethirtyeight.com/features/what-predicts-if-an-nhl-coach-
will-be-fired-and-whether-it-matters/)
~~~
cheepin
Shouldn't team performance decrease the season after a coach change since the
players have to learn a new system? This seems to say that the cost of firing
a poor coach is zero because at worst you will do the same as before in the
short term, with a great potential upside for longer time spans.
~~~
seanmcdirmid
These teams have large coaching staffs that don't get fired and change more
slowly. I would guess that a coach's impact is often more long term (in
training, recruitment of players and staff).
------
bko
> Finland is not a fan of standardization in education. However, teacher
> education in Finland is carefully standardized. All teachers must earn a
> master’s degree at one of the country’s research universities.
I don't think the author makes a convincing point as to why standardization is
bad for students but somehow beneficial to teachers.
> There is another “teacher quality” checkpoint at graduation from School of
> Education in Finland. Students are not allowed to earn degrees to teach
> unless they demonstrate that they possess knowledge, skills and morals
> necessary to be a successful teacher.
So prospective teachers would be tested on teacher quality prior to graduation
in an academic sense but not while they're actually teaching? Not sure if you
can test on paper or by demonstration whether someone is an effective teacher.
It is certainly more easily visible in the field.
It's a shame many in America don't respect teachers. Perhaps the resentment is
due to the fact that most in America don't have a choice as to the primary
school they attend. Like most Americans, I've had terrible teachers in the
past and did feel some resentment. It was less infuriating in college since at
least I had some choice as to the classes and school I attended.
Coincidentally, I notice professors get a lot more respect.
~~~
pavlov
_I don't think the author makes a convincing point as to why standardization
is bad for students but somehow beneficial to teachers._
I felt the author's case was clearly presented, so I'll try to summarize...
Standardized tests for students are bad because they encourage bogus metrics
like "teacher effectiveness".
Requiring high qualifications for teachers is good because it improves public
perception of the profession and reduces churn. (Initiatives like "fast track"
teacher training increase churn, and this is bad because it can lead
administrators to believe that churn is part of the solution: all they need to
do is somehow weed out the bad teachers and replace them with good ones.)
~~~
mattmcknight
You have to understand that the publisher of this piece (the blog curator,
Valerie Strauss) is a paid supporter of the teacher unions. That is her
primary agenda, and the agenda of many opposed to teacher evaluation via job
effectiveness.
If teacher effectiveness can't be measured, teachers can't be fired for
performance. Union win.
~~~
rustynails77
If you don't standardise testing for students, you can't benchmark them. If
you look at the relative performance of different countries, the US is WAY
down the list of performers - and the relative scores of the US are
_significantly_ behind. Start with this link for reference,
[http://www.businessinsider.com.au/pisa-
rankings-2013-12](http://www.businessinsider.com.au/pisa-rankings-2013-12)
On reading and my own in-depth observations, I've come across the same themes:
- teachers must be respected as professionals
- teachers must challenge themselves, constantly looking for better approaches
to education
- principals must actively support the development and training of teachers to
help them grow
- parents must support the teachers by re-enforcing the importance of education
- parents and teachers MUST treat the students as young adults rather than
treating them as children ... I can't emphasise this enough.
Based on my own experiences, one school we were at had terrible teachers and
an average principal ... and terrible results. The other school was
progressive and built confidence into everything the students did (eg. a
school fair fund-raiser was completely organised and run by ALL of the primary
school students). I now live in an area with one of the highest academic
performances relative to the wealth of the families (Index of Community
Socio-Educational Advantage, ICSEA). The attitude of parents, teachers and
students
is staggeringly different to the previous school we were at. It's no surprise
that it's one of the best performing schools in Australia. You'll also notice
that Australia is one of the better performing countries in the world. If you
assume the problem is the teachers, or the parents, you're off the mark. You
need ALL of them to work together.
------
Panino
> _Most teachers understand that what students learn in school is because the
> whole school has made an effort, not just some individual teachers. In the
> education systems that are high in international rankings, teachers feel
> that they are empowered by their leaders and their fellow teachers._
This is the exact opposite of my (previous) experience teaching high school,
where the main purpose was clearly to provide daycare.
------
tokenadult
I wonder what really would happen if the experiment were tried. It would be
difficult indeed to find any credentialed United States schoolteachers who
speak Finnish well enough to teach in Finland (but not insuperably difficult
to find Finnish teachers who speak English well enough to teach in the United
States, which tells us something right there). I would like to include a few
more countries in the mix. Indeed, that is what I like about the new book _The
Smartest Kids in the World: And How They Got That Way_,[1] because the book
follows some American exchange students over to other countries (Finland, yes,
but also Korea and Poland) and examines a lot of different trade-offs that
different school systems around the world have to deal with. The boy who
traveled over to Korea to be an exchange student and was profiled in the book
traveled over from the same school district in Minnesota I have lived in since
I had children. Finland is not the only model of a different system, and we
should be studying a lot of different models to make sure we aren't missing
out on lessons we can learn from practice elsewhere.
P.S. I am a teacher by occupation, and I know that the research shows that
teacher characteristics matter for learners. The parental involvement or value
placed on education by parents mentioned in several comments that preceded
mine here are important, but I deal often (just today, in fact) with trying to
help parents who are involved in their children's educations but are
frustrated by what's happening to their children in United States public
school classrooms.
[1] [http://www.amazon.com/The-Smartest-Kids-World-
They/dp/145165...](http://www.amazon.com/The-Smartest-Kids-World-
They/dp/145165443X)
------
thrownaway2424
The author's case is sensible and clearly presented but lacks context for the
intended audience of the USA. It's pointless to say that Finland has a unified
teacher preparation program, implying that other countries do not. There are
in fact uniform teacher training regimes in the USA that are comparable in
scale to the one in Finland. The New York City Department of Education has as
many students as Finland has. LAUSD isn't much smaller. So instead of asking
what lessons we could learn from Finland, would it make as much sense to
cherry-pick some successful school districts from within our own country and
learn lessons from them? Because that's essentially what you're doing when you
use Finland as your exemplar instead of a similar-population area of Europe.
How are the schools doing in Slovenia these days?
------
stcredzero
_...education policies in Finland concentrate more on school effectiveness
than on teacher effectiveness. This indicates that what schools are expected
to do is an effort of everyone in a school, working together, rather than
teachers working individually._
Or, as Hillary Clinton put it, "It takes a village." I suspect that it's a
part of the plight of many immigrant parents who live in isolation from an
ethnic/cultural community, to feel like asking, "What's wrong with you?" of
their kids, because they keep noticing things their kids don't know that they
"should know." My parents expected me to know many things I
would've picked up in my environment, had I grown up in the same one my
parents did. This kind of knowledge is illustrated in Frank McCourt's _Angela
's Ashes_ when the city born lads see a cow for the first time, and onlooking
adults wonder if they are mentally deficient. "What are Cows!? Cows are cows!"
Another example of this kind of knowledge: When my family engages in
activities, like going somewhere, we generally imagine what all of the others
are doing and optimize our activities to minimize crossing paths and causing
each other wait times. This isn't something we were ever explicitly asked to
do. My sister and I just picked it up from our parents. In stark contrast, an
ex-girlfriend of mine would instead only perform narrowly delegated tasks and
discharge whatever task I delegated as quickly and directly as possible,
without regard for how that would impact my activities, even if that would
mean covering a cutting board I was using with another ingredient. Apparently,
her father would punish initiative as a matter of the principle of obedience,
and order around his family like robots.
Yet another example of this: in Japanese homes, people are expected to remove
their shoes and arrange them in a neat and orderly array, optimized for
exiting with a minimum of fuss and socks contact with the foyer floor.
Also very significant, in Finland: _"teaching is regarded as an esteemed
profession, on par with medicine, law or engineering."_ In the US, teachers
are regarded as occupying a class between the working class and the professions,
esteemed lower than professions like medicine, law, and engineering. It says
much about our society's priorities, that we say, "Those who can't do, teach."
_becoming a great teacher normally takes five to ten years of systematic
practice. And determining reliably the ‘effectiveness’ of any teacher would
require at least five years of reliable data. This would be practically
impossible._
This is somewhat the inherent dilemma of hiring for any skilled profession.
Mentoring is probably key here.
~~~
fennecfoxen
_It says much about our society's priorities, that we say, "Those who can't
do, teach."_
Is that the root cause of our society's teaching-related woes, or is the
aphorism itself a consequence of a glut of bad teachers?
~~~
stcredzero
I'm saying neither. I'm saying that it's an indication of our society's true
attitudes towards education.
------
MisterMashable
If Finland's great teachers were to teach in U.S. schools, they would
encounter significant pressure to conform from students, parents and
administrators. The ones who toe the line and preserve the status quo would
get to keep their jobs while the others would be mobbed, manipulated and
discarded. Administrators would fabricate a false narrative using negative
performance reviews. Parents would blame the teacher for failing to "teach,"
which means graciously ignoring their child's poor behavior and handing out
high grades. This is what would happen to the great majority of great Finnish
teachers were they to work here in America. The few who by good fortune place
themselves in American school communities which closely resemble Finland would
fit right in.
------
guelo
One of the ways racism expresses itself in the US is in child poverty. We
can't offer public assistance to poor children because it creates "welfare
queens", which is a stereotype that whites have about blacks that they are
lazy and will cheat.
~~~
javert
Child poverty is not a result of racism. Child poverty is a result of people
choosing to have children that they cannot support. They choose to do this
with the foreknowledge that society will not provide for those children. At
least, not enough to lift them out of poverty.
People need to stop blaming everything on racism, and calling lots of things
racist that simply are not. Conflating the issues hurts on all fronts.
In other words, it makes it harder to identify and solve the real problem, and
it makes "racism" meaningless in the public dialog.
~~~
artsrc
Let's imagine you care about educational outcomes and child poverty, what
kinds of things would help?
Certainly free and easy access to sex education, contraception and abortion is
something that is a good policy.
People, even poor people, have always, and will always want children.
Once children are born to poor parents, you need to address that poverty, or
you will get poor educational outcomes for those children.
Certainly stigmatizing poor people, rather than focusing on bad luck they have
had, will provoke different policy outcomes.
~~~
javert
I didn't stigmatize poor people. If you consider stating facts of reality to
be stigmatization, you are fighting reality, and that is no way to deal with
it.
> rather than focusing on bad luck they have had
One of the main points of the article is precisely that being poor is not bad
luck: overall, it happens because a person didn't take education seriously
because they were not taught to do so by their parents.
I mean, you can say it's bad luck to be born to such parents, and I would
agree there.
------
mschuster91
I believe that the background of students is the most important factor when it
comes to education success.
When you put a "world-class teacher" in front of 30 students who have totally
different things on their mind that are NOT school-related - like e.g. having
to support their drug-addicted parents, their own addictions, having to care
for siblings, for food or sometimes even for a place to sleep - then even a
squad of the best teachers will not help any of these kids achieve "good
grades".
Putting the blame for fucked up environments on teachers (like it seems to be
done very often in the US) is unfair and stupid, because the teachers are in
no position to change their situation.
------
cryptlord
It always amazes me when these articles about the Finnish education system pop
up. I'm Finnish and I've dropped out of high school twice; I don't consider
myself dumb and have done very well in my life (thank you, internet). A lot
better than my peers, most of whom are dropouts as well. The only people who
have even got to a university level are people who were already from
wealthy/academic families.
It's the same thing everywhere. I admit that education being free is a big
deal, but it doesn't fix social problems.
------
xacaxulu
There was an interesting study of IQs by college major, showing disciplines
such as social work, education and gender studies being at the lowest end of
the IQ spectrum. Seems like it would be counterintuitive to ask quite so much
from members of those demographics when it comes to educating our children.
[http://www.randalolson.com/2014/06/25/average-iq-of-
students...](http://www.randalolson.com/2014/06/25/average-iq-of-students-by-
college-major-and-gender-ratio/)
~~~
djur
The post you link to points out that the IQ values are estimated from SAT or
GRE scores, and that most of the difference is explained by the quantitative
section of the SAT. What that ends up showing is that social work and
education are low-paid, low-prestige fields, which tend to employ more women.
------
TwoBit
Why is it that poor students fail so much more at education?
~~~
rayiner
My wife was briefly a teacher on an Indian reservation. There were few jobs on
the reservation and many of the kids' parents were unemployed, and even the
ones that were employed were rarely in a position that required any education.
That's a
huge demotivator for kids--why sacrifice for education when you can't see in
front of your face that education might yield any benefits? Then there was, of
course, the social ills associated with poverty, which distracted kids from
school: alcoholism, drug abuse, crime, domestic violence, and sexual abuse.
I'm personally enormously skeptical of the idea that education is a solution
to poverty,[1] at a large scale. There is a game theory problem in play. The
fact is that it's unlikely that education will lift an individual inner city
kid "out of the hood." A relatively excellent outcome for diligence and hard
work would be going into debt to attend a third rate college, for the
privilege of fighting for a low-paying service job. So it's totally rational
for kids to be more preoccupied with whether joining the right gang will keep
them from getting harassed on the way home from school.
But if everyone worked hard and got educated, what might happen is that
economic opportunities could be created "in the hood." That's the prisoner's
dilemma--the individually rational decision to devalue diligence and education
leads to a globally worse outcome. This is where culture comes in. You see
this with poor immigrant communities. They have little capital, but have
cultural mores that create an incentive for education and hard work. A kid
might not leave the neighborhood through education, but he'll get social
standing in the community, among authority figures and peers. When everyone
has that incentive, that creates economic opportunities within the
neighborhood. After all, there are neighborhoods in Bangladesh far poorer on
an objective scale than the worst ghettos of Chicago, that nonetheless have
bustling local economies.
[1] Poverty is, of course, relative. But I'm not talking about utopia, but
just about raising the plight of the poor here in the U.S. up to that of some
places in Western Europe.
~~~
james1071
I am not sure what you are saying. Those who are able to learn will do so,
given the right environment. Those who cannot, will not, whatever help you
give them. Poverty has nothing to do with it, though those who lack the
ability to learn may well also be poor.
~~~
pixl97
It's unfortunate you are incorrect, statistically speaking.
In general giving people a stable, appropriate calorie diet is the best way to
increase IQ over a population. Poverty has a whole lot to do with that.
Next, after you lift people out of poverty you still have the education
problem. Uneducated parents don't have educated kids, statistically speaking.
The first few years of life, before kids are ever sent to school, define a
person's learning capacity hugely. Babies that have working parents and have
less personal care, less emotional closeness, and less exposure to a wide
range of language are going to be disadvantaged compared to kids that do get
those things.
~~~
james1071
I am from London, in the UK.
Pretty much everyone has enough calories (too many in most cases).
As for schools-of course the rich people send their children to much better
schools.
Those who can't send their children to local schools. What happens there
depends on how bright they are, how hard they work and how committed their
parents are to education.
In the case of some ethnic groups there is a strong aversion to educating
their girls properly.
None of this has much to do with poverty, emotional closeness or whatever else
you claim.
~~~
barrkel
_Pretty much everyone has enough calories (too many in most cases)._ Sure;
foods high in sugar and fat have no effects on concentration levels versus
healthier food, right?
_Those who can't send their children to local schools. What happens there
depends on how bright they are, how hard they work and how committed their
parents are to education._ This is deeply ignorant. What happens when a smart
kid is in the middle of a class filled with kids who have no interest in
learning? Do you think the teacher will craft a whole special course specific
to that kid? Or do you think the teacher will try to get something basic to
stick at the lowest level, so almost all the kids get at least something from
their education?
What happens when a kid's peers mock the kid for being a swot? How socially
integrated is the kid going to be, when all his friends do things in the
evenings, and the kid's stuck doing homework and study? Ever heard of peer
pressure? Gangs? Do you have any memories of growing up in a state school in a
poor area, of the risk of being beaten up if you venture into the wrong area,
wearing the wrong uniform?
~~~
james1071
It's always a pleasure to interact with people who respond emotionally, based
on some other issue, than the one that is being discussed, so I congratulate
you for your response.
I presume that you're upset with life being unfair, which it most certainly
is.
It does not change the fact that lack of calories is not a significant factor
in poor educational attainment in the UK.
There are also many reasons for a pupil not getting top grades-but being
dragged into gang life by anti-intellectual peers is not one that ranks
highly.
More common, in my experience, are boys wasting hours on playing computer
games and smoking dope.
As for getting beaten up by entering into the wrong area-I don't see how that
stops them from doing their homework in their bedroom.
------
saranagati
There's a lot of talk here about teacher/school standardization, but I don't
see any talk of what that standardization is, only who's at fault. Students
don't do poorly in school because the information to learn isn't there, or even
because it's not encouraged by the students' parents (speaking of the US
education system). Students fail at school because of how much it's geared to
teach square pegs when many students aren't a square peg.
Sure, some teachers may teach for square pegs while others teach for round
pegs, but the students don't get the option of attending the class for round
pegs instead of the square peg class. Instead, the student is thrown into a
teacher's class and has to conform to however that teacher wants to structure
and grade the class. Class A may be graded mostly on tests while class B may be
mostly on homework, and even still class C may be a mix of both homework and
tests. Then other teachers like to throw in artifacts such as attendance to
skew the grade even more.
Finally, there's the problem of subjects that people just aren't good at or
don't care for because they provide little real-world use, subjects such as
history or some more advanced English or math. These turn into something that a
student is not only forced to attend and contribute to but also be judged on.
For a personal anecdote and one of my sources of criticism: when I was a
freshman in high school I was in an algebra class and aced both finals with the
highest grade in the class while getting A's and B's on all of my tests. One
thing I never did, though, was the homework (I turned in maybe two homework
assignments the entire year). End of the year comes and I fail the class
because I didn't turn in the homework and I had a habit of sleeping in class.
To top off all of that, there are teachers who just suck and/or don't get along
with certain students, teachers who are condescending to some students or try
to keep students in class during lunch. It's not a teacher's job to punish kids
in any way. If a student is disruptive to the class then they should dismiss
the student from the class and have the school take care of problem students.
------
tiatia
"Competition to get into these teacher education programs is tough" Let me
guess. They give their teachers a decent pay?
------
icantthinkofone
I would presume they would teach as well as great teachers from the US.
------
bobcostas55
Something worth noting whenever Finland and education come up: Finnish-
Americans in America do better than Finns in Finland.
~~~
arjie
Isn't that generally the case income-wise? Immigrants are a self-selecting
bunch. Picking up and moving to a different country is hard and you need to be
fairly dedicated to cross that gulf and put up with all the differences in
order to make it work. South Asian Indian-Americans are one of the highest
earning ethnic groups in the US, earning about twice the median national
household income. But the median income in India is awful.
~~~
bobcostas55
That's a very interesting question, and the answer isn't straightforward. On
the one hand, obviously we should expect some selection effects. On the other
hand, there is a relative status effect counteracting it. Long story short,
people prefer to be relatively rich in a poor country than relatively poor in
a rich country, even if their absolute level of wealth would increase. Stark &
Taylor (1991)[0] for example found that, at least when it comes to Mexico, the
relative preference trumps the absolute preference: poorer households were
more likely to migrate.
[0]
[http://www.jstor.org/discover/10.2307/2234433?sid=2110552233...](http://www.jstor.org/discover/10.2307/2234433?sid=21105522338573&uid=2&uid=3738128&uid=4)
------
platz
You'd need a lot of Finnish teachers
------
james1071
Haven't read the piece, but will say this (which is blindingly obvious to
everyone except those in control of the system):
All you need to do to get a good secondary education system is to hire good
graduates, train them, and support their professional development.
What does not work is hiring idiots, de-skilling the job, filling the day with
busy work and other pointless nonsense.
~~~
GabrielF00
I am suspicious of any comment about American education that contains the
words "blindingly obvious" and "all you need to do".
The problems that exist in American education are incredibly complex. We've
tried a lot of big new things based on a simple, reductive approach (examples:
testing and accountability for schools and for teachers, small schools,
charter schools, Teach for America). I don't think any of these big new ideas
have transformed a low-performing urban public school system into a system
where educated white professionals would send their kids.
~~~
james1071
Well, each to their own. The US has proved spectacularly inept in a number of
areas (healthcare, obesity, guns and education) and the causes are indeed
blindingly obvious to anyone who is not an American.
~~~
adventured
It's worth noting that 3/4 of the problems you list didn't exist 30 years ago.
The US had a highly functional, cost effective healthcare system until the
early 1990s, when costs began to soar. In fact it still has the best hospitals
and doctors in the world to this day, along with the best technology and best
drugs. The US also has by far the most innovative healthcare tech and pharma
industries.
The US did not have an obesity _problem_ until the last 20 or 30 years.
The US still has by far the best universities on earth. There isn't even a
close second. Make a list of the top 20 universities and the US will take 17
of those slots. It had a tremendous public education system, again, until
about 20 years ago. And even now, half the country still does have an
excellent public education system.
If the US is so inept at education, how come US universities stand so far
above the rest of the world, and have for decades? Quick, name five
universities of equal quality to Harvard, Stanford, MIT, Yale, Princeton -
anywhere on earth, eg: Sweden, Switzerland, Norway.
~~~
james1071
Proving the point that Americans are totally blind to what everyone else can
see.
1\. Obesity - apparently not a problem because it didn't exist 30 years ago.
That, I find hard to believe if I remember my first trip to the US in the late
1980s and the free food restaurants in Las Vegas.
2\. Healthcare - an obvious disaster, due to lack of access and the lifestyle of
a large part of the population. Oh, it will also bankrupt the country without
major reforms.
3\. The universities - rankings are based on research, which in turn is based on
buying in talent. Don't kid yourself that they reflect the quality of the
undergraduates that are turned out or the population as a whole.
4\. As for guns - no need to bother with that one.
~~~
adventured
I never said obesity isn't a problem, so right off the bat you're
misrepresenting what I said.
In fact the US did not have an obesity problem in the 1980s. Between 1980 and
2000, the obesity rate doubled among adults in the US, per the CDC. What I
originally said is accurate and easy to prove.
The obvious point is that America has only become "inept" on obesity in the
last 20 to 25 years. It's a very recent problem. It can be reversed and solved
as quickly as it became a problem. I'd argue that peak obesity has already
occurred, the causes have almost all been clearly identified, and over the
next 20 years Americans will get less obese with every five-year period that
goes by.
2) Total healthcare costs stopped increasing several years ago. In fact it's
more likely that healthcare costs as a % of GDP and income will decline for
the next 20 years. It's not going to bankrupt the country without major
reforms. Not even remotely close. The US has among the highest disposable
income levels in the world, healthcare expenses are a very manageable problem
even at these elevated cost levels.
Americans do not have a healthcare access problem. In fact Americans are the
most over-doctored, over-tested, over-treated people on earth, and it's a huge
contributing factor for why Americans spend so much on healthcare. Americans
consume more healthcare services than any other people. Nearly 90% of
Americans have health coverage now, and the majority of those in the 10% that
do not, choose not to. The poor in America have had complete coverage for a
very long time, via state medicaid, among a dozen other programs.
Saying something is so, does not prove it. If you're going to make outlandish
claims (the US will be bankrupted by healthcare costs), you should back them
up.
3) The universities in the US outrank their peers in other countries across
the board on a direct comparison basis (top vs top, middle vs middle). It's
embarrassing how far ahead the US has been for the last 40 years. It's
universally accepted that the US has by far the best universities. There's no
debate to be had here, at all.
~~~
james1071
As I said, this is pretty much complete nonsense and a classic example of the
phenomenon of American blindness to their problems.
Take your ludicrous assertion that healthcare costs will decline as a % of GDP
over the next 20 years.
You are on another planet to the rest of us.
~~~
adventured
And yet I'm the one backing up my position, meanwhile you stick to hurling
insults.
We're already at a point where healthcare costs as a % of GDP will begin
declining. With the expansion of the ACA, the US Government has begun doing
what every country in Europe does: squeezing unnecessary costs out of
healthcare any way they can.
You claimed healthcare costs would bankrupt the US without reforms. Now let's
see you prove what you said, instead of relying on ad hominem attacks in place
of actual data points.
Per capita healthcare expenditure growth has been falling for about 12 years
now, and is down to low single digits:
[http://i.imgur.com/5ARcJ1s.jpg](http://i.imgur.com/5ARcJ1s.jpg)
"Medical Costs Register First Decline Since 1970s"
[http://blogs.wsj.com/economics/2013/06/18/medical-costs-
regi...](http://blogs.wsj.com/economics/2013/06/18/medical-costs-register-
first-decline-since-1970s/)
"CBO: Declining Health Care Costs Will Lower US Budget Deficit"
[http://www.voanews.com/content/us-cbo-estimates-slightly-
low...](http://www.voanews.com/content/us-cbo-estimates-slightly-lower-
deficits-as-health-subsidies-fall/1893114.html)
"Republicans Hurt By Slowing Costs in Health Care In the 2014 election,
Democrats seize on opportunity to talk about Medicare."
[http://www.usnews.com/news/articles/2014/09/26/republicans-h...](http://www.usnews.com/news/articles/2014/09/26/republicans-
hurt-by-slowing-costs-in-health-care)
Streaming application log events to the Cloud from the Docker fire-hose - viklas
http://www.emergingstack.com/2015/05/11/Cloud-Logging-and-the-Docker-Firehose.html
======
viklas
SUMMARY: A Kubernetes-deployed 'logging container', running on every host,
streams every Docker-hosted application event to AWS CloudWatch. Cheap, quick,
real-time, re-usable and accessible anywhere.
Ask HN: What concepts in your field are the most difficult to explain simply? - Nuance
======
drakonka
What I actually do. My job is super interesting, but I can't find a good way
to explain it to outsiders. I am a software engineer in the games industry. If
I worked on gameplay it would be a bit easier, but I work on core tech/central
engine tools. A layperson has no idea what that means and any examples I can
think of to give them are just as confusing unless they already have some
familiarity with AAA game dev infrastructure and workflows. This is surely a
failure of me not being able to find a good way to explain it yet; it feels
like I'm working on so many different things in this area that there isn't a
single clear-cut description I can come up with that an outsider would easily
understand.
------
potta_coffee
Why it's so difficult (impossible) to accurately estimate software projects
and why programming does not fit the "assembly line" model.
------
pplonski86
I'm working mainly in the machine learning field; if there is something
difficult for me, then it means I don't fully understand it. Then I try to
improve my knowledge. A good test of understanding is to try to explain it to
the wife :)
Ask HN: Things to consider when choosing an investor? - jpd750
HN - I don't even have a product yet, but did a quick pitch on the phone to an investor and he is potentially interested in investing in an MVP-level version of the product.

What are the most important things to consider when choosing an investor (considering you have the choice)? What should I be looking out for?

Thanks!
======
mchannon
Terms, terms, terms.
Also, ask yourself (and perhaps the investor) what would happen if in three
years, that money's gone, your product tanks, and you have to get yourself a job.
There are many investors who get emotional and make it their life's work to
torment you when things go south, no matter what they sign or what they
promise. Some are even accredited, but making sure they're accredited will
limit your exposure to this eventuality. (They should have forms they can fill
out and give you to prove it).
If the investor passes that test, then as long as you can live with the terms
(convertible notes are always best if you can get them versus straight
equity), close 'em and get building!
Bitcoin - The Internet of Money - WardPlunet
http://startupboy.com/2013/11/07/bitcoin-the-internet-of-money
======
vovantics
Testing
The Girls Next Door - not_paul_graham
http://www.5280.com/girlsnextdoor/?src=longreads&mc_cid=54948c4afe&mc_eid=99af5e345c
======
reubenmorais
Took me a while to realize: you have to scroll down to see the content.
~~~
fernly
In Chrome, it doesn't scroll. It appears to be only the image and the
headline. (Edit: nope, nor Firefox either, for me)
~~~
jaredsohn
It does scroll in Chrome. However, you have to scroll for a bit before
anything beyond the scrollbar changes.
------
fit2rule
Slavery in America is something that really needs to be discussed openly and
in free society. People believing that slavery doesn't/can't exist in their
modern world really need to be exposed to the truth: there is more slavery now
than there ever was.
~~~
tmerr
When you say there's more slavery now than there ever was I can't tell whether
you're exaggerating or know something that I don't. As far as America goes, it
seems like a stark difference between now and 150 years ago when 13% of the
population consisted of slaves [0]. If you're referring to third world
countries that's more understandable due to the number of young workers
building products for wealthier countries.
[0]
[http://en.wikipedia.org/wiki/1860_United_States_Census](http://en.wikipedia.org/wiki/1860_United_States_Census)
~~~
fit2rule
[http://www.globalslaveryindex.org/report/](http://www.globalslaveryindex.org/report/)
It is estimated that the US has 60,000 slaves _today_, per that definition. So
no, it's not over yet in the US.
However, world-wide: approximately 30 million people fit the definition of
enslaved humans.
One thing, though: the US Prison System is considered by some to be
industrialized slavery. If this is included in the statistics, the US
enslavement index goes way, way up.
Show HN: Tabtation – A Chrome Extension to Manage Your 'Too Many Tabs Syndrome' - RishitKedia
Hey there, HN!

My name is Rishit, and this is my first post here on HN.

Let me start off by asking you a question.

How many tabs do you have open in Chrome right now?

If you said less than five or ten: Congratulations, you're not suffering from the dreaded 'Too Many Tabs Syndrome' (TMTS). Well, most probably! We all grow and start from zero.

What?! Say that again. Did I hear you correctly? 25? 50? 100? Maybe even more? Or you're just too lazy like me and said 'too many' since you can't count them on your fingers? Woah! Now we're talking! Yeah, yeah, I know, we're in HN land after all, so I may be over-hyping this.

But seriously though, you must have landed up in situations where the tab widths are so small, you can't make sense of anything. So, what do you do? You start opening new windows. Wonderful. Few hours of work, and yup, you end up with the same thing again. Windows are nice. But a lot of them (with a lot of tabs) sucks even more. I just went through all of my five windows and all their tabs, and still can't seem to find that one tab! Jeez.

So, I may have something that would be right up your alley, and help you manage your TMTS; well, it certainly helps me with mine, so I'm stoked to find out if it does the same for y'all.

I've just launched Tabtation on the Chrome Web Store!
(https://chrome.google.com/webstore/detail/tabtation/hdidaidpgcmfbkhcfhdpaehpfeilhfcb)

I'm offering a 7-day free trial, so give it a try and do let me know what you think! I'd love to hear your questions, thoughts, or feedback, and work on them to improve Tabtation so that we can all be more productive in the coming months.

BTW, Tabtation is also on Product Hunt today (https://www.producthunt.com/posts/tabtation). Yup, a lot of firsts for me today!

Cheers!
======
Jefro118
I've used Workona which I like but it seemed to be slowing down my browser and
so I uninstalled it (although I haven't tested this carefully). Is Tabtation
any more performant?
~~~
RishitKedia
Hey there! Workona and Tabtation are trying to solve the same problem
differently. Tabtation is just a bar at the bottom that loads on every tab,
and groups/organizes your tabs based on the domain, for handy access to all
your open tabs. From the very little that I've seen, Workona solves the
problem a little differently by introducing Workspaces and opening/closing
tabs each time depending on the Workspace you select. I'd love that you try
Tabtation and see the difference. Hope that helps!
------
qnsi
I will stick with workona
~~~
samanator
Wow! Thanks for the tip. I've been using it every day at work since you posted
this. Makes it much easier to compartmentalize things.
~~~
qnsi
That's why I posted about Workona; I wanted interested people to know about a
good alternative. I am glad you enjoy it as much as I do.
List of All Current TLDs - manjana
http://data.iana.org/TLD/tlds-alpha-by-domain.txt
======
mike_d
The IANA TLD list should never be used directly. What you really want instead
is the Public Suffix List [1]. It will help you determine the "effective TLD"
of domains like amazon.co.uk or sflawlib.ci.sf.ca.us, and gives you more
insight in to how a technical allocation at the root transforms politically in
to implementation.
1\.
[https://publicsuffix.org/list/public_suffix_list.dat](https://publicsuffix.org/list/public_suffix_list.dat)
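As a quick illustration of the difference (a minimal sketch using the third-party Python library tldextract, which ships a snapshot of the PSL; this is only to show what "effective TLD" means, not an endorsement of building on the PSL - see the reply below):

    # pip install tldextract  (third-party; bundles a copy of the Public Suffix List)
    import tldextract

    for host in ["amazon.co.uk", "www.example.com"]:
        parts = tldextract.extract(host)
        # parts.suffix is the public suffix ("effective TLD"), which may span
        # several labels; parts.domain is the registrable label directly under it.
        print(host, "->", parts.domain, "+", parts.suffix)

    # amazon.co.uk -> amazon + co.uk   (splitting naively on the last dot would
    # have reported "uk" and treated "co.uk" as if it were a registrable domain)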
~~~
sleevi
PSL maintainer here: please don’t use the PSL!
Yes, it’s weird to have a maintainer asking people not to use their project,
but the PSL was a very specific (and unfortunate) hack for a very specific
(and unfortunate, and browser-created) problem. It is something we live with,
not something we like. While the ideal world is “don’t use any list at all,
use the protocols as God, the IETF, and IANA intended”, if you are going to
use a list, using the IANA list, updated daily, is much better than the PSL.
Do not use the PSL for anything that is not “cookies abusing the Host header”
[https://github.com/sleevi/psl-problems](https://github.com/sleevi/psl-
problems)
~~~
Lvl999Noob
Are you still adding suffixes to the list? If so, wouldn't refusing to add new
suffixes help with the issue? If no new organisation can make use of PSL to
link their subdomains, then they are only left with SOP. Since the list stays
as it is now, no existing websites that depend on the list would suddenly
break.
~~~
sleevi
We are. Deliberate sabotage like that would take quite a while before it was
noticed, however, and it wouldn’t magically fix cookies and how people use
them.
To the extent it is used by cookies, we still want to maintain a fair and
equitable solution. However, we also want to actively discourage any new users
or use cases, to the extent possible, while we also try to fix cookies.
Ideas _like_ [https://github.com/privacycg/first-party-
sets](https://github.com/privacycg/first-party-sets) provide a possible model.
While FPS doesn’t directly address this, as part of keeping a narrow scope,
the approach to explicitly expressing boundaries is one that has the best
viable path. However, that’s effectively “Deprecate the Host option for
cookies”, so... that’s a big task.
Simply sabotaging the PSL doesn’t force the problem to be solved, so mostly,
it’s an education campaign of “We made a mistake; learn from ours, rather than
repeating it.”
------
saaaaaam
I worked with some slightly crazy businessmen who were tricked by an out-of-
work “domain name consultant” into putting in an application for some new
gTLDs.
They got one, and cashed out another application to let someone else take it
which got them all their application fees back plus a decent chunk of cash.
They were genuinely convinced their terrible new gTLD was going to make them
$100 million a year.
My main job was to stop them from blowing what money they had in reserve on
insane publicity stunts for long enough that they woke up and realised they
had been conned.
Eventually they woke up but had spent something approaching $2m finding that
out. I stopped them spending at least as much again.
“Tell me again why you want to hire ten hot air balloons to fly over this
stadium...?”
~~~
treeman79
Worked directly for a Fortune 100 CEO.
His attitude on advertising was that it was better to take all the money, put
it in a pile, and burn it.
Had positive things to say about superbowel ads, but dissed everything else
~~~
andruby
Nice typo. Superbowel.
~~~
anon73044
Could have been entirely intentional if he's not a sportsball fan.
~~~
treeman79
Let’s go with that. :)
------
gruez
For comparison, the same file 10 years ago:
[https://web.archive.org/web/20100502082201/http://data.iana....](https://web.archive.org/web/20100502082201/http://data.iana.org/TLD/tlds-
alpha-by-domain.txt)
There were only 279 TLDs back then, or 32 TLDs if you excluded all the country
code TLDs. Now there are 1508 TLDs, or 1260 excluding country code TLDs.
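If you want to reproduce counts like these yourself, here is a small sketch against the IANA file (it treats the two-letter entries as the classic ccTLDs, which slightly undercounts country codes because internationalized ccTLDs start with "xn--"):

    import urllib.request

    URL = "https://data.iana.org/TLD/tlds-alpha-by-domain.txt"

    with urllib.request.urlopen(URL) as resp:
        lines = resp.read().decode("ascii").splitlines()

    # The first line is a "# Version ..." comment; every other line is one TLD.
    tlds = [line.strip().lower() for line in lines if line and not line.startswith("#")]

    # Rough split: the classic country-code TLDs are exactly the two-letter entries.
    cc = sum(1 for t in tlds if len(t) == 2)

    print(f"{len(tlds)} TLDs total, {len(tlds) - cc} excluding two-letter ccTLDs")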
~~~
zokier
> There were only 279 TLDs back then, or 32 TLDs if you excluded all the
> country code TLDs.
Arguably that's about 30 too many and really the root of this whole mess. Imho
Postel, for all the good he did, in retrospect mismanaged DNS by not
establishing really any structure or policy, in a way that now feels a bit
naive/idealistic (and US-centric).
By the time ICANN took over dot-com bubble was already knocking at the door
and laissez faire anything goes attitude of DNS pretty well cemented, so any
drastic changes would have been difficult to accomplish.
Essentially by leaving the legacy TLDs completely open and mostly without
restrictions or hierarchy/structure their meaning was eroded away, and if the
TLDs have no meaning then it's only logical to throw them away.
------
fanf2
One of my side projects is
[https://twitter.com/diffroot](https://twitter.com/diffroot) a twitter account
that publishes changes to the root zone (except for nameserver IP address
changes). Alongside that I have a couple of Twitter threads commenting on the
changes.
The relevant one is
[https://twitter.com/fanf/status/903709051606978560](https://twitter.com/fanf/status/903709051606978560)
my collection of dead .brand TLDs which is now 3 years old (tho I didn't keep
it very reliably the first year).
It's also amusing to see if you can spot a .brand TLD being used for real
services, where the brand is not an Internet company. The biggest one I know
of is SNCF.
~~~
mike_d
Thank you for this service. It is actually the only Twitter account I have
push notifications enabled for (though I need to figure out a better solution
when there is a lot of churn).
FWIW, I know at least one non-tech company that was already using .brand as an
internal TLD and spent the money to avoid a name collision.
------
nfoz
ICANN sold out the internet. The namespace is all but ruined.
.PHOTO, .PHOTOGRAPHY, .PHOTOS, .PICS, .PICTURES
This should offend everyone.
~~~
codethief
I'm more concerned about TLDs like:
.ACCENTURE
.AIRBUS
.AMERICANEXPRESS
.AMERICANFAMILY
.AVIANCA
.BAIDU
.BARCLAYCARD
.BARCLAYS
.BENTLEY
.BESTBUY
.BLOOMBERG
.BNPPARIBAS
.BOEHRINGER
.BUGATTI
.CALVINKLEIN
.CAPITALONE
.CITI
.EPSON
.ERICSSON
.FERRARI
.FUJITSU
.GMAIL
.GODADDY
.HDFC
.HDFCBANK
.HITACHI
.HYATT
.HYUNDAI
.JAGUAR
.JEEP
.JPMORGAN
.JUNIPER
.KERRYHOTELS
.KERRYLOGISTICS
.KERRYPROPERTIES
.LACAIXA
.LAMBORGHINI
.LANDROVER
.LANXESS
.LPLFINANCIAL
.MASERATI
.MATTEL
.MCKINSEY
.MICROSOFT
.MITSUBISHI
.NETFLIX
.NORTHWESTERNMUTUAL
.OLAYANGROUP
.PANASONIC
.PRAMERICA
.SAMSUNG
.SCHAEFFLER
.SCJOHNSON
.SONY
.STCGROUP
.SUZUKI
.SWATCH
.TIFFANY
.TOSHIBA
.VIRGIN
.VOLKSWAGEN
It's disgusting.
~~~
AmericanChopper
If those TLDs were actually useful, then the domain name system would be in
much better shape than it actually is.
In reality, you have .gov for government, .com for business, .org for
organisations who don’t care how much traffic they get, local country TLDs for
if you operate a service for one country only, and tons and tons of garbage.
com is the only TLD that has any value for international commerce. It doesn’t
matter that .netflix exists, because it will never be used for anything
productive. The problem is that you can only ever count on a person knowing
.com and their local TLD. Everything else either won’t register with people as
being an actual domain name, or will look like a scam to most people.
The internet is locked into this system, and it’s one that cannot possibly
scale. The explosion in new TLDs is an attempt to address that. But we don’t
need to worry about how disgusting it is, because it’s an attempt that has
failed.
I would suggest two issues that are more concerning is that there is nothing
that seems to be a realistic alternative or solution, and that the way this
problem has played out has diminished the actual usefulness of domain names,
with that gap being filled by the google search engine.
~~~
kohtatsu
There were plans on potentially allowing [https://netflix/](https://netflix/)
I can't recall the technical term for it though, and my search engine couldn't
help me find it within a few minutes.
~~~
pantalaimon
That would ruin local host names
~~~
pwdisswordfish4
Not if the final dot to denote the root zone were to be brought back, i.e.
[https://netflix./](https://netflix./)
~~~
parliament32
Brought back? It's still a thing and required.. your OS's DNS implementation
might not handle it in the way you expect though ;)
------
jacobjonz
I agree with @jrockway below. There is no point in retaining TLDs the way they
are now. The original idea of TLDs was to have separate namespaces. For
example, apple.com is the company Apple and apple.fruit may be a fruit seller.
This never worked, though. In the end, the companies ended up having to
register all the TLDs, or someone else would get apple.dong and pretend to be
related to Apple. ICANN decided to use the opportunity for a money grab and
started releasing new TLDs every now and then.

It makes sense to get rid of the usage of TLDs as they are today. If Apple is
<any subdomain>.apple, that's it. People would know that apple.dong is
something related to dong. It might sound far-fetched, but it is not. Once
people see the flood of TLDs (like Handshake TLDs, which are easily and
cheaply available to the general public on Namebase (namebase.io) or Bob
Wallet (Bob wallet.io)) and realize that TLDs are the new equivalent of .coms,
they would realize that it's just the TLD that matters for the auth aspect and
that subdomains are more functional within the company (like mail.google and
chat.google). The only people to lose are the scammers and ICANN.
------
Bnshsysjab
I’d love $myhandle.sucks but alas the domain registrar decided to charge
extortion rates in the hopes that large companies register their own domain to
prevent hate sites >_>
~~~
jasomill
Incidentally, if the $185,000 you're about to spend on a new gTLD registration
is bringing you down, you could use the money to register _icann.sucks_
instead:
[https://www.rebel.com/search/?exact=false&q=icann.sucks&tldo...](https://www.rebel.com/search/?exact=false&q=icann.sucks&tldonly=sucks)
Turnabout is, I suppose, fair play.
~~~
saagarjha
Funnily enough, I tried to check dotsucks.sucks…they’ve thought through that
already.
------
gfaure
.calvinklein? .bananarepublic? A clear money-grab that has no benefit to
users, complicates validation and security for developers, and seals off vast
swathes of the namespace for the sole use of corporations.
~~~
ShakataGaNai
A money grab for whom? ICANN, sure. But if a company wants to spend the $200k,
what is the big deal?
Some companies (ex
[http://www.nic.ovh/en/index.xml](http://www.nic.ovh/en/index.xml) ) are using
it for their customers.
Some companies (ex [https://calculator.aws/](https://calculator.aws/) ) are
using it for shorter URLs, while still being descriptive.
Sure, some are just doing it because they can; others have no good use case
yet. But I fail to see how .bananarepublic being in the hands of one company
is a detriment to me... the average internet user.
~~~
gfaure
It's a money grab for ICANN, precisely. Neither users nor developers have any
say in this process, and the body that stands to benefit financially from
accepting trademarks as TLDs is _not_ going to be acting in the interest of
users or developers, are they?
My argument wasn't specifically about .bananarepublic or .calvinklein. It was
more that I don't believe trademarks should have been admitted, full stop.
There's no way ICANN can make impartial decisions here that benefit the bulk
of Internet users.
I reserve judgement on generic TLDs, although I really don't like the
implications to user confusion caused by .photo, .photos, .pics and the like.
------
jrockway
I don't understand why we even have TLDs, and don't just register names at the
root level. Sure, it's nice to be able to shard data structures among many
providers (.com can be different servers/infrastructure/rules than .net) and
might have been a technical necessity "back in the day" (though there weren't
many shards, so I doubt it), but now it's actively harmful. You found a
company called foobarcorp and register foobarcorp.com... and some jerk
registers foobarcorp.net, foobarcorp.info, foobarcorp.sucks, etc. Why even
allow this? Let there be one and only one foobarcorp.
Yes, I'm bitter that Google gets google. but I'm stuck with jrock.us. Why does
it cost millions of dollars to remove one dot from my domain name? There is no
technical reason. Maybe it's time to overthrow the default root servers and
start our own Internet.
Also "." should have an A record.
~~~
puranjay
> foobarcorp.net, foobarcorp.info, foobarcorp.sucks, etc. Why even allow this
Because there can be multiple companies with the same name. Why cut them off
from using their own name in a domain address?
~~~
account42
You already have that problem - there is only one .com and most companies will
want to hog all others too.
------
randyreddig
Maintainer of ZoneDB (zonedb.org or
[https://github.com/zonedb/zonedb](https://github.com/zonedb/zonedb)) here:
We extracted this project from Domainr
([https://domainr.com](https://domainr.com)), using tooling that updates the
database each day. It’s formatted as a single text file (zones.txt) and
associated metadata in JSON files. We also generate a Go package for our own
uses (the tooling is written in Go).
It’s similar to the PSL, but where the PSL has wildcards and inverted matches,
ZoneDB explicitly lists each “known” zone, including retired or withdrawn
names.
[https://github.com/zonedb/zonedb/blob/master/zones.txt](https://github.com/zonedb/zonedb/blob/master/zones.txt)
------
palijer
.GIFT .GIFTS
Why have both? There is no way that isn't going to cause confusion.
~~~
james_pm
Also .hotel and .hotels. And .photo and .photos (and .photography). Plus .ink
and .inc. And many more "confusingly similar" despite ICANN rules that were
supposed to prevent that. Money talks.
~~~
spcebar
I agree there's serious potential for misleading customers, but also see the
occasional merit of having both, i.e., if you own Hank's Hotel you'd want the
.hotel TLD to correctly identify your business, and likewise helpmefind.hotels
makes more sense than helpmefind.hotel. These are my arbitrary examples that
do not outweigh the potential for fraud of someone registering
hanks.hotels maliciously. I think ICANN is a horrible entity and never should
have existed.
On a different note, I like how many tlds there are now. .pizza is my personal
favorite.
------
NKosmatos
Hi all, a question slightly related to this topic... Is there an easy (and
free) way to get hold of all the registered domains under a TLD or ccTLD? I
know that services like [0] exist, but they are paid for and the validity and
collection of data is dubious. Why aren't zone files generally and freely
available? Is there a way to download or mirror the DNS data?
[0][https://zonefiles.io/cctld-domains/](https://zonefiles.io/cctld-domains/)
------
rerx
I like how there are _two_ top level domains for my city of about a million
people: .cologne and .koeln
Is there any other town represented twice? OK, places like Berlin, Hamburg,
London or Paris don't have the advantage of different spellings in English and
a local language. But there's only .wien, no .vienna. How about .tokyo -- is
there a puny-coded Japanese version?
~~~
SergeAx
"cologne" may be referring to perfume)
~~~
rerx
It isn't though :) -- just try
[https://www.city.cologne](https://www.city.cologne)
The company behind .cologne and .koeln is from Vienna BTW:
[https://nic.koeln/en](https://nic.koeln/en) How could they let .vienna slip
through the cracks?
------
ksec
Here is another chance for people who may know more on TLDs.
What happened to .Web?
Verisign got it in 2016 and it has since been in endless legal battle and
endless _Final_ decisions from ICANN.
Anyone have any news on that?
------
Tepix
I looked at the list and ZERO grabbed my attention. Turns out its a private
GTLD for Amazon.
Reading Amazon's application for the ZERO gTLD (linked at
[https://gtldresult.icann.org/applicationstatus/applicationde...](https://gtldresult.icann.org/applicationstatus/applicationdetails/934)
) makes me angry. It's completely bland. You could use their application to
register any string under the sun. It's not clear what benefits it offers for
the public. These types of domains should not be allowed.
------
ss64
and still nobody has registered .EXE
------
neiman
What stops us, in 2020, from just allowing any possible string to be a TLD? What's the
point of limiting it to this list?
~~~
umvi
Nothing, technically speaking. But legally and economically speaking I think
it's a bad idea.
I personally think there should only be a very small handful of TLDs: com,
edu, org, gov and maybe a few others. Having a limited number of TLDs
communicates to the end user what kind of site it is (government, educational,
commercial, non-profit, etc.) and reduces your domain footprint online.
When you allow ".sucks" to be a TLD, now you've basically opened up a new
market of squatters and blackmailers forcing companies and individuals to buy
up every possible potentially damaging TLD of their trademark or brand[0].
If you allow any arbitrary TLD, be prepared to employ a full DNS police force
because tons of people acting in bad faith are going to register every
possible typo under the sun in order to capitalize on people's mistakes
("apple.con", "apple.cpm", "apple.vom", "f---.apple")
[0]
[https://en.wikipedia.org/wiki/.sucks_(registry)](https://en.wikipedia.org/wiki/.sucks_\(registry\))
~~~
gabereiser
I agree with this only so much as to protect the user with information on what
type of site they are visiting. Org for non-profit or clubs, Net for networks,
Com for commerce, nation tld’s and gov. The arbitrary TLD’s are really to keep
certain organizations from owning the internet because of how name
registration works. Humans are corrupt.
~~~
neiman
.org could still be for non-profits and national TLD's could still be managed
by governments. The meanings of .com and .net, btw, are completely irrelevant nowadays.
My idea is not to cancel the meaning of .org, but rather create other
possibilities for names.
What's the difference really between a 1000+ TLDs and a 100,000+ TLDs?
------
wespeng
I have written a Perl module for the official IANA TLD database; please review
it:
[https://metacpan.org/pod/Net::IANA::TLD](https://metacpan.org/pod/Net::IANA::TLD)
Thanks.
------
mmphosis
.WTF
------
wheelerwj
look at all that digital real estate thats not being used. So much
opportunity!
~~~
gruez
Ah yes, we were missing out on so much without TLDs like KERRYLOGISTICS,
LPLFINANCIAL, or SANDVIKCOROMANT!
~~~
c22
I think .BLOCKBUSTER is the one that's really going to come in handy.
------
xwdv
Why is there no .FACEBOOK?
~~~
tialaramex
Presumably they didn't want to spend a tremendous amount of money for no clear
purpose?
Most outfits which registered a brand or company name as a TLD are purely
throwing away money here, either because they didn't understand what they were
doing or out of sheer vanity.
You can _maybe_ make an argument for a handful of very big technology
companies that have some sort of plan for what they'll do with a TLD, such as
Google, but I don't think Facebook would be on that list.
~~~
xwdv
If there’s any company that could have _tons_ of uses for a TLD it’s Facebook.
Imagine a decentralized Facebook made up of custom websites all using the
.facebook domain. Imagine the revenues.
Zuckerberg, are you reading this? What are your thoughts?
------
searchableguy
There should be .anime or .uwu
~~~
james_pm
Related, there is .moe
------
bhartzer
My favorite site to watch: ntldstats dot com
------
gitgud
What are all the TLDs prefixed with XN--?
~~~
kej
Punycode for these Unicode names:
कॉम セール 佛山 ಭಾರತ 慈善 集团 在线 한국 ଭାରତ 大众汽车 点看 คอม ভাৰত ভারত 八卦 موقع বাংলা 公益 公司
香格里拉 网站 移动 我爱你 москва қаз католик онлайн сайт 联通 срб бг бел קום 时尚 微博 淡马锡
ファッション орг नेट ストア アマゾン 삼성 சிங்கப்பூர் 商标 商店 商城 дети мкд ею ポイント 新闻 家電 كوم 中文网
中信 中国 中國 娱乐 谷歌 భారత్ ලංකා 電訊盈科 购物 クラウド ભારત 通販 भारतम् भारत भारोत 网店 संगठन 餐厅
网络 ком укр 香港 亚马逊 诺基亚 食品 飞利浦 台湾 台灣 手机 мон الجزائر عمان ارامكو ایران العليان
اتصالات امارات بازار موريتانيا پاکستان الاردن بارت بھارت المغرب ابوظبي البحرين
السعودية ڀارت كاثوليك سودان همراه عراق مليسيا 澳門 닷컴 政府 شبكة بيتك عرب გე 机构
组织机构 健康 ไทย سورية 招聘 рус рф تونس 大拿 ລາວ みんな グーグル ευ ελ 世界 書籍 ഭാരതം ਭਾਰਤ 网址 닷넷
コム 天主教 游戏 vermögensberater vermögensberatung 企业 信息 嘉里大酒店 嘉里 مصر قطر 广东 இலங்கை
இந்தியா հայ 新加坡 فلسطين 政务
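In case it helps, the mapping is mechanical, and Python's standard library can round-trip it (a minimal sketch; note the stdlib codec implements the older IDNA 2003 rules, while registries use IDNA 2008/UTS #46, so a few labels differ, but simple ones like the example below match):

    # Converting an internationalized TLD label to/from its "xn--" (ACE) form.
    label = "中国"                      # one of the IDN TLDs in the IANA list
    ace = label.encode("idna")          # b'xn--fiqs8s'
    print(ace)

    # And back again:
    print(ace.decode("idna"))           # 中国

    # The raw Punycode, without the "xn--" prefix, is also available:
    print(label.encode("punycode"))     # b'fiqs8s'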
~~~
maple3142
I didn't know Amazon and Google have their domains in Japanese too.
アマゾン (Amazon) グーグル (Google)
~~~
NetOpWibby
Oh that's clever
------
axaxs
All because of an ICANN money grab. If it were really about 'having choice',
applications wouldn't have cost so much.
~~~
hombre_fatal
The end-game is that everyone is going to have their own suffix for their
website. And the first part of the hostname will standardize into, I don't
know, maybe "com" for the commercial part of your entity, "org" for the more
community-oriented part, "net" for projects that have to do with
interconnectivity, etc. Maybe even regional ones like "us" and "co.uk".
For example,
com.shopify
net.battle
org.wikipedia
co.uk.bbc
gov.whitehouse
Maybe even some sort of routing system, just spitballing here.
/my-shop/com.shopify
/elections/gov.whitehouse
Surprised nobody has thought of something like this.
The only problem I see with this system is that the ICANN could get greedy and
possibly sell this conventional "com", "net", "org", etc prefix system to the
highest bidders and centralize them to just a few suffixes for us to choose
between, then we'd be forced to register our websites as prefixes of a small
oligarchy that owns the handful of suffixes. :/
~~~
aserafini
I think the end game will be that the domain part of a URL will become
optional so that it is valid to enter just a TLD in the browser and the
company that owns that TLD can redirect.
So [http://google](http://google) would be a valid URL that redirects to
[http://www.google.com](http://www.google.com) (for example).
Essentially TLDs will become the new domains and only companies will be able
to afford to buy one, but they will do it for the prestige (like the
equivalent of owning your .com today).
~~~
squiggleblaz
More likely, the whole system will just collapse and so we will use a private
organisation who provides a service linking approximate names to websites,
something like a telephone directory but it doesn't require you to correctly
spell things and get the right prefix. It will occasionally be a problem where
you search for "Honest Company" and it takes you to honest.co.fraud, but if it
does that too often I guess we will switch to a competitor. I guess the main
solution to that problem is to have a list of different possible matches, and
require the user to pick the right one.
~~~
selfhoster11
> we will use a private organisation who provides a service linking
> approximate names to websites
You are describing Google. Lots of people already type in 'yahoo' or 'paypal'
and then click the first link than type the URL.
~~~
navaati
Woooosh !
It's Playtime - Light Table Playground released - ibdknox
http://www.chris-granger.com/2012/06/24/its-playtime/
======
Cushman
Who thought "It's Playtime" was a better title for this than "Light Table
Playground released"? Why does this keep happening to Light Table posts?
Edit: Now "It's playtime - Light Table Playground released", after ibdknox
altered the blog title. So... success?
I still think it's ridiculous, though.
~~~
ibdknox
I don't know, but it's starting to make me sad. It doesn't seem to fall under
the "editorial spin" guidelines - if anything it was clarification.
Hopefully the mystery will bring more people in? haha ;)
~~~
seiji
It's either an auto-renaming script or a human with the personality of an
auto-renaming script.
~~~
Cushman
My current hypothesis is that it's an individual moderator (perhaps showing
off) running a script without official sanction, hence the silence from the
admins.
The way the post on the subject[0] was buried without official comment after
over 500 upvotes suggests a certain amount of institutional blindness.
[0] <http://news.ycombinator.com/item?id=4102013>
~~~
why-el
I think PG did comment on a subsequent post asking why post on subject[0] was
taken down. I am not sure if commenting on a soon-to-be-killed post
constitutes an official comment though, perhaps it was decided there is no
need since according to policy meta-concerns should be dealt with using email.
~~~
JoeCortopassi
So instead of having a thread pop up to address meta-concerns, they have to be
dealt with privately causing any front page title change to now have 10-20
comments at the top discussing the title change. All the actual discussion
about the article is well beneath the fold now
------
trotsky
I understand you're working fast and it's early.
But not providing hashes and using a two stage downloader and not using ssl
and using auto updates and not using code signing means that your app will now
be the weakest link in terms of security for all but the worst configured
computers.
While it's almost certain no one is targeting you or your users now, that
could change when people see such a weakness or could leave people open to
local attackers that you'll never have a chance to notice.
~~~
ibdknox
It's early and we honestly didn't think down those lines. Ultimately, the
deployment mechanism will look different than this, but this was the path of
least resistance. At the very least though we can do a few of these things to
remove some of the danger - we'll get on it.
~~~
anigbrowl
Yeah, I am not that exercised about the security risk but it made a really bad
first impression. I primarily use Windows, I wouldn't mind so much if I were
Unix-based. Also, installing it in my Documents folder under Windows is weird.
On the plus side, I like the prototype itself and will be interested to see
where it goes.
------
alokm
Is there any interesting piece of code that might help me see this working to
its true potential? I tried the factorial function expecting it to show all
the recursive calls. What can I expect to see here? Call trace over multiple
functions?
EDIT: Just tried this; at least it shows the last calls made to the
functions:
(defn my-add [a b] (+ a b))
(defn fact[x] (if (<= x 1) 1 (* x (fact (- x 1)) )))
(my-add (fact (my-add 3 3)) (fact (my-add 2 5)))
\---------------
OUTPUT
(defn my-add [||720|| ||5040||] (+ ||720|| ||5040||))
(defn fact[||1||] (if (<= ||1|| 1) 1 (* ||1|| (fact (- ||1|| 1)) )))
(my-add (fact (my-add 3 3)) (fact (my-add 2 5)))|| => 5760||
~~~
lmarinho
A little example I've come up with is writing a bunch of tests for a function
you are implementing and seeing them automatically executed.
Try to fix fib by changing a, b and i values:
(defn fib [n]
(loop [a 1, b 1, i 1]
(if (= i n)
a
(recur b (+ a b) (inc i)))))
(= (fib 0) 0)
(= (fib 1) 1)
(= (fib 2) 1)
(= (fib 3) 2)
(= (fib 4) 3)
(= (fib 10) 55)
------
why-el
A quick search failed me but I am pretty sure my question has been discussed
elsewhere, in which case I would appreciate a redirect/summary. Is there a
difference between LightTable and Emacs' eval-last-sexp and similar functions?
What is LightTable supposed to add? besides support for Clojure.
~~~
leif
It's prettier. And it shows parameter expansion in some cases I think. And
some day it will have some fancy-ass version of narrow-to-tag (which actually
does sound like a real step forward).
------
mey
Thanks for considering Windows user in this release.
~~~
mey
Heads up, just sent an e-mail to feedback, with an error on starting up the
system on powershell. Not sure if it's my environment or not.
~~~
madsushi
For me, the issue was trying to load Light Table via the 64-bit version of
PowerShell. Switching to the x86 version (and setting the remote code signing)
had me working in no time.
~~~
mey
I'll check on this, I use 64bit powershell and 64bit jvm.
------
munchor
First of all, I'm using Linux and it opened on Firefox instead of Chromium.
Secondly, it seemed quite slow, and I can't use Ctrl+Shift+Up to select a
paragraph above and Ctrl+Shift+Down to do the same to the paragraph below,
like on Emacs.
Either way, the live interpretation of Clojure code looks really great, keep
on working and good luck!
~~~
dmaz
For Linux the script is doing "if chrome, else if firefox". Better to use xdg-
open.
~~~
munchor
So it's looking for Chrome, a closed source browser that I'd never use,
instead of Chromium. Too bad.
~~~
sherbondy
Seems like you can use any browser you'd like by going to:
<http://localhost:8833/>
once the server is up and running.
~~~
munchor
Thank you, that's cool! How can I kill the server, though?
~~~
jurjenh
light server stop
if you look through the script file, it has a series of options.
light table
light server start | stop | restart
light update [version]
but as far as I can tell it always checks for updates first and will install
them before it checks any of the arguments passed.
------
pyrhho
So far the biggest annoyance is the truncating of results. It's cute, but not very
useful when exploring output.
For example: (.. System (getProperties)) produces a lot of output, and I'd
like to look through it to find a property.
Maybe a way to expand the output (like Chrome Inspector's JavaScript objects)?
That said, it's pretty cool, and really interesting.
------
jasonjackson
It works perfectly on Mac. Great work Chris, this demo gave me the sense of an
Apple product ("it just works"): I typed in code and immediately it just worked.
~~~
LaGrange
Click the full-screen view. While this is of course a purely aesthetic
impression, the clean-slate feeling is wonderful.
------
lispm
I somehow fail to see how it does something useful for recursive functions.
~~~
kaonashi
Clojure in general prefers the loop/recur construct over real recursion. You
still can't see each iteration of the loop, though; perhaps that's what you
meant by useful.
~~~
lispm
anything where there is more than one invocation of a function
------
nyellin
This is really sweet for a beta. If an inner function throws an exception, you
should show the exception there, not just next to the toplevel function call.
------
lucian1900
The server starts up for me with:
> light table
--- Checking for updates...
--- Starting server... (this takes several seconds)
nohup: redirecting stderr to stdout
--- Server up!
--- Starting Chrome
But then I get a blank window in Chrome pointing to <http://localhost:8833/>.
If I reload, it loads forever. I've stopped and started it a few times, and
once I got a dark background, another time I got some Clojure code loaded as
plain text.
Running this on Ubuntu 12.04 amd64. It happens with OpenJDK (Java 6), OpenJDK
(Java 7) and Sun (Java 6).
~~~
Neener54
I ran into this problem as well, killed java and the light app and restarted
(./light table) and it worked.
------
freyday
Java being required means this is a complete non-starter for me.
~~~
edoloughlin
Do you have an ideological or practical opposition? Java is cross-platform
enough for it not to be an issue. Are you memory-constrained?
~~~
freyday
Mostly due to the security risk. Especially on Windows. Even being a software
developer (read: not your average computer user) and taking extra precautions
(like click-to-run for java use in browsers) I've still gotten hit by malware
that takes advantage of security holes in the Java runtime.
~~~
cnf
I second the exact sentiment on OSX.
And running a VM with linux (or windows, java is as much a pain on either
platform) is a lot of effort for running an editor.
As long as it needs java installed, I'm out...
------
Moocar
Nice color scheme. Anyone know of a similar theme for emacs?
~~~
jkbr
It's quite similar to Solarized [0] for which there is emacs support [1].
[0] <http://ethanschoonover.com/solarized>
[1] <https://github.com/sellout/emacs-color-theme-solarized>
------
fdb
The core Clojure language analyzer is open-source:
<https://github.com/ibdknox/analyze>
~~~
ibdknox
actually I don't use any of that anymore. I'm using some hackery around the
CLJS analyzer to make it all happen.
------
_feda_
Is there an easy way to get the editor to evaluate a different language? It's
just I don't really use lisp but would like to try it out properly. I've tried
poking about in the ./light script but don't see anything.
Fantastic software by the way. Could really see this having a big impact on a
lot of people's development style.
~~~
madsushi
Light Table is also being developed for Javascript and Python; but this
alpha/early version is Clojure-only.
------
tzury
I had to install Java on my Ubuntu box to evaluate this, and I am happy about
it.
In other words, it was worth the effort[1].
Looking forward to Python support; I want to see how it will help me compose
faster.
[1] [http://rootzwiki.com/topic/23008-howto-install-java-7-on-
ubu...](http://rootzwiki.com/topic/23008-howto-install-java-7-on-ubuntu-1204/)
------
endlessvoid94
Love it. As someone who's only played around with Clojure, it's a wonderful
tool for learning / refreshing.
------
chrismetcalf
Random, but does anybody know what color scheme his colors are based on? I'd
love to crib that for Vim.
~~~
mapleoin
They're using CodeMirror for the in-browser editor and that's the default
theme:
[https://github.com/marijnh/CodeMirror2/blob/master/lib/codem...](https://github.com/marijnh/CodeMirror2/blob/master/lib/codemirror.css)
~~~
ibdknox
the parts that affect clojure have been changed a fair bit from the default
theme.
------
gnarmis
This is great! I'm doing some stuff with SICP in Clojure, and this should be
helpful for that.
Btw, I was wondering if Dr. Racket's style of automatic parens-closing by just
repeatedly pressing ']' (regardless of '(','{','[') makes anyone else wish
other editors supported that feature.
------
edwinyzh
Good to see the progress, Chris. I'm a kickstarter backer and I'm watching
this project.
If I ever get any inspirations for LIVEditor (my own live html/css/js editor)
I'll give the credits here:
<http://liveditor.com/credits.html>
------
Estragon
Is it open source at this stage?
(Found the source in the jar file, wondering about publishing modifications.)
------
camelite
Hi, I'm a beginner programmer, I've played around with learning Lisp various
time & just started the 4clojure problems. I was having issues getting a nice
workflow going & getting diverted with IDE issues etc. This is wonderful.
Thanks.
~~~
vosper
I recommend trying Clooj, it's a simple editor and REPL that requires
virtually no setup or configuration, and it provides enough functionality to
be useful when working on the 4clojure projects.
<https://github.com/arthuredelstein/clooj/>
------
greggaree
Once you start coding, the right side quickly gets cluttered. Maybe functions
that get invoked by new code entry on the left side should be brighter than
non-invoked funcs/macros etc. Or non-invoked funcs etc. should get dimmer?
------
ya3r
I used it and it's not bad at all for a "playtime".
But since I've no idea how to write code in Clojure, it's not useful for me.
What I want is a Python version. A Python-enabled Light Table would definitely
replace Python's REPL for me.
~~~
jfoutz
Better start coding!
~~~
heretohelp
Not quite, Granger committed to making a Python version in the Kickstarter.
~~~
Luyt
...and got enough funding for it.
~~~
heretohelp
The funding floor for Python is the commitment.
------
samrat
I'm really hoping someone makes something similar to this for Vim.
~~~
irahul
VimClojure <https://github.com/vim-scripts/VimClojure> is _something similar_
for vim. LightTable seems to focus on showing execution trace which isn't
always desirable(a function that delete files, gets file over the network,
does a lengthy computation etc). VimClojure provides you completions, repl,
looking up doc, going to source etc.
------
mrdmnd
Script hangs on server launch process -- cat server.log yields a
NoClassDefFoundError for java/util/concurrent/LinkedTransferQueue.
That's pretty strange - any idea where my machine is borked?
~~~
puredanger
That class was added in Java 7 - maybe you have older Java?
~~~
ibdknox
it was compiled for java 6, but should run on 7.. I have no idea what's going
on there.
------
wildfennecfox
It seems the HTML rendering is not part of this release? I am seeing HTML
printed as a string instead of being rendered. Is there something that I am
missing? Thanks!
------
jkbr
Promising. Already now it's quite useful for studying/debugging algorithms.
Looking forward to Python support. Recursive call support would be awesome
too.
------
arkx
I wish the instarepl supported doc and source, something all other Clojure
REPLs seem to support. I've found both invaluable when working with Clojure.
~~~
ibdknox
(use 'clojure.repl)
(doc map)
~~~
arkx
Brilliant, thank you!
------
Suor
Sorry if this is a dumb question, but how can I add some package and then
require it in the playground?
I'm trying to (require 'http.async.client)
------
addisaden
I really love the way I can test Clojure snippets.
It's really amazing and a really good experience for debugging!
~~~
addisaden
Tested this on Linux and a Windows machine.
On Windows, is there a way to quit without killing the process? Light Table
gets started in the background. Is there a shortcut for shutting down Light
Table?
------
silasb
I was highly surprised this worked on Snow Leopard, it being almost 2 releases
old.
Nice job.
~~~
ibdknox
A fair number of people were still on Snow Leopard (because Lion is a bit of a
disaster), so we were very intentional about making that work.
~~~
wiredfool
I'm getting this on Snow Leopard (x86):
--- Checking for updates...
--- Starting server... (this takes several seconds)
--- Server up!
The application cannot be opened because it has an incorrect executable format.
But, hitting localhost:8833 does bring up the app.
~~~
cellularmitosis
I'm seeing the same on my hackintosh (Dell Mini 10v).
------
ReedR95
Does anybody know if his color scheme exists for Textmate/Sublime Text?
------
zgm
Nice work, Chris! I can't wait to start learning Clojure with it.
------
nixarn
Doesn't work on my iMac (from '11 all software up-to-date). :S
~~~
pianoben
I also got the all-white window on first startup - MB Air with Lion, fwiw.
------
tlear
Perhaps provide some interesting piece of code to look at?
------
le_isms
I find this to be a very efficient way to learn clojure :)
------
eragnew
I can't wait to try this when I get home. Thanks ibdknox
------
taylorlapeyre
Everything so far works fantastically. Awesome job.
------
hoprocker
Tossing us a bone! Thanks!
------
boggzPit
Nice, works very well....
------
dereferenced2
On Windows, I had to run set-executionpolicy Unrestricted since the light.ps1
isn't signed.
------
batista
A quick off topic question -- this is Java right? I have 2-3 years to catch up
on the latest developments, but the font smoothing seems quite nice on OS X.
Is this Swing based?
------
tubbo
That's REALLY cool.
Is There Enough Meat for Everyone? - mhb
http://www.gatesnotes.com/Books/Should-We-Eat-Meat
======
msandford
Joel Salatin is one of the first people that comes to mind after reading the
article. He might not rub you the right way (as he's an evangelical Christian,
etc) but he sure does seem to have some interesting ideas about raising cattle
in a natural way that's also highly productive. His way of farming seems to be
about 4x as land efficient as his neighbors (if you believe him, I tend to)
which is a big deal. It also seems to be relatively low input (not buying lots
of feed) and low capital; he uses mostly cheap electric fence.
If I sound like a fanboy it's because I'm leaning that way. It really feels
like he's "hacking" farming and I have a big appreciation for that.
[https://www.youtube.com/watch?v=mjzvtM-
Wo4c](https://www.youtube.com/watch?v=mjzvtM-Wo4c)
~~~
have_humility
A BBC Horizons documentary with subtitle "How to Feed the Planet" (popsci--I'm
aware) was posted to reddit a few months ago. The conclusion, IIRC, suggested
that even the best hacks in beef farming don't hold a candle when compared to
approaches that lean more heavily towards transitioning away from beef to
other types of meat, especially chicken.
~~~
msandford
Cattle have an FCR of between 5 and 20 or so. Chickens are more like 2ish. If beef
is bad because it takes 20 units of food to make one unit of beef and you have
a way to make 4x the units of food per acre, then your effective FCR (relative
to traditional methods) drops from 20 (at the worst) to 5. If it was at 5 then
your effective FCR could be as low as 1.25. If it was more in the middle, at
say 10ish, then the effective FCR could be 2.5, which is pretty respectable. This
is made better because you're also getting eggs from the sanitizing chickens
and meat from the broiling chickens that are all making multiple passes over
the same land at different times.
This is of course predicated on grass fed beef with the farmer taking a
substantial interest in raising as much grass as possible (sanitizing chickens
and paddock system). It's not a lot of work, but it does take more effort than
just throwing grain at cows in a feedlot.
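Spelled out, the arithmetic above is just a division (a toy sketch; the FCR range is the one quoted in this comment, and Salatin's claimed 4x forage yield per acre is treated as directly dividing the conventional feed-conversion ratio):

    # Effective feed-conversion ratio (FCR) if a grass-based system produces
    # roughly 4x the forage per acre: feed "cost" per unit of beef drops by that factor.
    land_efficiency_gain = 4.0

    for conventional_fcr in (5.0, 10.0, 20.0):   # range quoted for cattle above
        effective = conventional_fcr / land_efficiency_gain
        print(f"conventional FCR {conventional_fcr:>4} -> effective {effective:.2f}")

    # For comparison, the comment puts chickens at an FCR of roughly 2.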
~~~
have_humility
I don't know what a sanitizing chicken is, and apparently neither does Google.
I assume from context it means chickens bred to lay eggs?
~~~
msandford
In this case it's chickens which get carted around a few days behind the cows.
Cows eat grass. Cows poop on field. Flies lay eggs in poop. Eggs hatch into
maggots, which eat poop. Chickens dig through poop looking for maggots. This
spreads the poop out and gives the chickens an excellent source of protein.
Cow poop is actually a good fertilizer for grass, but it's too concentrated
normally. The chickens spread it out to a more reasonable concentration and
produce eggs in the process.
------
beat
A basic vegetarian argument is that meat is inefficient compared to grain, and
there just isn't enough resource. But if this were true, then older cultures
with far less resources would never have wasted food on raising meat. Clearly,
they did.
Animals are a way of turning inedible things into edible things. Grass? Well,
you could till it all, or just turn sheep and cattle loose on it. Grain hulls?
Throw them away, or feed them to the chickens. Spoiled garbage? Pigs will eat
it. For as long as we've farmed, animals have increased rather than decreased
the food supply.
~~~
pdx
Exactly. I grew up on cattle ranches in Montana and Wyoming. The short growing
season, rocky soil, and dry conditions would make any attempt at farming
laughable, but the cattle did just fine. Driving them into the high mountains
for summer grazing allowed the mountain grass to also be converted to beef.
None of that land was appropriate to farming, which means that any beef grown
there is extra food for the planet.
What always amazes me about capitalism is how it often manages to allocate
resources efficiently. Nobody is raising large cattle herds down in farm
country. Land that can be farmed is generally farmed, because that provides
the best return. Land that can't be farmed, is ranched. This idea that you
have to give up farming to have meat doesn't take this into account.
~~~
beat
This doesn't mean corn-fed meat isn't wasteful, of course. But it does mean
that meat itself isn't the cause of hunger elsewhere.
~~~
Lawtonfogle
If it is wasteful, I wonder how much is caused by corn subsidies.
------
sremani
The way US society discourages vegetarianism (even for people who are
traditionally vegetarian) is mind-boggling. When you ask for a vegetarian
option, you'd be lucky not to get stared at like a space alien, especially in
the country.
~~~
ElijahLynn
This is starting to change but yeah, it is saddening how many people think you
need meat to survive and grow. Bill Gates left his intelligence behind on this
one.
All protein on this planet was created by plants via photosynthesis.
~~~
slayed0
Yes, (most) protein on this planet was created by plants, but not all protein
is equal. Most proteins from plants are not complete proteins and cannot
sustain a human being. This is not true of animal protein. Yes you can combine
various plant proteins together to form complete proteins, but the answer is
not as simple as: just eat plants.
~~~
astazangasta
This is incorrect. What, pray tell, is a "complete protein?" There are no
amino acids in animals that are not found in plants. You appear to be wholly
misinformed on this subject.
~~~
slayed0
"A complete protein (or whole protein) is a source of protein that contains an
adequate proportion of all nine of the essential amino acids necessary for the
dietary needs of humans or other animals"
[http://en.wikipedia.org/wiki/Complete_protein](http://en.wikipedia.org/wiki/Complete_protein)
Many plants have "complete" amino acid profiles but one or more amino acids
are too low to be completely adequate for humans. In this case, another
protein source with a supplemental amino acid profile is required in order to
balance out the deficiency in the first.
~~~
astazangasta
Quoting from your own source: >Nearly all foods contain all twenty amino acids
in some quantity, and nearly all of them contain the essential amino acids in
sufficient quantity.
You don't need to eat meat to get your protein.
~~~
JoeAltmaier
But traditional foods in America included beans, corn, and squash, presumably
to get a reliable source of protein. The natives weren't dieticians; they ate
that because villages with that tradition thrived.
So protein quality and quantity likely can't be ensured by eating any old
vegetables. It's not as simple as 'don't eat meat'.
------
MrDosu
A personal experience I have had in trying to live off the land a few times
autonomously is that it is quite sustainable when you are hunting animals.
When you rely solely on plants you need to roam HUGE swathes of land in
comparison and extract almost all of the plants you find.
------
GordonS
Another issue is what parts of the animal we eat or don't eat. At least here
in the UK, many wouldn't touch liver, heart, kidneys etc with a barge pole,
despite never having even tried them.
While offal has gained ground in restaurants in recent years, it's still
absent from most homes.
Offal makes up a decent chunk[1] of cattle and pigs, and fresh, well cooked
offal is a delicious thing. There should be more effort by the meat industry
to persuade people to try it.
[1]
[http://www.ers.usda.gov/media/147867/ldpm20901.pdf](http://www.ers.usda.gov/media/147867/ldpm20901.pdf)
~~~
gadders
A lot of UK farmers are now shipping offal (or the "5th quarter") to other
locations such as China and the Middle East. So a lot does get eaten, even if
not locally.
Source for this is Farming Today podcast on Radio 4.
~~~
GordonS
A good point - people are not so 'scared' of offal in some other cultures
~~~
gadders
I quite often eat liver, have eaten heart when a kid and happily wolf down a
steak and kidney pie.
I will not go anywhere near tripe.
------
jasonisalive
As usual, Gates gets the true problem completely wrong. The real issue here
is the numerous and significant unpriced economic externalities associated
with animal flesh production, whose impact is rendered enormous by the scale
of this industry. Animal rearing produces a major portion of global greenhouse
gases and rivers of faeces, pollutes waterways, and overtaxes water resources,
not because of a lack of capacity to technologically innovate cleaner solutions,
but because collective interests in these resources are not being properly
acknowledged and protected through the negotiation and enforcement of pricing.
This is a classic economic problem. Bill Gates does the issue no favours with
his starry-eyed techno-optimism or his attempts to depict food supply as a
selfless global communal endeavour. No, food supply is a market of profit-
seeking individuals using their resources to generate goods considered
valuable enough to trade by other individuals. There is simply an
overproduction of these goods because they are being sold without their
externalised costs being factored in. Food producers can make their products
too cheaply, so too many are made.
Tackle the pricing problem and technological development to minimise
environmental impacts will naturally emerge. Absent this step, efforts to
develop and promulgate technological improvements will never get far.
~~~
bryanlarsen
This is of course not unique to animal farming. Where I grew up animal farming
is much more environmentally friendly than grain farming. It uses green water
and native prairie, without chemical use or plowing. (Plowing is generally
much more environmentally destructive than chemical use).
In Saskatchewan, higher pricing of externalities would increase animal
production, not lower it.
~~~
sleepyhead
> Where I grew up
Well that is the problem here. Farming has changed. Particularly so in
America. A handful of slaughterhouses, heavy corn use, chemically power-washed
eggs, all within a framework that is made for economic returns and not
animal welfare or taste. And it is not a US-only problem. Denmark for example
is facing huge issues with pig farming and here in Norway we are seeing
problems with use of antibiotics in chickens.
------
sehugg
The article briefly touches on this, but substituting more efficient (< 2 to 1
conversion ratio) meats like chicken and farmed fish seems like a good idea.
The high price of beef right now certainly has changed my shopping habits, and
I can't say I really miss it -- nor do other Americans according to some
sources[1].
p.s. take chicken drumsticks, pat dry, salt and pepper, iron skillet for 1
hour at 450 :)
[1] [http://www.huffingtonpost.com/2014/01/02/chicken-vs-
beef_n_4...](http://www.huffingtonpost.com/2014/01/02/chicken-vs-
beef_n_4525366.html)
~~~
rotten
20 years ago we were talking about shellfish aquaculture as the answer to this
problem (which was obvious even then). The lower on the food chain you go,
typically, the lower the production costs. Filter feeding mussels, scallops,
and oysters are about as low as you can get and still call it "meat". Many
producers were touting these creatures as the answer to getting protein in the
future. Since then water pollution has seriously curtailed the growth of that
industry. It is still worth exploring as an option though.
~~~
delish
> Since then water pollution has seriously curtailed the growth of that
> industry.
Interesting. I'd like to know more. Do you have a source for that?
~~~
have_humility
The BBC Horizons documentary I mentioned above also looked into mussels as a
source of meat. I don't recall pollution being mentioned as an issue, but it
gave other reasons why even optimistic outlooks could only consider them a
partial replacement, given the numbers we have for meat consumption today (not
to mention decades from now).
------
gadders
I love meat. My neighbours have cows and sheep and pigs and I own chickens.
I've been up close with all those animals and they're pretty cool in their own
way.
I sometimes think that if meat could be grown in a lab and animals didn't have
to die, I'd be happy with that. But then I realise that arsehole food
scientists from somewhere like Kraft would get hold of it and ruin it.
~~~
ocfx
Meat infused with their yellow cheese powder
~~~
gadders
Yeah, and hydrogenated vegetable oil, food dye, fillers, lego offcuts etc.
------
have_humility
What are the numbers on the chart in the linked page supposed to mean? Average
meat consumption per capita per year, organized by country? And the bigger
question: how do these kinds of charts keep getting made, and how do articles
that otherwise give no hint of their own charts' existence keep being
published?
~~~
diego_moita
It is a good question.
I think it refers to carcass per person. This means that they're counting the
weight of non-eaten tissue (bones, skin, intestines, brains, etc).
The chart sounds odd to me. I am Brazilian and I know that Argentinians and
Uruguayans consume a lot more meat than us, I believe even more than the
Americans.
When it comes to non-carcass, edible meat only we are at 37 kg per
person/year. The Uruguayans are at 60 kg.
------
Rockslide
tl;dr: no (at least not at the moment):
> Returning to the question at hand — how can we make enough meat without
> destroying the planet? — one solution would be to ask the biggest carnivores
> (Americans and others) to cut back, by as much as half. [...]
> But there are reasons to be optimistic. For one thing, the world’s appetite
> for meat may eventually level off. [...] I also believe that innovation will
> improve our ability to produce meat. Cheaper energy and better crop
> varieties will drive up agricultural productivity, especially in Africa, so
> we won’t have to choose as often between feeding animals and feeding people.
------
rboyd
I'm grateful Gates and others are funding meat alternatives. He briefly
touched on the ethical issues (by proxy), but we hardly ever do here on HN. I
remember in 2011 Zuckerberg resolved to only eat meat that he killed himself.
I think more people ought to try that. I know I wouldn't be able to hunt and
slaughter my own food, and I respect people that do much more than the status
quo of outsourcing animal murder.
It's pretty hypocritical of this society to elect ~4 species of animals that
we endorse killing. But we think about eating dog or dolphin and rageface.
~~~
beat
Oh, if people get hungry enough, they'll eat dogs and dolphins, all right.
At the end of WWII, rats had been exterminated in Berlin. That was all there
was to eat.
~~~
gadders
I invited my 93 year old gran to a barbecue (she's 97 now) and asked her if
there is anything she doesn't eat or wouldn't like.
Gran: Whale meat
Me: Er, OK. I can do that.
Gran: We had it during the war on rationing. You didn't know if you were
eating meaty fish or fishy meat.
------
gadders
There is also this company: [http://motherboard.vice.com/blog/silicon-valleys-
fake-eggs-a...](http://motherboard.vice.com/blog/silicon-valleys-fake-eggs-
are-better-than-the-real-thing)
that is planning to create synthetic eggs (even though eggs are pretty much
the perfect food).
------
shusain
Like carbon caps, governments could introduce meat production limits if none
of the other solutions pan out.
------
platz
No
Apple in bidding war to acquire Toshiba’s storage business - dmmalam
https://9to5mac.com/2017/04/02/apple-toshiba-nand/
======
thinkling
First news that Apple is developing its own GPU, now they're trying to buy a
NAND memory business. At least they're investing some of that huge cash hoard.
What's missing? The biggest thing seems to be displays.
Show HN: Archie Botwick, WWI Veteran Facebook Chatbot that uses NLP - shnere
http://m.me/anzaclivearchie
======
theazmeister
Is this like Siri?
------
petagilbert
So awesome
------
chitopunk
this is great
96 Technology Blogs That Will Make You Stop and Think - buttercupsmom
https://www.sealights.io/blog/96-technology-blogs-that-will-make-you-stop-and-think/
======
buttercupsmom
Hey y'all, so a few things about this list: a) I created it because most of
the roundups I've seen suck in my opinion. They mainly list obvious choices
only. Safe. b) In my mind this will always be a work in progress so 100% there
are blogs out there that I've never heard of - let me know what is missing! c)
I'm considering moving this over to a site of its own if you all think that
this is a good resource for the community worth maintaining - let me know what
you think.
Angel Problem - lukas
https://en.wikipedia.org/wiki/Angel_problem
======
nemo1618
This is the 1-Angel Problem on a hexagonal grid, yes?
[http://llerrah.com/cattrap.htm](http://llerrah.com/cattrap.htm)
Surprisingly tricky, even with a 1-angel!
~~~
mmanfrin
Managed to find a strategy pretty quickly: begin from the _outside_ , opposite
of the direction you want the cat to go in; then fill until you have 1 exit in
that direction and begin to fill in the exits while the cat travels to the one
open route, and when it's one step away you close it. You then have many more
moves before the cat can get back to open area, in which case you can repeat
until you have a closed loop.
~~~
chias
I eventually found a strategy of basically trying to fill every "even" space
around the edge, and then only filling in the odd ones when the cat was 1
space away from entering the edge-ring. This would allow me enough time to
enclose the entire board, after which trapping the cat into a single space is
just a matter of time
------
vessenes
Mathé's 2-Angel proof is really nice, or at least the summary is appealing --
he imagines a 'nice' devil, shows it can be beaten, then proves that if you
can beat the nice devil, you can beat the mean one.
This is one of my favorite problem solving strategies -- reducing to a more
obvious solvable situation, then filling in the chinks and gaps to expand
back.
~~~
dsp1234
The Kloster solution[0] solves it in a more direct way, by showing that for
each action the devil takes the angel can take a counter action by limiting
it's own moves.
In the one proof is the realization that sometimes it's easier to solve a
smaller problem then prove equivalence to a harder problem, and in the other
is the realization that sometimes voluntarily reducing the number of actions
can lead to a simpler solution. Both are pretty good tools to have under one's
belt.
[0] -
[http://home.broadpark.no/~oddvark/angel/kloster.html](http://home.broadpark.no/~oddvark/angel/kloster.html)
~~~
bduerst
Seems like Kloster devised a mathematical proof for _kiting_ - a game
strategy that involves staying just out of the effective range of an opponent
while they chase you.
------
Kiro
If the board is infinite, can't the angel just jump in one direction forever?
~~~
chias
No. Let's say we're playing with a 2-angel. Let's place an angel, and without
loss of generality assume the angel is moving left:
_ _ _ _ _ A _ _ _ _ _
Then, perhaps the Devil goes here:
_ D _ _ _ A _ _ _ _ _
Angel moves:
_ D _ A _ _ _ _ _ _ _
Devil places:
_ D D A _ _ _ _ _ _ _
The angel must now change direction. For any given direction and any given
angel power, as long as the devil starts placing pieces far enough away he can
force the angel to change direction.
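A minimal one-dimensional sketch of that idea (a toy illustration only, not the real 2-D game: here the angel stubbornly keeps moving left along a single row, and the devil builds a k-wide wall whose right end starts start_gap squares ahead of it, nearest cell first):

    def devil_blocks_straight_runner(k, start_gap):
        # Returns True if the angel ends up with no legal leftward move
        # (forced to change direction), False if it slips past the wall first.
        angel = 0
        zone_right = -start_gap           # right end of the planned wall
        zone_left = zone_right - (k - 1)  # wall spans k contiguous cells
        blocks = set()
        while True:
            # Devil's turn: extend the wall, nearest-to-angel cell first.
            if len(blocks) < k:
                blocks.add(zone_right - len(blocks))
            # Angel's turn: take the longest leftward jump onto a free cell.
            for step in range(k, 0, -1):
                if angel - step not in blocks:
                    angel -= step
                    break
            else:
                return True               # nowhere to land: devil wins
            if angel <= zone_right and len(blocks) < k:
                return False              # angel reached the gap too early
            if angel < zone_left:
                return False              # angel got past the wall

    print(devil_blocks_straight_runner(5, 12))  # False: wall started too close
    print(devil_blocks_straight_runner(5, 30))  # True: wall finished in time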
~~~
ewzimm
Couldn't the angel calculate when it's necessary to change direction and then
do so, forcing the devil to begin constructing a new trap, and then keep
repeating the same behavior? I'm sure there's a reason why this obvious
strategy wouldn't work, but I don't quite see it.
~~~
dragontamer
It has been proven that a 2-Angel can.
The difficult part of the problem is proving your argument to others. Therein
lies the difficulty of mathematical proofs.
~~~
ewzimm
I only read the part about pretending the left half is blocked and using the
left wall as a guide, which seems a lot more specific than just "go in one
direction until you're approaching a trap and then change." I understand that
proofs are much harder than intuition, but it seems that the angel has such an
advantage of choice that it would be easy to prove. At any point, the 2-angel
can move to any of 24 squares on an infinite plane and the devil can block 1. I wonder
how constrained the problem could get for the angel to have a winning
strategy. Let's say the angel could only move in two directions. It would seem
intuitively that this would still leave enough room to avoid traps. Devil
starts creating a trap in the up direction, angel moves right. Devil starts
constructing a trap in the right direction, angel moves up. If it were
possible to construct a trap that blocked both directions with an unknown rate
of movement on an infinite plane, it might lead to some interesting
applications to other things, but my intuition says it's not.
~~~
nandemo
> Let's say the angel could only move in two directions. It would seem
> intuitively that this would still leave enough room to avoid traps.
Nope. Even if you restrict the angel to 3 directions (up, left, right), the
devil has a winning strategy. This is mentioned in the linked article:
> _If the angel never decreases its y coordinate, then the devil has a winning
> strategy (Conway, 1982)._
There's a simple, informal proof in the references:
Conway, J. H. "The Angel Problem"
[http://library.msri.org/books/Book29/files/conway.pdf](http://library.msri.org/books/Book29/files/conway.pdf)
~~~
ewzimm
Thanks for the clarification. I do wonder what other kinds of things this math
could apply to. It's pretty abstract, but if you want to direct an
unpredictable agent toward a certain behavior, knowing where to place control
mechanisms might be interesting.
------
wodenokoto
From the Wikipedia description it is a bit vague how many blocks the devil puts
down each turn. Is it just 1? The same as the angel's power?
~~~
pc86
> The devil, on its turn, may add _a block on any single square_ not
> containing the angel.
~~~
wodenokoto
That phrase can mean either one block and one block only, or one block per
square, on as many squares as needed, as long as no block is placed on a
square containing the angel or another block.
Particularly when read without emphasis.
I think I leaned towards an ambiguous reading because I couldn't understand
how one block per turn was enough to ever win.
------
mondoshawan
Amusingly, a variant of this is present in Beyond Zork toward the end of the
game.
AlphaGo’s ultimate challenge: a five-game match against Lee Sedol - wyclif
http://googleasiapacific.blogspot.com/2016/03/alphagos-ultimate-challenge.html
======
jamornh
Wow, Lee Sedol just resigned. First game goes to AlphaGo. I wasn't sure who
would win the 5 matches, but I never expected AlphaGo to win the first game!
~~~
CamperBob2
Lee was _rattled._ This match might end up 5-0.
~~~
awwducks
Too early to tell, IMO. The next game will be the bigger game since Lee Sedol
has a far better idea what he's up against. If the next game goes like this
one did, I would be more inclined to agree with you for the remaining 3.
~~~
imglorp
He has the human advantages of adaptability and intuition, the better to try a
new strategy next game.
~~~
krastanov
Adaptability is not all that human (plenty of machines learn from their
mistakes and adapt to new settings). Intuition is so poorly defined that
depending on what you mean machines easily have it (heuristics, Bayesian
inference, etc) or it is just sufficiently vague of a notion that it does not
matter.
~~~
arcanus
> plenty of machines learn from their mistakes and adapt to new settings
Curious what you see as examples of this.
> Intuition is so poorly defined that depending on what you mean machines
> easily have it (heuristics, Bayesian inference, etc)
As a working scientist and a bayesian practitioner, I'm sceptical algorithms
have intuition. From my perspective, almost all models that one codifies are
extremely brittle and will produce catastrophic failures (or just nonsense)
unless the user possesses enough expert knowledge or intuition to a-priori
know not to use the model in this regime.
However, I agree with the spirit of the text... go is a well-defined game and
adaptability and intuition will be highly limited. For instance, the human
can't just turn the board over, or unplug the game!
~~~
imglorp
I guess I was referring to strategy specifically. The tactics are probably
well in hand for both human and AI.
For the AI, the first match will result in one more game entered
into its database. If it's like chess history, it's probably slanted a little
little towards that player's history in particular.
But the human player is well aware of the machine studying his strategic
history and weighting it. If he's well studied like the chess guys are (is
that how go players study?) he could employ a strategy he thinks would be
surprising to the AI, or even plan to switch strategies in the middle. If one
knows they are playing a pattern matcher, you can try to lead it to a local
minimum and then leave it there.
Just speculating :-)
------
awwducks
A bit late, but this is the AGA feed.
[https://gaming.youtube.com/watch?v=YZPKR7HzM_s](https://gaming.youtube.com/watch?v=YZPKR7HzM_s)
No one can believe it. Myungwan Kim 9p says it's likely Lee Sedol feels like
he could have won. He also says Alpha Go is likely stronger than he is.
------
colordrops
I know it's not the focus of this game, but it would add to the presentation
if a robot handled the stones for alphago.
~~~
sigterm
and used a computer vision algorithm to register the opponent's moves...
~~~
lisivka
I saw this in an episode of a TV show, but I forgot the name of the show.
PS.
"Person of interest":
[https://www.youtube.com/watch?v=HkvukotSSms](https://www.youtube.com/watch?v=HkvukotSSms)
------
lostdog
I wish they would highlight the most recent placements, so it's easier to
watch intermittently.
~~~
matburt
We are relaying the match with discussion and analysis on OGS! The most recent
move will be highlighted on the board.
[https://online-go.com/demo/114161](https://online-go.com/demo/114161)
~~~
makoz
Thanks for this! Loved the discussion.
------
jsnk
Amazing.. I think AlphaGo is going to win.
~~~
taneq
Lee just resigned. O.o
------
awwducks
Myungwan Kim 9p will be doing live commentary at the Korean Cultural Center in
Los Angeles for game 3.
The second game should be a doozy since Lee Sedol will definitely know what to
expect and come in full force!
~~~
awwducks
If you're based in LA, here's the event link.
[https://www.kccla.org/english/calendar_view.asp?cid=4020&imo...](https://www.kccla.org/english/calendar_view.asp?cid=4020&imonth=3&iyear=2016)
------
nickpsecurity
My money is on the human. This time.
~~~
CamperBob2
Hopefully there's still time to edit your comment before the robots notice...
~~~
nickpsecurity
[https://twitter.com/mustafasuleymn/status/707469083458068480](https://twitter.com/mustafasuleymn/status/707469083458068480)
(cough) Ok best 2 out of three before it counts. (cough)
------
cloudwalking
Live on YouTube:
[https://www.youtube.com/watch?v=vFr3K2DORc8](https://www.youtube.com/watch?v=vFr3K2DORc8)
------
pmontra
Halfway through the game and it's difficult to say who's winning. AlphaGo has
definitely improved during the winter.
------
djokkataja
AlphaGo wins :)
------
magoghm
AlphaGo won!
------
lololomg
Lee seems to be ahead so far in game 1
------
jorgecurio
I am so stoked for this match, Lee Sedol is a child prodigy and a legend... I
literally felt as excited as I was going into McGregor vs Diaz before the fight.
I used to play Go when I was a kid and watch televised matches in Korea during
the 90s, waking up early on Saturdays to catch every game live on TV. Then
I'd go to these Go schools after class and there'd be like 30 students studying
and fighting.
Go is a hugely appealing game to intuitive people rather than logical people
who prefer Chess. Go is infinitely more complex, and at these pro levels
demigods like Lee Sedol have the same fanatical followings.
~~~
jorgecurio
so I watched this last night and it was an earth-shattering moment... like no
fucking way Sedol gonna get bamboozled by a computer, right?
AlphaGo winning was the cherry on top, but what was really even more intense
was the actual battle itself. It was like Lee Sedol was playing himself, but
a version of him that would get better and better each time Sedol attacked.
AlphaGo surprisingly chose the right strategy, which was to be aggressive right
back.
Overall, I could identify with the commentator's excitement and sort of
apprehension that the first battle against the Machines has begun and we lost
the first round.
Lee Sedol must have been taken aback at how good AlphaGo is. I think he
seriously underestimated it because he had a lot of hubris and overconfidence
going in, like 'yeah imma smack the shit out of alphago', and then after the
match it's like 'damn gg'.
The biggest groundbreaking realization is that deep learning has become so
good that it is possible to outperform a human even in problems previously
thought impossible... who would've thought that from a bunch of logic gates,
fast forward 40 years, we'd have machines that beat us at our own games? 80
years from now, what will things look like?
It's a real reckoning and I really feel the drive to learn deep learning, I
just don't know where to start
Windows 7 virtual XP is a marketing stunt - daniel71l
http://design-to-last.com/Technical/windows-7-virtual-xp-solution.html
======
jodrellblank
1. There was a reason people went back to Win98 too; it didn't last forever.
2. Don't argue about Win7's projected speed and resource use by extension
from Vista when those are specifically being addressed by MS in Win7.
3. The real reason for virtual XP is probably backwards compatibility for ancient
business apps, and that sort of wrecks your whole post. MS want you to play
games in Win7, not virtual XP.
Introducing A/B testing + Cross Browser testing rolled into one - paraschopra
http://www.visualwebsiteoptimizer.com/split-testing-blog/multiple-browsers-preview/?source=hn
======
cloner
Looks great. Still wish you had a pay-as-you-go plan though. E.g. buy testing
of 10,000 visitors, 50,000 visitors, etc.
Not all of us have a need to test all the time (really), and thus a
subscription is inconvenient.
~~~
paraschopra
Thanks. We have a concept of pausing your subscription, so that your account,
reports and test data remain intact. It is just that you are not able to
create tests. And when you're ready, simply purchase a paid plan again.
You can pause your account at any time and any number of times.
------
lxt
Interesting product. But wow, your website really looks like
<http://puppetlabs.com/> in color scheme and layout, but most especially the
logo.
~~~
paraschopra
Interesting. Yep, I see some resemblance. Though we got our design developed
from scratch by one of the best designers we have worked with:
<http://www.31three.com/>
------
ivabz
Good work guys. I'm already a big fan of the product's usability. This addition
really doubles it.
------
vaidik
Awesome guys! +1 For the amazing work.
~~~
paraschopra
A big kudos to you as well! You helped us kickstart the development of the
whole stack of browsers for this feature. We are very creatively satisfied
with developing this technology in house :)
------
playhard
Good work Paras!
~~~
paraschopra
Thanks! We are planning to blog about challenges involved in automation of
cross browser testing. Getting screenshots on browsers from IE7 to iPhone
Safari and everything in between was certainly very challenging. Very proud of
our engineering team.
Regulate Facebook and Twitter? The Case Is Getting Stronger - pseudolus
https://www.bloomberg.com/opinion/articles/2019-02-14/regulating-facebook-twitter-and-instagram
======
throwawaysea
I would definitely not want the government to be able to control what
ideas/speech is allowed on these platforms and what isn't. Nor do I want a
single entity (as Facebook/Twitter exist today) to have that control. What we
need is more effective anti-trust legislation and enforcement, so that a
number of platforms can coexist and compete even though there is a strong
network effect to having a single platform.
I would also support laws requiring that these social media platforms a)
protect consumers' data b) don't censor beyond what the law requires
minimally. But it might be easier to go the competition route.
~~~
kodablah
> I would definitely not want the government to be able to control what
> ideas/speech is allowed on these platforms and what isn't. Nor do I want a
> single entity (as Facebook/Twitter exist today) to have that control.
I have come to the admittedly sad conclusion that you can't reside in the
middle here. I mean, you can idealistically, but the slope will slide to one side
or the other in practice. At least at this time I think you can ask for
government interference or not. I would like to think that you could trust
each side to know its boundaries, as we see in other regulated industries, but
time has shown that neither side can. Which side would you want to give an inch
to, because a mile will be taken (or according to some doomsayers it already
has)?
~~~
kokokokoko
The US government has censored the broadcast media (TV, radio) since their
formation. So we do have some fairly solid evidence that the US government has
not wildly abused that power.
I'm not sure this has to be an all or nothing thing. We have some reasonable
protections built into the Constitution and case law that back that up.
I understand your hesitation about having the government involved as there are
plenty of examples of government influence on the media around the world. I
share the same fears.
With that said, do we really have any real world proof that the US government
in recent years has over stepped its boundaries in regards to media
restrictions?
~~~
kodablah
We don't all publish TV content. We can see from retransmission fees for over-
the-air content to the decimation of business models like Aereo what happens
when content is governed. I see the Aereo business model as very similar to
what's happening w/ article 13 in the EU right now.
But in general I don't think you can compare broadcast mediums to
bidirectional ones (e.g. words on the telephone or text on the internet).
~~~
pjc50
That's not really driven by the government so much as the media companies.
~~~
cobbzilla
Big media companies have weaponized the government to serve their interests,
it makes sense that social media companies will do the same.
They’ll write the rules together, make the barriers to entry even higher, and
pat each other on the back.
Then it’s on to the next moral panic, this one’s been fixed!
------
kauffj
As I posted previously when this topic came up
([https://news.ycombinator.com/item?id=19079526](https://news.ycombinator.com/item?id=19079526)),
it makes little sense to regulate these companies while there are still
federal laws on the books actively encouraging the centralization of big tech.
Quoting from that post:
Why not start with relaxing the federal laws which forbid the development of
third-party applications?
The limits on third-party apps are legal, not technical. It is not technically
challenging to build an application that collects Facebook credentials and
then presents alternative views and features. It could, for example, finally
be possible to see a time-ordered view of your friends' posts (Facebook
doesn't allow this since it reduces engagement).
The development of such applications would serve as a threat and check on the
market dominance of Facebook. A popular third-party application could consider
adding its own features that Facebook does not have. It would also reduce
Facebook's revenue.
What stops this? In the US, it is primarily the CFAA
([https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act](https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act)).
Once Facebook formally tells a company to stop accessing their servers, they
are in violation of federal law if they continue to do so.
It seems premature to pursue legal action while we still have federal laws
that encourage and cement the dominance of a single provider.
~~~
whatislovecraft
> It could, for example, finally be possible to see a time-ordered view of
> your friends' posts (Facebook doesn't allow this since it reduces
> engagement).
Am I missing something? This has always been a feature on Facebook right? I
might be crazy but I thought that this was the only way I've been using FB for
years. They call it "Most Recent" News Feed, isn't that the same as "time-
ordered"? I'm quite sure they more than "allow it", they spend significant
money on supporting it.
> It seems premature to pursue legal action while we still have federal laws
> that encourage and cement the dominance of a single provider.
Why? The laws are generally meant for different things (protecting
trademark/quality of service/etc. issues, vs. destabilization of society by
having too much direct influence).
We can - and must - do more than one thing at a time; it's not premature to
take action on Thing A while Thing B is still in progress.
------
danShumway
I honestly feel the opposite is true -- that the case was reasonably strong in
the past and it's gotten steadily weaker.
Go back 5-8 years ago, and I might have felt like Facebook and Twitter were
unstoppable monopolies. Nowadays I feel reasonably confident that post-
millennial generations are going to widely abandon Facebook. And I feel
reasonably confident that as Mastadon matures, it will take an increasingly
large amount of market share from Twitter.
Mastadon in particular is surprising to me, because I did not think it was
going to work. And sure, it's still minuscule right now, but I'm willing to
bet that usage is going to steadily trend upwards. Right now, you kind of need
to care a lot to switch to Mastadon, but there exists a tipping point where
the user base becomes big enough that for certain communities it makes sense
to just switch en-mass.
I think people underestimate how hard it is to get a small-to-medium number of
users to switch off of a network, and overestimate how hard it is to get
_everyone_ to switch off of a network.
I guess I'm willing to bet that the fake news/foreign bots panic will last
longer than those platforms, but I also can't think of how regulation is ever
going to fix that. Maybe in Europe, but in America 1st Amendment protections
are going to get in the way. Facebook has _more_ power to ban hate speech and
bots right now than it would if it were a public utility.
~~~
soziawa
> Nowadays I feel reasonably confident that post-millennial generations are
> going to widely abandon Facebook.
You are probably forgetting Instagram and WhatsApp which completely dominate
the respective market. There is literally no competition for Facebook at all.
~~~
danShumway
That is a very good point.
I'm not sure I agree that those apps are untouchable, but I would agree that
Facebook as a company is in a much stronger position than Facebook as an
individual product, and the company is probably a long ways from going away.
------
max76
I'm disappointed that Bloomberg published an article that points out all of
the problems with unregulated social media but doesn't propose specific
regulations that could be helpful for most of the problems they point out.
What can the government do to help prevent foreign interference with domestic
elections via social media? If Facebook is a natural monopoly, what regulations
would make its use more fair? They suggest a user's bill of rights, but only
one item that would fit in it. A bill of rights naturally would include
multiple items. To make matters worse, the ToS already lets users know how
their data might be used, which is the only item suggested for the user's bill
of rights.
All companies in the United States are subject to a barrage of regulations. In
my opinion some are good and some aren't. Proposing more regulations without
the details of what those regulations are is like a blank check.
~~~
joe_the_user
The thing is that deciding how to regulate social media is extremely
difficult. So someone arguing for such regulation can't begin by saying what
they want. They have to take a broad "it should be regulated" "look how
terrible, can't we DO SOMETHING??" sort of approach. Then argue regulation
isn't bad by looking at different examples.
Only once there's a huge ground swell can specific proposals be laid down.
But all this makes it sound like a giant power grab. And yes, that's what
I'd call it. Not because all regulation is bad but because this particular
thing is far outside the purview of the Federal Government.
~~~
philpem
I think I'd start with forcing advertising transparency on any social network
(and probably search engines too -- actually, make that any site with
advertising).
And then force any company paying for political advertising to list every
single one of their donors publicly. Chuck an unlimited personal liability
clause in there for the company directors, so they can't just wind the company
up to avoid any fines.
------
avar
The president of the US uses Twitter to make statements to the public, and so
do a lot of other officials.
So the argument that it's a purely private platform is getting harder to make
than just the Facebook use-case where "all my friends use it", which you could
also say about iPhones, or Coca-Cola. Should a private company be allowed to
ban you from what's becoming a de-facto platform for interacting with
officials?
But the article lazily likes to pretend that this problem could be solved
within the borders of the US with proposed FCC regulation. That still leaves
the most interesting problem, which is how are we going to square the free and
open Internet with the interests of nation states and their individual
citizens.
There are complaints about Russian election interference. But right now US-
sponsored election interference is happening in Venezuela via Twitter. What's
to be done about cases like those? Is Venezuela's only recourse going to be to
ban Twitter?
~~~
datenhorst
> But right now US-sponsored election interference is happening in Venezuela
> via Twitter.
Do you have a source for that?
~~~
docbrown
Not sure on his source or argument but my guess would be accounts advocating
for Maduro to be displaced by Guaido. If you look at it from that POV, you
will clearly see how Twitter is playing a vital part in a US-backed coup
against the Venezuelan people because currently, Maduro still has support from
some of his allies and from his own military. [1] These are murky waters to
wander through.
1: [https://www.reuters.com/article/us-venezuela-politics-aid-
id...](https://www.reuters.com/article/us-venezuela-politics-aid-
idUSKCN1Q325K)
------
athenot
The downside with this is the "regulatory moat". If this hypothetical
regulation becomes hard to meet, then only a few companies are able to comply
with it and it inadvertently reinforces their position.
What I would like to see is a requirement for platforms to be open, so they
can't take data and networks of people hostage by locking things down. An
analog is portability of healthcare data: one hospital or EMR vendor can't
lock up data for themselves, they must make it open to others (though they
still drag their feet and don't always make it convenient).
~~~
chillacy
This tends to be the bargain for natural monopolies. You’re limited by
regulation but you basically cease to have competition (like.. Comcast).
------
sovietmudkipz
I get uncomfortable with this line of thought. I would much rather let the
market produce alternative products when companies do things customers object
to.
The best thing about the internet is that it is so easy for another product to
spring up to serve a new need or compete with entrenched businesses.
Regulations increase the cost of doing business and dissuade other companies
from competing.
~~~
scarface74
It's surprising to see how many posters on HN want more government
regulation over a tech company.
It's like how so many are in favor of "taxing the rich", not realizing that
to Middle America, with an average household income of $70K, if you make over
six figures, you are "the rich".
And from the political skew of HN, I wonder how many actually trust _this_
government with more power?
~~~
ocdtrekkie
The problem is, right now _Facebook and Twitter_ have significant power. And
the question isn't whether or not you trust the government, but which you trust
_more_: a company with a slogan of "move fast and break things", or a
bureaucracy purpose-built to move very slowly and purposefully.
~~~
Frondo
It's not just a bureaucracy built to move slowly and purposefully, it's one
that we all, by design, have a say in operating.
Don't like how Facebook runs its data collection on you whether you have an
account or not? Tough. There's no, and will never be, a town hall for
Facebook.
Don't like how the county runs its health services departments? Well, you can
show up at county council meetings, you can get involved politically, you can
vote, etc.
Fundamental difference.
~~~
scarface74
Theoretically, yes.
But if you live in a larger state like California, you have much less say in
the federal government on a per capita basis than someone who lives in Rhode
Island, between the Senate (2 senators per state regardless of population) and
the electoral college, not to mention gerrymandering.
------
jimkleiber
I would prefer that tech companies come up with solutions to these problems
and therefore not require law to try to solve them. And yet, I think
regulations are often a sign that an industry had conflicts and just kept
avoiding the issues, so people got tired of waiting.
I personally am tired of watching Twitter, Facebook, Google, and others give
the impression that they didn't know fake accounts were being used to
manipulate the actions of individuals--either to hate someone, vote against
someone, send money, download a virus--as this seems to be the storied history
of spam. I'm tired of interacting with someone who appears real but may be
fabricated to trick me into doing something. This problem is not slowing down,
as the This Person Does Not Exist site showed us the other day here.
I yearn for the tech company that creates a platform where I interact with
people who are verified to be who they say they are. Please, tech companies,
let people verify their accounts. Let the overall verified users on your
platform increase. Please do something before regulators step in so that they
don't believe they have to.
~~~
pixl97
How do you verify a user in the age of identity theft, across all nations on
the platform?
And how do you stop retaliation against non-popular options, like
homosexuality in particular countries?
------
imh
As someone who doesn't use Facebook, Twitter, or their subsidiaries, I find the
debate kinda laughable. Sure, break them up for antitrust. Write privacy
regulation. But regulating the content shared there? If you don't like it,
it's so easy to opt out, and it feels great. People write like these are
utilities necessary for a good life, which sounds crazy from the outside.
------
desc
Trusting any government to regulate massively-powerful information clearing
houses is a mistake, because they will inevitably, eventually, and maybe even
(best case) unintentionally abuse it for their own ends, as history has
demonstrated.
Trusting those clearing houses to regulate themselves is a mistake, because
they have already abused their power for their own ends (profit).
We should never _trust_ any organisation to work against its own interests or
those of its members. They must all be required to be utterly transparent in
relevant actions and reasoning behind those actions.
------
tracker1
I think it might become necessary to create an antitrust class for effective
media monopolies like twitter and facebook, as well as financial institutions
(paypal, mastercard, etc) that includes provisions for acknowledging and
preserving First Amendment rights.
While nobody likes speech that they don't agree with, or feel is vitriolic
against one's own ideology, it's exactly that speech which needs to be
protected. PC outrage is maximizing online censorship in ways that should send
chills down anyone's spine.
~~~
tracker1
Perhaps something along the lines of, "Any company with more than 5 (or 10?)
million monthly users in the system," as a baseline for provisions regarding
protected speech. I don't like censorship in general, but can respect those
that would want to build smaller communities with proactive moderation vs the
likes of twitter/facebook etc with very little effective moderation in
practice.
Also, similar provisions ensuring that policies against classes of speech are
used regardless of backing ideology.
------
imgabe
> Everyone now knows that foreign governments, most notably Russia, have been
> using social media aggressively to promote their interests.
And if _anyone_ is going to be using social media to manipulate public opinion
in the US, it should be the _US_ government, goddamnit!
~~~
ForHackernews
Maybe, yeah. I'm not a cultural relativist: Democracies are better than
dictatorships; freedom of expression is better than repression; pluralism is
better than chauvinism.
Every major technology platform in the West exists as result of small-l
liberal enlightenment values. Maybe I wouldn't name the current United States
as my ideal champion for those values, but I don't have any problem with the
general suggestion that Western tech companies should be promoting liberal
values around the world.
~~~
zaarn
If a single corporation can control the primary means of communication and
only allows "corporate-sanctioned" messages, then we will no longer be a
democracy in any meaningful sense.
Yes, maybe that is alarmist, after all they are only concerned about corporate
influence, but this isn't the first time in Facebook history where you could
just accuse someone of being a bot to shut them up if you don't like what they
say.
------
rdiddly
_" If federal officials are going to regulate social media, they should be
independent of the president."_
You're in luck, the legislature (where regulations are supposed to come from)
is a whole separate branch from the President. Any time you've got the
executive branch handling it, the President is in charge of it.
~~~
dantheman
Except that the legislature writes general policies and defers to the executive
to work out the specifics.
------
Illniyar
That sounds like a minefield. How do you regulate a service where two people
from different countries can interact? Which country's law should be followed
when a person from country Y posts on a feed from country X? What if the laws
contradict each other?
Multinationals have had to deal with multiple regulations before, but most of
the time that was solved by using the visitor's residency, but that might not be so
clear cut when we are talking about social networks.
~~~
ilovetux
A good example to watch is with the ramifications of the GDPR playing out in
the EU. I'm still not sure how I feel about the actual wording and effects of
the GDPR, but it does provide a test-bed of sorts.
------
SketchySeaBeast
> If federal officials are going to regulate social media, they should be
> independent of the president. The simplest course would be to give new
> authority to the FCC rather than to a whole new agency, though the latter
> option also deserves consideration.
I was just thinking I hadn't seen a picture of Ajit Pai's ridiculous coffee
mug for like 15 minutes now.
~~~
ocdtrekkie
His coffee mug (or something equally ridiculous) would not be out of place in
a Silicon Valley office of any kind. Is his coffee mug your objection, or do
you dislike the man (and his political views), and hence, ridicule everything
about him as an ad hominem sort of attack?
I find this very irritating, and we see it also with Donald Trump, where it
suddenly becomes okay to body shame someone, make suggestive comments about
their relationship with their wife or children, or attack them in other ways
unrelated to their politics, because of their politics. I wouldn't say I'm a
fan of either individual by any means, but I think we should aim to do much
better, especially here on HN. Can we talk about a corporate-owned bureaucrat
and an incompetent president as a corporate-owned bureaucrat and an
incompetent president?
~~~
SketchySeaBeast
I apologize for Reese shaming. I think there's a difference between body
shaming and making light of someone's obviously deliberately chosen self
image.
~~~
PavlovsCat
In the context of the criticisms of these people (and more importantly, the
interests they represent) that actually matter, they are functionally the
same, silly distractions. Ocdtrekkie didn't equate bringing up that cup with
body shaming, they simply mentioned that as another result of what I would
call the same inability to be serious, even about fires that are still raging,
causing suffering, and for which we have no answer and no plan. It's like some
kind of pressure release valve I guess, and IMO that pressure needs to find a
better route.
------
prepend
I think regulating Google is more important. Sure social media has a lot of
noise, but Search is actual reality. Google's potential to shape worldviews and
commerce based on search results means regulation there would have a much
greater impact.
Imagine a single company selling 90% of all tv ads in the country? Or a single
company selling 90% of all the ads in newspapers.
------
kethinov
I would go much further. Break up Twitter into a bunch of Mastodon instances.
Break up Facebook into a bunch of Diaspora instances. The internet should be
open. Imagine if email or HTTP was proprietary like Facebook and Twitter are.
This is a nightmare and we should end it.
~~~
SketchySeaBeast
> Imagine if email or HTTP was proprietary like Facebook and Twitter are. This
> is a nightmare and we should end it.
Email is a means of communication, twitter is a platform. Anyone is free to
implement a 140 character messaging service, the only problem is that these
particular social media platforms have gotten monstrous. I don't like them,
but the comparison between HTTP and Facebook doesn't seem right.
~~~
kethinov
What is classified as a "means of communication" or a "public utility" and
what is "a platform" is itself language that is a consequence of political
conditions. If HTTP or email were proprietary and owned by a single entity,
they would refer to their ownership of it as "a platform" just as Facebook and
Twitter do now. There is no technical reason these services cannot be reduced
to mere "means of communication" or a "public utility" by means of breaking
them up into separate services that are forced to federate with each other
over a common protocol, e.g. Mastodon/Diaspora.
------
exodust
I hope the future is one where you can pack up your social profile and move it
to another service provider, or host it yourself. All without any of your
contacts, friends and associates knowing or caring which service provider or
host your profile is on.
Just like telcos are now. Keep your phone number and move to another provider.
In other words, running with the grain of how the open web works best, rather
than against it. Until that happens, I will never join FB or any other walled
junkyard.
------
mattbeckman
Decentralized social networks are a thing, and will become a much bigger
thing, fueled by the catalyst of the aforementioned monarchs of social media
taking things way too far.
------
pcstl
Allowing the government to regulate Facebook and Twitter is opening a door for
the government to claim it has the legal right to regulate anything on the
Internet. While Facebook and Twitter indeed might be used for nefarious
things, dissidents and social activists depend on the unruly nature of the
Internet for a lot of their operation. You can have both or none.
------
Animats
Regulate content, no. Break up de-facto monopolies, yes. Facebook should be
forced to sell off Snapchat, Instagram, and WhatsApp, for starters.
~~~
civicsquid
Not agreeing or disagreeing, but I don't think Snapchat is owned by Facebook.
~~~
Animats
Right; Facebook tried, but failed.
------
dontbenebby
I thought this was going to be an article about corporate taxes (or lack
thereof).
So now we're going to introduce the perilous hand of federal intervention, and
it's not to make gigantic companies pay their fair share, but to stifle
speech?
------
ArtDev
There are serious consumer protection issues that need to be addressed. Laws
establish individuals' rights and individual freedoms. We need some good,
well-written laws by people who know what they are talking about.
------
pochamago
I don't understand how Twitter and Facebook can simultaneously be accused of
being monopolies. They're competitors in the same market.
------
bargl
I struggle with this concept. One of my problems with these platforms is that
de-platforming someone can be argued to be a violation of free speech (there
are valid counterarguments as well). I don't know how to solve that because I
think that no one forces newspapers to accept articles from people they think
are crazy.
Can we separate the "data storage" at this scale from the message delivery,
where de-platforming someone doesn't mean losing the hosting of your videos
but instead pushes you to a fringe "subscription and recommendation" tool?
This is a massive hand-waving oversimplification, so please correct any gross
assumptions I'm making here.
YouTube has services that could be broken into separate "categories", if you
will: a platform to post videos, a platform to subscribe to videos, a platform
for recommendations of videos, and a platform with "top" videos as watched by
everyone.
Then you've got Google Search which doesn't control the data but caches it and
it has a database of the inter-relationships between sites and you and can
recommend sites to you. Up until they started customizing data to a user I'd
say there was only one issue here. But now that they customize, you actually
have two sets of data in search. Your data. The Internet's interconnected
relationships.
I don't know how you break this data apart in a reasonable way. I mean Google
did create all of it or buy a platform and expand it in the case of YouTube.
They deserve to be rewarded for their innovation (obviously my opinion) but we
need an equal ability to compete.
I am obviously concerned when you can be "de-platformed" for a TOS violation
and silenced. Especially when some of these platforms are so ubiquitous that
being de-platformed from them all could completely silence an individual. But
at the same time a company should have the right to choose who their customers
are if there aren't fair regulations that address this.
So it's complicated and I think this sort of thing deserves a lot of
conversation. I still think action isn't appropriate because if we did have
twitter owned by the government we'd also need to keep free speech on there.
It'd have to get a court order for data to come down as libel or something
similar.
~~~
dalbasal
_I don't know how to solve that because I think that no one forces newspapers
to accept articles from people they think are crazy._
I don't think this is tangential... I don't think this necessarily resolves to
a perfectly fundamental principle, in a philosophically loophole-free way.
Maybe a newspaper isn't a good analogy.
Here's one example. A large portion of elected officials around the world
communicates with (and mostly to) their electorate mostly on social media. It
is how they get elected, argue positions, etc. The argument (I'm not sure I
agree, but I think there's an argument) is that twitter, fb, etc have crossed
some sort of threshold where there is a lot at stake.
~~~
bargl
This is a good point. I think some people treat facebook and twitter as their
main news source which is why I drew that correlation.
The Twitter blast-out by politicians is a new thing that hasn't really
existed before, from what I can see. It's common to try to draw a parallel to
something that already exists (like I did). I think you're right that my
example was a miss.
~~~
dalbasal
Cheers bargl.
I don't think it is a clear miss. The way we think of law, and right-and-wrong
generally, tends to be principled. Rules and laws as embodiments of abstract
principles that are consistently true.
That's what the analogy does: it checks for inconsistencies. That's what lawyers
do, argue by analogy. High judges can invalidate laws if they create
inconsistencies.
But... the laws aren't really abstract truths. They tend to be point solutions
to specific problems.
Social media has created new realities that just didn't exist in as meaningful
a form before.
Personally, I'd be happy for proprietary social media platforms to just get
replaced by open platforms a la WWW or email. There's no real reason to have a
multi-billion dollar company behind twitter, text messaging and such.
That'd make the choice between bureaucratic or monopolistic control moot.
------
eachro
Suppose Facebook or Twitter were incorporated as a non-US company. Would the
US gov still be able to regulate them?
~~~
rgarrett88
Can Europe regulate google?
------
airocker
How about Google? Force them to give us a search engine for $50 a month and
Android for $100 apiece?
~~~
ucaetano
How do you force someone to sell you a product they don't sell?
~~~
airocker
What they sell should not be saleable?
~~~
ucaetano
What do you mean? Are you suggesting we force Google to offer a paid version?
"Law number XXXX: Google must offer a paid version of its search engine to end
users".
~~~
pixl97
Why not law YYYY, you cannot own the search engine, video distribution
platform _and_ ad platform at the same time. The ad platform must be spun off
as a separate non-colluding entity.
~~~
ucaetano
"non-colluding entity"
Colluding? What the heck are you talking about?
------
ucaetano
It's an opinion piece, not a news article, keep that in mind.
------
rblion
How exactly could they be regulated?
~~~
justinmchase
With a law.
Perhaps one saying that they cannot ban or block anybody or their content,
except for content already deemed illegal by a governing body or upon receipt
of a court order.
------
bad_user
The case is getting stronger for a balkanization of the Internet.
I don’t want the US or the UK to censor my Internet, any more than I want
China to.
It’s bad enough that nudes are being censored due to US companies pushing the
US conservative Christian values on the rest of the world, while violence gets
a free pass.
Also do you really think that the fake news promoting Trump or Brexit would
get censored? That would be so extremely naive.
------
carrja99
Yes.
| {
"pile_set_name": "HackerNews"
} |
Google is now a certificate authority? - gary4gar
http://i.imgur.com/waCb4.png
======
gary4gar
It seems Google is using self-issued SSL certs which do not generate
warnings/prompts to the user.
Domain: plus.google.com Browser:Google Chrome 14.0.835.202
~~~
sp332
The "Certification Path" goes to Google Internet Authority, issued by Equifax.
| {
"pile_set_name": "HackerNews"
} |
Mou is one year old - chenluois
http://chenluois.com/blog/mou-is-one-year-old/
======
alexcabrera
Mou is the single best purpose-built Markdown editor I've ever used. I've
tried all kinds of Markdown-centric workflows, but keep coming back to Mou.
Split-screen editing and preview windows make all the difference, and Mou
doesn't seem to ever have any performance issues.
Mou has become essential. Donated $50, worth every penny and then some.
------
pudgereyem
I also think Mou is by far the best Markdown editor I ever used. But even
better is its creator @chenluois. I asked him if he was going to support Math
Syntax, and ~2 months later it came. As he writes in this post:
> _That's why donated users' suggestions are on my highest priority, because
> it is them who are supporting Mou's development._
Thanks so much, and I really hope ppl keep donating for every feature that
ships (if they benefit from it). I know I will.
------
obilgic
I just started using Mou for my lecture notes. But I have a question: when I
export to HTML it looks awesome, but for some reason when I export to PDF to
print, the text gets bigger and it looks completely different from the HTML.
What would be the reason for that?
~~~
chenluois
You can write PDF-specific custom CSS inside the @media print rule and assign
the font size you want.
Take this post as an example: <http://chenluois.com/blog/mou-pdf-export-page-
break/>
It's talking about the page break, but the principle is the same.
~~~
obilgic
Thanks, I actually just printed 15 pages a minute ago. I will definitely try
it.
P.S. I actually spent so much time on that CSS, and I was thinking about a
live showcase/gallery for different Markdown CSS files. I am sure people are
using Mou because of its beauty and simplicity. Do you think that kind of
website would be useful?
~~~
chenluois
You are planning to make a CSS showcase website? Then go ahead and make it, I
think it would be useful.
------
Protonk
I love Mou. I _could_ use textwrangler for markdown work, but Mou's simple
interface and live previews make it a snap. It's a great middle point between
a general text editor and a markdown focused document editor.
------
jaykru
I love Mou. I've been using it for several months as I've been getting
accustomed to using Markdown (which I'm fairly new to) for my class notes. It
just keeps getting better and better. Thanks Chen Luo!!!
------
rocu
Happy birthday. I also love Mou! Keep up the good work.
------
maxjacobson
This is a great Markdown editor.
| {
"pile_set_name": "HackerNews"
} |
Would you publicly call out a non-paying client? - jentulman
I just found this via Twitter: http://bsglogistics.co.uk/
Here a developer has suspended hosting and publicly called out a client for non-payment of bills.
Personally, whilst I might suspend a client's service, I don't feel that this kind of name-and-shame tactic would reflect well on me professionally.
Would you do the same?
======
damoncali
Ask yourself this: What good does it do you to publicly air this stuff? Take
the site down if you must, but enraging your clients, even deadbeat clients, I
would guess is bad for business overall. You have to leave them room to save
face.
------
JoeAltmaier
Anything can be done gracefully. A simple announcement that due to failure to
make payments, service is suspended. Kind of like those emails about Ken
'pursuing other opportunities'.
------
asto
Yes I would call them out. People do these things because they believe the
consequences are painless.
I wouldn't do what this guy has done though because I doubt clients publicise
sites before they are live, so no one's going to see that notice anyway.
------
mrkmcknz
What 'freelance' developer would build a site, host it and put it live without
one payment?
~~~
bjplink
This was my first thought as well. This is a good reason why you need to take
down payments.
There probably isn't a freelancer out there that hasn't wanted to do what this
guy has done to a deadbeat client. I just don't see how going this route
benefits you in any way though. Now everyone involved looks like a jerk.
~~~
marquis
Down payments often aren't enough. I've had friends whose clients never paid
the last installment and the owner has changed the access credentials. It's a
difficult path - what can you do when the site is live and you can't get
access anymore, and the client won't pay the last installment before it's
live?
~~~
JulianMiller520
First and foremost, you can host the site yourself, which gives you unfettered
access; and secondly, don't ever give admin credentials to a site that wasn't
fully paid for???
~~~
marquis
Not all site jobs are self-hosted, for example site upgrade projects.
~~~
JulianMiller520
Right, in which case I'm assuming you'd remove your work and leave the original
version instead of pulling down work which wasn't yours.
~~~
marquis
My point was: I have seen clients receive work, change the password on their
site and not pay for it. In relation to the topic, it is not possible for the
developer to remove the work, so the dilemma is whether it is ethical to
denounce the client publicly (personally I would not do this and would send a
debt collector instead, but then again you're not always in the same country).
So what do you do, put a bad review on Yelp?
~~~
mrkmcknz
You could try to recover the costs legally. In the UK we have a swift
procedure which could make it a legal matter, with payment due immediately or a
plan to clear the balance arranged.
It is a very difficult situation indeed.
| {
"pile_set_name": "HackerNews"
} |
Hackers dump data for 2.3M Patreon users online - wymy
http://www.theverge.com/2015/10/2/9439077/patreon-hack-user-database-2-million-users
======
sigmar
>Patreon revealed earlier this week that it had recently been hacked,
compromising the email addresses, usernames, and shipping addresses of its
users.
Were passwords not breached? Or is it just that they haven't publicly released
the passwords from the breach?
~~~
21echoes
passwords were leaked, but were per-user salted & 12-round bcrypted, so there
has not been a mass password breach. with significant computing power and weak
enough passwords, obviously some passwords that are extensively targeted (say,
a famous or infamous creator) will be compromised in the coming weeks &
months. which, of course, is why the initial announcement suggested all users
change their passwords as a precaution.
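For anyone wondering what that setup looks like in code, here's a rough Python sketch (using the third-party bcrypt package; Patreon's actual implementation details obviously aren't public):

    import bcrypt

    # Hashing at signup: gensalt(rounds=12) generates a per-user random salt
    # with a cost factor of 12, so each hash has to be attacked individually
    # and every guess is deliberately slow.
    password = b"correct horse battery staple"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # Verification at login: the salt and cost factor are embedded in the
    # stored hash, so checkpw re-derives everything it needs.
    assert bcrypt.checkpw(password, hashed)
    assert not bcrypt.checkpw(b"wrong guess", hashed)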
| {
"pile_set_name": "HackerNews"
} |
Gödel and the limits of logic - ColinWright
http://plus.maths.org/content/goumldel-and-limits-logic
======
jaysonelliot
As an aside, if you look at the photo credit on that great color photo of
Einstein and Gödel, it was snapped by Oskar Morgenstern, one of the fathers of
game theory.
<http://en.wikipedia.org/wiki/Oskar_Morgenstern>
Morgenstern and Einstein were Gödel's closest friends, I've just now learned.
It gives me goosebumps looking at that photo and imagining the three of them
on that lawn.
Semi-related, here's an account of Gödel's "pent-up lecture" about the
inconsistencies in the American constitution that he told to his citizenship
examiner: <http://morgenstern.jeffreykegler.com/>
~~~
why-el
Thanks for the links. I read the account. I was surprised to see that
Morgenstern didn't mention Gödel's arguments. It only made a reference to the
steps leading up to Gödel's own findings, like what he read on and how much
time it took him, but never mentioned the substantial argument, which is what
I wanted to read.
------
vbtemp
When I first became fascinated with incompleteness (following initial
coursework in theory of computation), it kind of became my "religion" of sorts
for a while. But as many mathematicians lament, the Incompleteness Theorem is
one of the most popularly abused proofs of all time - used for non-experts to
assert their own half-baked pseudo-philosophy (of course, the same goes for
quantum mechanics as well).
These are a few books I recommend:
"Incompleteness - The proof and paradox of Kurt Gödel" by Rebecca Goldstein
"Gödel's Proof" by Ernst Nagel (it's a tiny book, not too technical, but
technical enough for anyone with a solid CS background to appreciate and
understand)
~~~
rgower
I have no background in CS or Math, but a lot of philosophy. In other words,
I'm a highly interested layman. What's my best plan of action to understanding
Godel's theory? Maybe the best approach would be an entry level book on CS?
~~~
vbtemp
At the very least a good course in discrete mathematics is a good start (it's
also a good start for anything technical as well - one of the most valuable
math classes anyone can ever take, as far as I'm concerned)
Following that, a good class in the theory of computing: understanding what
exactly a generative grammar is, properties of classes of languages (e.g.,
understanding what "regular languages are closed under complimentation"
means), pumping lemma, diagnalization proofs, halting problem. The
incompleteness theorem is intimately tied to this. This is the "CS-route" to
getting a good understanding in Incompleteness, I'm sure math or physics
majors come to approach it in each their own way.
Being a little blunt, a background in philosophy (whether it's academic or
not) without a solid discrete math background, doesn't help you out at all.
This isn't philosophy, it's just a fact about properties of formal systems of
sufficient complexity. If you're looking for philosophy you won't find
anything too deep in the proof of Incompleteness. The philosophical
implications are not clear.
However, I do recommend Rebecca Goldstein's book. It's not technical, and
she's a Princeton philosopher who will indulge you with possible philosophical
ramifications of the theorem (along with a good narrative). I also recommend
her other books as well, especially her first novella "The Mind-Body Problem".
From a philosophical perspective, there is the dispute between Goedel and
Wittgenstein, who never accepted the Incompleteness Theorem: "whereof we cannot
speak we must pass over in silence", which, ironically, speaks of something of
which we cannot speak.
~~~
praptak
> At the very least a good course in discrete mathematics is a good start
I believe that the best starting point to get to incompleteness is formal
logic. This is the basic set of concepts that lets us make terms, statements
and finally proofs the subject of formal mathematical study, thus tying the
loop (formally mathematically defined reasoning about formally mathematically
defined reasoning :-) ) that leads to Goedels proof.
Discrete mathematics is helpful but it is rather low level, the core concepts
in incompleteness come from formal logic.
~~~
vbtemp
I consider a solid discrete math curriculum to provide a reasonable background
in first and second order logic
~~~
neilc
I would be surprised to find second order logic discussed in an introductory
discrete math curriculum.
------
ColinWright
Related: Logicomix
<http://news.ycombinator.com/item?id=3991687>
<http://www.logicomix.com/en/>
Mentioned in glowing terms here on HN many times:
[http://www.hnsearch.com/search#request/all&q=logicomix](http://www.hnsearch.com/search#request/all&q=logicomix)
<http://news.ycombinator.com/item?id=846451>
<http://news.ycombinator.com/item?id=870762>
<http://news.ycombinator.com/item?id=874471>
<http://news.ycombinator.com/item?id=3690254>
It was my present for proofing an early draft of "Here's Looking at Euclid" /
"Alex's Adventures in Numberland"
------
ionfish
"It's like an ill-designed jigsaw puzzle. No matter how you arrange the
pieces, you'll always end up with some that won't fit in the end."
I really don't understand this analogy. The first incompleteness theorem shows
that there are statements true of the natural numbers which aren't provable
from any sufficiently strong recursive theory. It's more like Th(N) (the set
of statements true of the natural numbers) being a jigsaw puzzle from which
many pieces will always be missing if you start with a recursive set of pieces
and try to lay down only those pieces which are provable from your initial set.
Nothing "won't fit": there aren't inconsistencies or incompatibilities at work
here, but _incompleteness_.
~~~
stiff
I think the point is that if you try to add those unprovable theorems to the
system to try to make it complete it becomes inconsistent.
See for example:
[http://en.wikipedia.org/wiki/Consistency_proof#Consistency_a...](http://en.wikipedia.org/wiki/Consistency_proof#Consistency_and_completeness_in_arithmetic)
_Moreover, Gödel's second incompleteness theorem shows that the consistency
of sufficiently strong effective theories of arithmetic can be tested in a
particular way. Such a theory is consistent if and only if it does not prove a
particular sentence, called the Gödel sentence of the theory, which is a
formalized statement of the claim that the theory is indeed consistent._
~~~
ionfish
"I think the point is that if you try to add those unprovable theorems to the
system to try to make it complete it becomes inconsistent."
Eh? No it doesn't! If you add Con(PA) to the axioms of Peano arithmetic you
obtain a stronger system. That system can't prove its own consistency, of
course, but if you have a proof that the system PA + Con(PA) is inconsistent
then you're probably in line for a Fields Medal.
Alan Turing worked on precisely this issue, developing ordinal logics in his
PhD thesis (with Alonzo Church) to try to overcome incompleteness. Soloman
Feferman, who in the 1960s proved a stronger result than Turing obtained, has
written about this extensively. An accessible paper is this one:
<http://math.stanford.edu/~feferman/papers/turingnotices.pdf>
~~~
stiff
Yes, but in your example the system is still incomplete and the moment you
would add an axiom that would make it complete, it would become inconsistent
(so either you never finish your puzzles or you finish them and exactly the
same moment they fall apart).
From Wikipedia again:
[http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_t...](http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)
_Gödel's theorem shows that, in theories that include a small portion of
number theory, a complete and consistent finite list of axioms can never be
created, nor even an infinite list that can be enumerated by a computer
program. Each time a new statement is added as an axiom, there are other true
statements that still cannot be proved, even with the new axiom. If an axiom
is ever added that makes the system complete, it does so at the cost of making
the system inconsistent._
~~~
ionfish
Right, but the point here is that we're not just talking about extensions of
the system, we're talking about true but unprovable statements—that is,
statements that are true in the standard model of arithmetic but not provable
in PA (or whatever other arithmetic theory strikes your fancy). This is why
Turing looked not at single formal theories but at a hierarchy of consistency
extensions of the initial theory. In other words, the game changes from formal
provability to _informal_ provability, and from provability relative to a set
of axioms to absolute provability. Turing showed (very roughly) that given a
tree of consistency extensions (which branches only at limit stages) every
Pi_1 sentence was decided at some point a with |a| = ω + 1. Feferman then
proved in the 1960s that there is a path through the tree of ordinal notations
that decides every Pi_2 sentence. These are completeness results, albeit for
progressions of formal systems rather than individual systems. So certainly
the puzzle can never be completed _within a single formal system_ , but by
restricting to sentences of limited complexity, there is an ordinal-time
operation which decides each sentence (obviously there are numerous
philosophical problems with this, although I'm afraid my expertise in this
area is extremely limited so I can only give a sketch of the issues involved).
------
haliax
On a related note, does anyone know where I would look to understand
reducibility of formal systems to one another?
I'm really interested by questions like:
Why is second order logic irreducible to first order logic if I could use
first order logic to reason about the behavior of a turing machine running a
second order logic theorem prover with whatever inputs I like?
How do I get something that can do what I can do, which is to say take _any_
formal system and prove theorems with it? How do you determine what formal
systems are "valid" logics? (Leading to sensible conclusions rather than
nonsense like A & ~A)
~~~
ionfish
'Reducibility' in general is an informal notion, and as such there are many
different technically precise ways of capturing aspects of it. Mutual
interpretability and bi-interpretability are two of these, but they apply to
formal systems with the same underlying logic (that is, the same semantics and
proof theory). There are also many other notions of translation between
different logics like Gödel–Gentzen negative translation between classical
logic and intuitionistic logic. I'm not sure if there is a good introduction
to _all_ of these different ways of capturing reducibility, but you could try
asking on math.stackexchange.com, there are usually helpful responses to
reference requests there.
Second order logic does not have a complete proof theory, so your Turing
machine will not be able to compute the consequences of a theory formulated in
second order logic. This can be avoided by employing Henkin semantics, but
then you're not working with full second order logic anymore. Stewart
Shapiro's 2000 book, _Foundations without Foundationalism: A Case for Second-
Order Logic_ has the technical details should you be interested.
~~~
haliax
> Second order logic does not have a complete proof theory
Is this different from saying that second order logic contains unprovable true
statements / that the incompleteness theorem applies?
Also thanks for the really well informed response!
~~~
ionfish
One of the features of first order logic is that the provability relation is
_recursively enumerable_ : given any recursive first order theory, there is a
Turing machine that can list every theorem of that theory (although of course
it will run forever).
Additionally, first order logic is _complete_ : for every statement true in
all models of a theory, there is a proof of the statement from the theory.
These two constraints cannot both be satisfied in a sound deductive system for
second order logic. To see that this is so, consider that in second order
logic we can prove Dedekind's categoricity theorem: there is only one model
(up to isomorphism) of the second order Peano axioms (PA2). Let's assume that
the provability relation for second order logic is recursively enumerable. We
know from Gödel's incompleteness theorem that the set of first order sentences
true of the natural numbers is not recursively enumerable. So take a sentence
of the form "If PA2 then _" for some sentence _ which is in that set but not
in the extension of the provability relation (this is a legitimate statement
since the PA2 axioms are finite so we can just take their conjunction). This
should be a logical truth of second order logic, but it's not provable (by the
argument just given), so second order logic is incomplete: there are
statements which are logical consequences yet are unprovable. So in other
words, yes, the incompleteness theorem is very much at play in this limitation
of second order logic.
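The same argument compressed into two lines of (LaTeX-ish) notation, nothing added beyond what's said above:

    N \models \varphi \;\Longrightarrow\; \models_{\mathrm{SOL}} (\mathrm{PA2} \rightarrow \varphi) \qquad \text{(by categoricity)}

    \{\varphi : N \models \varphi\} \text{ is not r.e.} \;\Longrightarrow\; \text{some valid } \mathrm{PA2} \rightarrow \varphi \text{ is unprovable, if provability is r.e.}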
For the technical details I very much recommend chapters 3 and 4 of Shapiro's
book; it's not terribly expensive, and any decent university library should
have a copy.
(A small footnote to my earlier post: Shapiro's book originally came out in
1991, not 2000—that's just the date of the paperback edition, and I'm unsure
as to whether there are any substantial differences between the two.)
------
Fice
Stephen Hawking «Gödel and the end of physics»
<http://www.damtp.cam.ac.uk/events/strings02/dirac/hawking/>
------
SoftwareMaven
GEB sits on my nightstand with too little time to be read. It might have to
get bumped up the priority queue a bit.
~~~
oz
Same here. Got through most of the foreword, but haven't found time to
continue.
~~~
andybak
I read it a long time ago (late teens/early twenties) but it changed my
intellectual world and gave me an insight into things that I might never have
been introduced to. Hard to know whether it would have the same impact now or
the same impact for others but I rate it very highly for personal reasons.
Also - Rudy Rucker's 'Infinity and the Mind'...
------
ttttannebaum
"Another result that derives from Gödel's ideas is the demonstration that no
program that does not alter a computer's operating system can detect all
programs that do. In other words, no program can find all the viruses on your
computer, unless it interferes with and alters the operating system."
I think I just heard a 'pop'ping sound.. but really, writers try too hard
sometimes to make this stuff accessible to people. I don't think someone who
is going to get a whole half-way into the article is going to need such
reductionism to catch their interest; I'd honestly be more excited if the
actual symbolic definition of the theorem was shown to me at that point.
------
jpdoctor
_In 1949 he demonstrated that universes in which time travel into the past is
possible were compatible with Einstein's equations._
Wait, what?! Anyone have a ref?
Edit: Thanks to andyjohnson and vbtemp. TIA for others too.
~~~
andyjohnson0
He discovered a solution to Einstein's field equations that permits closed
timelike curves if the universe is rotating.
<http://en.wikipedia.org/wiki/Godel_metric>
"This solution has many strange properties, discussed below, in particular the
existence of closed timelike curves which would allow for a form of time
travel in the type of universe described by the solution. Its definition is
somewhat artificial (the value of the cosmological constant must be carefully
chosen to match the density of the dust grains), but this spacetime is
regarded as an important pedagogical example"
------
tluyben2
For people interested in the original from 1931: <http://www.w-k-
essler.de/pdfs/goedel.pdf> (in German). Work of art IMHO.
------
lcargill99
While that's biographically interesting, you really don't get off the hook
from understanding that he used basically the same approach as Cantor's
diagonalization.
------
rmATinnovafy
It's always fascinating to read about Gödel. I have not read GEB, yet reading
about his findings has really changed the way I think about things.
Thanks for posting this article.
~~~
ibrow
You may find the Reddit discussion about GEB of interest.
<http://www.reddit.com/r/geb>
~~~
rmATinnovafy
Thank you!
------
ChrisHugh
What I get out of Goedel is this: There are some things that are true that
cannot be proved.
~~~
vbtemp
Be careful. That's a naive view, and drawing more conclusion than I think you
mean.
It's more like this: for any consistent, finitely axiomatized formal system
that is sufficiently expressive (such as the Principia Mathematica), you can
construct a sentence in the language of that formal system that asserts its
own un-provability. Therefore, there _does not exist a mechanistic method for
enumerating over all true statements in the language of that formal system_.
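Schematically, the standard textbook rendering of that self-referential sentence, for a system F with provability predicate Prov_F (just the usual notation, nothing beyond what's said above):

    F \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)

If F is consistent it cannot prove G, and, if it is ω-consistent (or, with Rosser's variant, just consistent), it cannot prove ¬G either.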
By stating "there are some true things that cannot be proved" goes too
philosophy deep, and is outside of our pay-grade. Just consider: humans don't
reason based on mechanistic principles - and there's no proof as to the
expressability of natural language (though we can be sure it's aggravatingly
inconsistent)
EDIT: I just want to say that in general, if someone does not really grasp the
technical notion of a formal system, consistency, expressiveness, provability,
soundness, or recursive enumeration, then it is basically impossible for them
to appreciate the incompleteness theorems, and they are very likely to grossly
misrepresent it.
~~~
haliax
> humans don't reason based on mechanistic principles
Do you support an empiricist view of logic then
(<http://en.wikipedia.org/wiki/Is_logic_empirical%3F>) ? That we justify
logical rules because they so strongly correspond with our own experiences?
~~~
vbtemp
Not really. I'm just saying we don't go around all day doing logic-algebra in
our head and saying _only_ true, consistent things :)
~~~
haliax
Ahh, fair enough. I'd have to agree with you there. My guess is that that plus
being able to inductively generate axioms from experience are largely what let
us escape that particular weakness of formal systems.
| {
"pile_set_name": "HackerNews"
} |
‘Virtual Pharmacology’ Advance Tackles Universe of Unknown Drugs - rbanffy
https://www.ucsf.edu/news/2019/02/413236/virtual-pharmacology-advance-tackles-universe-unknown-drugs
======
daddylonglegs
When I see a paper like this one of my first thoughts is "What will Derek Lowe
say about this?" He is a chemist in the drug industry and excellent writer. On
his blog he regularly tears apart overhyped claims for how software searches
for targets and automated synthesis of chemicals are going to find perfect
cures for everything at the press of a button. His take on this paper is
actually positive, though with some important caveats:
[http://blogs.sciencemag.org/pipeline/archives/2019/02/11/vir...](http://blogs.sciencemag.org/pipeline/archives/2019/02/11/virtual-
screening-as-big-as-it-currently-gets)
~~~
higginsc
I have been out of this space for a few years (transitioned to data science
from drug discovery), but from my time doing in silico and in vitro work, a
major issue with docking was rank ordering. His comments are right on the mark
IMO. Especially this paragraph:
>Another point is that high-middle-low effort on the D4 case. The binding
assay results compared to the docking scores are shown at right. You can see
that the number of potent compounds (better than 50% displacement, below that
dashed line) decreases as the scores get worse; the lowest bin doesn’t have
any at all. But at the same time, there are a few false-negative outliers with
binding activity at pretty low scores, and at the other end of the scale, the
top three bins look basically undistinguishable. So the broad strokes are
there, but the details are of course smeared out a bit.
These methods can filter millions of compounds down to hundreds, but as an
academic lab, it's still a herculean effort to synthesize hundreds of
compounds. And out of those hundreds, you might get a couple that are active.
This study is a combination of hard work, yes, but also a lot of money and luck.
That being said, good for the team, and good for science. I have nothing but
respect for Shoichet and Roth. Didn't ever cross paths with Irwin.
~~~
cowsandmilk
> I have nothing but respect for Shoichet and Roth. Didn't ever cross paths
> with Irwin.
This makes me laugh since Shoichet was childhood friends with Irwin and
they've worked together on almost everything together since 2000 when Irwin
went to Northwestern to join his lab.
~~~
higginsc
That is pretty funny. The more you know. I was just a grad student and met
Shoichet at conferences.
------
arkades
Link to the actual article, not the press release;
[https://www.nature.com/articles/s41586-019-0917-9](https://www.nature.com/articles/s41586-019-0917-9)
------
roomey
"The four structures of AmpC determined with the new docking hits are
available from the PDB with accession numbers 6DPZ, 6DPY, 6DPX and 6DPT."
Are we in a situation now where, if I have a bad antibiotic-resistant
infection, I can just order these molecules on the off chance that they will
help me?
Can I order a toxin?
Or are these molecules just impractical to use outside of a lab setting
~~~
fabian2k
They only determined whether those specific molecules bind to the target. They
didn't test if they are toxic, if they are able to actually get to the target,
how stable they are under real conditions, into which metabolites they are
processed in humans, ...
This is the very first step towards developing potential drugs, it's very,
very far from an actual drug. And drug development wasn't the goal of this
paper anyway.
And the idea behind this paper was to find molecules that aren't in any
catalogue, so you would still have to synthesize them yourself or pay someone
to do a custom synthesis for you.
~~~
daddylonglegs
I thought the point of the library of molecules used was the supplier
(Enamine) has a systematic method of synthesizing the molecules they've
listed. It appears that, in practice, they can supply 90% of the molecules
they offer, synthesized on demand:
> Of the 589 molecules selected, 549 (93%) were successfully synthesized
> (Supplementary Table 10 and Supplementary Data 11, 13)
I fully agree your main points.
~~~
fabian2k
That's still custom synthesis and not off-the-shelf compounds. No idea how
expensive they are in this case.
~~~
daddylonglegs
About $100 apparently:
> Over the past decade, Kiev-based Enamine Ltd has innovated an efficient
> pipeline to produce any of over a billion never-before-made drug-like
> compounds on demand — at a cost of about $100 per molecule — by combining
> any of tens of thousands of standard chemical building blocks with one
> another using over a hundred established chemical reactions.
~~~
justtopost
'Per molecule' gives no real indication of cost, that appears to be a
'tooling' charge, to design the reaction chain. I doubt $100 will buy you any
useful quantity of any research chemical, much less a custom mfg one.
------
melbourner
As a computational chemist I must say end-use might differ significantly from
these screening studies; it is more of a statement on current computing
capabilities and a little bit of science icing.
~~~
dekhn
it's not even a statement of current computing capabilities if they used 1
CPU-day on 1,500 machines. That's a pittance.
~~~
momeara
Co-author here (AMA)--A large-scale docking screen of 116M molecules takes
~1100 cpu days on our cluster, working out to about 1 mol/sec, which is very
fast for virtual screening. What this doesn't account for is this requires
about 30 minutes per compound to precompute information (conformations,
partial charges, etc.). So this works out to ~6M cpu/hours to prepare the
library for screening, which is a substantial amount of computation. We're
loading about 1M molecules a day and have a 2-3 year backlog of compounds to
load from Enamine.
The good news is that once the library is prepared, it is quick to screen at
more targets--and we make the pre-computed library available at
zinc15.docking.org.
Interestingly, as the library grows a limiting factor is storing the library
on disk. It is now ~20T. We've set up several mirrors around the world for
groups that are actively using it. An interesting problem will be to see if
preparing compounds for screening on the fly (e.g. with machine learning
models) can overcome this limitation to keep up with library growth.
A big question for us is what will the return on investment in screening
larger and larger libraries be? One of the take aways from this work is if
docking has moderate enrichment, than screening larger libraries not only
gives more hits but actually can increase the hit-rate for the top scoring
compounds.
~~~
cing
I know that docking using GPU is about an order of magnitude faster than CPU
(see today's Schrodinger 2019-1 release notes,
[https://youtu.be/K4AYdBvuOe4?t=90](https://youtu.be/K4AYdBvuOe4?t=90)). Is
there a way of doing GPU accelerated precomputation though?
~~~
momeara
Hey Chris--We're right now using a mix of commercial and open source software
like Omega, Corina, AMSOL, and Mol2DB. Probably the slowest step is generating
the partial charges for each conformer with a reasonably high quality semi-
empirical forcefield. I'm not sure if there are competitive (in terms of
quality) GPU based methods, but if there were methods that were ~1000 times
faster as can be the case for GPU based methods, it would definitely speed up
the pre-computation or make on-the-fly prep feasible. Do you have any ideas of
where we should look?
------
radicaldreamer
This is right up Zymergen's alley, at least when it comes to in vitro testing
of these compounds.
| {
"pile_set_name": "HackerNews"
} |
Show HN: Visualize positions of a string - Thimo
https://github.com/ThimoKl/StringViz
======
welder
You shouldn't be using character offsets when parsing HTML or XML...
~~~
silentOpen
Your lexer emits buffer positions which you then put into error/diagnostic
messages. Afaict, this tool makes it easier to visualize cursor positions in
buffers. Personally, I'm not sure about its approach but it's not an
unreasonable goal.
~~~
Thimo
I parsed many strings from a website today. These strings were created by a
user. So they didn't have a structure. It seemed easier to search for some
keyword (like €/$) and extract substrings than writing a grammar. Working with
positions can be complicated, so I created this little tool to visualize it,
make it a bit easier, and save some time.
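As a rough illustration of the kind of position-based extraction I mean (the example string and the slicing logic here are made up, not from the actual site):

    text = "Dinner with friends: 42,50 € (split 3 ways)"
    pos = text.find("€")                         # -1 if the symbol isn't present
    if pos != -1:
        # take the whitespace-separated token just before the currency symbol
        amount = text[:pos].rstrip().rsplit(" ", 1)[-1]
        print(pos, repr(amount))                 # 27 '42,50'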
------
ExpiredLink
Have you seen this?
<https://github.com/laktek/extract-values>
~~~
Thimo
Looks like a good and simple tool. In some cases it's just easier and faster
to work with buffer positions. It can be tricky though.
------
fidz
Simple, but I think it would be very useful for teaching substrings to very young
programmers. Thanks, starred for future use.
| {
"pile_set_name": "HackerNews"
} |
Arbitrary Code Execution on Pokemon Stadium for the N64 - pizza
https://github.com/MrCheeze/pokestadium-ace
======
dimodi9
I see a pipeline error message on the bottom left.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Help I need advice on fraud - eam
I run a web site (side project) where users can use their credit card to send money to a friend's checking account as a gift. It seems a user created multiple accounts with fake names, then proceeded to send money from stolen credit cards. All the charges to the stolen credit cards were then sent to one final destination checking account, which I have all the information on. I detected all this activity a bit too late (2 days late), so the money has been transferred from the credit cards to my marketplace to the destination checking account. Overall there were 5 different stolen credit cards used with over $2,000 in charges! As a side project this is a big loss for me. I'm already starting to receive some chargebacks and it's stressing me out. As a result I have permanently shut down my project because this is a major loss, more than I have ever made from the actual side project itself.
I have visited the local police department, but they said since I'm not the victim they can't do anything about it (presumably the owners of the stolen credit cards are the victims here, so they have to file a report). They referred me to the FBI. So I filed a complaint with IC3.gov. After submitting the form, it said that it may be a while before I hear anything since they have limited resources and they receive thousands of complaints each day.
What's really frustrating is that I have the checking account details where the stolen money was sent! So it seems it would be an easy case to break. The authorities would have to subpoena the bank account since I have the bank account number and bank name; it's not like they used bitcoins.
Can anyone with experience in this situation chime in with some advice? What should I do? Please help, any information would be greatly appreciated.
======
callmeed
I have dealt with this exact scenario in our photography ecommerce product
([http://nextproof.com](http://nextproof.com)). Ours just happened to have an
extra 0 on the end. We almost lost our merchant account because of all the
chargebacks. (I'm thinking of writing an ebook on the topic)
Through some social engineering, I was even able to get the name and location
of the checking account owner and _get him on the phone_. I was actually quite
close to visiting and beating the crap out of him. Turns out he was just some
poor rube from Arkansas who answered a craigslist ad. In the end he was
actually more of a victim than me (basically had his identity stolen, credit
ruined).
Law enforcement at all levels were completely unhelpful (I dealt with CA
police, AR police, and feds). Once I located the bank and got them on the
phone, they at least were able to freeze the checking account (I believe they
are required by law to do this once fraud/cybercrime is reported). That's
really only a temporary fix though.
Any time you're doing payment aggregation or money transfers, _you have to do
as much verification as possible_. We learned that the fraudulent charges had
very predictable patterns (international cards, fake websites, very specific
range of charge amounts, etc.). At a small scale, you should just manually
verify all accounts, require phone/address verification, and more. I've seen
some bitcoin startups that even require you to submit a photograph of your
card + ID via WebRTC. This is what you should do right away. Once fraudsters
realize they have to do work, they will move on to the next target. Our
chargeback rate is now near zero and never fraud-related.
At scale, you can have in-house people write code to detect fraud patterns.
There are also startups like Sift Science with APIs.
Hope that helps.
~~~
jasontan
Hey there, I'm the CEO of Sift Science. Unfortunately, callmeed is spot on --
law enforcement typically won't get involved unless it's in the tens of
millions of dollars, at least. Even trickier if it's across international
borders.
This means that you're left to defend yourself. Typically, you'll start
implementing some basic verification and rules in your code base. For example,
"if num_credit_cards_per_destination > 5; flag_as_suspicious()". But, it's
tough to be accurate with this approach, so you'll want to manually review
activity flagged by rules, so that you don't insult your good customers. As
your business grows, it's more challenging to scale these fraud detection
rules and manual review operations. While adding more verification helps, it
does negatively impact the experience for innocent customers. It's a delicate
balance.
I wish I had better news. In some sense, seeing fraud means that you're on the
map. Unfortunately that means you'll only attract more and more attention as
your business grows. I'm happy to be a resource, even if we don't work
together - jason at siftscience dot com.
------
jmount
Down vote me on this, but here is my honest opinion (that may actually help
others) phrased as a question.
Why would you as a hobby run a payment site linking credit cards and checking
accounts when you appear to not have done any research in to how important
loss prevention is in such an activity? If you were not interested why did you
start? If you were interested how could you not know what steps to take?
~~~
wpietri
Hi, John. I'm not the poster, but the way I look at it, everybody has to learn
caution sometime. If this guy's lesson costs him just $2k and a little
headache, I'd say he got away cheap.
I can think of a number of important business lessons I learned that cost me
more time or money. E.g., "be careful picking business partners", "don't start
work without a signed contract", or "crazy clients don't get saner". All
things I should have known, or could have discovered reading. But had I waited
until I had read and appreciated all business lessons, I never would have
started anything.
And I appreciate him sharing the lesson with Hacker News. It reminds me of the
Despair, Inc poster on mistakes:
[http://www.despair.com/mistakes.html](http://www.despair.com/mistakes.html)
"It could be the purpose of your life is only to serve as a warning to
others."
So thanks, eam, for getting a bunch of young entrepreneurs to say, "Hey, maybe
I should double-check our fraud prevention."
~~~
jmount
Always good to hear good calm advice from somebody I know and respect, Will. I
admit I make tons of mistakes (and also would never start anything if I always
"thought it through"). But I still really don't like what the original poster
presented.
~~~
eam
Hi OP here, thank you for your opinions. I just wanted to say that I thought
that I had "thought it through" but apparently I didn't, it was more
complicated than I thought it was. This is not the first time in my life that
I thought I had thought something through, there have been numerous times
actually in all aspects of my life. A year or so ago, I watched a Malcolm
Gladwell talk on TED
([http://www.ted.com/talks/malcolm_gladwell_on_spaghetti_sauce](http://www.ted.com/talks/malcolm_gladwell_on_spaghetti_sauce))
where spaghetti sauce companies thought they had thought things through, but
really hadn't. Of course I could have spent lots of time reading books, but
even then I might have missed this. I just wanted to share my experience and
ask for any advice (not legal) just advice/tips in general from others who had
been in the same boat. So far the comments have been excellent and invaluable.
They have taught me many things I didn't think of before, but more importantly
it will help others who might be looking into or already doing the same thing
I was doing with my side project.
------
mey
I work in fraud management in the payment space for my day job. (Unfortunately
we do not have a publicly available option yet for someone at your scale).
- You are most likely violating OFAC/KYC regulations in the US (Assuming you are in the US with references to the FBI)
- It is easy/cheap to buy on the black market complete combinations of credit cards/cvv/social security info
- People who buy/have these stolen cards want a cash exit
- Verification of both sides of the transaction are really needed for what is essentially a money transfer, to keep fraud down (steps beyond CCV to prove someone is in control of a CC)
- You are lucky, that $2000 was probably an initial probe to see what checks you had in place. Shutting down was the right thing to do. If you had left it open, you could've added three zeros to the damages
- CC's are not secure and the "merchant" is always the loser in fraud. Visa/Mastercard will always make their cut. Additionally, ACH/echecks don't provide much in the way of clawing back funds (none, really).
Edit: Oh some other notes, the local PD are simply not equipped to handle
this, even though you _are_ the victim as you have been defrauded. Chargebacks
can continue to roll in down the line, typically 30-90 days after the
transaction. You may have violated your MCC code on your merchant account by
doing this, as getting an MCC code to do a balance transfer like this is not a
simple thing.
------
noonespecial
Run from this. You've been lucky.
1) You are almost certainly operating a money transmitting service (like
Western Union). If you are an intermediary between people giving each other
money, there are piles of regulations and compliances you _must_ deal with
just to stay out of jail!
2) Anything dealing with money and internet is HARD. This is like complaining
that you tried to be a veterinarian on the side and some animals died. There
is a minimum amount of knowledge you need just to start. You presently don't
know what you don't know in this space. Its dangerous.
Sorry for the downer, but pick a different side project.
------
dminor
You were basically providing a cash advance, which is against the credit card
companies' TOS, so chalk it up as a lesson learned and move on.
I can pretty much guarantee that no one in law enforcement will do anything
about your situation. I work for an online retailer and we've been down that
road. Everyone will mumble something about jurisdiction and hang up on you.
------
eli
If you're looking for legal advice, you absolutely must ask a lawyer. Most
good lawyers will give you an initial consultation for free.
If you're looking for business advice, I don't think there's any practical or
safe way to run a business that allows people to charge a credit card and
return cash to a bank account. If that's necessary for the functioning of your
site, you may need to rethink your site.
------
KhalPanda
What makes you think the bank account's details you have that the (presumably)
stolen funds were sent to are those of the actual criminal? It could very
easily (and extremely likely) be an account opened under a stolen identity.
I'm afraid it's likely you're going to have to put this one down to
experience... You haven't gone into specifics, but your side project sounds
like a money-launderer's dream.
~~~
pbhjpbhj
> _What makes you think the bank account's details you have that the
> (presumably) stolen funds were sent to are those of the actual criminal?_ //
Did he say that? I thought he was just saying as he had the account number
then the bank could easily stop that money; the implication being that someone
trying to retrieve the money could be traced.
~~~
KhalPanda
Maybe you're right (that that is what he meant)... but all it takes is the
criminal to withdraw cash (or have someone do it for him) and that money is
long gone.
I was more getting at the fact that the money is probably not retrievable.
------
bluedino
>> 2000 was basically the year of fraud, where we were just losing more and
more money every month. At one point we were losing over $10 million per month
in fraud. It was crazy.
—Max Levchin, founder of PayPal
~~~
hcentelles
Where did this quote come from?
~~~
maxmcd
[http://www.foundersatwork.com/](http://www.foundersatwork.com/)
~~~
wpietri
A book that any founder should read. It's a great set of interviews with
founders telling relatively unsanitized versions of their startup stories. It
serves as a great antidote to the business press's "all winners are perfect
geniuses" school of reporting.
------
blakerson
You were running a money transmitter, and once you learn the regulations and
liabilities that come attached to that you'll be glad you shut it down before
the gap widened any further.
------
beat
My spouse works as a BA/project manager for a large e-commerce player. The
efforts they go to in order to handle fraud are crazy. Fraud management is an
_entire department_ in any e-commerce organization. They're fighting not
simple scammers, but international organized crime syndicates.
My not-a-lawyer advice? Drop your "side project" as fast as you possibly can,
before it destroys you.
------
eam
I actually even called the destination bank fraud department which is where
the checking account resides. They seem to not care. I called them 2 days
after the transactions happened and asked if they could reverse the transactions,
though the agent that I spoke with said he would work on it and call me back.
He never called me back, so I called him back and he said he still has to
get to it, and told me to have my payment processing company call him. My
payment processing company has tried to call the bank agent for 2 days to no
avail. I even tried to call him and many times I was sent to voicemail. It has
been 11 days and I haven't heard back.
~~~
mtamizi
> My payment processing company has tried to call the Ally Bank agent for 2
> days to no avail.
Ally isn't going to help you in this case. Ally doesn't know you, and you're
asking them to give you money from one of their customers.
Who is your payment processor? You can issue an ACH reversal. You would get
your money back _if_ the money is still in the recipient's bank account.
It's worth a try since they may not be expecting you to reverse the
transaction and will still have money in the account.
------
scarmig
Someone will say, "use bitcoin instead!" So follow the directions here to help
your situation:
1) Set up an exchange. 2) Wait for people to deposit >$2000 worth of bitcoin.
3) Run away.
Problem solved.
More seriously, I think you're more or less in a very unhappy place without
good options. Chalk it up to experience and consider yourself lucky that you
only lost $2k.
Though, a question for the legally-minded: if this project had been done in a
corporate structure, could the poster just walk away from it and be insulated
from the loss?
~~~
aioprisan
As long as you're incorporated, you're personally shielded from incurring
those loses yourself or anyone going after you for those losses, as long as
you didn't personally guarantee those accounts (i.e. AMEX business cards are
guaranteed with your personal SSN vs company EIN).
~~~
eli
No offense, but that sounds like terrible advice. Please consult a lawyer or
accountant with questions, but corporations do not magically and universally
shield your side business from incurring debts you have to pay. (And your
business credit cards would almost certainly be personally guaranteed -- who
would give a credit card to a business with no credit history?)
~~~
aioprisan
Again, I should have prefaced this with stating that I am not a lawyer and do
not provide legal advice. With a DUNS number, you can open business cards if
you have an established history of paying your suppliers and can show sales to
other companies.
> And your business credit cards would almost certainly be personally
> guaranteed -- who would give a credit card to a business with no credit
> history?
Not true. While it is easier to get a business credit card if you personally
guarantee it from day 1, you can get one using you business identification
information. You can get Citi business cards with a DUNS and EIN number.
[https://www.citicards.com/cards/wv/html/cm/business/know-
the...](https://www.citicards.com/cards/wv/html/cm/business/know-the-
rules/business-credit.html) You can also get corporate AMEX cards once your
business has $10M in revenue a year. Employee cards only require a SSN to
verify identity, not to guarantee them (the regular, business amex cads,
however, do).
------
dragonwriter
Credit Card companies basically tell merchants (in their merchant guides) not
to (1) deposit funds from CC transactions in any account but their own, or (2)
allow CC users to extract cash or the equivalent from CCs as by cash refunds,
and highlight that these things are wide open gates for fraud, money
laundering, and high chargeback rates. [1]
This sounds like a grossly irresponsible "side project".
[1] example: See "Laundering" on p. 11, "No Cash Refunds" on p. 13 of
[https://usa.visa.com/download/merchants/card-acceptance-
guid...](https://usa.visa.com/download/merchants/card-acceptance-guidelines-
for-visa-merchants.pdf)
------
genericresponse
You lost $2000 in stolen goods. Someone defrauded you by knowingly using fake
cards. Your police department should see you as a victim as well. If they
don't you might want to think about talking to a lawyer to get things moving.
Actually- just go talk to a lawyer about getting the wheels of justice moving
for you.
------
daseong
I am not a lawyer, this is no legal advice. You have to be careful. Depending
on your country's laws you might have been running a financial service. These
services usually require you to register, fulfill tons of requirements (at the
least hold enough reserves) etc. Offering a financial service without
registration might get you in a lot of trouble. The only course of action you
have is to try to reverse the transactions to the checking accounts. This will
largely depend on your provider.
Talk to a lawyer. Make sure you haven't been running a financial service.
------
kapnobatairza
I know this is not what you want to hear right now but this is where the
importance of KYC requirements for any company dealing with financial
transactions comes in. I imagine you made a trade-off between providing a
frictionless service and best practice, but that's a trade-off you need to pay
for eventually.
EDIT: I would also like to add that typically those who dabble in credit card
fraud are sophisticated enough NOT to link their own bank details to the
cards. What they will do is either buy some unknowing person's account for a
few hundred dollars or steal details of an otherwise inactive account. Then
all they have to do is use any ATM to withdraw the money, and it can be nearly
impossible to catch the culprit without committing significant police
resources.
------
pktgen
IMO, you would probably be best speaking to an attorney. They may also be able
to get more cooperativeness from the FBI.
~~~
tdicola
Unfortunately at a normal rate of $300/hr or so you're going to rack up well
over $2000 in attorney fees.
~~~
daseong
IMHO he should still talk to an attorney.
If he provided financial services without a proper license, he might be in a
world of hurt.
------
larrydag
Card-Not-Present online commerce draws fraud and that is a reality that you
need to address. There are methods to mitigate the losses from fraud. You
could collect webserver, internet traffic data and credit card data to filter
your signups to prevent this happening in the future. One such company that
could help is siftscience.com.
~~~
larrydag
I'm curious to those that downvote how they would address online fraud. It is
a real problem with online commerce.
~~~
aioprisan
You can request strict full address validation and request that charges fail
on CVC mismatch. On the cashout side, you can use a system like
[http://www.idology.com/](http://www.idology.com/) for identity verification,
which can be either as complete or as superficial as you want it to be (think
credit card application level verification, with questions about past
employers, loans and monthly payment amounts). If this person has all the
information to steal your customer's identity, then you can't really defend
yourself against that scenario and that customer likely has to deal with
larger identity theft issues.
------
LeBlanc
I would highly recommend that you contact the banks for whatever accounts the
money went to. If you are able to prove fraud, you may be able to work with
them to freeze the accounts and then recover enough funds to cover the
chargebacks. You can use the routing numbers to figure out which banks to talk
to.
When I was at WePay, we used this to help recover fraud losses. It's not 100%
effective (because often the account has already been drained/closed), but
it's better than nothing.
In the future, I would also recommend using a PSP like WePay, Stripe, or
PayPal that will handle KYC and fraud detection for you.
[https://www.wepay.com/api/payments-101/preventing-losses-
fig...](https://www.wepay.com/api/payments-101/preventing-losses-fighting-
fraud)
------
iddav
I've lost 2 merchant accounts in the past due to a high chargeback rate
involved with selling web hosting online.
Most chargebacks are a result of orders from people with stolen credit cards,
usually from international IPs. To mitigate this, I ended up using:
1\. A service called MaxMind, which includes automated phone verification
(e.g., ensuring the person owns a phone number in the a area code matching the
credit card zip code).
2\. Using payment providers like PayPal or 2CO since they have their own
built-in fraud prevention systems.
Of course, this does not prevent chargebacks for non-fraudulent reasons (e.g.,
unsatisfied customers). For large orders, you may need to get the customer's
signature on a credit card authorization form, to enable you to win the
chargebacks if they occur.
------
yardie
1\. Consider this a very expensive lesson for you. Loss prevention isn't easy.
It's why I stopped using Ebay and do local direct (CL, gumtree, leboncoin,
etc.) sale.
2\. FBI cybercrimes division will eventually want to hear from you but the
fraud was small potatoes compared to what they are up against. Your local PD
is right, this is out of their league. Most likely this is across county,
state, and international borders.
------
raverbashing
And that's why you _don't_ consider money paid with a CC an immediate part of
the balance.
Unless you can swallow the loss.
As an example, some airlines require that you present the Credit Card used in
the purchase upon check-in.
------
uptown
Just curious - who'd you take the transaction costs from? The sender?
~~~
eam
Both, from sender and receiver.
Building an HTML5 WebGL game with GLGE, Part 1 - statico
http://statico.github.com/webgl-glge-game-part-1.html
======
statico
The game itself is here: <http://statico.github.com/webgl-demos/ducks/>
What Programming Languages Engineers and Employers Love and Hate - rbanffy
https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/what-programming-languages-engineers-and-employers-loveand-hate
======
JohnFen
Apparently I'm an unusual engineer, because I dislike Python.
~~~
tincholio
You're not alone, brother
------
temporallobe
It’s funny that Clojure is not even on this list. Is it completely off
employers’ radars? I have Clojure experience but I am finding that libraries
and projects are getting abandoned by the dev community, so maybe nobody even
cares about it any more?
~~~
blain_the_train
What's a lib or feature are you missing in the language?
~~~
temporallobe
Sorry for the late response. I didn’t mean that I missed a lib or feature.
It’s just that many of the libs I see on Github are sometimes several years
old and have been abandoned. Clojure seems to be incredibly feature-rich,
but I find very few devs who have even heard of it (I often have to reference
Lisp for context).
------
bencollier49
Seems odd that the global rankings are so out of step with local rankings.
Given the local demand, I would have expected to see something like Typescript
at the top. Scala looks like it's in completely the wrong position.
But what does jump out is exactly what this is measuring - it's languages
listed on the CVs of people who got interviews. I'd wager that Scala and Ruby
are up there as they indicate a level of experience, Ruby because it's been
around a while, and Scala because it has a slightly steeper learning curve and
generally isn't a "first language". That's probably also true of Go.
------
aboutruby
Has anyone successfully downloaded the report from
[https://hired.com/page/state-of-software-engineers/key-
takea...](https://hired.com/page/state-of-software-engineers/key-takeaways) ?
edit: Got it: [http://pages.hired.email/rs/289-SIY-439/images/2019-State-
of...](http://pages.hired.email/rs/289-SIY-439/images/2019-State-of-
SoftwareEngineers-Report.pdf)
------
drallison
Given the methodology used and the lack of supporting data, I do not believe
that any weight should be given to the results, interesting though they may
be.
------
expertentipp
> programming languages
> HTML
I argued with people over this.
------
nigwil_
No Rust either.
What Do Investors Look for in a Game Developer? - rocky1138
http://gamasutra.com/view/feature/182962/what_do_investors_look_for_in_a_.php
======
sami36
if you're a decent game developer, in this age of digital distribution
(Steam), you should forgo the whole venture thing & look at Kickstarter.
Not only do you get to keep all your equity, you can crowdsource ideas &
feature requests, fine-tune your marketing strategy & build a community of
loyal gamers who will sustain you through the ups & downs of the dev lifecycle,
all for a measly 5% commission.
As long as you have good communication skills, the right attitude & talent,
you'll find unrivaled rewards.
~~~
jakozaur
Well, Kickstarter might be a bit rough if you don't have loyal fans already.
However, there are multiple marketplaces where you could start (iOS, Android,
Steam...).
However, still keep in mind that if you don't absolutely love creating games,
there are a lot of other hard problems to be solved as a programmer. The game
market tends to be oversaturated, and I know people who earn half of what they
could at a non-gaming company.
Average product lifespan of Google products before it kills them - walterbell
https://gcemetery.co/google-product-lifespan/
======
jedberg
I noticed in the "recent deaths" section, YouTube for 3DS. This made me laugh,
because at Netflix, supporting the 3DS was always a big pain. Regardless,
Netflix still supports it to this day.
In fact, as far as I know, Netflix has only ever killed support for one
platform -- The PS2. And that was only because there were only about 10 people
left using it.
So we sent them all Rokus and told them we're discontinuing PS2 support.
~~~
sct202
Netflix is the last app that works on my Sony Google TV from 2011. The YouTube
app shut off after like 2015, but 8 years later Netflix is still trucking.
~~~
dmurray
Why don't these devices run Android? Pretty sure you can run YouTube on a 2011
Android release. That's not important to Sony, of course, but even for new
hardware it seems easier to make it run Android and shovel your bloatware on
it than to develop your own half-assed OS for each TV model.
~~~
sorenjan
Youtube's required Android version is listed as varying depending on device on
Google Play, but on apkmirror.com it says that the minimum version is Android
4.4+. That was released in 2013.
[https://www.apkmirror.com/apk/google-
inc/youtube/youtube-14-...](https://www.apkmirror.com/apk/google-
inc/youtube/youtube-14-43-55-release/)
~~~
londons_explore
That's for the latest version of YouTube, but I believe an old version of
YouTube will still work fine with today's servers, at least for basic video
watching (commenting etc. broke in the big Google plus debacle)
------
codyogden
Okay, so the page/statement is inaccurate, and not just because it's been
spammed to HN three times before today (seven months ago).
Since I run Killed by Google, I feel it's important to clarify that the
average Google product lifespan is much longer than four years thanks to
flagship products like Gmail, Maps, Docs, etc. I can't actually get a solid
number despite a lovely spreadsheet of products that I actively track because
I haven't even compiled them all yet. Anyway, I made a choice not to draw
stats in killedbygoogle.com for a reason, and it's because they're misleading
to most people. When I'm asked about it, I say, "The products listed lasted an
average of about four years." KBG is a cynical view of Google's product strategy
by any measure, but at least it's not drawing empty and misleading
conclusions.
------
ThatPlayer
Like always, I have issues with these lists. Project Ara is there despite
never actually launching a product. Or Google Glass is on there despite a new
Google Glass hardware that was released this year.
They also list Google Sky Map ( [https://gcemetery.co/google-sky-
map/](https://gcemetery.co/google-sky-map/) ) which they call discontinued
because Google released it as open source and handed over the reins to another
group of developers, but looking at the github it still gets updates too, so I
can see this one being either way. [https://github.com/sky-map-
team/stardroid](https://github.com/sky-map-team/stardroid)
~~~
joshuamorton
About a quarter of the projects in the graveyard had replacements, and often
automated transfers to the new tools (writely, a bunch of analytics tools,
songza, etc.)
Another quarter were explicitly experimental (labs, glass, ara, etc.).
And another handful are products that no longer make sense in the modern world
(google desktop, various toolbars, browser sync, the gmail notifier, etc.)
Hell, one of them is apparently the webconferencing software google employees
used internally in like 2012, it wasn't a product.
~~~
anoncake
> Another quarter were explicitly experimental (labs, glass, ara, etc.).
How many years was Gmail in "beta"?
------
cookie_monsta
Not a Google fanboy by any means, but couldn't this be read with the opposite
conclusion as well - that the average time that Google keeps marginal products
alive is 4 years?
Of course it's possible that everyone has some personal favourites amongst
those 164, but if an idea really has legs what's to stop people from exhuming
it from the Google graveyard?
~~~
underwater
It's probably more like a year with actual support and updates, two years
where it's left to wither and slowly die, and then a year of notice before
sunsetting it.
~~~
Pxtl
Yeah. By that logic MS has eternal support for everything, because MS has a
legendary hardcore approach to backwards compatibility that they will release
a product, push it hard, and then abandon development on it... but still have
it "officially supported" for a decade while it bitrots into a nightmare...
but never tell their users "stop using this, it's deprecated".
As much as I hate Google sunsetting products, I hate even more the companies
that keep their zombies shuffling along in a state of undeath forever.
~~~
wvenable
That makes no sense. Nobody is forcing you to use an abandoned product. So
you're obviously better off with the Microsoft model than the Google one.
A good example is VB6, the runtime is still supported in Windows 10 -- and
necessary for one of our largest vendor products to run -- yet the VB6 IDE only
runs in XP. Our business would have been really screwed if they just dropped
support for it. We are, of course, working to move to a different product and
have the luxury of time to do that.
------
heavymark
I imagine Google views this as a positive thing, trying a million ideas, so
that hopefully it increases the chances of having at least one good and
popular idea. The downside is, a lot of the products they killed were good
ideas, but it's sink or swim: they don't give the products the needed love and
attention, so they don't take off, and thus they get killed. And now even when
releasing a good idea or good piece of hardware people will always be worried
if it will be gone tomorrow, which leads to less adoption and more axing of
products.
~~~
jsight
Maybe, but some are exceptionally bad. Why did Allo exist at all for example?
And the merging, demerging involved with Messages, Hangouts, and Voice have
been a complete mess. There's more there than just trying some ideas, as some
of the ideas actively damaged older ideas.
~~~
sct202
I don't understand why they announced that Google Hangouts (classic) is
closing to move to Google Hangouts Chat. It's a chat app, so am I even going to
notice a difference? But now you've just made me think that the whole service is
shutting down.
~~~
journalctl
This is what happens when companies are disorganized and don’t communicate
internally.
------
jedberg
I wonder what this is the _average_ of? Is it the average of just things that
were killed, or the average of all products, including living ones like
Search? And is it really average or is it median?
> based on discontinued products listed on our website
This makes me think it is only dead products, which seems like it would skew
things a bit.
If anything this tells me it takes them a long time to make a decision to kill
something.
~~~
steren
Exactly. This website is either making a very basic mistake or is just plain
misleading.
~~~
codyogden
This is part of the reason why I decided not to include stats at
killedbygoogle.com. When you include flagship products like Gmail,
Maps, Drive, Docs, etc., the 'average lifespan of a Google product' rises
significantly.
------
jVinc
While some might be seeing 212 potential youtube scale success stories that
google killed "for no apparent reason", I'm seeing 212 potential wework scale
disasters that google avoided by killing off would-be pitfalls and focusing
on their core. The truth is somewhere in between, but obviously, with google
being what it is today, none of these cases chipped away anything from their
success.
------
privateSFacct
Meanwhile on AWS I was using SimpleDB until recently on a small project - I
make AWS _NO_ money - but they seem to still support SimpleDB even though it
is not actually marketed? It's 12 years old and can't be generating a lot of
new signups because, as far as I can tell, it doesn't actually show up anywhere.
Does anyone know how AWS handles deprecating items? I've yet to be
bitten despite being a long time user (S3 still going / SimpleDB was going
last time I needed it etc).
~~~
kedean
As I've understood it, AWS keeps things alive as long as someone is using it.
Once they want to get rid of it, they take it off the catalog of items you can
provision, and start encouraging users to move to its successor, and once
it's abandoned fully they get rid of it. That's how they've always done old
ec2 instance types, for example.
------
Waterluvian
It surprises people that Google kills products because people see them as
traditional products. But they're exploratory vessels for selling ads. Of
course Google's ads are the product.
It's unsurprising, to me at least, that they explore lots and lots of ways to
sell ads. That's exactly what you do with whatever widget your company
peddles.
------
asdfasgasdgasdg
Average lifespan of products that have been killed, when they are killed. The
actual average lifespan is probably much longer.
------
ortusdux
As I understand it, a successful product launch is a strong addition to a
"promo packet". What we are seeing is the inevitable result of a system that
prioritizes new products above all else. If google wanted to combat the stigma
that they kill 3 out of every 4 products they launch, they could just start
rewarding promo packets that demonstrate maintenance and growth. I still have
Reader bookmarked as a reminder to not get too invested. Show people that they
are committed to the longevity of their products and you might just get more
early adopters for your next big launch.
------
com2kid
I'm upset about Google Trips.
Trips was one of the few examples of how letting Google know everything about
you lead to a great experience. It'll automatically figure out my time tables,
hotel reservations, and flight times. The ability to download city info and
store it offline was wonderful, and its recommended itineraries, while often
silly in their listed time tables, were a great jumping off point.
Nothing else _can_ exist that does the same job, because only Google has
access to literally everything about you.
~~~
Florin_Andrei
I never heard of it, and I'm one of those people who would definitely use it.
------
steren
Nobody is pointing out the massive flaw in the headline? This count does not
consider products that have not been killed (e.g. Gmail)
It's only the average lifespan of killed products.
~~~
gwern
If you'd like a formal survival analysis which takes into account that right-
censoring, I did one a while ago at [https://www.gwern.net/Google-
shutdowns](https://www.gwern.net/Google-shutdowns)
------
skunkworker
RIP Google Inbox. I still think it was the best mobile email client they've
made.
~~~
papito
Hear hear!
------
dang
[https://news.ycombinator.com/item?id=18509735](https://news.ycombinator.com/item?id=18509735)
------
jrochkind1
"The average lifespan of a discontinued Google product is 4 years and 1
month."
"The average lifespan of a _DISCONTINUED_ Google product..."
Note this does not mean the average lifespan of a Google product is 4 years.
These numbers do not include products which have NOT been killed.
------
at_a_remove
"... they might develop their own emotional responses. You know: hate, love,
fear, anger, envy. So they built in a fail-safe device."
"Which is what?"
"Four-year life span." \-- _Blade Runner_
Here, though, the emotional response is our attachment to a Google service.
------
wolco
Most of these could have been a success but google has no idea how to connect
with customers. An idea and a product is what we have. If customers accidentally
start using the product and it becomes a success it stays, but if it doesn't
meet some corporate goal it gets killed. See g+: first they pushed it everywhere,
making everyone hate it; then, the moment groups of people started using it,
google saw it would never reach facebook and killed it. If they had left it
alone, it could have pivoted.
I don't know what many of these products are either. Cloud VR cloud, but it
doesn't sound like something I would shut down; maybe sell it, maybe rethink it.
------
mroche
Reminds me of the Autodesk Graveyard:
[https://www.cadnauseam.com/autodesk-
graveyard/](https://www.cadnauseam.com/autodesk-graveyard/)
------
notadoc
I'm still disappointed they killed off Google Reader, it was an excellent
product as it was, and it had so much potential to be more.
------
bduerst
A better metric would be rate of death, not total deaths. For example, how
many products does Google have now vs 2004 Google?
~~~
jobigoud
Not only a better metric, but the only way to actually compute life
expectancy. We need the mortality rate for each year, then we can know the
expected remaining time for any particular "age".
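For what it's worth, a rough sketch of how that computation is usually done
(this is the generic Kaplan-Meier survival estimator, not anything taken from
the linked site): with t_i the ages at which products were killed, d_i the
number killed at age t_i, and n_i the number of products that reached age t_i
without yet being killed,

    \hat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right)

Averaging only the killed products' ages ignores n_i entirely, which is why it
biases the lifespan estimate low.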
------
egfx
I interviewed at YouTube and I kid you not, the interviewer had no idea what
YouTube Leanback was. As wrong as it sounds, I had to school her, a team lead
at YouTube on their own product. YouTube Leanback is awesome and will be
missed. I integrated it in a chrome extension I made called YouTube Share
Enhancer.
------
JohnJamesRambo
Does no one at Google HQ ever have the guts to raise their hand and bring this
up? It's a meme by now.
~~~
ethbro
There have been some comments on here about how career advancement
optimization at Google involves hopping between projects, staying long enough
to launch, and then moving on.
Whereas doing good work on product maintenance or evolution is much less
rewarding.
True or not, it would explain a lot about how Google (as a whole) behaves
towards products.
~~~
dogprez
That's not a Google only problem. It's a cultural problem in Silicon Valley
(e.g. the drive to disrupt) and America (e.g. not respecting stay-at-home parents).
See the artwork of Mierle Laderman Ukeles:
[https://en.wikipedia.org/wiki/Mierle_Laderman_Ukeles](https://en.wikipedia.org/wiki/Mierle_Laderman_Ukeles)
------
ekianjo
I did not know that Chromecast audio was discontinued. That was pretty fast.
------
meesterdude
my theory:
[https://www.youtube.com/watch?v=H62sHBHq3pc](https://www.youtube.com/watch?v=H62sHBHq3pc)
~~~
james_s_tayler
Nice. So this is effectively what is keeping skynet at bay.
------
burke_holland
Inbox still hurts
Are You Lightest in the Morning? [video] - yincrash
https://www.youtube.com/watch?v=lL2e0rWvjKI
======
yincrash
Essentially a video form of
[https://news.ycombinator.com/item?id=9416062](https://news.ycombinator.com/item?id=9416062)
where the host investigates what the public thinks as well as animation of the
actual mechanisms.
If you're going to do good science, release the computer code too - rglovejoy
http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release
======
lutorm
I am not surprised that people find errors in code written by researchers and
grad students who have little training in software development and, perhaps
more importantly, are doing so in a culture which values them writing papers,
not good code. (See for example <http://lanl.arxiv.org/abs/0903.3971> for a
discussion of this situation in astronomy/astrophysics.)
I find it much more surprising that professionally developed software used for
scientific research is also error ridden. And while it might be difficult to
convince individual researchers to release their code, that's nothing compared
to the difficulties of convincing Wolfram research to release the source code
to Mathematica...
But I do think that research is somewhat undeservedly singled out for this,
just _because_ some academic software is open for inspection. Like the article
mentions, it certainly seems like the financial software has caused a lot of
badness. How about flight control software used by NASA that crashed the Mars
orbiter? Who knows how many innocent lives have been lost due to software
errors in military systems like UAVs and missiles. Maybe none, but we can't
know because it's all secret. Shouldn't they be required to show their code,
too?
~~~
smallblacksun
But the military and NASA don't claim to be generating reproducible knowledge
through the use of their code. In particular, the military doesn't WANT other
people to be able to reproduce what their code does. Also, there is a
difference between operational code (code that runs a physical object like a
lander or a UAV) and analytical code. NASA makes some of their code available
here: <http://opensource.arc.nasa.gov>
~~~
lutorm
Cool, I didn't know about the NASA open source project.
You are right that knowledge production isn't the purpose of those other
entities, of course. However, in my mind the purpose is less important than
the outcome -- why is it more harmful to society if scientists produce a
flawed scientific result than if the military kills innocents or the financial
sector brings on a market crash because of flawed models? They all hurt
society and could all benefit from more scrutiny. I admit the military case is
a stretch, but certainly the financial sector seems like a relevant example.
------
jackfoxy
If science is to remain science, and not devolve into mysticism, data and
computer models must be available to other researchers in order to repeat
experiments and provide knowledgeable criticism. Calling anything "settled
science" which is not openly available to all researchers is not scientific.
~~~
kurtosis
I have no beef with open audits of published science that is used in decisions
of economic consequence.
But I would only add that sometimes you learn a lot more from trying to
reproduce a result without the code/schematics of the original experiment. If
you implement it yourself and get a different answer, you should publish it
and not bias yourself by paying too much attention to the original authors
interpretation. As long as you can justify your methods you should be fine.
Also, I feel that it's a lot more fun to design an experiment knowing that
it's possible than it is to merely copy someone else's published procedure. A
month in the lab spares you a day in the library!
------
regularfry
A sound idea.
While I can imagine any number of reasons people might post facto not wish to
release code, if it were developed from the start with the intention of
releasing it, I think we'd all benefit.
Inevitably, the cost of doing so would increase the cost of the research, but
I believe it would be worth it.
~~~
anamax
> Inevitably, the cost of doing so would increase the cost of the research,
> but I believe it would be worth it.
I'm not convinced that it would increase costs.
I'll bet that there's a lot of reinvented code in science. If every project
released their code, new projects would start reusing code from current
projects. In some cases, that sharing and reuse would reduce costs.
~~~
JunkDNA
I have seen code reinvention in my career a number of times. In one instance,
I was actually asked to code up a method where the code and method had been
published in a scientific journal. When I asked why I should implement this on
my own, instead of using code developed by the group who published the method,
I was told, "Because you can't trust anyone else's code. It's better to write
everything from scratch so you know it's _right_ ".
I don't personally have the hubris to think I can code up a method better than
the people who invented it in the first place. That aside, it's just so
wasteful.
So instead of spending time on the novel work _we_ were doing, I spent a month
implementing a half-baked version of something _other_ people had done.
~~~
btilly
As silly as the explanation was, there is actually a good reason to re-
implement. And that is that if nobody does, then any bug in the original code
will survive to cause problems with nobody knows how many results before
anyone catches the bug.
Reimplementing from scratch then comparing with the original gives an
opportunity to find such bugs.
~~~
barrkel
Yes, but such arguments apply at different levels of abstraction.
I doubt one would rewrite the OS, compiler or runtime libraries because they
couldn't be trusted; though all these can also have bugs.
~~~
btilly
One would probably not rewrite them. However people both can and do take their
software and run it on a different operating system, compiled with a different
compiler, linked with different run-time libraries, on a different type of
hardware. And yes, I've seen bad software assumptions flushed out by doing so.
(Don't use floating point for complex financial calculations please. OK??)
------
Lewisham
It's surprising how few Computer Science papers release code as well. I don't
care if it's platform-specific and it requires ridiculous numbers of obscure
libraries and only operates on proprietary data that you can't release. I
don't care, I want the code to be open-source. I want to see what you did, and
whether I believe that it does what you claim it does in the paper.
Where possible, I open-source everything I try to be published. There's only
one project I haven't (a scraper for the WoW Armory), but even then I released
the library I built for it.
There's no excuse to not do so. Unless you have something to hide.
~~~
lutorm
_There's no excuse to not do so. Unless you have something to hide._
Not true, for the same reason that commercial ventures don't like to release
source code even if they don't have something to hide.
Having a capable computer code can be a substantial competitive advantage and
make it possible to do studies no one else can. While this is less than
desirable from the standpoint of science, it's perfectly understandable given
the career pressures that individual scientists operate under.
~~~
j_baker
This creates a conflict of interests though. Is the research legit or has it
been "enhanced" to help a business venture the researcher has in the works?
~~~
lutorm
Oh, for sure. But I wasn't even talking about any business ventures (those are
rare in astrophysics...) but more about keeping your code under wraps to
prevent others from benefiting from your hard work. Especially when (as I
said in another post) code development is not especially beneficial for your
career.
Though it's hard to find a situation where people don't have a (short-term)
incentive to make their work _look_ good. One can hope it will catch up with
them in the long run, but more likely by then they have a new job (and, in
academics, tenure) that will never hear about their past shoddy work.
~~~
btilly
The solution is to make peer reviewed code produced for a paper be considered
equivalent to a paper in tenure decisions. And for all papers in peer reviewed
journals that do computer analysis to be backed up by peer reviewed, published
code.
That makes code development beneficial for your career, gives an incentive to
not keep it under wraps, improves quality, and is likely to reduce the number
of published incorrect results.
Of course that is a pipe dream at this point, but what's wrong with dreaming?
------
maurycy
Finally. Finally a discussion about this.
~~~
timr
Enough with the false melodrama, please. Aside from the fact that your comment
is content-free and inane, scientists have been discussing this subject since
computer simulation first became a part of science. A lot of scientists _do_
share their code (I'm one of them, and I believe in sharing code). But there
are good arguments on the other side. Among them:
1) Papers describe methods in enough detail to reproduce them. If they don't,
there's a _serious_ problem.
2) Independent lines of verification. If simulation code becomes a reference,
it's inevitable that the same bugs/bad assumptions will contaminate an entire
field. Independent re-implementation of the same algorithms is a strong hedge
against this phenomenon (even if it means that there are more bugs overall).
3) Money. A lot of scientists fund their research in part through licensing of
implementations of their algorithms. I don't like it, but until someone gets
around to repealing Bayh-Dole (a _real_ scientific travesty, IMO), this is
going to continue to be a problem.
In short, what you really meant to say was that finally someone wrote a
_newspaper article_ about this subject. It's not a new discussion.
~~~
DaniFong
Closed academic publishing is intellectually bankrupt, and is probably one of
the greatest problems affecting research today. People don't share code, and
put a paywall between themselves and the public. There are open journals, but
they are rarely as prestigious, and so are not as valuable to those seeking
tenure. These academics put tenure before fruitful scientific discussion.
~~~
lutorm
So would you rather people publish in "low-impact" journals and then leave
science completely because they can't get a permanent job?
"Intellectually bankrupt" is a pretty strong term to use for people who work
for a small fraction of the amount of money normally talked about on this
site.
I'm not saying there aren't issues, but blaming the individuals who are trying
to make a living by doing science isn't going to help. The success rate of
getting permanent jobs in science might be higher than that of startups, but
the "payoff" is a small fraction.
~~~
DaniFong
I have not left science completely: I've made my own job. It is possible but
it is only made harder because of the closed system.
There are many of us who've left academia and still do science. We're
generally maligned, and removed from the ability to even participate in a
discussion due to a variety of academic access restrictions, and why?
What's more, day by day people are showing how to achieve scientific
credibility and influence through their blogs and paper hosting services like
ArXiv or, as Michael Nielsen points out, open journals like PLoS Biology. The
majority of scientists still bow to tenure pressure, and frankly I don't
understand why. There are other opportunities if you want to gain status, and
one doesn't even have to gain traditional academic status if one wants to do
real science. There are other options.
~~~
lutorm
Which academic access restrictions are you talking about? I know people who
have started independent "institutes" but the only reason you need to do so is
to receive federal funding. It's true that if you brand yourself as an
"independent researcher", people might be inclined to think you are a
crackpot, but publishing real papers should take care of that.
I'm not sure blogs are a relevant source for scientific studies though. Not
necessarily because I think peer review is the greatest system, but having
your paper published in an actual journal (open journals are fine) at least
means you managed to convince a few other people that it's worth looking at
the paper.
------
merraksh
There are a few examples of how this can be done. One of them is Mathematical
Programming Computation (MPC), a journal where articles submitted must be
accompanied by the source code that was used to produce the results. The
article is peer-reviewed, and the code submitted is tested by "technical
editors" to verify that the results are correct. See <http://mpc.zib.de>
------
moron4hire
Opening the source for research software is absolutely vital to the concept of
reproducibility. However, the level of programming training most scientists
receive is a major issue. A lot of novice programmers tend to fall
into a trap of "it runs without error, it must be right." Even expert
programmers struggle with verifying that their results are correct;
technically, program verification is a mathematical impossibility. So it's a
daunting task to start with, reproducing results of software-based research.
This is only compounded by the fact that reading source code sucks. Source
code is an end result of multiple processes that occur in feedback loops. With
just the source code, you never see _how_ the code got that way. It's like
showing someone a maze with the start and end points marked but the middle of
the map blocked out.
Different programmer's conceptions of what constitutes good code varies
widely. One man's golden code is another's garbage. Just because the source
code is available doesn't mean anyone is going to understand it or be able to
work with it effectively.
Compounding this all is the fact that few people are going to _want_ to read
the source code. Analyzing source code is dull work, maybe the worst job a
programmer can take while still doing programming. Most programmers are far
happier to discard old code and start from scratch. This is often a bad idea
and doesn't lead to a better product, but at least you don't want to kill
yourself while you're doing it.
When it comes to reproducing algorithmic results, I would prefer having a
description of the algorithm, a set of inputs, and a set of outputs. I would
then write the actual code myself and see if I get the same results. This, I
think, is much closer to the concept of reproducing lab results in the
physical sciences. You wouldn't use the same exact particle accelerators if
you were verifying the results from a paper on nuclear physics. I'm afraid
having access to the raw source code will be used as a crutch where logic
errors are missed from reusing portions of code without much thought about the
consequences. Take, for instance, the subtle differences in implementations of
the modulo operator across programming languages:
<http://en.wikipedia.org/wiki/Modulo_operator#Common_pitfalls>
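As a quick illustration of that pitfall -- a minimal, self-contained C program;
the Python comparison in the comments comes from the language definitions, not
from the linked page:

    #include <stdio.h>

    int main(void) {
        /* C99 truncates integer division toward zero, so the sign of a % b
           follows the dividend. Python floors instead, so its result follows
           the divisor. The "same" expression silently gives different answers. */
        printf("%d\n", -7 % 3);   /* -1 in C; the same expression is 2 in Python */
        printf("%d\n",  7 % -3);  /*  1 in C; -2 in Python */
        return 0;
    }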
It would be great if scientific software were open. Unfortunately, it won't
matter a lick if it is.
------
jgrahamc
Yes, tell me about it: [http://www.jgc.org/blog/2010/02/something-odd-in-
crutem3-sta...](http://www.jgc.org/blog/2010/02/something-odd-in-
crutem3-station-errors.html)
------
eshi
I might be alone in this, but this seems like a symptom of the problems of IP
laws.
~~~
artsrc
One problem with IP laws is that to fully enforce them you need a police
state.
I don't know precisely what you are thinking, but my view is that the IP
framework should be: For a published work to be eligible for copyright, source
code must be published. Something like a cross between github and the library
of congress.
Publishing source code does not currently relinquish all rights. This would
add greatly to our societies store of knowledge and would help prevent the IP
theft in the code of published works.
~~~
eshi
This is sort of what I was getting at. I agree that releasing source code
shouldn't be a matter of giving up property rights. In fact, plenty of
commercial systems and software do allow source code access. However, it
always seems to be through messy licenses and cumbersome legal agreements to
not divulge anything.
As it stands, companies seem more motivated to protect their IP rights than to
produce tools that would keep science reliable. IMHO, companies view source
code as the product of their investments and secrets worth protecting. The
main fear seems to be that if these secrets are published, competitors could
use them against them by getting a boost in their own R&D efforts, deriving
methods and processes from the published work.
This doesn't seem like just a software problem since I've heard wetware horror
stories from biotech and agriculture folks.
It honestly makes me wonder if software should be something you can patent. At
some level, it seems disturbingly similar to companies that patent colors,
genes, or derived living organisms.
------
albertcardona
The title contains the reason why we created Fiji (<http://pacific.mpi-
cbg.de>): so that instead of releasing a Matlab script without documentation
on its many parameters and exact Matlab version used, as a print out (or
nowadays, downloadable .m file as supplementary material), we could offer
instead a ready-downloadable, version-controlled and fully working program.
A colleague of mine made similar remarks recently:
"... if you can’t see the code of a piece of ... software, then you cannot say
what the software really does, and this is not scientific."
ClojureC, a compiler for Clojure that targets C as a backend - terhechte
https://github.com/schani/clojurec
======
mullr
This is neat.
What worries me about this and similar efforts (like
<https://github.com/takeoutweight/clojure-scheme>) is that clojure's standard
library design assumes that the underlying runtime will be do some kind of
polymorphic method inlining.
For example: the sequence library is all defined in terms of ISeq, which
basically requires a "first" and "rest" to be defined for the data structure
in question. These are polymorphic: there are different implementations of
these for different data structures (list, vector, map, etc). So a dispatch
step is required to choose the right one. In clojure-jvm, this is implemented
using a java interface; this means the jvm will inline calls to said methods
when they're being used in a tight loop. And if you use the standard library,
calls to 'first' and 'rest' are going to be inside nearly all of your inner
loops.
Compare this to a normal lisp or scheme: 'first' and 'rest' (or 'car' and
'cdr', whatever) are monomorphic. They only work on the linked-list data
structure. So compiling these directly down to C functions makes perfect sense
and incurs no performance penalty.
So in summary: clojure assumes there's a really smart JIT which is helping
things along. This means it's not as suitable for alternate compilation
targets as you might want it to be.
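To make that concrete, here's a minimal sketch of what a compiled-to-C
polymorphic call site tends to look like. The names and layout are invented for
illustration, not ClojureC's actual runtime:

    /* Illustrative only -- invented layout, not ClojureC's data structures. */
    typedef struct obj obj;

    typedef struct {
        obj *(*first)(obj *self);   /* the ISeq "methods" as function pointers */
        obj *(*rest)(obj *self);
    } seq_vtable;

    struct obj {
        const seq_vtable *vt;       /* per-type table picked at construction time */
        /* ...payload... */
    };

    /* Every call to first/rest becomes an indirect call through vt. A C
       compiler generally can't devirtualize this, whereas the JVM profiles
       the call site and inlines the hot implementation. */
    static inline obj *seq_first(obj *s) { return s->vt->first(s); }
    static inline obj *seq_rest (obj *s) { return s->vt->rest(s); }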
I wonder if there's something clever you could do here. Vtables could be
reordered based on expected usage, certainly. Clojure can already do some
measure of type inference, so this could be used for AOT inlining when it's
available. Even if it's not, perhaps several versions of a call could be
speculatively generated based on what the compiler _does_ know already. The
normal polymorphic inline caching technique could perhaps be abused to apply
here. But it's hard to see how any of this can work in absence of a profile or
heavy hinting.
(not a compiler writer, just interested in the problem)
~~~
vidarh
You can do polymorphic method caching/inlining with a hybrid ahead of time /
JIT compiler targeting C _reasonably_ easily. The code fragments required for
caching at least will be small, and code to generate them at runtime is not a
big deal. I'm playing with a Ruby compiler, and Ruby badly needs these types of
optimizations to get fast, so I've spent a fair amount of time looking at it.
For a fair amount of cases you can do static analysis to get good guesses at
likely types, even for cases where you can't be sure. E.g. speculatively even
looking near call sites by method _name_ to see if you can guess the type of
objects that will get passed in looks like it gets you a reasonable chance of
guessing the top contenders, letting you speculatively generate inlined
versions without creating too much junk. But to get the most performance out
of this you're likely to need to be prepared to do some very basic JIT.
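A rough sketch of the kind of fragment that means in practice -- a monomorphic
inline cache at one call site, with the type tag, cache layout, and
lookup_method helper all invented for the example:

    /* Hypothetical sketch: guard on the receiver's type tag, call the cached
       target directly, fall back to a full lookup and re-prime on a miss. */
    typedef struct { int type_tag; /* ... */ } obj;
    typedef obj *(*method_fn)(obj *self);

    extern method_fn lookup_method(int type_tag, const char *name);  /* slow path */

    typedef struct {
        int       cached_tag;   /* tag seen on the previous call from this site */
        method_fn cached_fn;    /* its resolved target */
    } call_site_cache;

    obj *send_first(call_site_cache *ic, obj *receiver) {
        if (receiver->type_tag == ic->cached_tag)      /* fast path */
            return ic->cached_fn(receiver);
        method_fn fn = lookup_method(receiver->type_tag, "first");
        ic->cached_tag = receiver->type_tag;           /* re-prime the cache */
        ic->cached_fn  = fn;
        return fn(receiver);
    }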
~~~
mullr
>E.g. speculatively even looking near call sites by method name
That is devious and fantastic.
> But to get the most performance out of this you're likely to need to be
> prepared to do some very basic JIT.
Yeah. But the attractive targets here are places where you can't have a JIT:
embedded systems and iOS.
~~~
vidarh
When you spend a few years speculating about what it would take to efficiently
compile Ruby as statically as possible (I love Ruby, but I hate moving parts),
devious becomes second nature...
The idea of speculatively looking at method names comes from testing that to
create vtables ahead of time for Ruby classes, to avoid hash tables in the
common-case.
As it turns out, most methods on most Ruby classes are the ones inherited from
Object or other standard classes, and the number of classes is usually fairly
constrained, so again speculatively looking at method names in the compile-
time available source and allocating sparse vtables for the most common names
results in relatively little waste.
And it reduces typical method lookup to a vtable lookup for common methods,
with expensive method dispatch becoming much more rare. There's the tradeoff
between theoretical horrible blowup in vtable waste from apps dynamically
adding tons of methods and tons of classes, with a unique vtable slot required
for each method name across all classes, vs. falling back to doing hash-table
lookups all the way up the inheritance chain for "unusual" method names once
you reach certain thresholds for waste.
You do incur the cost of propagating vtable changes down the inheritance tree
when methods are dynamically redefined in other places than leaves, but it is
fairly rare to see apps where this happens at a very high rate, and the number
of subclasses usually fairly small, so it is likely to be quite cheap. Doing
it that way is something I first saw in a technical report by (now) prof.
Michael Franz from '93 or '94 on "Protocol Extension" for Oberon.
You can probably also get some decent gains by adding heuristics to give
preference to names that appears to be used in loops when picking names for
the vtables to reduce the need of any JIT'ing.
~~~
lobster_johnson
Out of interest, are you actually working on something like this for MRI? As
you say, Ruby is in desperate need of optimization.
~~~
vidarh
No, I've been off-and-on, toying with writing a "as static as possible" Ruby
compiler (see <http://www.hokstad.com/compiler>) - it's been about two years
since I last posted an update, but I have one new part complete and another
one mostly complete. Just holding off posting for a bit longer because I want
to have a bit of a buffer (3-4 complete parts) before I get people's hopes of
regular posts up again...
What is there uses vtables exclusively - I effectively punted on the slow path
(and thus on adding methods at runtime) completely, but keep track of how much
of the vtable allocations is wasted space. If/when I get there, the goal is to
use various mechanisms like this to determine when to fall back on a slow
path, and couple both with polymorphic inline caching when suitable.
EDIT:
I don't see MRI as very interesting to work on, largely because interpreters
aren't much fun, and ironically given the amount of time I spend using Ruby, I
prefer compilers to be as static as possible. I also prefer my compilers to be
bootstrapped in their target language. Hence my "ideal" Ruby compiler would be
written in pure Ruby, do a ton of static analysis, with minimal fallback to
JIT when users user features that are too dynamic to analyse fully ahead of
time
E.g. there's a ton of annoying uses of eval() in Ruby code where a more
complete meta-programming API would make it trivial for a compiler to do full
ahead of time static analysis, so one big thing an AOT Ruby compiler really
need to do is to provide a library of compiler specific meta-programming
facilities with a fallback that uses eval() as needed, and either convince
people to use it, or provide monkey-patches for a number of popular projects.
Some of these uses don't even need eval() in the first place, but use it
just as a quick shortcut because it's simpler...
Just to make clear, I'm not sure when or even _if_ my compiler project will
ever get to a state where it's even remotely _useable_ to compile Ruby. I
started it out without even having decided to compile Ruby, mostly to write
about various parts of the process of writing a compiler that I find
interesting. I find compiling Ruby incredibly fascinating from a
theoretical point of view because of the complexity involved, but
unfortunately working on it takes a lot more time and effort than thinking
about the problems.
~~~
lobster_johnson
I see. Sounds like an interesting project.
Sure, MRI is boring, but it's the best implementation right now (though some
may argue that JRuby is better), and it's in desperate need of VM innovations.
Any new compiler/VM starting from scratch will be years away from being
available for use in production environments. By the time it's finished, we'll
all be using Go. (Sigh.)
~~~
vidarh
I think we need both. MRI can keep getting better, as can JRuby (which is an
amazing feat, but to me, running on top of the JVM makes it a non-starter) or
Rubinius, but they're fundamentally side-stepping the really hard problems.
E.g. nothing will stop MRI from having to interpret thousands of lines of code
each time because it can't draw a line between runtime and compile time, while
for an ahead of time compiler for Ruby, finding a pragmatic line between what
needs to be executed at runtime vs. compile time is essential (consider for
example the tendency to do stuff like getting the list of files in a directory
and require all of them in turn).
~~~
lobster_johnson
We do need both. But some of the things you mention (like constructing
vtables) could be applied to MRI's VM model without writing a compiler from
scratch. Frankly, I would much rather have large performance improvements now
than in 2-3 years.
(I agree about JRuby. I also wonder why Rubinius, which showed so much promise
at the beginning, has stagnated. Is it simply the lack of developers?)
~~~
vidarh
I agree the performance increase would be great, but I think it needs to come
gradually to MRI. E.g. trying to do anything fancy with the old AST-based
interpreter would've been pretty pointless. After YARV, it is probably
starting to get more attractive, but at the same time they've added method
caching which gives a decent amount of the benefits. A vtable will still get
faster, but it might not be the most immediately expedient way of speeding
things up vs. e.g. Sasadas latest project of adding a generational gc.
Regarding Rubinius, writing compilers for dynamic languages is hard. Most
textbooks you'll find cover techniques most suitable for statically typed
languages (the best resource I know for starting to catch up on compiling
dynamic languages is actually the Self papers). So you need more than an
unusual level of interest in writing compilers to be likely to try to tackle a
language like Ruby which is tricky even for dynamic languages (e.g. my
favorite pet problem to meditate on: What constitutes 'compile time' vs.
'runtime' for ahead of time compiled Ruby?), and even more to actually
persevere until you start getting proper results where you can get decent
results in _days_ with a simpler language.
It's made worse because of Ruby's _horrendous_ grammar. And I mean that from a
compiler writers perspective - as a developer I love to _use_ Ruby to a large
extent because the complexities of the grammar means it reads and writes
better 95% of the time. But MRI's bison based parser was 6k-7k lines with ugly
parser/lexer interplay last time I checked... There are full compilers
substantially smaller than that for other languages...
To me, these complexities are part of what makes it fascinating. I firmly
believe you can parse Ruby fully with a much, much simpler parser for example.
A lot of the ugliness can be abstracted away, and C parser code is rarely good
examples of succint code.
I _did_ start playing with MRI years ago, specifically the parser, actually,
and started chopping out redundant pieces, but got frustrated and bored with
it. That's part of the problem - it's one thing to play around with a toy
compiler like I've done, and another entirely to put in the effort to push a
major change to MRI through to production quality given the number of years of
accumulated history encapsulated in it. Doing the latter as a hobby is a
daunting task.
~~~
lobster_johnson
Just a note on Rubinius: The PyPy guys seem to have done pretty well at this.
I don't know how similar they are to Rubinius; PyPy reduces to RPython as an
initial step, whereas I believe Rubinius compiles to LLVM's IR.
------
saosebastiao
So if I wanted to write CLI applications in Clojure, is this my best bet?
Cause the JVM is about the least suitable platform I have ever worked with for
CLI apps...which is most of what I do. I'm constantly in this pickle of
wanting to use Clojure but defaulting to Ruby because the JVM is so terrible
at it.
~~~
bbq
If the startup slowness is your problem, look into Nailgun:
<http://www.martiansoftware.com/nailgun/>
If that's not your problem with the JVM, what is?
~~~
Rayne
Nailgun is not a good solution to the problem. If the JVM startup speed is a
problem, your best option is to use something not on the JVM and not use hacks
that make it feel faster.
~~~
bbq
That's not true. Yes, it's worth considering outside of the JVM if being on
the JVM means your application isn't interactive enough (startup speed doesn't
matter for long running, fire & forget tasks).
At the same time, splitting your application into a client/server architecture
is not a hack but an engineering decision. There are times when this decision
is natural e.g. Music Player Daemon (MPD)[1]. For most CLI applications,
there's no clear benefit (but the general approach has no clear downside
either - the code overhead of this approach can be brought very low).
Certainly, in a production application you would want to secure the messaging
channel (Nailgun doesn't).
[1] A music playing server: <http://www.musicpd.org/>. Some of the clients
happen to be command line:
<http://mpd.wikia.com/wiki/Clients#Command_Line_Clients>
------
jgalt212
As someone who doesn't use Clojure, but watches it fairly closely, I'd say the
least appealing part of Clojure is its reliance on the JVM.
As such, I'd say efforts such as these are greatly welcomed.
~~~
pjmlp
> As someone who doesn't use Clojure, but watches it fairly closely, I'd say
> the least appealing part of Clojure is its reliance on the JVM.
Like it or not, this is what made Clojure successful in the enterprise, at
least when compared against other Lisps.
~~~
lispm
Is that based on numbers or a guess? How many Clojure applications are there
in comparison to Lisp or Scheme?
~~~
pjmlp
Based on guess.
I see lots of Java shops now having Clojure code and talking about it at JUGs
and how it enables their business.
You just need to have a look at InfoQ, Skills That Matter, Devoxx or Jax for
ongoing talks.
------
billsix
For those interested in Lisp implementations which compile to C, providing
cross-platform benefits and a nice FFI, gambit-c and chicken are performant,
mature implementations of the Scheme programming language.
------
densh
Why C but not llvm? Structured code generation is always better than string-
based one.
~~~
pat_punnu
Can you show a proof for that claim?
I believe it to be nonsense.
~~~
bratsche
I'm not sure if this is what the previous poster was talking about, but clang
has some APIs that let you get access to the AST pretty easily. It's been a
really long time since I've looked at it, and at the time I don't think it was
exposed as a library API for general consumption. But for example:
[https://github.com/bratsche/clang/blob/gtkrewriter/tools/cla...](https://github.com/bratsche/clang/blob/gtkrewriter/tools/clang-
cc/RewriteGtk.cpp)
~~~
pat_punnu
What is his definition of better? It sounds like he thinks it's entirely
objective, so he should be able to express it clearly and logically.
Does he think that it's technically more powerful? Again he should be able to
prove that if that's the case.
Otherwise he's just giving a shitty opinion, and should say that.
I think the claim is nonsense because with inline assembler there is nothing
that you cannot express in C that you can with LLVM. So the decision between
the two is opinion.
~~~
coldtea
> _Does he think that it's technically more powerful? Again he should be able
> to prove that if that's the case. Otherwise he's just giving a shitty
> opinion, and should say that._
It's not like it's some controversial opinion what he said -- it's both self
evident and common place. It's you who offers the more controversial opinion
(and in a rude way, to top it off).
> _I think the claim is nonsense because with inline assembler there is
> nothing that you cannot express in C that you can with LLVM. So the decision
> between the two is opinion_.
It's not about "expression", and nobody argued that you can express more in
LLVM.
This is missing the point by miles!
It's about having more structure and less of an ad-hoc pipeline, which helps
with better tooling, error prevention, etc.
(Not only what you wrote is wrong, but even if the original argument was about
expression, your opinion would still be wrong. Two things offering equivalent
expressive power, does not mean that they are just as good to use in practice
at all. Might as well ask "why invent new languages, when assembly can express
everything").
The only benefit to using C for something like this is portability, which is
something else altogether.
~~~
vidarh
> It's not like it's some controversial opinion what he said -- it's both self
> evident and common place.
As someone who has written more than one compiler, I don't see how it is self-
evident at all. It's also not at all that common-place compared to generating
C or asm output textually.
> It's about having more structure and less of an ad-hoc pipeline, which helps
> with better tooling, error prevention, etc.
Those provide some benefits, sure. At the cost of massive amounts of
complexity in the case of LLVM.
> The only benefit to using C for something like this is portability, which is
> something else altogether.
Now it is you who are wrong. Other people have already pointed out, for
example, that C provides an easy-to-read intermediate format, and is simple to
generate, as other benefits. Not having to deal with a massive C++ codebase is
another.
You may disagree that these other benefits are worth it, but for me at least
they are (just taking a break from a compiler that generates textual _asm_
because I find even that preferable to dealing with LLVM).
~~~
coldtea
Sure, I agree about this: "C provides an easy-to-read intermediate format, and
is simple to generate, as other benefits. Not having to deal with a massive
C++ codebase is another.".
So portability and less dependencies, plus easier.
------
akkartik
_"Before you can run anything make sure you have GLib 2 and the Boehm-Demers-
Weiser garbage collector installed."_
Wow, that's a pretty skimpy list of dependencies. But..
_"Make sure you're using Leiningen 2."_
..argh, installing that on ubuntu requires 110 packages. All that just
for a build system?
~~~
uvtc
> ..argh, installing that on ubuntu that requires 110 packages. All that just
> for a build system?
Don't install lein using the OS packaging system (apt, rpm, yum, etc.).
Instead, just grab the `lein` script (linked to from <http://leiningen.org/>
), put it into your ~/bin, set it executable, and you're all set.
~~~
michaelochurch
This. It's a lot easier to do it that way.
------
timbaldridge
sadly, this doesn't seem to support any sort of multithreading. Even something
as simple as swap! isn't thread-safe in this implementation. So that kills one
of the main reasons to use Clojure in the first place.
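For reference, here's roughly what a thread-safe swap! needs at the C level --
a compare-and-swap retry loop (C11 atomics; the atom layout and names below
are made up for the sketch, not ClojureC's actual implementation):

    #include <stdatomic.h>

    /* Hypothetical layout: an atom is just an atomically-updated pointer. */
    typedef struct { _Atomic(void *) value; } atom_t;
    typedef void *(*swap_fn)(void *old_value);

    void *atom_swap(atom_t *a, swap_fn f) {
        void *old = atomic_load(&a->value);
        for (;;) {
            void *next = f(old);   /* f must be pure: it may be retried */
            /* On failure, `old` is reloaded with the current value and we retry. */
            if (atomic_compare_exchange_weak(&a->value, &old, next))
                return next;
        }
    }

Without something along these lines (plus a GC that tolerates multiple mutator
threads), concurrent swap! calls can silently drop updates.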
~~~
swannodette
Lack of multithreading seems like a result of the implementation being heavily
based on ClojureScript ;) (it's actually pretty cool to see how reusable
core.cljs is IMO). I imagine ClojureC will have its uses like ClojureScript
does when the JVM is not an option.
------
iso8859-1
How does this compare to Chicken Scheme?
------
pjmlp
This is quite nice; on the other hand, one could just use one of the many Lisp
compilers that have existed for years.
LibreOffice 5.3.0 - ronjouch
https://wiki.documentfoundation.org/ReleaseNotes/5.3
======
chrisballinger
> Firebird has been upgraded to version 3.0.0. It is unable to read back
> Firebird 2.5 data, so embedded firebird odb files created in LibreOffice
> version up to 5.2 cannot be opened with LibreOffice 5.3. Since a future
> version of firebird will have a backwards compatibility module, some future
> version of LibreOffice (embedding this future version of firebird) will also
> be able to open these older files.
> ODB files created by LibreOffice < 5.3 can be manually converted to
> LibreOffice 5.3 format by using Firebird 2.5 to convert the data to archive
> format, and replacing the database data within the ODB by the archive format
> version. To do this, install a stand-alone Firebird 2.5, and use its "gbak"
> tool to convert the file "database.fdb" to "database.fbk" within the odb
> file. Don't forget to remove the .fdb file.
I don't use this feature so I don't know how popular it is, but it seems like
this could cause a lot of problems for people. They probably shouldn't have
updated to Firebird 3.0.0 until they had an automated migration process in
place, instead of instructing end users to manually convert their old files
from the command line.
~~~
grandinj
Firebird was experimental in 5.2.x so it's unlikely there are many such files
in the wild.
------
hysan
Have they fixed the missing grow/shrink feature in the GUI? [1] The bug that's
now a decade old? No? Well that's disappointing... I'll keep using
LibreOffice, but I still can't recommend it to my non-technical friends who
seem to love that effect.
[1]
[https://bugs.documentfoundation.org/show_bug.cgi?id=48918](https://bugs.documentfoundation.org/show_bug.cgi?id=48918)
~~~
ac29
Not fixed. Just tested in 5.3.
------
znpy
My complaint against openoffice/libreoffice is about their
programming/scripting interface: it is a total mess to deal with.
A couple of weeks ago I had the task to dynamically update the value of two
cells in a calc spreadsheet. I did it, but ended up using a library that builds
on a library that builds on ... that builds on UNO or whatever it's called.
The API is a mix of C++ and Java, documentation is pretty much non-existent,
and code examples are incomplete and ridiculous.
If I had been using excel I could have been using the win32 com api and get it
done in a couple of hours at most.
But don't get me wrong: when doing non-programming stuff it works great!
------
ronjouch
Linking to the release notes; downloads live at
[https://www.libreoffice.org/download/](https://www.libreoffice.org/download/)
------
compsciphd
So I want to like libreoffice, but it's really terribly maintained. Features
come and go, practically at whim.
For instance, custom motion paths seem to come and go at whim (still not fixed
in 5.3, and as this bug shows has been a recurring problem)
[https://bugs.documentfoundation.org/show_bug.cgi?id=76916](https://bugs.documentfoundation.org/show_bug.cgi?id=76916)
It's hard to rely on it when basic functionality constantly breaks.
~~~
dublinben
I would hardly call that "basic functionality." I've been using LO as my
primary office suite through several schools and jobs, and never felt like it
was broken.
~~~
compsciphd
To me, custom paths are a basic function of making interactive/dynamic
presentations.
~~~
keithpeter
I agree with both comments up the tree.
I use LO daily under Linux to produce handouts, simple screen based materials
and spreadsheet models for teaching. My colleagues are blissfully ignorant of
the fact that the materials they sometimes use were _not_ produced in MS
Office on Windows.
I would really like a stable interface for creating interactive materials, but
I'd settle for being able to export a full range of hyperlinked objects in
Impress as a pdf file so that the hyperlinks work. That would get me 95% of
where I would like to be.
------
coolspot
Using it every day. Great product!
------
brianzelip
OT; there's a recent Changelog podcast[0] that features the dev who translated
LibreOffice to the native tongue of Paraguay.
[0][https://changelog.com/podcast/235](https://changelog.com/podcast/235)
------
shmerl
Did breeze-dark icons make it in?
------
elyrly
Great alternative to Office
~~~
gima
I think anything is a great alternative to the Office at this point. In
hindsight, LibreOffice's name must've been chosen by a fortune-teller..or a
<strike>pessimist</strike> realist ;)
~~~
symlinkk
what's so bad about Office? I just switched from LO to Office and it's felt
like taking a huge weight off my shoulders - all the weird little UI quirks
and bugs are gone, and everything just works.
~~~
gima
My apologies. My reply was an attempt at political humor. No connection to the
LibreOffice software.
| {
"pile_set_name": "HackerNews"
} |
Tachyum Starts from Scratch to Etch a Universal Processor - rbanffy
https://www.nextplatform.com/2020/04/02/tachyum-starts-from-scratch-to-etch-a-universal-processor/
======
rpiguy
Beautiful example of design driven by physics. I love it!
However, like all VLIW processors, no one knows how this will work on real
workloads until they are in the wild.
In principle, it’s great just to see something different.
| {
"pile_set_name": "HackerNews"
} |
What's the most orthogonal programming language? - BlackJack
http://programmers.stackexchange.com/q/103567/27757
======
johnm
Cracks me up that Lisp/Scheme have so many votes/comments when the Tcl entry
is getting none.
And nobody's even mentioned Io.
| {
"pile_set_name": "HackerNews"
} |
Collapse OS – Why? - ColinWright
https://collapseos.org/why.html
======
ncmncm
I also am predicting collapse by 2030.
The mechanism follows from climate disruption. As the tropics become
uninhabitable and/or unfarmable, millions will migrate (mostly) north,
crossing borders and driving fascist / jingoist governments into power. (We
see this beginning already.) Global war follows, disrupting all kinds of
global trade, shattering supply chains. Famine ensues, and more war.
Preventing this requires preparation to absorb millions of refugees, and food
aid to places not entirely uninhabitable. Temperate agriculture will be badly
disrupted too, so we also need mass agriculture less dependent on clement
conditions.
------
glial
Here is an interview with the author cited in the article:
[http://www.cadtm.org/The-coming-collapse](http://www.cadtm.org/The-coming-
collapse)
------
gregoreous
I disagree with his argument about the end of cheap energy. Even if oil were to
become very expensive, we could use nuclear power. It is more expensive than
natural gas, but it can power the economy. Also, countries with hydro power
would have an economic advantage in a post oil world. Countries with these
power sources would become manufacturing hubs.
| {
"pile_set_name": "HackerNews"
} |
Appops reloaded – Istio, Kubernetes home-lab, Prometheus and pause-lab - alexellisuk
https://tinyletter.com/mhausenblas/letters/appops-reloaded-47
======
alexellisuk
Absolutely packed with tech this week from Michael Hausenblas. Good read and
wanted to share.
| {
"pile_set_name": "HackerNews"
} |
Follow the White Ball: The torments of snooker’s greatest player - jonathansizz
http://www.newyorker.com/magazine/2015/03/30/follow-the-white-ball
======
sakri
I've been a fan since 1992. Possibly the most beautiful video for me on
youtube :
[https://www.youtube.com/watch?v=bpeBugHSCnU](https://www.youtube.com/watch?v=bpeBugHSCnU)
Ronnie O' Sullivan Fastest 147 in History - 5 minutes 20 seconds - 1997 World
Championship
(mostly because I watched it live, and still remember how it blew my mind,
still does)
~~~
coleifer
Absolutely incredible. The suspense must've been crazy.
------
JacobAldridge
From a timing perspective, O'Sullivan lost overnight - knocked out in the
Quarter Finals of the World Championship.
"I'll go for a run in the morning and sparring in the afternoon. Life has to
go on and will go on."
[http://m.bbc.com/sport/snooker/32524542](http://m.bbc.com/sport/snooker/32524542)
~~~
corin_
Relevant quote from the article about a game 5 months ago:
> _In the semifinal, O’Sullivan found himself 4–1 down and on the brink of
> losing to Stuart Bingham, the ninth-ranked player in the world. “That was a
> match where I just thought, I’m not going to be pushed around by someone
> like Stuart,” O’Sullivan told me afterward. “I’m not ready to accept that
> role yet. I fucking hated that match.” He won, 6–5._
(Bingham being the same player who just beat him)
After this defeat it sounds like he's thinking of quitting again, but I
certainly wouldn't put money on him staying away.
~~~
ed0wolf
He always sounds like he's quitting the game.
~~~
corin_
That was my point. That said, one of these days he actually will quit for
good, but I hope not soon.
------
jonathonf
You could argue he's one of the greats. But the greatest?
"Many wonder whether O’Sullivan can equal Hendry’s record of seven world
titles and officially become, in his forties, the greatest player the game has
ever known."
Not yet.
~~~
jdietrich
When O'Sullivan is on form, he has an ease and fluency that no other player
can match. His maximum breaks are rightfully legendary, and give a glimpse of
his immense talent. O'Sullivan doesn't really compete against other players,
but against his own psyche; For this reason, he is both the most exciting and
the most frustrating player to watch.
~~~
jonathansizz
There was a nice passage describing O'Sullivan in an article [1] posted on
r/snooker recently:
"For years, the main frustration for the keen snooker fan was the apparently
immutable dichotomy between the monotonous winners and the flamboyant losers.
It seemed that we could either have the unflappable resolve of a Steve Davis
(six-time World Champ) or the charismatic fragility of a Jimmy White (six-time
runner-up), but we couldn't have both in one player. By only self-destructing
either before the start of a tournament or after winning one, O'Sullivan has
cleverly defied this convention. He doesn't fall apart in the crucial stages
of a match; he plays consistently throughout, either like the greatest player
in the world or only, say, the twelfth best. It all depends on which Ronnie
shows up."
[1] [http://www.ianbgibson.com/on-snooker](http://www.ianbgibson.com/on-
snooker)
------
gadders
As a Brit, it seems funny to see Ronnie O'Sullivan profiled in the New Yorker.
Who are they going to cover next? The Crafty Cockney? [1]
[1]
[http://en.wikipedia.org/wiki/Eric_Bristow](http://en.wikipedia.org/wiki/Eric_Bristow)
~~~
thorin
I'd prefer Jockey Wilson.
~~~
gadders
I was going to suggest him, but he has the disadvantage of being dead.
------
jbrooksuk
> O’Sullivan is frequently described as a genius. But he does not see how this
> can be so.
Ah Imposter Syndrome, we meet again.
| {
"pile_set_name": "HackerNews"
} |
Need book recommendations for programmer to product developer transition - paperwork
Some friends, distributed across a few states and continents, are developers. They have one or two products being used by clients and a couple more in the pipeline.
It is becoming obvious that they are falling into the same trap as many techies before them. The products are not fully defined and new features keep getting added. When an obstacle comes up or if a client has an idea, work shifts to a different product, the first one being left unfinished. Sometimes the vision of the product is not clear to the team. The look & feel and usability is left to the personal likes of the developer implementing them. The people involved know computers, programming, databases, system administration, etc. They are all intelligent, curious, etc. They have the kind of skill set any employer would kill for, but they are their own employers now! They are not the kind of folks who read blogs or new.ycombinator posts so, on their own, they don't know if zynga is a good model to follow or twitter.
What are some good, to the point, books which will help coders become the kind of people who are good at "building stuff?" Manage team, develop vision, manage process, avoid typical pitfalls (like constant requests for proofs of concepts which earn no revenue), etc., etc.
Look forward to suggestions!
======
russtrpkovski
Inspired: How To Create Products Customers Love by Marty Cagan
~~~
paperwork
The table of contents looks very interesting. Thanks!
| {
"pile_set_name": "HackerNews"
} |
Where can we find a cofounder for a promising & growing product? - razasaeed
We are a growing software consulting company, and to fulfill our own hiring needs, we built a very simple & easy to use (inspired by 37signals) applicant tracking system called Simplicant for small to medium sized companies (especially startups). Over the last 2 years, without any marketing or sales staff, it has been growing slowly. We get a lot of customer interest (and at times from VC's too) and those who start using it totally love our product. We think the product is a great utility for its target customers.
However, since we are not based in the US (our target market), it's very hard for us to take this product to the next level without an on-the-ground marketing & sales team/personnel that can help aggressively market this to potential customers. We want to partner with a passionate entrepreneur who would be willing to join as a co-founder of this product and lead the marketing/sales effort in the US while we provide strong engineering/product development.
What's your feedback on this approach? What's the best possible way to advertise this opening? How should we evaluate people who show interest in this proposition? Thanks for the help.
http://www.simplicant.com
======
hotmind
posting here is a good place. Have you tried <http://www.partnerup.com> and
Cofounder.com?
~~~
razasaeed
Not yet, thanks for the links, will do it.
| {
"pile_set_name": "HackerNews"
} |
Apply HN: Hostable, Reskinnable, Domainable, Searchable, Forum Software - andy_ppp
So while hacker news is used in very flexible ways (like this!) and the community is amazing, I'm really really exhausted with how badly the actual software used to host it works.
I see a lot of opportunity around a piece of forum software integrated into a piece of crowd funding software. Wait, what? How the hell would this work?
So imagine we go completely meta and start thinking about how hacker news is being used today, what do we need:
1) Custom Tagging of thread titles (Apply HN, Show HN, customisable via the setup).
2) Works on mobile
3) Skinnable
4) Hosted/Can create your own DNS/Community/Rules/etc.
5) Conclusion mode - was there anything suggested/tagged as a conclusion?
6) Conclusion ordering mode - call for actions - you can turn your comment into an action point (maybe with a suggested cost against it).
7) Given a set of conclusions/actions, a crowdfunding campaign could be started. We could for example all cough up to hire a lawyer to start a class action lawsuit or pay someone in our local area to organise with the local authority to fix a part of our area or we could get together and build a crazy piece of installation art.
8) Grouping of threads by tag and different tag types.
9) Not sure about money yet but I guess you could take a fee from any successful crowd funding.
10) Directory of Volunteer/Recommended Helpers/Past history of having done stuff!
11) Think about it like github issues for the real world!
12) Successful campaigns running through tools like this and a playbook for managing them.
13) Multiple post admins who have to agree/action stuff.
Name suggestions welcome, I have to go to bed now so please vote, sorry for the lack of further discussion! I will be on it tomorrow!
======
jay_kyburz
Hey Andy, when I went shopping for forum software last year I discovered
Discourse. Perhaps you could talk about how your forum would be better that
it?
~~~
andy_ppp
Ouch! $100 per month minimum plan... for me to get started building my
replacement for local government that is a high barrier to entry. I suppose at
least I can host it myself though, it is very impressive software.
I guess what I want is to take over all the branches of the government and
replace them with forum software. But not the forum software we have seen
today, an as yet unrealised possibility of what forum software could be!
So thus far having looked at Discourse how does it implement my ideas:
1) Nope, tagging in the HN style doesn't exists AFAIK.
2) Yes, mobile
3) Yes, Skinnable
4) Hosted/Can create your own DNS/Community/Rules/etc
5, 6, 7) It seems more focused on finding solutions to specific problems (I'm
using the twitter API does it do X) rather than having a broad discussion and
then being able to select answers from that discussion to refine into a crowd
sourcing campaign.
8) Grouping of threads by tag and different tag types.
9) Discourse have a business model that's pretty sound. Charge for hosting.
10) Directory of useful people on each forum who can be paid to do stuff...
11) How are we discussing these YC programs - I think there is a lot of room
for improvement being able to tag posts to the top of the thread and give a
TLDR of the salient points in a thread.
Maybe people can summarise questions and break things down into pieces that
can be answered in a more coherent package.
If deemed suitable by the community the thread could then be acted on through
a crowd sourcing campaign.
Imagine if we were to organise society with a piece of forum software how
would it look and what would people get in return for helping their community
- probably more voting rights and better ability to change things/make a
difference.
~~~
qopp
You might consider implementing the features you seek as a plugin to
discourse. As for the hosting expenses, there are 3rd party services that will
host it for you.
------
bestattack
To get started with an idea like this, you really need a specific use case or
customer who wants something. Then you build it for them. You've dumped a big
list of features, but at the early stage it's important to only work on the
ones which actually create value for a specific user or use case. Do your
customer development first. This is a really ambitious project so paring it
down is probably your best bet.
I do think "works on mobile" is a really great thing, few forums work well on
mobile and it's definitely the future :)
------
sideproject
Hey Andy,
Interesting, I'm working in a similar area (but not in crowd funding space) -
would love to chat on some of the things you've mentioned in regards to forum
software. Do you have a contact point I can reach out to? (me - hello at
hellobox.co)
| {
"pile_set_name": "HackerNews"
} |
Are you running Windows XP? - neur0mancer
http://amirunningxp.com/
======
Piskvorrr
No, but the page loads so slowly it brings back memories of dialup. (What is it
with 20 MB backgrounds anyway?)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Which companies are working with/on the coolest technology? - cjbarber
Also any thoughts on how I could capture this in a website?
======
lanna
You have to start with a proper definition of "coolest technologies".
| {
"pile_set_name": "HackerNews"
} |
A Futures Site Coming to Bet on Movie Ticket Sales - unignorant
http://www.nytimes.com/2010/03/11/business/media/11futures.html?ref=technology
======
chasingsparks
_"If the distributor shorts a $100 contract and the movie grosses $50 million,
the distributor will make $50, thereby limiting the company’s total losses
from a film."_
From what they have said, this does not sound like a viable hedging strategy
for those in production. There are unlikely to be enough contracts traded.
Instead, it seems more like a gambling operation with a CFTC gambling shield.
I'm surprised they did this, given how much general anger there still is
regarding derivatives and speculation.
~~~
steveplace
The two differences between financial speculation and gambling are your odds
and whether you wear a fancy tie.
The odds are the more important one. If you can make statistical methods that
show the risk of loss, you can then assign a premium to that risk and put it
on the market. Whether it will get enough liquidity to get past the smalltime
remains to be seen.
~~~
ericwaller
At least in NY state, the exact odds and payouts are published. So you can
certainly assign a premium to the risk involved, and you can do so exactly.
But since everyone knows the odds, the lotto market is perfectly efficient and
no one can win in the long run -- this is what makes it gambling.
------
jsm386
I remember playing the old Hollywood Stock Exchange (non-cash version) back in
middle school. This seems like a fun idea, but how do you rule out the heaps
of insider information that exist?
------
bryanh
I am curious if the technique described in this paper
(<http://www.cam.cornell.edu/~sharad/papers/searchpreds.pdf>) would give an
advantage over the long run. I would imagine that if the majority of bets are
placed for entertainment value based on guesses, this could be a quick way to
rake in some dough.
------
mhb
Seems like Netflix might have some insight into this market.
~~~
Aron
Definitely. 12M customers can add movies to their queue prior to it even
coming out at the theater. I would wager this is substantially predictive.
Netflix could either bet themselves, or as perhaps more likely, sell this
information directly.
------
jasongullickson
This will certainly have a positive impact on the quality of films coming out
of Hollywood.
| {
"pile_set_name": "HackerNews"
} |
Why does flat Earth belief still exist? - RobertSmith
https://arstechnica.com/science/2018/11/why-does-flat-earth-belief-still-exist/
======
LinuxBender
Why does anyone believe it is round?
I have no horse in this race. I'm merely suggesting that this could be a
simulation. We could all be in a space ship, plugged into some computer that
is keeping our brains from going insane on our journey to another galaxy. Or
perhaps our ship ran out of fuel and the system is just "keeping us happy"
until the last power generators or solar sail power converters go offline.
I have seen way too many people that look and act related. It can't be in-
breeding? I chalk it up to lazy programming. The best developers are lazy,
right?
| {
"pile_set_name": "HackerNews"
} |
Could ImGUI Be the Future of GUIs? - tokyodude
https://games.greggman.com/game/imgui-future/
======
overgard
Really probably not. I love them and they're very handy in certain situations
(debugging tools, quick UIs) but once you need a lot of customization they
become extremely cumbersome. Also really kind of makes it impossible for
designers or less technical people to do anything. Also, while I think
separating view logic is slightly overrated, it is useful, and it's very hard
to do that with IMGUI. Also it doesn't thread well. Also... Look there's like
a million downsides.
Also saying there's no memory allocation is really misleading. There's PLENTY
of memory allocation, per frame, you're just not explicitly doing it yourself.
It's actually much worse than an RMGUI in this regard, because at least with
an RMGUI you get the allocations over with once. With an IMGUI you're
allocating things all the time. They're probably much smaller allocations, but
lots of small allocations does not make for good performance.
One final note, the Unity 3D example always gets used. If you've ever written
a plugin for unity or a custom editor, you're very familiar with the fact that
its editor GUI system is extremely limiting and kind of sucks. I mean, it's
an example, but once you're past the basics it's kind of a bad example.
~~~
josephg
On the allocation point, the efficient way to handle this is to use a per-
frame memory pool. Because nothing in the IMGUI can persist between frames,
you can allocate a single arena of memory for any UI elements that need to
store bounding boxes or callbacks or whatever. Each frame just reset your
next_ptr to the start of the arena. Technically you are allocating memory, but
in practice your allocations are free.
~~~
geezerjay
Adding a separate complex ad hoc memory allocation scheme is not what I would
call free. Granted, computationally-wise it may be relatively cheap but it
does add multiple forms of complexity to a problem that doesn't exist in
rmguis.
~~~
loup-vaillant
Arenas are the simplest allocation scheme ever.
// Start of frame
void *arena = malloc(LOTS_OF_MEMORY);
char *next_ptr = arena;
// Allocate something
void *object = next_ptr; // return that value
next_ptr += object_size + alignment; // crash (or assert) if out of memory
// End of frame
free(arena);
By the way, such "separate complex ad hoc" memory allocation schemes are the
reason why manual memory management is faster than garbage collection. If you
did everything with malloc(), it would be slower (unless the GC language
allocates much more than C, which they often do).
~~~
Keyframe
No need to malloc each frame. Malloc at startup and memset each frame (don't
even need to do that, tbh).
~~~
maccard
Nope, just set the pointer back to where it started from. You do need to be
super careful doing this though, as anything that relies on RAII (in C++ land)
will be busted. You could manually call the destructor on the object in that
case, but that kind of defeats the purpose of the "no allocation" goal.
~~~
geezerjay
C++ includes its "placement new" feature specifically to cater to memory
pool needs. There's no allocation, only constructor and destructor calls.
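A minimal sketch of how that combines with a per-frame bump arena (arena_alloc
below is a made-up helper standing in for the allocator above, not a real API):
#include <new> // placement new
struct Label { int x, y; const char *text; };
// Bump-allocate raw storage from the frame arena (hypothetical helper).
void *raw = arena_alloc(sizeof(Label), alignof(Label));
Label *label = new (raw) Label{10, 20, "Score"}; // construct in place, no heap call
// ... use label for the rest of the frame ...
label->~Label(); // explicit destructor call; the arena itself is just reset at end of frame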
------
yonilevy
FWIW, Unity are moving away from ImGUI (to a classic retained mode UI system)
[https://blogs.unity3d.com/2019/04/23/whats-new-with-
uielemen...](https://blogs.unity3d.com/2019/04/23/whats-new-with-uielements-
in-2019-1/)
~~~
CreepGin
Also to clarify, UIElements (Unity's new retained mode UI framework) is
available now in 2019.1 for Editor UI. Runtime UI will also be using
UIElements in the future.
------
theclaw
In the context of creating debugging UIs for games and graphics applications,
Dear imGUI is a godsend. Programmers love it because there is literally only
the code to worry about. It's very easy to get it up and running, and all the
code that handles the UI drawing and interaction is in one place so it's easy
to reason about.
It works very well in the context where you already have fast graphics and an
update loop, and you're already expecting to redraw the whole screen every
frame. It does not really suit more complex, text-heavy UIs where you're
rendering thousands of glyphs with proper kerning and ligatures and anti-
aliasing, etc, and want the result of that hard work to be retained in the
framebuffer unless it absolutely needs to change.
~~~
pedrocr
> want the result of that hard work to be retained in the framebuffer unless
> it absolutely needs to change
I think immediate mode GUI libraries can get around this issue by still
caching and reusing between frames. Conrod does this by still having the state
in the background although you are programming to an immediate mode API:
[https://docs.rs/conrod/latest/conrod/guide/chapter_1/index.h...](https://docs.rs/conrod/latest/conrod/guide/chapter_1/index.html#is-
conrod-immediate-or-retained)
~~~
sly010
That's a bit antithetical, given the other side of the debate being Retained
Mode GUIs.
That said it's a good middle ground. Use whatever API you prefer over a well
optimized implementation.
------
mindfulplay
The author does a good job explaining some benefits of an immediate mode
renderer but vastly misses the disadvantages.
The immediate mode renderer is great for toy programs. Similar to how you
could reproduce 'look here is how simple it is to write hello world and
compute the millionth digit of PI' in a new esoteric language...
Occlusion, hit-testing, state changes, scrolling/animations even in the
simplest forms will fall over. In fact, that's why we have every major browser
move their touch and animation systems into a layer based compositor system
(into a separate thread / process).
The author also grossly misses their own example of 'how a spreadsheet with
many rows and columns will update faster using ImgUI' and how Instagram styled
apps will fare better with ImGui.
A retained mode renderer will use virtual scrollers, efficient culling of
nodes for both display and hit-testing (scale to billions of virtual cells)
and more importantly help a team of people coordinate their work and get
things done.
We are no longer in the 90s.
~~~
btown
In fact, with Javascript JIT engines (whose teams deeply understand the use of
libraries like React) relentlessly attacking the overhead of DOM nodes and
their initialization, and with users who actually expect the design
flexibility provided by CSS... a system like React is actually an ideal layer
of abstraction for modern UI implementation on practically any platform.
Sure, if you're on an embedded platform, somewhere a JIT can't run, or if
you're doing something with real-time rendering requirements (and honestly
modern React Suspense should even make that feasible), you may want to use
something lower-level. But most people won't need to do this.
~~~
overgard
The DOM being slow is really more an artifact of browsers/HTML and the history
behind doing document based layout. React is a nice solution to that specific
problem; but this article suggests immediate mode gui's are the future, which,
I don't think that's really the case. And I'm not sure I'd really call React
an IMGUI anyway. It's similar, but it's also pretty different.
------
mooman219
This really reads like someone trying to sell you something. I've done work
on frameworks for both immediate mode and retained mode GUIs. They both
absolutely allocate memory behind the scenes. There absolutely is state being
marshaled around. Caching commonly used state is important. Performance can be
bad and great in both. You're really just subscribing to different sets of
opinions
~~~
TazeTSchnitzel
Any moderately complex “immediate mode” GUI system is going to do something
equivalent to constructing a “retained mode” GUI on the fly I'm guessing.
~~~
tom_
It will. In fact, even a simple one will require this. But it's not much of a
problem - 99% of the time, the UI is the same from one frame to the next, and
it isn't much work to detect this. So even if every change requires a complete
rebuild of everything, it's not much of a problem.
(I've written systems like this for Cocoa and Win32, and it never turned out
to be necessary to do anything other than just regenerate the entire UI any
time anything changed. The update runs at 30Hz or 60Hz, and when anything is
changing, the UI gets regenerated a lot! - but so what? Most of the time, the
UI doesn't get regenerated at all. Then something happens, and the code spends
2 seconds getting absolutely hammered continuously, malloc malloc malloc
malloc malloc, god help us all... and then, once again, nothing. The operator
puts their fingers to one side and stares at the result with their eyes.
Repeat.)
------
seanalltogether
I think the author is over exaggerating the problem of object creation and
destruction in traditional gui frameworks. List/collection views are designed
to reuse objects as you scroll. Secondly I think the author is also
downplaying the fact that retained GUIs can also cache object rendering. Just
as the gpu doesn't have to draw the whole screen when only the cursor is
blinking, it also doesn't have to redraw widgets unless their size changes.
Immediate vs retained is a simple case of budgeting against cpu usage or
memory usage, and it should be considered in that light. (immediate uses more
processing, retained uses more memory)
~~~
charlesetc
Thinking about this decision just as a performance one disregards the fact
that the code is substantially different. It does seem likely that one way is
more intuitive / easier to work with than the other, I wouldn't know which
though.
~~~
gmueckl
It depends on the problem you are trying to solve with your GUI, I suppose.
Sometimes, it is better to just recreate the whole GUI to adapt to a model
change (e.g. the user moved half of the tree nodes somewhere else), sometimes
it is easier/faster/... to just update the existing GUI (e.g. update the text
of a label inside a complex dialog widget).
------
geekpowa
Retained GUIs vary wildly in implementation.
Many of the author's most significant criticisms of retained GUIs are
implementation considerations. GUI frameworks exist that solve his key
criticisms of complexity and are pleasant to work with.
Criticisms that target the core architecture of retained GUIs I don't consider to
be valuable design goals, at least in settings where I work on GUIs. e.g.
memory usage.
A lot of things are glossed over that remain challenges in both, e.g. layout
management.
HTML is an interesting example. First iteration of HTML was essentially
immediate mode if you think about a single client/server interaction as a GUI
update cycle. Server sends draws to client browser and client browser sends
back to server user selections. There is no retained state on the GUI side.
Now, with programmatic access to the DOM and the ability to attach events to
DOM elements from the client side, it is a retained GUI. That seems to be where
things evolve to naturally.
The GUI framework I use nearly daily is retained and very pleasant to work
with in terms of ease of belting out screens & readability/maintainability of
code. The simplicity comes with compromises though as there are limits on GUI
idioms that can be expressed. Occasionally run into those boundaries and
resulting GUI does look a little plain and unexciting, but for something that
is substantially about data entry its fine.
------
arianvanp
Recently I started playing with [https://github.com/ajnsit/concur-
documentation/blob/master/R...](https://github.com/ajnsit/concur-
documentation/blob/master/README.md) which has been the most refreshing UI
paradigm I've used in a while, and it reminds me a lot of this ImGUI approach,
but behind the scenes it uses coroutines instead.
The idea is that a button is a UI element that _blocks_ until an event is
fired. You can then compose elements in time like:
button "hey" >> label "clicked"
which is a program that displays a button, you click it, the button goes away
and the text "clicked appears"
Or you can compose programs in space:
(button "hey" <> label "not clicked")
this is a program that displays both a button, and a label at the same time.
Now, by combining both space and time, we can create a program that changes
the label when clicking as follows:
program = (button "toggle" <> label "off") >> (button "toggle" <> label "on") >> program
This is an application that toggles a label on and off when you press a
button. (Note that the definition is recursive)
------
RandyRanderson
There's a difference bt poor impl and poor design.
As the author points out, HTML has a poor design (eg. if you want to have a
1000x1000 cell table, you have to have 10^6 actual cells - that's a lot of tds
or whatever to parse).
Modern OO GUI frameworks don't do this - they say something like:
cellRenderer.draw(target, row,col,position,size)
No creation of objects required. Of course since it's so easy to create OO
programs a lot of code isn't great... and then others copy that code and so it
goes.
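For illustration, a virtualized table in that style might look roughly like
this (the interface and names are invented for the sketch, not any particular
framework's API):
struct Rect { int x, y, w, h; };
struct RenderTarget;
struct CellRenderer {
  // Called only for the handful of cells that are actually visible.
  virtual void draw(RenderTarget &target, int row, int col, Rect bounds) = 0;
};
void drawTable(RenderTarget &t, CellRenderer &r, Rect view, int rowH, int colW) {
  // Walk only the visible window of the grid (clamping to the table size omitted).
  for (int row = view.y / rowH; row <= (view.y + view.h) / rowH; ++row)
    for (int col = view.x / colW; col <= (view.x + view.w) / colW; ++col)
      r.draw(t, row, col, {col * colW - view.x, row * rowH - view.y, colW, rowH});
}
Only the cells inside the viewport ever touch the renderer, so a 1000x1000
grid never needs a million widget objects.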
Seems like we keep re-creating software b/c we haven't taken the time to look
at what exists and only then decide on what to keep and what to change. "This
is too complex - I'll re-write it!". 10 years later: "We added all the
features that the existing software had and now the new one... is just as
slow... but we did sell a lot of conference tickets and books so... totally
worth it."
When I was 20 I also thought I knew better so I get it.
------
babel_
I feel that the future lies in combining retained and immediate interfaces,
preferably with the granularity to allow deeply nested retained interfaces
that are fast for complex ui (or for a realtime system with memory to spare),
whilst still allowing one to go the other direction, such that a simple ui can
be written cleanly and logically for low-memory systems (such as embedded or
boot guis). It would need a very well designed api for this, but I feel the
benefits are worth the effort (and I'll probably look into this next time I
have the freedom to choose ui apis).
A balance may be letting people define it either way, so that manually written
ui still can have auto-layout yet intuitive code (following control flow
primitives), whilst allowing generated retained uis to be manually editable --
perhaps even allowing one to then embed one within the other, a boon for
scripted interfaces that perhaps have people of various levels of experience
producing ui elements, such as a musician with little experience being able to
add a simple visualiser in an immediate manner to a deeply retained daw gui.
Of course, there's a lot here that is implementation, and some criticism
either way can be optimised out. Immediate mode can still cache its rendering,
we've had optimised blitting since the early days, and is only usually a
problem with complex ui. Retained would get fewer cache misses if we weren't
allocating madly across the heap and took a more disciplined approach
allocating to contiguous memory -- which is almost entirely a language/api
problem (in my experience) that can also happen with immediate but we
typically don't see since it's often done in a more procedural style that is
allocating to some pool.
Other api elements, such as handling lists etc aren't really a differentiation
between retained and immediate, those can be made in either.
For me, I often find that the ability to write out prototype ui code in an
immediate style in very quick and satisfying (exactly what I want in
prototyping), however once I start to expand upon a ui, I find it best to over
time refactor towards a retained style, since by then I will typically have
some templates for what ui elements look like, and so I just have to pass a
string and function pointer to fill in the template.
Can't see why we can't have nice things and let both coexist...
------
pedrocr
Conrod is an immediate mode GUI library for rust[1]. I've been using it for an
image processing app[2] and have enjoyed the way the code turns out.
Everything is much more straightforward as you don't have to reason about
callbacks interacting with a loop that's not yours to control and the
performance seems good.
[1]
[https://github.com/pistondevelopers/conrod](https://github.com/pistondevelopers/conrod)
[2] [https://github.com/pedrocr/chimper](https://github.com/pedrocr/chimper)
------
thrax
Immediate mode UIs are fine for debug displays or for displaying data that
changes every single frame, but for anything else, in a shipping product they
are just a waste of resources. The primary wasters are excess memory
allocation, string generation, and the sheer amount of redundant function
calls. Anything you do to address those problems result in converting your ui
to a retained mode UI. For those advocating react like approaches to solving
this.. similar problems are involved. Diffing state is wasting cycles unless
it's done so optimally and carefully that it becomes a technical feat and ends
up being as complex as just doing something retained. Source: game developer
for 25 years on console, desktop, and mobile.
------
seanmcdirmid
Probably not. With technologies like React that make retained-mode UIs look
more like immediate-mode ones, there is less need for full blown immediate-
mode UIs. React achieves the programmability of an immediate-mode UI without
sacrificing the performance of a retained-mode UI (at least, that’s the goal).
------
weinzierl
> A few problems with this GUI style are:
> You have to write lots of code to manage the creation and destruction of
> GUI objects. [..]
> The creation and destruction problem leads to slow unresponsive UIs [..]
> You have to marshal your data into and out of the widgets. [..]
My biggest pain point with the retained mode GUIs I worked with was none of
the issues mentioned above. It was always the centralized GUI thread and the
consequential synchronization complications. I don't know if this is an
inherent problem of retained mode GUI frameworks and if there are some that
don't force all widgets into a single thread. If not, this alone is a reason
for me to find immediate mode interesting.
~~~
overgard
Immediate mode, if anything, makes this harder. (You absolutely have to run
the GUI on one thread, for instance). Really though you can get around that on
either of them by spawning a worker thread/coroutine/etc. on button clicks and
so on.
~~~
weinzierl
The real problems start when the worker thread needs to update the GUI, for
example to advance a progress bar. With the frameworks I know I have to build
a communication channel between the threads and tell the GUI thread to show
and update the progress bar.
My hope was that with an immediate mode framework I could just show a progress
bar on top of the existing GUI right from the worker thread. I don't know
enough about immediate mode to say if this is really possible. It would
simplify a lot of my code though.
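For what it's worth, in an immediate-mode setup the usual answer is that the
draw call still happens in whatever thread owns the frame loop, but the worker
only has to write a shared value instead of going through a channel or
callback. A minimal sketch, assuming a Dear ImGui-style ProgressBar widget and
made-up worker code:
#include <atomic>
std::atomic<float> g_progress{0.0f}; // written by the worker thread
// Worker thread: just store the fraction as work proceeds.
void worker() { for (int i = 0; i <= 100; ++i) { do_chunk_of_work(); g_progress = i / 100.0f; } }
// GUI thread, inside the per-frame UI code: read it and draw.
if (g_progress < 1.0f)
  ImGui::ProgressBar(g_progress.load());
There is no registration, no message queue and no widget object to keep alive;
the bar simply stops being drawn once the value reaches 1.0.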
------
Klonoar
I recall reading this article in your comments on the last GUI-specific link
posted here... where you just kept disagreeing with comments that took the
time to point out how this stuff is largely off base.
We moved away from WM_PAINT for a reason.
------
amluto
Here’s a downside that wasn’t mentioned:
if (ImGUI::Button("Click Me")) {
IWasClickedSoDoSomething();
}
This forces Button to be stateless, which limits the possible quality of
implementation. If you mouse-down on a button and the button changes before
you mouse-up, it shouldn’t register as a click. Similarly, if you mouse-down
on a button, drag to the button, and mouse-up, it shouldn’t be a click.
Implementing this in a fully immediate-mode GUI with this interface is either
impossible or requires gross hacks.
~~~
bdowling
A naive implementation would have the problem you describe. However, a smarter
library implementation avoids this (e.g., by generating an id for the button,
saving the id on mouse-down, and checking the id on mouse-up). The user of
such a library won't have to worry about it.
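Roughly, the inside of such a Button might look like this (a hugely simplified
sketch; the id type and all the helper functions are invented names, not any
real library's internals):
using WidgetID = unsigned;
static WidgetID active_id = 0; // widget that received the mouse-down
bool Button(const char *label) {
  WidgetID id = HashString(label); // stable per-widget id
  Rect bounds = LayoutNextWidget(label);
  bool hovered = MouseIsOver(bounds);
  if (hovered && MouseWentDown()) active_id = id; // remember who was pressed
  bool clicked = false;
  if (MouseWentUp() && active_id == id) {
    clicked = hovered; // press and release must hit the same widget
    active_id = 0;
  }
  DrawButton(bounds, label, hovered, active_id == id);
  return clicked;
}
The calling code stays the same; the little bit of press state lives inside
the library.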
~~~
amluto
And how, exactly, is the id created? Some hash of the button’s text and
coordinates?
I would call that a dirty hack, not to mention being implicitly stateful.
~~~
HelloNurse
But a stateful GUI is not a sin: the program as a whole has state, application
code needs to process GUI inputs to update application-level state (e.g.
currently set game options) while tracking mechanical details like whether a
button is "half clicked" is suitable for automation at the GUI library level.
Regarding IDs, they can (and must) be provided by client code, as client code
is responsible for managing widget identity (i.e. whether the button that
might have a mouse up event this frame is "the same" that received a mouse
down event some frames ago). In C or C++, string names could usually be
autogenerated with trivial macros relying on __LINE__ and __FILE__.
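Something along these lines, for example (a sketch; WIDGET_ID, hash_string and
hash_combine are made-up names):
// Build a widget id from the call site, so two buttons with the same label on
// different source lines still get distinct ids.
#define WIDGET_ID() hash_combine(hash_string(__FILE__), (unsigned)__LINE__)
if (Button("OK", WIDGET_ID())) { /* ... */ }
Inside a loop you would still have to mix in an index, since every iteration
shares the same file and line.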
------
jbverschoor
Can we please just stop moving around in circles in the tech industry? Nobody
seems to learn anything from past methods, tech and everything.
~~~
analognoise
LOL. My favorite part of it was the last line:
"More research into ImGUI style UIs could lead to huge gains in productivity."
Don't tell this cat that the research on this stuff goes back >40 years and
that the introductory chapter of any book on computer graphics would have
talked about all of this. Not like it would help him - he hasn't read anything
about it, didn't even do a cursory Google search. Sheesh. A low, low bar.
~~~
tom_
Do you have any recommended links?
~~~
analognoise
I do!
One of my favorite is "Don't Fidget with Widgets, Draw!"
([https://www.hpl.hp.com/techreports/Compaq-
DEC/WRL-91-6.pdf](https://www.hpl.hp.com/techreports/Compaq-DEC/WRL-91-6.pdf))
It's modern enough to be understandable, and while it's referencing Ezd (a
Scheme drawing system) it greatly influenced Tk (which is still used in all
kinds of heavy-hitting EDA software).
That one's only been around for 28 years though (well, the paper was published
in 1991, so code was before that...) but let's go further:
The drawing system(s) that greatly influenced Ezd came largely from Xerox
PARC, such as ALTO: [https://www.computerhistory.org/atchm/xerox-alto-source-
code...](https://www.computerhistory.org/atchm/xerox-alto-source-code/)
There's code in there for a vector drawing program (in 1980!), as well as
interacting widgets. Let's go back further...
Finally, we have to mention the Mother of All Demos, in 1968:
[https://en.wikipedia.org/wiki/The_Mother_of_All_Demos](https://en.wikipedia.org/wiki/The_Mother_of_All_Demos)
Which if you haven't watched MOAD before, give it a spin. It will still blow
your mind, but the interactive graphical drawing mechanisms will be
recognizable.
Also, this "paint the screen every time" method is how a tremendous number of
people who cut their teeth on DOS did things on the screen. DOS release date:
1981. So you don't have to have been an academic (which I am NOT) to have
tried these techniques while solving practical problems.
As far as books: Computer Graphics Principles and Practice in C (later
editions use C#/C++/etc. - the ideas are the same)
The 1995/96 one talks about retained mode graphics directly. That book has
been standard in "intro computer graphics" courses for as long as it's been
out. So at least 20+ years it's been the "start here" book.
So yea, this stuff has been around for a while...
------
jayd16
IMGUI must sound appealing to people who have never done UI work. Just try to
implement a responsive UI in an IMGUI. Layout code is not fun.
------
laythea
I got fed up of ImGUI when I wanted to route events to my application instead
of the GUI. It's good "out of the box", but when you need to get down and
dirty to customise, it can be awkward.
Nowadays I do a hybrid approach: I use NanoGUI and create my own "live data"
"retained mode" controls. Now I have either the best of both worlds or the
worst of both worlds. I think the best:
Pros:
- I don't have to bother with data binding. As the control is passed a pointer
to the actual memory for the value, it can "go get it" when rendering, rather
than my application setting its state.
- I still have classes to represent elements and state, so it's conceptually
simple to build controls on controls. I found this difficult with ImGui.
Cons:
- Renders at 100% full speed, but I am working on a way to speed up and slow
down the render loop depending on user activity, so that when sitting idle,
the CPU is not burning.
~~~
ahaferburg
This is the exact issue I'm running into right now with dear ImGui. There is
no event propagation. You can either transfer control over to ImGui, or you
retain control. In Qt it would be possible for the focus widget to not accept
events, so they would bubble up the hierarchy. Considering how the hierarchy
is tied to the callstack, this seems difficult to achieve with an ImGui.
~~~
laythea
NanoGUI + live data custom controls = happy days
------
sago
I have implemented a bunch of UIs for games. Immediate mode sounds good, but
each time the thing that has bit me has been layout.
Sometimes you need to traverse the hierarchy to figure out where things will
be placed. Before traversing it for render. If your hierarchy is implicit in a
call graph, you have to either duplicate your calls, or implement some kind of
backend retained system so you can defer rendering until after layout.
Beyond the absolute simplest of toy UIs, immediate mode doesn't work in my
opinion.
------
golergka
Every time a developer decries some abstractions and tools as unnecessarily
complicated and too "enterprise", it probably means he hasn't encountered a
problem that this solution was created to address.
As a Unity developer, I love immediate mode GUI for debugging. But I would
never in my right mind attempt to use it for actual in-game GUI. Project I'm
working on right now is not incredibly complicated, it's just a typical mobile
match3 game. But a typical screen here has: (1) background that has to be
scaled over the whole screen, enveloping it while keeping aspect ratio, (2)
frame that has to be scaled to screen without keeping aspect ratio, (3) a
"window" background that has to be scaled somewhat to screen (with margin),
being enveloped by it, (4) interactive elements, that have to be scaled down
from the "safe area" (so that there are no button under the iPhone bevel), (5)
match3 game field that has to be scaled according to physical sizes of the
screen, (6) pixel-perfect elements that have to be chosen according to pixel
size of the screen (1x, 2x and 3x options) and scaled appropriately.
So, no, immediate GUI is definitely not the solution here.
------
ahaferburg
From dear ImGui's mission statement:
_> Designed for developers and content-creators, not the typical end-user!
Some of the weaknesses includes:
> - Doesn't look fancy, doesn't animate.
> - Limited layout features, intricate layouts are typically crafted in
> code._
It may not replace retained mode GUI toolkits, but it can certainly make the
life of devs easier. If all you need is to quickly hack together an internal
tool, or some quick debugging interface, keep ImGui in mind.
------
zzo38computer
I have seen other programs doing stuff like that before, and I have also done
some of that in my own programming (although not with this or any other
library). I did not know what it is called until I read this today. It looks
good to me. Also, you will still need to add some extra variables if you are
doing such things as tab-to-focus, I think.
------
4thaccount
Anyone experienced with ImGUI ever use Rebol and Red's DRAW DSL?
I believe Rebol's GUI support is even easier to use than ImGUI, but of course
it can't be embedded and used in the same way as ImGUI either. I wonder if non
Red projects could possibly hook into Red's system once Red/System gets closer
to C level performance?
------
nh2
> ... people scroll almost constantly in which case the ImGUI wins by a
> landslide
I think this is a misrepresentation of how fast scrolling is usually
implemented.
For fast scrolling, you render the page (which is larger than the viewport =
"what fits on the monitor") ONCE onto a GPU texture, and then all scolling
happens on the GPU side (the CPU just tells the GPU the offsets into that
texture).
Immediate mode has to recreate the texture every frame, instead of once for
multiple frames. So "It might use more CPU" is quite certainly true.
~~~
nh2
You can also do some simple math to arrive at a justification:
Assume a 4k x 2k display at 60 FPS.
Compute the throughput needed (Bytes per second) to draw an RGB framebuffer.
That is: 8M pixels x 3 Bytes x 60 fps = 1.44 GB/s
Note how we haven't done any computation yet to decide what the colours should
be, this is just the CPU effort to do IO to tell the GPU about the new colours
to show.
This would incur significant CPU usage, and your device would get hot quickly.
In contrast, if you let the GPU scroll, you have to send two floats (for X and
Y offset) per frame, and the GPU just does a hardware-parallelised lookup.
This is why we have GPUs, and why scrolling immediate-mode would make your
device burning hot while a GPU does the task with minimal energy usage.
~~~
hevi_jos
Following the same logic that you use... have you calculated how much memory
you need to store a single treeview, or a single scrolling document of just
several pages?
Two floats? You are using memory that is CPU memory, a big chunk of memory.
That does not exist in the GPU. In the GPU the memory is distributed so it can
be used in parallel.
Immediate GUIs exist because of GPUs, because with GPUs you can draw frames
fast. If you look at the ImGui code, it uses GPU drawing for everything. In
fact it uses only two drawing functions, copying small rectangles onto the
screen.
It is drawing a single big chunk of memory that is extremely slow, and you
need to do that before you do any offsetting.
And if you work with variable data, like a treeview, you have to allocate a
finite amount of memory in the GPU buffers.
------
627467
I'm having a hard time understanding what ImGUI is (and what the opposite,
RmGUI, is)... could anyone help me with an ELI5? It sounds like ImGUI is
reactive while RmGUI is not?
~~~
revvx
Games (normally) re-render the whole scene every frame.
ImGUI exposes that to their API users: you have to re-render and check for
clicks on every frame. The code looks like React (but not as optimized, it
re-renders every frame!), and normally you have to keep state yourself.
Code example: [1].
Retained Mode is closer to the DOM, Cocoa or WPF: you create objects and
there's an abstraction between the API and the renderer: they get re-rendered
every frame for you. Components normally have events and state by themselves.
Sometimes there's a visual editor too.
The main difference is the API. One is lower level than the other. In
practice, the APIs aren't that different, except when it comes to event
handling.
[1] - [https://docs.unity3d.com/Manual/gui-
Basics.html](https://docs.unity3d.com/Manual/gui-Basics.html)
~~~
floatboth
> Games (normally) re-render the whole scene every frame
They don't re-upload the whole scene to the GPU every frame, they don't
recreate objects all the time. Immediate mode was only used in, like, the GL
1.x times.
~~~
revvx
But I never said they did?
------
dang
Related recent thread:
[https://news.ycombinator.com/item?id=19744513](https://news.ycombinator.com/item?id=19744513)
------
dwrodri
Great find! I haven't done much work on GUIs myself, but I would love to read
more about alternative design philosophies in human-centered computing and
their downfalls.
------
cotelletta
React hooks is the closest you'll get in practice, once you factor in async,
layout measurements and other practical things. But useState is pretty much
just imgui with built in getters/setters rather than external state.
------
polytronic
The single most important factor when it comes to UI is text rendering.
Immediate mode UIs are based on text rasterization, i.e. polygon creation for
each character of the text, while retained mode text usually features
texture atlases containing all characters in upper and lower case. This of
course introduces a limitation on the number of fonts available to the
developer, font sizes, etc. My 3D engine is using an immediate mode UI for the
editor and tools while it allows for creation of retained mode UI for in-game
UI components by automatically generating texture atlases for the selected
font types
------
etaioinshrdlu
Didn't Firefox put forward a big plan to basically implement the browser as an
immediate mode UI designed to redraw everything on every frame?
~~~
floatboth
WebRender is not just a "plan" but a real thing (I'm using it right now), and
it's fully retained.
Immediate doesn't mean just redrawing everything on every frame, immediate
means what's described in the article — not keeping state. WR _aggressively_
relies on kept state to optimize rendering of each frame as much as possible.
~~~
etaioinshrdlu
Very interesting, thank you.
------
Vanit
I hadn't heard of ImGUI, but after reading the definition realised that's
exactly how you made custom menus in RM2k/3 :)
------
abledon
Say I want to make a program, X, that draws a 50x50px square over another
program, Y, at (200,100) and only program Y. Do I need low-level stuff for
this, or can ImGUI be used here? Or is it possible with electronjs etc.? Also,
would program Y be able to detect, with administrator privileges, that another
program was targeting its pixel space and drawing over it?
------
grifball
> programmers find it more performant
> plusses: speed
> minuses: uses more CPU
what?
------
qwerty456127
Are any implementations of this approach available for high-level languages?
------
pjmlp
ImGUI is the past of GUIs.
That is how we used to do it on 8 bit and 16 bit platforms, before frameworks
like Turbo Vision, GEM and Workbench came into the scene.
------
rambojazz
Could somebody please ELI5 this? How is this different from traditional UI
approaches and why would I want to use this?
------
dzonga
This feels more like drawing UIs using state charts in terms of
expressiveness.
------
ianrathbone
Isn't the future of GUIs to have no GUI?
------
781
It's important to point out why games use immediate mode GUIs:
1. The GUI needs to be overlaid on the game image (OpenGL/DirectX). This is
difficult with traditional GUIs like QT.
2. The GUI needs to be updated in sync with the game, again, it's difficult
to integrate traditional GUIs event loops into the game loop, especially with
stuff like double/triple buffering.
3. The GUI needs to be as fast as possible, games are severely CPU bound.
A retained mode GUI is typically easier to use, convenience is not why people
use immediate mode GUIs.
It's worth pointing out that the immediate/retained split doesn't apply only
to the GUI - there are retained mode graphical APIs - DirectX used to have
one. They are only used in low-demand games, they sacrifice a lot of speed for
the convenience of using a retained mode.
~~~
invokestatic
I've written real-time game UIs before so I think I have some relevant
experience here.
1\. It is very possible to write a retained-mode GUI in a graphics API like
DirectX or OpenGL. In fact, a retained GUI would typically wipe immediate GUIs
in terms of performance in this context. In immediate mode, the GUI's vertex
buffers need to be completely reconstructed from scratch every single frame,
which is _slow_ , CPU bound, and cannot be (easily) parallelized. It's like
reconstructing the game world every frame -- that would be ludicrous for any
non-trivial game.
2\. I don't think there would be that much of a difference between the two UI
models, since data updates can be dispatched from the event loop. It would be
faster, too, because only UI components that need updating could be redrawn.
This is far faster than updating the entire UI every single frame.
3\. As mentioned earlier, immediate mode GUIs are going to be a lot slower
than retained mode, when implemented properly. Immediate mode GUIs put most of
the work on the CPU instead of offloading most of the work to the GPU like in
the retained model.
I think developers that are using immediate mode GUIs are doing so because of
their ease of use. I think retained mode is typically harder for a game
developer to conceptualize because immediate mode is conceptually similar to a
game loop. Also, I don't know of any free & open source retained mode GUIs for
DirectX and OpenGL and the like.
Also, DirectX at least (and probably OpenGL) encourages a retained-like model
for general rendering. The only way to get decent performance is to re-use
vertex buffers between frames and only update them when something changes.
~~~
tom_
I've written real-time game UIs too. I think you underestimate just how
ludicrously fast CPUs and GPUs are, and overestimate the complexity of your
average GUI. What does your average screen's-worth of GUI consist of, after
all? How many widgets are there? I double dare you to tell me that a modern
computer or games console can't handle 500 widgets per frame. And I now triple
dare you to tell me that your UI designer has put that many damn widgets on
one stupid screen in the first place.
(Every UI I worked on actually did redraw everything every frame anyway. It's
really not a big deal. Your average GPU can draw a monstrous amount of stuff,
something quite ridiculous, and any sensible game UI that's actually usable
will struggle to get anywhere near that limit.)
~~~
invokestatic
I'm sure many games can get away very well with an immediate mode GUI. I think
the question is not _can_ you, but rather _should_ you. My last project used a
custom immediate-mode GUI. At the absolute pinnacle of optimization, it was
pushing 2,000+ FPS on my machine with something like 3-4k vertices, with heavy
texture mapping and anti-aliasing. But the problem was that even with peak
optimization, the CPU was spending 15-20% of its time every frame recreating
the UI's vertex buffer. Now imagine if we had done a retained-mode GUI
instead. That 15-20% overhead would be reduced to near 0% on a typical frame.
For nearly any type of game, that kind of savings is really significant. Think
of how many more vertices your artists can add, or cool gameplay elements you
can add that you didn't have the CPU time available before, and how much
better it will run on lower-end hardware.
Why settle for "good-enough" performance?
~~~
revvx
> The CPU was spending 15-20% of its time every frame recreating the UI's
> vertex buffer.
Not saying it is easy, but it's possible to optimize and cache vertex buffers
by using something similar to React's VDOM.
~~~
smallnamespace
Doesn't your cache just become a limited retained mode with a somewhat hacky,
opaque API?
~~~
DougBTX
Basically yes. If an immediate mode API is much easier to use, and a retained
mode underlying implementation has much better performance, then putting a
React-style VDOM layer in-between could get the best of both worlds, depending
on how well the middle layer is implemented.
------
layoutIfNeeded
So basically go back to WM_PAINT
~~~
pjmlp
Not really, it is a go back to mode 13h.
------
chaboud
No. ImGUI could not be the future of GUIs.
GUIs are multi-process, multi-system, multi-clock, multi-network entities, or
at least they have the potential to be. Immediate Mode GUIs are almost non-
scalable by design.
Imagine a multi-system asynchronous AR collaboration environment. Now imagine
that as an Immediate Mode GUI. If we had enough horsepower to do that, we'd be
doing something far better with it.
Jeff Dean on Large-Scale Deep Learning at Google - charlieegan3
http://highscalability.com/blog/2016/3/16/jeff-dean-on-large-scale-deep-learning-at-google.html
======
hartator
It seems to work fine for me, a link to the actual talk on YouTube:
[https://www.youtube.com/watch?v=QSaZGT4-6EY](https://www.youtube.com/watch?v=QSaZGT4-6EY)
Jeff Dean - Chuck Norris for us nerds - fact as a bonus: "The rate at which
Jeff Dean produces code jumped by a factor of 40 in late 2000 when he upgraded
his keyboard to USB2.0."
------
return0
Tangentially, watching the pace of papers coming out in machine learning is
insane. It's so fast, people may literally cite powerpoint slides when the
paper doesn't exist yet. The culture of openness seems to have fostered this
insane pace. Contrasting that with the reclusive culture of life sciences
explains why there is slow progress there.
~~~
hackuser
If someone with technical expertise wanted to keep up on this field, but it
wasn't their profession - i.e., they don't need to know every detail and don't
have time to read a lot - what would be a good source?
~~~
p1esk
Follow Yann Lecun's posts on Facebook.
------
milesward
If you like this talk, come see him talk about what's even beyond that at GCP
Next:
[https://cloudplatformonline.com/NEXT2016.html](https://cloudplatformonline.com/NEXT2016.html)
Disclaimer: I will be there freaking out because I work at Google on Cloud and
Jeff Dean is rad.
------
YeGoblynQueenne
>> If you’re not considering how to use deep neural nets to solve your data
>> understanding problems, you almost certainly should be.
This line is taken directly from the talk.
And this is exactly why Google's hype of their tech is getting dangerous for
everyone else, who is not Google. Because they advocate, nay, they preach,
that everyone should abandon what they're doing and do what Google tells them
works. And, oh, look, we just released those nice, free tools you can use to
do it like we do!
Which is insane. Google is a corporate entity. It has financial interests. The
purpose of its existence is to sell you its stuff, it doesn't give a dime if
you'll solve your problems or not.
This piece of advice is like Bayer, back in the day, selling its Aspirin as
the cure of all ills: "If you're not considering how to take Aspirin to solve
your health problems, you almost certainly should be".
~~~
dekhn
Although Google is a corp and has financial interests, I think it's in
Google's interest to share these ideas in workable form with the world. It can
(and I hope it will) contribute a lot to improving a number of things that are
wrong with the world.
When I was an academic scientist in the mid 2000s, I ended up with more data
than I could deal with, and none of the computing systems in academia at the
time dealt well with that (they were tuned for HPC/supercomputers). The
bigtable, mapreduce, and GFS papers were huge to me, because they provided a
nicer framework for data processing. Although Google made those tools for
Search and Ads (and profited greatly from them) they also published them, and
Doug Cutting and others incorporated them into Hadoop. A similar thing is
happening now, but Google got better at releasing their codes as open source,
which reduces the time between publication of a good idea, and replication of
that work by others outside the corp.
(eventually, I went to google to get direct access to its infrastructure;
built Exacycle, gave away an enormous amount of free computing time that cost
Google rather than profiting it, the leadership _loved_ it even though it cost
money, and I even managed to get Googler to apply machine learning to academic
problems I cared about).
So I don't think Google solely acts in its own short term financial interests.
Also, aspirin has turned out to be amazing at solving a wide range of health
problems, so I think bayer was probably right (if not for the right reasons)
on that one.
~~~
YeGoblynQueenne
>> So I don't think Google solely acts in its own short term financial
interests.
I think what your experience shows is that on the one hand individuals within
Google (or any big corp) can and do align their own personal interest with
that of the corp and on the other hand that the corp can benefit the community
as long as it is making profit and serving its own purposes. Nothing
surprising there.
As to releasing its tools, here's my Thought for the Day: There's no such
thing as a free lunch and the only people who pretend there is are the ones
who want to steal your lunch money. Google releases its tools when it is in
the interest of Google to do so, not when it's in the interest of anyone else.
Yes, they're doing better now than in the past in open-sourcing stuff and I
can't know what's on their mind. But I can tell that it doesn't hurt them to
get people adopting their tech even as Google itself develops it further and
further to something that can only be used by a corp with Google's resources.
In short, I'm pretty sure that their friendly offer of, frex, TensorFlow is
just some trick to get people roped in to their technology, in the same way
that other corps have tried to do before- except that they also made you pay
for the privilege.
~~~
dekhn
Did you really say that making TensorFlow open source is a trick to get people
roped into Google technology?
That doesn't make any sense to me.
Another big point I think you missed is those individuals within Google
influence the decisions about what gets open sourced. We have an entire team
that facilitates taking Google-written code and opensourcing it.
~~~
YeGoblynQueenne
OK, with the hindsight of a good night's sleep I admit that the bit about
giving away TensorFlow does sound a bit tinfoil-hats on.
Let me rephrase that then: I can't possibly hope to know why Google is giving
away free stuff. I can certainly know that they don't do it out of the
kindness of their hearts though.
That said, I am indeed very concerned that Google is trying to shape, not only
the market, but the science also, to suit its own interests. That could be
really bad for everyone, including Google; if research stagnates, they too
will find themselves unable to deliver on their big promises about ever
speeding progress.
------
return0
He gave a similar talk at stanford a few days later:
[https://www.youtube.com/watch?v=T7YkPWpwFD4](https://www.youtube.com/watch?v=T7YkPWpwFD4)
------
yeukhon
Nice. Forbidden. Did we manage to crash the site? highscalability.com is
supposed to be a pretty high-volume site.
~~~
toddh
Sorry about this. It means Squarespace has black listed your IP for some
reason. Unfortunately I can't do anything about it. If you can try from
another address it will probably work.
~~~
yeukhon
Wow :-) I am working from corp office. But thanks!
------
goc
I am very interested in AI that can teach itself (sounds too great). Where can
I learn about such AI (related concepts and the whole 9 yards) to start
reading papers in the field? I am just looking for comprehensive sources
(preferably textbooks).
~~~
knn
AI by Russell and Norvig. Machine learning by Murphy, Elements of Statistical
Learning by Hastie et al. Just a few good ones out of many!
~~~
gnahckire
AI by Russell and Norvig is one of my favorite textbooks of all time.
------
sounds
I wish I had more than one upvote for this article. Read the article. If you
have the time, just watch the video.
------
unexistance
you need to understand the data before it can be made to good 'use'
[https://news.ycombinator.com/item?id=11272473](https://news.ycombinator.com/item?id=11272473)
------
giardini
From the article
_"...it seems like an excellent time to gloss Jeff’s talk..."_
"gloss" a talk? WTF?
~~~
npalli
To gloss is to annotate some text (or talk)[1], the word glossary comes from
that. That meaning is overshadowed by the more modern association with
shininess but the annotation meaning seems appropriate here.
[1]
[https://en.wikipedia.org/wiki/Gloss_(annotation)](https://en.wikipedia.org/wiki/Gloss_\(annotation\))
Thrift vs. Protocol Buffers - peterb
http://floatingsun.net/articles/thrift-vs-protocol-buffers/
======
markkanof
For me the most important point in this article is that Thrift includes an RPC
implementation while Protocol Buffers does not.
This was very helpful while writing an iPhone application that records audio
and sends it to a server for voice recognition processing. Thrift allowed me
to setup the iPhone client and the Windows/C# server in only a few lines of
code. Protocol Buffers required that I establish a socket connection, send the
audio data across, and then reassemble the data on the server side. Not the
world's most difficult problem, but being new to Objective-C at the time it was
a bit tricky. I wish I had known about Thrift when I was building my initial
implementation based on Protocol Buffers.
~~~
bretthoerner
I'm (honestly) curious why a custom serialization format and RPC was a better
fit than HTTP for this problem.
What was the payload like that made this a better fit?
~~~
atamyrat
I don't think network overhead is the problem here, it is about ease of
development. Thrift generates working server code and you just have to
implement RPC functions. On PB, you need a stack to handle connections, parse
messages, dispatch, etc.
~~~
lobster_johnson
Note that Thrift supports HTTP as a transport.
------
jbert
When I first read about protocol buffers, I was surprised at the similarity to
ASN.1/BER: <http://en.wikipedia.org/wiki/Basic_Encoding_Rules>
Basically, they're both nested type/length/value data formats with primitives
for numerics, strings, etc with an human readable description language and
toolsets to auto-generate language types + (de)serialisers etc.
Given that the ASN.1 toolset exists (even if a little dusty, SNMP and X.509
keep it alive) I don't see why google bothered to re-implement.
The FAQ: <http://code.google.com/apis/protocolbuffers/docs/faq.html> mentions
ASN.1 but it's main argument (being tied to a particular form of RPC) doesn't
apply to ASN.1.
~~~
wladimir
Indeed, it has all been done before with ASN.1. ASN.1 was invented for the
exact same reason: data-efficient, fast communication. Currently it is mainly
in use by telecom.
I've also wondered why there have been so many reinventions of the wheel for
what is basically ASN.1, and did some research:
The main reason I found was that, according to developers, ASN.1 was too
complex to reimplement correctly (it has a big legacy), and the existing
toolsets didn't have the right license, didn't support the right languages,
etc.
Also, they didn't like the ASN description syntax.
------
sambeau
Although not mentioned in the article Go now has support for Protocol Buffers:
<http://code.google.com/p/goprotobuf/>
~~~
uriel
Go also has gobs which according to Rob Pike improve on protocol buffers in
several ways: <http://blog.golang.org/2011/03/gobs-of-data.html>
And while gobs are (by design) Go-centric, there are already implementations
in C for example: <http://code.google.com/p/libgob/>
And Go also has the rpc package, which uses gobs (but can also use json or
other encodings): <http://golang.org/pkg/rpc/>
------
ankrgyl
The serialization/deserialization times are _dramatically_ different for
Python. Thrift has an accelerated binary serializer written in C with Python
bindings, while Protobuf's is pure Python. While there exist third party C++
wrappers for Protobuf in Python as well, they are buggy (segfaults).
~~~
TillE
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
python setup.py build_ext
It's "EXPERIMENTAL", but it seems to work well.
------
andymoe
I personally really like msgpack and msgpack-rpc. There are a ton of well
supported implementations for various languages and there are some speed and
other advantages over thrift and Protocol Buffers. The core implementation is
written in C++.
<http://msgpack.org>
~~~
ankrgyl
The Python RPC libraries seem to rely on Twisted. Does it support generic code
generation to fill in your own RPC implementation?
------
al_james
I am very cautious of Thrift's custom network stack.
We have a java backend service that gets thousands of requests per second per
node and where latency is of upmost importance. We tested thrift for
communication between the backend service and the front end web code, however
we saw an increase in failed requests and latency compared to a server written
using netty.
For us, using netty and Protocol Buffers works much better, but maybe we were
using Thrift wrongly.
------
kaib
Some of the protobuf implementations are a bit more official than others. We
are using the Go protobuf plugin at Tinkercad and it's maintained by Google
for internal use. Given the importance of protobufs in communicating across
Google is pretty safe to assume that the implementation is solid (disclaimer:
I used to work at Google and know the folks maintaining the Go plugin).
That said, we are starting to miss a Javascript protobuf implementation. There
is a lot of binary data to serialize across the client/server boundary and not
all of it requires a custom format. It would be nice to just drop in server
side protobufs and have them work seamlessly on the client.
I do understand the criticism about the missing RPC library but I've always
found that you need to write your own anyway.
------
vegai
I like how this thing looks <http://msgpack.org/> more than each of the
aforementioned. Perhaps I'm missing something, but Thrift and protobufs both
seem very lacking in comparison
------
tadruj
Protocol Buffers are versatile, allowing nesting and includes, but the
performance we got on a Java server and PHP/iOS clients was pretty poor, and
the PHP libraries do not support the whole specification.
So we switched to Thrift and whole FB stack with HipHop and Scribe and we're
thrilled. Documentation is a problem just at the beginning when setting up the
stack. Everything else later is self explanatory.
------
6ren
> But thrift and protobuf are by far the most popular
[citation needed] _(seriously, I'm interested)_
XML seems far more popular (in the sense of market-share/adoption, not in the
sense of being liked).
~~~
ankrgyl
That's a good point, but I think by "most popular" the author was referring to
popularity in the hacker/startup community. One could make a similar argument
about operating systems
([http://en.wikipedia.org/wiki/File:Operating_system_usage_sha...](http://en.wikipedia.org/wiki/File:Operating_system_usage_share.svg))
or web browsers (<http://en.wikipedia.org/wiki/Usage_share_of_web_browsers>),
but I don't think anyone on hn would call Windows XP or Internet Explorer
"popular"
~~~
6ren
That makes sense, since most startups are technology users, rather than
technology sellers. e.g. for a startup selling tools/middleware, technology
market-share is customer market-share.
Thinking further, startups might be early-adopters of new technology, that
will eventually become mainstream. But it doesn't seem to be a reliable
predictor, since many (most) new technologies don't reach critical mass before
being replaced by the next new thing. eg. ASN.1 binary serialization format.
<http://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One>
Elon Musk inviting John Carmack to work on rockets - iffyuva
https://twitter.com/elonmusk/status/588167701069230081
======
avmich
Wonder if JC will decide to make a comeback? I assume AA was suspended for
lack of funds to expand the development program - but, say, fuel tests were
wonderful.
What Mr. Mueller would say? :)
------
chrisbennet
I get "Sorry, that page doesn’t exist!" Can someone share what the tweet was?
Thx.
Creating a Lightsaber with Polymer - Everlag
https://developers.google.com/web/showcase/case-study/lightsaber
======
whoopdedo
I can't be the only person who's had just about enough of these cross-
promotions. We get it. There's a new movie. I almost feel like not watching it
out of protest for how over saturated the advertising has been.
~~~
isolate
It's not a movie, it's a marketing event.
------
krebby
Fun app, but it's like they chose all the buzzwordy tech from two years ago
that frontend devs no longer use. Bower? Jade? Gulp? CoffeeScript? Polymer?
And for what? The mobile page is just a single button with an accelerometer
listener and websockets / webrtc glue, it shouldn't need any of those. The
desktop page is mostly Three.js for loading and rendering the textures, and
also the communication with the phone. What advantage does Polymer have over
just writing normal ThreeJS and some vanilla js?
~~~
th0br0
What do devs use instead then? React/(ES6|TS)/jspm/System.js?
~~~
krebby
Not necessarily. It's just funny that they're bragging about choosing these
particular technologies when they've all fallen out of favor in the last year
and a half or so. Odd to see in a tech demo writeup like this.
My point (downvotes aside) is that Polymer (or any UI library -- react,
angular, whatever) is overkill for this type of application. Without digging
too deep into their source, it seems like it could've been done easier and
cleaner in vanilla JS and ThreeJS.
------
osxrand
How does this site stop the tap on the top of the screen / title bar of safari
from scrolling up to the top? Rather irritating.
Ask HN: Is free college an existential threat to ISA-funded schools? - tempsy
Thinking from the perspective of an ISA funded school like Lambda, would free college effectively destroy, or at least severely limit the growth potential, of those types of schools?
======
LeoSolaris
Theoretically, free colleges would eliminate 90% of the colleges. Only the
high end, Ivy League, and exclusive colleges would be able to survive as
private paid colleges. There wouldn't be much of a point to cost saving or
tuition sharing colleges if state run colleges were free.
------
clintonb
It depends on how free college is implemented. If it’s a voucher system, where
students get a certain amount to spend, the ISA might go away if the voucher
covers the full cost of education. Otherwise, an ISA could still be used to
cover the excess funds.
If the money goes directly to the institution, and the price exposed to
students is always $0, the ISA goes away completely.
Regardless of how we implement free college, there will most likely not be a
restriction on the existence of programs that are not free. These programs may
have a drastically smaller market, but a market of some sort may still exist.
This is primarily due to [my guesstimation] that we simply don’t have enough
seats and beds to cover every high school student going to college.
Sweden closer to being the first cashless society with negative interest rates - ogezi
http://www.businessinsider.com/sweden-cashless-society-negative-interest-rates-2015-10
======
draw_down
> Credit Suisse says the rule of thumb in Scandinavia is: "If you have to pay
> in cash, something is wrong."
I hope people don't start talking like this in the US. Something about that
statement is very disturbing.
~~~
geon
How? You are not forced to pay electronically. You just have the option of
cash.
The only time I use cash would be if I buy something from the swedish
equivalent of craigslist, or in a farmers market. And even there it would be
common to use Swish/Square.
~~~
x1798DE
I read that statement more as, "If you refuse to pay electronically" than "if
you have no electronic means of payment."
I know it's contrary to the literal meaning, but it feels like the unstated
assumption is that you would only pay in cash if you _must_ , not out of a
general preference for privacy.
~~~
geon
> I know it's contrary to the literal meaning
Also contrary to reality (source: me, being Swedish), and logic; A retailer
will hardly refuse to accept cash (although buses do refuse cash to avoid the
risk of robbery), while you might avoid buying if you need to handle cash.
~~~
x1798DE
That phrase is literally linked to an article called "Sweden: 'We don't accept
cash'" ([https://www.credit-suisse.com/us/en/news-and-
expertise/econo...](https://www.credit-suisse.com/us/en/news-and-
expertise/economy/articles/news-and-expertise/2015/03/en/sweden-we-dont-
accept-cash.html)). It seems like what you are saying is that they are just
making some sort of pun, but you can see why this would seem like it's about
people being _suspicious_ of cash.
~~~
geon
At least 60 % of the "facts" in that article are made up.
------
gizi
It will not work. If there is a need for fiat cash in the economy, they will
start using euros or dollars instead of Swedish crowns (SEK).
Injecting foreign cash back into the cashless SEK economy amounts to selling
these foreign banknotes at any, available exchange point. They cannot
reasonably ban these transactions, because foreigners bring these foreign bank
notes along with them when they visit Sweden.
If they insist on addressing that issue anyway, they will end up introducing
Venezuela-style, self-defeating, absurdistan regulations that will make the
situation only worse. They will be up against a growing number of people
trading against them, in order to defeat such regulations, and to make money
in the process of doing so. As always, it will be full of opportunities to
thoroughly bleed them while making a killing.
So, a Swedish economy without SEK bank notes is possible, but not necessarily
one without cash. If that is the situation that materializes, they will have
made the situation worse for them (foreign cash) instead of better (Swedish
cash). You cannot outsmart economic fundamentals, because there will always be
lots of money in punishing such attempts.
------
HappyFunGuy
It is a national security issue to have cash available for use during a time
of war, or general internet failure, or natural disaster. Tying your national
security to the health of the internet is unwise.
------
crimsonalucard
This makes more sense economically. Physical products in general degrade in
value over time. Cash by itself does not degrade in value over time. Using
cash to represent products is a sort of mismatch. If the products' value
degrades so should the value of the instrument representing said product.
I don't know why they use negative savings interest rates when the government
can provide the same service through QE or lowering loan interest rates.
~~~
cyrus9020
Cash can "degrade", it's called inflation.
~~~
crimsonalucard
That's different from value degradation.
Additionally if you read further you'd see that I addressed inflation.
Interest for loans and QE are all methods the government uses to inflate cash.
------
wodenokoto
Well, they can (hopefully) still buy foreign currency and Gold coins, such as
Krugerrands.
~~~
digitalengineer
Executive Order 6102: "forbidding the Hoarding of gold coin, gold bullion, and
gold certificates within the continental United States".
------
geon
Extremely sensationalized headline. The only source for the microwave thing is
hearsay from some policeman.
------
yAnonymous
So in a cashless society, what happens when you cancel your bank account?
------
f3llowtraveler
Bitcoin.
~~~
maxander
The suggestion that Bitcoin would act as a replacement for _government bonds_
in any foreseeable future would reduce most economists to tears of laughter.
The interest rate tweaks being discussed here are negligible compared to the
market rate fluctuations of Bitcoin on a typical week.
Meteor adding first-party support for GraphQL - sergiotapia
https://github.com/meteor/data/blob/design-overview/design/high-level-reactivity.md
======
sergiotapia
MDG is currently discussing their initial approach integrating GraphQL support
in Meteor.
Once this lands the sky is the limit! Everybody is really excited.
~~~
djmashko2
I think the most exciting part of this project for the HN crowd is that it's
not Meteor-specific. We really want this project to be used outside of the
integrated Meteor platform, in all kinds of production applications.
So hopefully it's more like "Meteor is building a reactive GraphQL system
anyone can adopt" rather than "adding GraphQL support to Meteor".
My experience with using cp to copy 432 million files (39 TB) - nazri1
http://lists.gnu.org/archive/html/coreutils/2014-08/msg00012.html
======
fintler
I wrote a little copy program at my last job to copy files in a reasonable
time frame on 5PB to 55PB filesystems.
[https://github.com/hpc/dcp](https://github.com/hpc/dcp)
We got an IEEE paper out of it:
[http://conferences.computer.org/sc/2012/papers/1000a015.pdf](http://conferences.computer.org/sc/2012/papers/1000a015.pdf)
A few people are continuing the concept to other tools -- that should be
available at [http://fileutils.io/](http://fileutils.io/) relatively soon.
We also had another tool written on top of
[https://github.com/hpc/libcircle](https://github.com/hpc/libcircle) that
would gather metadata on a few hundred-million files in a few hours (we had to
limit the speed so it wouldn't take down the filesystem). For a slimmed down
version of that tool, take a look at
[https://github.com/hpc/libdftw](https://github.com/hpc/libdftw)
~~~
laymil
And it's interesting and useful for scientific computing where you already
have an MPI environment and distributed/parallel filesystems. However, it's
not really applicable to this workload, as the paper itself says.
_There is a provision in most file systems to use links (symlinks, hardlinks,
etc.). Links can cause cycles in the file tree, which would result in a
traversal algorithm going into an infinite loop. To prevent this from
happening, we ignore links in the file tree during traversal. We note that the
algorithms we propose in the paper will duplicate effort proportional to the
number of hardlinks. However, in real world production systems, such as in
LANL (and others), for simplicity, the parallel filesystems are generally not
POSIX compliant, that is, they do not use hard links, inodes, and symlinks.
So, our assumption holds._
The reason this cp took such large amounts of time was the desire to preserve
hardlinks and the resize of the hashtable used to track the device and inode
of the source and destination files.
~~~
encoderer
Sure, but if you read that article you walk away with a sense of _that's a lot
of files to copy_. And the GP built a tool for jobs 2-3 orders of magnitude
larger?! Clearly there are tradeoffs forced on you at that size...
------
pedrocr
How about this for a better cp strategy to deal with hardlinks:
1\. Calculate the hash of /sourcedir/some/path/to/file
2\. Copy the file to /tempdir/$hash if it doesn't exist yet
3\. Hard-link /destdir/some/path/to/file to /tempdir/$hash
4\. Repeat until you run out of source files
5\. Recursively delete /tempdir/
This should give you a faithful copy with all the hard-links with constant RAM
at the cost of CPU to run all the hashing. If you're smart about doing steps 1
and 2 together it shouldn't require any additional I/O (ignoring the extra
file metadata).
Edit: actually this won't recreate the same hardlink structure, it will
deduplicate any identical files, which may not be what you want. Replacing the
hashing with looking up the inode with stat() would actually do the right
thing. And that would basically be an on-disk implementation of the hash table
cp is setting up in memory.
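A rough sketch of that stat()-based variant in C, handling a single regular
file (hypothetical names throughout: copy_file() stands in for an actual data
copy, the staging directory is made up, and directories, symlinks, metadata
and error handling are all glossed over):

    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define STAGING "/destdir/.inode-staging"  /* must be on the destination fs */

    extern int copy_file(const char *src, const char *dst);     /* not shown */

    int copy_one(const char *src, const char *dst)
    {
        struct stat st;
        char key[PATH_MAX];

        if (lstat(src, &st) != 0)
            return -1;

        /* one staging file per (device, inode) pair of the source */
        snprintf(key, sizeof key, STAGING "/%llu-%llu",
                 (unsigned long long) st.st_dev,
                 (unsigned long long) st.st_ino);

        if (access(key, F_OK) != 0 && copy_file(src, key) != 0)
            return -1;                  /* first time we meet this inode */

        /* every path that pointed at the source inode becomes a hard link
           to the same staging file, so the hard-link structure is kept */
        return link(key, dst);
    }
    /* after the tree walk, unlinking everything under STAGING leaves the
       data reachable through the links created under the destination */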
~~~
derefr
If you cp your data onto a Plan9 machine, what results is pretty much exactly
the process you've outlined.
Plan9's default filesystem is made up of two parts: Fossil, and Venti.
\- Fossil is a content-addressable on-disk object store. Picture a disk
"formatted as" an S3 bucket, where the keys are strictly the SHAsums of the
values.
\- Venti is a persistent graph database that holds what would today be called
"inode metadata." It presents itself as a regular hierarchical filesystem. The
"content" property of an inode simply holds a symbolic path, usually to an
object in a mounted Fossil "bucket."
When you write to Venti, it writes the object to its configured Fossil bucket,
then creates an inode pointing to that key in that bucket. If the key already
existed in Fossil, though, Fossil just returns the write as successful
immediately, and Venti gets on with creating the inode.
Honestly, I'm terribly confused why all filesystems haven't been broken into
these two easily-separable layers. (Microsoft attempted this with WinFS, but
mysteriously failed.) Is it just inertia? Why are we still creating new
filesystems (e.g. btrfs) that don't follow this design?
~~~
pedrocr
_> Honestly, I'm terribly confused why all filesystems haven't been broken
into these two easily-separable layers. Is it just inertia?_
The penalty for doing content addressed filesystems is of course the CPU
usage. btrfs probably has most of the benefits without the CPU cost with its
copy-on-write semantics.
Note that what you describe (and my initial process) is a different semantic
than hard-links. What you get is shared storage but if you write to one of the
files only that one gets changed. Whereas with hardlinks both files change.
~~~
derefr
In effect, hard links (of mutable files) are a declaration that certain files
have the same "identity." You can't get this with plain Venti-on-Fossil, but
it's a problem with Fossil (objects are immutable), not with Venti.
Venti-on-Venti-on-Fossil would work, though, since Venti just creates
imaginary files that inherit their IO semantics from their underlying store,
and this should apply recursively:
1\. create two nodes A and B in Venti[1] that refer to one node C in Venti[2],
which refers to object[x] with key x in Fossil.
2\. Append to A in Venti[1], causing a write to C in Venti[2], causing a write
to object[x] Fossil, creating object[y] with key y.
3\. Fossil returns y to Venti[2]; Venti[2] updates C to point to object[y] and
returns C to Venti[1]; Venti[1] sees that C is unchanged and does nothing.
Now A and B both effectively point to object[y].
(Note that you don't actually have to have two Venti servers for this! There's
nothing stopping you from having Venti nodes that refer to other Venti nodes
within the same projected filesystem--but since you're exposing these nodes to
the user, your get the "dangers" of symbolic links, where e.g. moving them
breaks the things that point to them. For IO operations they have the
semantics of hard links, though, instead of needing to be special-cased by
filesystem-operating syscalls.)
~~~
ori_b
You seem to be confusing venti and fossil.
~~~
theworst
Can you explain further? I am not a plan9 expert, by any means, but I'm stuck
at where GP made the confusion. Thanks!
~~~
yungchin
He just swapped the names I think - Venti is the block store, Fossil is the
file system layer.
------
rwg
_Disassembling data structures nicely can take much more time than just
tearing them down brutally when the process exits._
A wonderful trend I've noticed in Free/Open Source software lately is proudly
claiming that a program is "Valgrind clean." It's a decent indication that the
program won't doing anything silly with memory during normal use, like leak
it. (There's also a notable upswing in the number of projects using static
analyzers on their code and fixing legitimate problems that turn up, which is
great, too!)
While you can certainly just let the OS reclaim all of your process's
allocated memory at exit time, you're technically (though intentionally)
leaking memory. When it becomes too hard to separate the intentional leaks
from the unintentional leaks, I'd wager most programmers will just stop
looking at the Valgrind reports. (I suppose you could wrap free() calls in
"#ifdef DEBUG ... #endif" blocks and only run Valgrind on debug builds, but
that seems ugly.)
A more elegant solution is to use an arena/region/zone allocator and place
potentially large data structures (like cp's hard link/inode table) entirely
in their own arenas. When the time comes to destroy one of these data
structures, you can destroy its arena with a single function call instead of
walking the data structure and free()ing it piece by piece.
Unfortunately, like a lot of useful plumbing, there isn't a standard API for
arena allocators, so actually doing this in a cross-platform way is painful:
• Windows lets you create multiple heaps and allocate/free memory in them
(HeapCreate(), HeapDestroy(), HeapAlloc(), HeapFree(), etc.).
• OS X and iOS come with a zone allocator (malloc_create_zone(),
malloc_destroy_zone(), malloc_zone_malloc(), malloc_zone_free(), etc.).
• glibc doesn't have a user-facing way to create/destroy arenas (though it
uses arenas internally), so you're stuck using a third-party allocator on
Linux to get arena support.
• IRIX used to come with an arena allocator (acreate(), adelete(), amalloc(),
afree(), etc.), so if you're still developing on an SGI Octane because you
can't get enough of that sexy terminal font, you're good to go.
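For illustration, a minimal bump-pointer arena in C -- just a sketch of the
idea, not production code (a real arena would grow in chunks, handle per-type
alignment, and so on):

    #include <stdlib.h>

    struct arena {
        char  *base;
        size_t used, cap;
    };

    struct arena *arena_create(size_t cap)
    {
        struct arena *a = malloc(sizeof *a);
        if (!a)
            return NULL;
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = a->base ? cap : 0;
        return a;
    }

    void *arena_alloc(struct arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t) 15;    /* keep allocations 16-byte aligned */
        if (a->used + n > a->cap)
            return NULL;                /* a real arena would add a new chunk */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* The payoff: tearing down a structure with millions of nodes is two
       free() calls, not one free() per node. */
    void arena_destroy(struct arena *a)
    {
        free(a->base);
        free(a);
    }

Put a large structure like cp's hard link/inode table in its own arena and the
"nice" teardown becomes as cheap as the "brutal" one, while leak checkers still
see everything released.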
~~~
_delirium
Adding some kind of arena-allocation library to both the build & runtime
dependencies _solely_ to keep valgrind happy, with no actual improvement in
functionality or performance, doesn't seem like a great tradeoff on the
software engineering front. I'd rather see work on improving the static
analysis. For example if some memory is intended to be freed at program
cleanup, Valgrind could have some way of being told, "this is intended to be
freed at program cleanup". Inserting an explicit (and redundant) deallocation
as the last line of the program just to make the static analyzer happy is a
bit perverse.
(That is, assuming that you don't need portability to odd systems that don't
actually free memory on process exit.)
~~~
andreasvc
I don't see why you assume arenas would be added "solely to keep valgrind
happy". Arenas have better performance when allocating a high number of small
chunks, because an arena can make better performance trade-offs for this use
case than the general malloc allocator.
------
mililani
This may be a little off topic, but I used to think RAID 5 and RAID 6 were the
best RAID configs to use. It seemed to offer the best bang for buck. However,
after seeing how long it took to rebuild an array after a drive failed (over 3
days), I'm much more hesitant to use those RAIDS. I much rather prefer RAID
1+0 even though the overall cost is nearly double that of RAID 5. It's much
faster, and there is no rebuild process if the RAID controller is smart
enough. You just swap failed drives, and the RAID controller automatically
utilizes the back up drive and then mirrors onto the new drive. Just much
faster and much less prone to multiple drive failures killing the entire RAID.
~~~
halfcat
This can not be stressed strongly enough. There is never a case when RAID5 is
the best choice, ever [1]. There are cases where RAID0 is mathematically
proven more reliable than RAID5 [2]. RAID5 should never be used for anything
where you value keeping your data. I am not exaggerating when I say that very
often, your data is safer on a single hard drive than it is on a RAID5 array.
Please let that sink in.
The problem is that once a drive fails, during the rebuild, if any of the
surviving drives experience an unrecoverable read error (URE), the entire
array will fail. On consumer-grade SATA drives that have a URE rate of 1 in
10^14, that means if the data on the surviving drives totals 12TB, the
probability of the array failing rebuild is close to 100%. Enterprise SAS
drives are typically rated 1 URE in 10^15, so you improve your chances ten-
fold. Still an avoidable risk.
RAID6 suffers from the same fundamental flaw as RAID5, but the probability of
complete array failure is pushed back one level, making RAID6 with enterprise
SAS drives possibly acceptable in some cases, for now (until hard drive
capacities get larger).
I no longer use parity RAID. Always RAID10 [3]. If a customer insists on
RAID5, I tell them they can hire someone else, and I am prepared to walk away.
I haven't even touched on the ridiculous cases where it takes RAID5 arrays
weeks or months to rebuild, while an entire company limps inefficiently along.
When productivity suffers company-wide, the decision makers wish they had paid
the tiny price for a few extra disks to do RAID10.
In the article, he has 12x 4TB drives. Once two drives failed, assuming he is
using enterprise drives (Dell calls them "near-line SAS", just an enterprise
SATA), there is a 33% chance the entire array fails if he tries to rebuild. If
the drives are plain SATA, there is almost no chance the array completes a
rebuild.
[1] [http://www.smbitjournal.com/2012/11/choosing-a-raid-level-
by...](http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-
count/)
[2] [http://www.smbitjournal.com/2012/05/when-no-redundancy-is-
mo...](http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-
reliable/)
[3] [http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-
st...](http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-
server-storage/)
~~~
Forlien
I think your calculation on failing an array rebuild is wrong. Can you show
how you got those numbers?
~~~
halfcat
Sure, there were two statements I made.
> _On consumer-grade SATA drives that have a URE rate of 1 in 10^14, that
> means if the data on the surviving drives totals 12TB, the probability of
> the array failing rebuild is close to 100%._
10^14 bits is 12.5 TB, so on average, the chance of 12TB being read without a
single URE is very low, and the probability the array fails to rebuild is
close to 100%. I was estimating 10^14 bits to be about 12TB, so the
probability is actually 12/12.5 = 96% chance of failure.
> _...he has 12x 4TB drives. Once two drives failed, assuming he is using
> enterprise drives...there is a 33% chance the entire array fails if he tries
> to rebuild. If the drives are plain SATA, there is almost no chance the
> array completes a rebuild._
A RAID6 with two failed drives is effectively the same situation as a RAID5
with one failed drive. In order to rebuild one failed drive, the RAID
controller must read all data from every surviving drive to recreate the
failed drive. In this case, there are 10x 4TB surviving drives, meaning 40TB
of data must be read to rebuild. Because these drives are presumably
enterprise quality, I am assuming they are rated to fail reading one sector
for every 10^15 bits read (10^15 bits = 125 TB). So it's actually 40/125 = 32%
chance of failure if you try to rebuild.
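For what it's worth, multiplying the bits read by the URE rate gives the
expected number of UREs, which is a good approximation only when it is small;
treating the spec as an independent per-bit failure probability gives
1 - (1 - p)^bits for the chance of at least one URE, which comes out a bit
lower. A quick sanity check in C (assuming decimal TB and taking the URE spec
literally as a per-bit probability, both of which are simplifications):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double tb_to_read = 40.0;               /* 10 surviving 4 TB drives */
        double bits       = tb_to_read * 8e12;  /* 1 TB = 8e12 bits */
        double p_ure      = 1e-15;              /* enterprise: 1 per 1e15 bits */

        double expected = bits * p_ure;                  /* linear estimate */
        double at_least = -expm1(bits * log1p(-p_ure));  /* 1 - (1-p)^bits  */

        printf("expected UREs during rebuild: %.2f\n", expected);
        printf("P(at least one URE)         : %.1f%%\n", at_least * 100.0);
        return 0;
    }
    /* roughly 0.32 expected UREs and ~27% failure probability; with consumer
       drives (1e-14) the same 40 TB gives ~96%, i.e. near-certain failure */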
------
vhost-
These are the types of stories I love. I just learned a boat load in 5
minutes.
~~~
3rd3
Is there maybe an archive website dedicated to these kind of stories?
~~~
breadbox
At one time there was; it was called the Internet. The archive still exists,
but it's been made harder to browse through due to being jumbled up with
javascript and cat gifs.
~~~
jayvanguard
It's true. We should have never let the public on the Internet. It has been
downhill since then.
~~~
taeric
False choice, isn't it? I mean, the complaint isn't that the public now has
sites with massive javascript and related technologies. The complaint is that
it has muscled out useful sites that did not use those technologies. And it
should be heavily noted that the heavy muscles that have pushed out many of
these sites is not necessarily "the public."
~~~
thinkling
Kind of funny to say that on a text-only JS-free site that seems to be alive
and well, linking to an article on an old-school mailing list archive site. :)
~~~
taeric
Oh, certainly. I just can resonate with the sentiment that these sites aren't
the majority.
Even this site, honestly, is less than easy to deal with on a recurring basis.
(Consider, hard to remember which was the top story three days ago at noon.)
Specifically, sometimes I lose a story because I refresh and something
plummeted off the page. Hard to have any idea how far to "scroll back" to see
it.
------
calvins
I would usually use the tarpipe mentioned already by others for this sort of
thing (although I probably wouldn't do 432 million files in one shot):
(cd $SOURCE && tar cf - .) | (mkdir -p $DEST && cd $DEST && tar xf -)
Another option which I just learned about through reading some links from this
thread is pax
([http://en.wikipedia.org/wiki/Pax_%28Unix%29](http://en.wikipedia.org/wiki/Pax_%28Unix%29)),
which can do it with just a single process:
(mkdir -p $DEST && cd $SOURCE && pax -rw . $DEST)
Both will handle hard links fine, but pax may have some advantages in terms of
resource usage when processing huge numbers of files and tons of hard links.
~~~
tedunangst
You know how tar handles hardlinks, right? By creating a giant hash table of
every file.
~~~
dredmorbius
How's that going to scale with memory? In-memory hash tables were the downfall
of cp here.
~~~
tedunangst
It's going to scale just like you'd imagine it would. All the people saying
"oh, tar was built for this" obviously haven't actually tried replicating the
experiment using tar.
~~~
dredmorbius
Pretty much as I'd suspected.
------
pflanze
I've written a program that attempts to deal with the given situation
gracefully: instead of using a hash table, it creates a temporary file with a
list of inode/device/path entries, then sorts this according to inode/device,
then uses the sorted list to perform the copying/hardlinking. The idea is that
sorting should work well with much lower RAM requirements than the size of the
file to be sorted (due to data locality, unless the random accesses with the
hash, it will be able to work with big chunks, at least when done right (a bit
hand-wavy, I know, this is called an "online algorithm" and I remember Knuth
having written about those, haven't had the chance to recheck yet); the
program is using the system sort command, which is hopefully implementing this
well already).
The program stupidly calls "cp" right now for every individual file copy (not
the hard linking), just to get the script done quickly, it's easy to replace
that with something that saves the fork/exec overhead; even so, it might be
faster than the swapping hash table if the swap is on a spinning disk. Also
read the notes in the --help text. I.e. this is a work in progress as a basis
to test the idea, it will be easy to round off the corners if there's
interest.
[https://github.com/pflanze/megacopy](https://github.com/pflanze/megacopy)
PS. the idea of this is to make copying work well with the given situation on
a single machine, unlike the approach taken by the dcp program mentioned by
fintler which seems to rely on a cluster of machines.
There may also be some more discussion about this on the mailing list:
[http://lists.gnu.org/archive/html/coreutils/2014-09/msg00013...](http://lists.gnu.org/archive/html/coreutils/2014-09/msg00013.html)
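For anyone curious what the sort-based pass looks like, here is a much
simplified sketch in C. It assumes a first pass has already written a
"device inode sourcepath" line per file to list.txt, that paths contain no
spaces or newlines (a real tool needs a robust record format), and it leans on
two hypothetical helpers: copy_file() for the actual data copy and dest_path()
to map a source path to its destination path:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    extern int copy_file(const char *src, const char *dst);       /* not shown */
    extern void dest_path(const char *src, char *dst, size_t n);  /* not shown */

    int main(void)
    {
        /* external sort: RAM use stays bounded no matter how many files */
        if (system("sort -k1,1 -k2,2 list.txt > sorted.txt") != 0)
            return 1;

        FILE *f = fopen("sorted.txt", "r");
        char line[8192], prev_key[256] = "", first_dst[4096] = "";

        while (f && fgets(line, sizeof line, f)) {
            char dev[128], ino[128], src[4096], dst[4096], key[256];
            if (sscanf(line, "%127s %127s %4095s", dev, ino, src) != 3)
                continue;
            snprintf(key, sizeof key, "%s/%s", dev, ino);
            dest_path(src, dst, sizeof dst);

            if (strcmp(key, prev_key) != 0) {   /* new (device, inode) group */
                copy_file(src, dst);            /* copy the data once */
                strcpy(prev_key, key);
                strcpy(first_dst, dst);
            } else {
                link(first_dst, dst);           /* later paths: hard links */
            }
        }
        if (f)
            fclose(f);
        return 0;
    }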
------
jrochkind1
So it was all the files in one go, presumably with `cp -r`?
What about doing something with find/xargs/i-dunno to copy all the files, but
break em into batches so you aren't asking cp to do it's bookkeeping for so
many files in one process? Would that work better? Or worse in other ways?
~~~
xchg_ax_ax
This page may be useful:
[http://unix.stackexchange.com/questions/44247/how-to-copy-
di...](http://unix.stackexchange.com/questions/44247/how-to-copy-directories-
with-preserving-hardlinks)
The main issue is that there's no api to get the list of files hard linked
together: the only way is to check all the existing files and compare inodes.
If you're doing a plain copy over 2 fs, you cannot choose which number the
target inode will be, so you need to keep a map between inode numbers, or
between inodes and file names ("cp" does the later).
~~~
sounds
pedrocr's comment above suggests a good solution:
1\. Copy each file from the source volume to a single directory (e.g. /tmp) on
the target volume, named for the source volume inode number.
(edit: I suggest using a hierarchy of dirs to avoid the "too many dentry's"
slowdown)
2\. If the file has already been copied, it will already exist in /tmp -
looking up the inode is a vanilla directory lookup
3\. Create a hard link from /tmp to the actual path of the file
4\. When all the files have been created on the target volume, delete the
inode numbers in /tmp
------
pedrocr
Unix could really use a way to get all the paths that point to a given inode.
These days that shouldn't really cost all that much and this issue comes up a
lot in copying/sync situations. Here's the git-annex bug report about this:
[https://git-annex.branchable.com/bugs/Hard_links_not_synced_...](https://git-
annex.branchable.com/bugs/Hard_links_not_synced_in_direct_mode/)
~~~
asveikau
Wow, it's not every day I hear about a filesystem feature that Windows has and
Linux doesn't. (On a recent windows system: _fsutil hardlink list <path>_ \--
you can try any random exe or DLL in system32 for an example of a hard link.)
I forget what the api for that looks like if I ever knew. Might be private.
I am surprised, usually Linux is way ahead of Windows on shiny filesystem
stuff.
~~~
peterwwillis
Linux just has more filesystems, and sadly a lot of them have various flaws.
I'm surprised when people are surprised that Linux isn't some completely
superior technical marvel. BSD and Unix systems have been more advanced for
decades..
Everyone on Linux still uses _tar_ for god's sake, even though zip can use the
same compression algorithms people use on tarballs, and zip actually stores an
index of its files rather than 'cat'ing each record on top of the next like an
append-only tape archive. (Obviously there are better formats than 'zip' for
any platform, but it's just strange that nobody has moved away from tar)
~~~
beagle3
tar is good enough for many uses, so people did not move on.
And it doesn't help that tar.gz / tar.bz2 compresses way better than zip in
most cases (thanks to using a single compression context, rather than a new
one for each file; and also compressing the filenames in the same context),
and that it carries ownership and permission information with it - whereas zip
doesn't.
The HVSC project, who try to collect every single piece of music ever created
on a Commodore C64, distribute their archive as a zip-within-a-zip. The common
music file is 1k-4k, goes down to ~500-1000 bytes zipped; The
subdirectory+filename are often 100 bytes with a lot of redundancy that zip
doesn't use, so they re-zip. Had they used .tar.gz or .tar.bz2, the second
stage would not be needed.
------
pixelbeat
I found an issue in cp that caused 350% extra mem usage for the original bug
reporter; fixing it would have kept his working set at least within RAM.
[http://lists.gnu.org/archive/html/coreutils/2014-09/msg00014...](http://lists.gnu.org/archive/html/coreutils/2014-09/msg00014.html)
------
gwern
> Wanting the buffers to be flushed so that I had a complete logfile, I gave
> cp more than a day to finish disassembling its hash table, before giving up
> and killing the process....Disassembling data structures nicely can take
> much more time than just tearing them down brutally when the process exits.
Does anyone know what the 'tear down' part is about? If it's about erasing the
hashtable from memory, what takes so long? I would expect that to be very
fast: you don't have to write zeros to it all, you just tell your GC or memory
manager to mark it as free.
~~~
mjn
Looking at the code, it looks like deallocating a hash table requires
traversing the entire table, because there is malloc()'d memory associated
with each hash entry, so each entry has to be visited and free()'d. From
hash_free() in coreutils hash.c:
      for (bucket = table->bucket; bucket < table->bucket_limit; bucket++)
        {
          for (cursor = bucket->next; cursor; cursor = next)
            {
              next = cursor->next;
              free (cursor);
            }
        }
Whereas if you just don't bother to deallocate the table before the process
exits, the OS will reclaim the whole memory block without having to walk a
giant data structure. That's a fairly common situation in C programs that do
explicit memory management of complex data structures in the traditional
malloc()/free() style. Giant linked lists and graph structures are another
common culprit, where you have to pointer-chase all over the place to free()
them if you allocated them in the traditional way (vs. packing them into an
array or using a userspace custom allocator for the bookkeeping).
~~~
ritchiea
Why exactly is it necessary to free each hash entry instead of exiting the
process?
~~~
mjn
If it's the last thing you do before you exit the process, it isn't necessary,
because the OS will reclaim your process's memory in one fell swoop. I believe
that's what the linked post is advocating 'cp' should do. (At least on modern
systems that's true; maybe there are some exotic old systems where not freeing
your data structures before exit causes permanent memory leaks?)
It's seen as good C programming practice to free() your malloc()s, though, and
it makes extending programs easier if you have that functionality, since what
was previously the end of program can be wrapped in a higher-level loop
without leaking memory. But if you really are exiting for sure, you don't have
to make the final free-memory call. It can also be faster to not do any
intermediate deallocations either: just leave everything for the one big final
deallocation, as a kind of poor-man's version of one-generation generational
GC. Nonetheless many C programmers see it somehow as a bit unclean not to
deallocate properly. Arguably it does make some kind of errors more likely if
you don't, e.g. if you have cleanup that needs to be done that the OS _doesn
't_ do automatically, you now have different kinds of cleanup routines for the
end-of-process vs. not-end-of-process case.
~~~
epmos
I tend to do this in my C programs because in development usually have
malloc() wrapped so that if any block hasn't been free()'ed it's reported at
exit() time. This kind of check for lost pointers is usually so cheap that you
use it even if you never expect to run on a system without decent memory
management.
As an aside, GNU libc keeps ( or at least used to keep, I haven't checked in
years ) the pointers used by malloc()/free() next to the blocks themselves,
which gives really bad behavior when freeing a large number of blocks that
have been pushed out to swap--you wind up bringing in pages in order to free
them because the memory manager's working set is the size of all allocated
memory. Years ago I wrote a replacement that avoided this just to speed up
Netscape's horrible performance when it re-sized the bdb1.85 databases it used
to track browser history. The browser would just "go away" thrashing the disk
for hours and killing it just returned you to a state where it would decide to
resize again an hour or so after a restart. Using LD_PRELOAD to use a malloc
that kept its bookkeeping away from the allocated blocks changed hours to
seconds.
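Roughly the kind of development-time wrapper described above, sketched out
(hypothetical xmalloc/xfree names; a real version would also record sizes and
call sites so the lost pointers can be identified rather than just counted):

    #include <stdio.h>
    #include <stdlib.h>

    static long live_blocks;

    static void report_leaks(void)
    {
        if (live_blocks != 0)
            fprintf(stderr, "leak check: %ld block(s) never freed\n",
                    live_blocks);
    }

    void *xmalloc(size_t n)
    {
        static int registered;
        if (!registered) {
            atexit(report_leaks);       /* report at exit(), costs ~nothing */
            registered = 1;
        }
        void *p = malloc(n);
        if (p)
            live_blocks++;
        return p;
    }

    void xfree(void *p)
    {
        if (p)
            live_blocks--;
        free(p);
    }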
------
sitkack
I appreciate that he had the foresight to install more ram and configure more
swap. I would hate to be days into a transfer and have the OOM killer strike.
------
angry_octet
The difficulty is that you are using a filesystem hierarchy to 'copy files'
when you actually want to do a volume dump (block copy). Use XFS and xfsdump,
or ZFS and zfs send, to achieve this.
Copy with hard link preservation is essentially like running dedupe except
that you know ahead of time how many dupes there are. Dedupe is often very
memory intensive, and even well thought out implementations don't support
keeping bookkeeping structures on disk.
~~~
steveh73
"Normally I'd have copied/moved the files at block-level (eg. using dd or
pvmove), but suspecting bad blocks, I went for a file-level copy because then
I'd know which files contained the bad blocks."
~~~
angry_octet
I was simplifying... dump backs up inodes not blocks. Some inodes point to
file data and some point to directory data. Hard links are references to the
same inode in multiple directory entries, so when you run xfsrestore, the link
count increments as the FS hierarchy is restored.
xfsdump/zfs send are file system aware, unlike dd, and can detect fs
corruption (ZFS especially having extensive checksums). In fact, any info cp
sees about corruption comes from the FS code parsing the FS tree.
However, except on zfs/btrfs, data block corruption will pass unnoticed. And
in my experience, when you have bad blocks, you have millions of them -- too
many to manually fix. As this causes a read hang, it is usually better to dd
copy the fs to a clean disk, set to replace bad blocks with zeros, then
fsck/xfs_repair when you mount, then xfsdump.
dd conv=noerror,sync,notrunc bs=512 if=/dev/disk of=diskimg
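For concreteness, the follow-up to that might look roughly like this (paths
illustrative, not a recipe):
xfs_repair -f diskimg        # -f: the target is an image file, not a device
mount -o loop diskimg /mnt/recovered
xfsdump -l 0 -f /backup/recovered.xfsdump /mnt/recovered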
See Also: [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-
US...](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-
repair.html)
[http://xfs.org/index.php/Reliable_Detection_and_Repair_of_Me...](http://xfs.org/index.php/Reliable_Detection_and_Repair_of_Metadata_Corruption)
~~~
Rapzid
If the risk of keeping the system running while the array rebuilt was deemed
too high, I would have just gone with a dd/ddrescue of the remaining disks onto
new disks and then moved on from there.
+1 for mentioning ZFS. It's really quite amazing. Almost like futuristic alien
technology compared to the other freely available file systems.
------
minopret
In light of experience would it perhaps be helpful after all to use a block-
level copy (such as Partclone, PartImage, or GNU ddrescue) and analyze later
which files have the bad blocks?
I see that the choice of a file-level copy was deliberate: "I'd have
copied/moved the files at block-level (eg. using dd or pvmove), but suspecting
bad blocks, I went for a file-level copy because then I'd know which files
contained the bad blocks."
~~~
fsniper
Also the article doesn't mention how unrecoverable files were analyzed, i.e.
how errors from the cp operations were handled. And with this many files it would not be
feasible without using an error log file.
So going with a simple block copy should suffice IMHO.
~~~
rbh42
I'm the OP, so I can shed a bit of light on that: Dell's support suggested a
file-level copy when I asked them what they recommended (but I'm not entirely
sure they understood the implications). Also, time was not a big issue.
I did keep a log file with the output from cp, and it clearly identified the
filenames for the inodes with bad blocks. Actually, I'm not sure how dd would
handle bad blocks.
~~~
fsniper
Thank you for clarification.
I was about to bet on a "read, fail, repeat, skip" cycle for dd's behaviour,
but looking into coreutils' source code at
[https://github.com/goj/coreutils/blob/master/src/dd.c](https://github.com/goj/coreutils/blob/master/src/dd.c)
it seems, if I'm not mistaken, that dd does not try to be intelligent and just
uses a zeroed-out buffer, so it would return 0's for unreadable blocks.
------
IvyMike
Interesting.
In Windows-land, the default copy is pretty anemic, so probably most people
avoid it for serious work.
I'd probably use robocopy from the command line. And if I was being lazy, I'd
use the Teracopy GUI.
I think my limit for a single copy command has been around 4TB with robocopy--
and that was a bunch of large media files, instead of smaller more numerous
files. Maybe there's a limit I haven't hit.
~~~
noinsight
> Teracopy
I've used FastCopy for GUI based larger transfers, it's open source and can
handle larger datasets well in my experience. It also doesn't choke on
>MAX_PATH paths. Haven't had problems with it. Supposedly it's the fastest
tool around...
The only slight issue is that the author is Japanese so the English
translations aren't perfect plus the comments in the source are in Japanese.
~~~
gizmo686
">MAX_PATH paths"
How does this happen?
~~~
xenadu02
Technical debt that keeps on giving.
Today there are N applications. "We can't increase MAX_PATH because it will
break existing applications!"
Tomorrow there are N+M applications. "We can't increase MAX_PATH because it
will break existing applications!"
Repeat forever.
Any time you are faced with a hard technical decision like this, the pain will
always be least if you make the change either:
1\. During another transition (e.g. 16-bit to 32-bit, or 32-bit to 64-bit).
Microsoft could have required all 64-bit Windows apps to adopt a larger
MAX_PATH, among other things.
2\. Right NOW, because there will never be an easier time to make the change.
The overall pain to all parties will only increase over time.
------
pmontra
Another lesson to be learnt is that it's nice to have the source code for the
tools we are using.
------
dredmorbius
The email states that file-based copy operations were used in favor of dd due
to suspected block errors. Two questions come to mind:
1\. I've not used dd on failing media, so I'm not sure of the behavior. Will
it plow through a file with block-read failures or halt?
2\. There's the ddrescue utility, which _is_ specifically intended for reading
from nonreliable storage. Seems that this could have offered another means for
addressing Rasmus's problem. It can also fill in additional data on multiple
runs across media, such that more complete restores might be achieved.
[https://www.gnu.org/software/ddrescue/ddrescue.html](https://www.gnu.org/software/ddrescue/ddrescue.html)
~~~
pflanze
OP said "I went for a file-level copy because then I'd know which files
contained the bad blocks". When you copy the block device with ddrescue (dd
doesn't have logic to work around the bad sectors and the only sensible action
for it is thus to stop, but don't take my word for it), the result will just
have zeroes in the places where bad blocks were, and, assuming the filesystem
structure is good enough (you should run fsck on it), will give you files with
zones of zeroes. But you won't know which files without either comparing them
to a backup (which you won't have by definition if you're trying to recover)
or with a program that verifies every file's structure (which won't exist for
the general case). Whereas cp will issue error messages with the path of the
file in question. So the OP's decision makes sense.
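Concretely that just means capturing cp's stderr and mining it afterwards;
something like the sketch below (paths illustrative, and the exact error
wording varies between coreutils versions):
cp -a /mnt/olddisk/. /mnt/newdisk/ 2> /root/cp-errors.log
grep -i 'error reading' /root/cp-errors.log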
~~~
dredmorbius
I've played with ddrescue very lightly. From the GNU webpage linked above, it
appears it creates logfiles which can be examined:
_Ddrescuelog is a tool that manipulates ddrescue logfiles, shows logfile
contents, converts logfiles to /from other formats, compares logfiles, tests
rescue status, and can delete a logfile if the rescue is done. Ddrescuelog
operations can be restricted to one or several parts of the logfile if the
domain setting options are used._
That might allow for identification of files with bad sectors.
~~~
pflanze
That would need either a hook to the kernel or a file system parser.
Even if you manage to do that, I'm not sure it would be a good idea to
continue to use a file system that has lost sectors, even after fsck. Are you
sure fsck is fixing any inconsistency? Are there any automatic procedures in
place that guarantee that the fsck algorithms are in sync with the actual file
system code? (Answer anew for any file system I might be using.) You
definitely should do backups by reading the actual files, not the underlying
device; perhaps in this case it could be OK (since it was a backup itself
already, hence a copy of live data; but then if OP bothered enough to recover
the files, maybe he'll bother enough to make sure they stay recovered?)
------
icedchai
For that many files I probably would've used rsync between local disks.
_shrug_
~~~
ajross
And hopefully you would have written up a similar essay on the oddball
experiences you had with rsync, which is even more stateful than cp and even
more likely to have odd interactions when used outside its comfort zone.
Ditto for tricks like: (cd $src; tar cf - .) | (cd $dst; tar xf -).
Pretty much nothing is going to work in an obvious way in a regime like this.
That's sort of the point of the article.
~~~
icedchai
Or maybe not. He mentions rsnapshot in the article, which uses rsync under the
hood. This implies rsync would have a _very_ good chance of handling a large
number of hardlinks... since it created them in the first place.
~~~
sophacles
That doesn't follow. If backups are for multiple machines to a big file
server, the backup machine will have a much larger set of files than those
that come from an individual machine. Further, each backup "image" compares
the directory for the previous backup to the current live system. Generally it
looks something like this:
1\. Initial backup or "full backup" \- copy the full targeted filesystem to
the time indexed directory of the backup machine.
2\. Sequential backups:
a. on the backup machine, create a directory for the new time, create a mirror
directory structure of the previous time.
b. hard link the files in the new structure to those in the previous backup
(which may be links themselves, back to the last full backup).
c. rsync the files to the new backup directory. Anything that needs to be
transferred results in rsync transferring the file to a new directory, then
moving it into the proper place. This unlinks the filename from the previous
version and replaces it with the full version.
So yeah, the result of this system over a few machines and a long-timeframe
backup system is way more links on the backup machine than any iteration of
the backup will ever actually use.
~~~
icedchai
Yes, it has more links, I realize, but this still doesn't mean it wouldn't
work. Give it a shot and report back. (Hah.)
------
dspillett
_> The number of hard drives flashing red is not the same as the number of
hard drives with bad blocks._
This is the real take-away. Monitor your drives. At the very least enable SMART,
and also regularly run a read over the full underlying drive (SMART won't see
and log blocks that are on the way out and so need retries for successful reads,
unless you actually try to read those blocks).
That won't completely make you safe, but it'll greatly reduce the risk of
other drives failing during a rebuild by increasing the chance you get
advanced warning that problems are building up.
~~~
rbh42
Glad someone noticed it (I'm the OP). Reading the drives systematically is
called "Patrol Read" and is often enabled by default, but you can tweak the
parameters.
------
mturmon
The later replies regarding the size of the data structures cp is using are
also worth reading. This is a case where pushing the command farther can make
you think harder about the computations being done.
------
grondilu
On Unix, isn't it considered bad practice to use cp in order to copy a large
directory tree?
IIRC, the use of tar is recommended.
Something like:
$ (cd $origin && tar cf - *) | (cd $destination && tar xvf - )
~~~
dmckeon
Use && there, not ; - consider the result if either of the cd commands fails.
~~~
grondilu
fixed
------
sauere
> While rebuilding, the replacement disk failed, and in the meantime another
> disk had also failed.
I feel the pain. I went thru the same hell a few months ago.
------
maaku
Another lesson: routinely scrub your RAID arrays.
~~~
jewel
On debian-based systems, /etc/cron.d/mdadm will already do this on the first
Sunday of the month.
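You can also kick a scrub off by hand and watch it through the md sysfs
interface (array name illustrative; repeat per array):
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat                       # shows check progress
cat /sys/block/md0/md/mismatch_cnt     # non-zero after the pass deserves attention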
------
0x0
I wonder how well rsync would have fared here.
~~~
sitkack
Rsync can die just from scanning the whole directory tree of files first.
~~~
chadcatlett
The incremental option(enabled by default) introduced in rsync 3.0 greatly
reduces the need for scanning the whole directory structure.
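For what it's worth, a local hard-link-preserving run would look something like
this (-H is what makes rsync pair up inodes; --info=progress2 needs rsync >=
3.1; paths illustrative):
rsync -aH --info=progress2 /srv/backups/ /mnt/newarray/backups/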
------
ccleve
Maybe this is naive, but wouldn't it have made more sense to do a bunch of
smaller cp commands? Like sweep through the directory structure and do one cp
per directory? Or find some other way to limit the number of files copied per
command?
~~~
caf
No, because then it wouldn't have replicated the hardlink structure of the
original tree. That was the goal, and also the bit that causes the high
resource consumption.
------
Andys
A problem with cp (and rsync, tar, and Linux in general) is that there is
read-ahead within single files, but no read-ahead for the next file in the
directory. So it doesn't make full use of the available IOPS capacity.
------
davidu
This is not, not, not how one should be using RAID.
The math is clear that in sufficiently large disk systems, RAID5, RAID6, and
friends, are all insufficient.
~~~
lysium
Can you elaborate?
~~~
davidu
[http://www.zdnet.com/blog/storage/why-raid-5-stops-
working-i...](http://www.zdnet.com/blog/storage/why-raid-5-stops-working-
in-2009/162)
~~~
lysium
Thanks for the link! The article says that, due to the read error rate and the
size of today's disks, RAID 5 and RAID 6 have (kind of) lost their purpose.
~~~
davidu
Yep, mathematically no longer safe.
------
dbbolton
>We use XFS
Why?
~~~
cnvogel
I personally still consider XFS a very mature and reliable filesystem. Both in
terms of utility programs and kernel implementation. If I remember correctly,
it was ported to linux from SGI/Irix where it was used for decades. It also
was the default fs for RedHat/centos for a long time, so it might still have
stuck at many shops.
Here's my anecdotal datapoint on which I base my personal belief:
From about 10 to 6 years ago, when I was doing sysadmin work at a university
building storage-systems from commodity parts for experimental bulk data, we
first had a load of not-reliably working early raid/sata(?) adapters, and
those made ext3 and reiserfs (I think...) oops the kernel when the on-disk
structure went bad. Whereas XFS just put a "XFS: remounted FS readonly due to
errors" in the kernel logfile. That experience made XFS my default filesystem
until recently, when I started to switch to btrfs. (of course, we fixed the
hardware-errors, too... :-) )
Also, from that time, I got to use xfsdump/xfsrestore for backups and storage
of fs-images which not even once failed on me.
~~~
Eiriksmal
As a blithe, new-Linux user (3.5 years), I was bumfuzzled when I saw
RHEL/CentOS 7 switched from ext4 to XFS, figuring it to be some young upstart
stealing the crown from the king. Then I did some Googling and figured out
that XFS is as old as ext _2_! I'm looking forward to discovering how tools
like xfs* can make my life easier.
------
limaoscarjuliet
Rsync seems a better tool for this. Can be run multiple times and it will just
copy missing blocks.
------
nraynaud
it reminds me of crash only software.
------
gaius
I would probably have used tar|tar for this, or rsync.
~~~
thaumaturgy
You're right to recommend a tarpipe. I've had to copy several very large
BackupPC storage pools in the past, and a tarpipe is the most reliable way to
do it. (The only downside to BackupPC IMO...)
For future reference for other folks, the command would look something like
this:
cd /old-directory && tar czvflpS - . | tar -C /new-directory -xzvf -
Tarpipes are especially neat because they can work well over ssh (make sure
you have ssh configured for passwordless login, any prompt at all will bone
the tarpipe):
cd /old-directory && tar czvflpS - . | ssh -i /path/to/private-key user@host "tar -C /new-directory -xzvf -"
...but tarpipe-over-ssh is not very fast. I have a note that says, "36 hours
for 245G over a reasonable network" (probably 100Mb).
Disk-to-disk SATA or SAS without ssh in between would be significantly faster.
~~~
LeoPanthera
The prompt goes to stderr, the pipe only pipes stdout, so a prompt should not
cause excessive bonage, as long as you're there to respond to it.
Also, don't use -z locally, or even over a moderately fast network. The
compression is not that fast and almost always makes things slower.
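In other words, something like the earlier pipe with the z dropped on both
ends:
cd /old-directory && tar cvflpS - . | tar -C /new-directory -xvf -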
~~~
thaumaturgy
Good to know!
Also, re: bonage, I agree that it "shouldn't", but it definitely did. From my
sysadmin notes file:
> The tar operation kicks off before ssh engages; having ssh ask for a
> password seems to intermittently cause problems with the tar headers on the
> receiving end. (It _shouldn't_, but it seems to.)
------
RexM
Is this where a new cp fork comes about called libracp?
------
brokentone
Feels like a similar situation to this:
[http://dis.4chan.org/read/prog/1109211978/21](http://dis.4chan.org/read/prog/1109211978/21)
------
lucb1e
> 20 years experience with various Unix variants
> I browsed the net for other peoples' experience with copying many files and
> quickly decided that cp would do the job nicely.
After 20 years you no longer google how to copy files.
Edit: Reading on he talks about strace and even reading cp's source code which
makes it even weirder that he had to google how to do this...
Edit2: Comments! Took only ten downvotes before someone bothered to explain
what I was doing wrong, but now there are three almost simultaneously. I guess
those make a few good points. I'd still think cp ought to handle just about
anything especially given its ubiquitousness and age, but I see the point.
And to clarify: I'm not saying the author is stupid or anything. It's just
_weird_ to me that someone with that much experience would google something
which on the surface sounds so trivial, even at 40TB.
~~~
sitkack
Because the man is wise. He also didn't kill a job that appeared to be hung,
he started reading the code to figure out why and determined that it would in
fact, complete.
| {
"pile_set_name": "HackerNews"
} |
How mosquitos deal with getting hit by raindrops - davi
http://phenomena.nationalgeographic.com/2015/06/24/raindrops-keep-falling-on-my-head-a-mosquitos-lament/
======
developer1
Of course the video doesn't show anything interesting, the mosquito's leg is
hardly even grazed. I was definitely hoping for the version where a drop
smacked the insect dead on target. Fairly strange for a lab result - if that's
the only video that was captured, it really doesn't seem to divulge much at
all. Where's the cool video? :D
~~~
e2e8
[https://www.youtube.com/watch?v=LQ88ny09ruM](https://www.youtube.com/watch?v=LQ88ny09ruM)
~~~
lucb1e
Direct hit just after the minute mark:
[https://youtu.be/LQ88ny09ruM?t=1m3s](https://youtu.be/LQ88ny09ruM?t=1m3s)
~~~
mordrax
Watching it several times, it looks like only the left most mozzie came out
unscathed. The other two took it hard and went down... definitely didn't 'walk
off the bus' :\
------
upofadown
>A study says a mosquito being hit by a raindrop is roughly the equivalent of
a human being whacked by a school bus, the typical bus being about 50 times
the mass of a person.
That is not a sensible comparison. When you scale something mass changes as
the cube of dimension. Strength changes as the square of dimension. So small
things are inherently stronger with respect to their mass.
~~~
abandonliberty
[Citation Wanted]
Very believable; how does the math work out?
~~~
troymc
Galileo. _Discourses and Mathematical Demonstrations Relating to Two New
Sciences_. 1638.
It's known as the square-cube law.
[https://en.wikipedia.org/wiki/Square-
cube_law](https://en.wikipedia.org/wiki/Square-cube_law)
~~~
abandonliberty
Thanks - I hadn't realized that muscle strength was proportional purely to
cross section.
------
dgemm
> But because our mosquito is oh-so-light, the raindrop moves on, unimpeded,
> and hardly any force is transferred. All that happens is that our mosquito
> is suddenly scooped up by the raindrop and finds itself hurtling toward the
> ground at a velocity of roughly nine meters per second, an acceleration
> which can’t be very comfortable, because it puts enormous pressure on the
> insect’s body, up to 300 gravities worth, says professor Hu.
Interesting article, but in the span of one paragraph here we have confused
velocity, acceleration, and pressure - and there are similar errors in the
following one. For an article about physics, I would expect this to at least
be proofread.
The Gell-Mann Amnesia effect:
[http://harmful.cat-v.org/journalism/](http://harmful.cat-v.org/journalism/)
~~~
joncameron
From your link:
> In any case, you read with exasperation or amusement the multiple errors in
> a story, and then turn the page to national or international affairs, and
> read as if the rest of the newspaper was somehow more accurate about
> Palestine than the baloney you just read. You turn the page, and forget what
> you know.
Which is of course intriguing, since cat-v.org hosts frothing-at-the-mouth
vitriol about topics like women in tech and gay marriage in the always
trustworthy and well reasoned medium of reposted reddit and slashdot comments.
And presumably I'm supposed to click over to the technical stuff with a
straight face.
~~~
roghummal
It's telling that you'd apply a derogatory label and attack the source medium
rather than say anything of substance about the content that offended you.
cat-v is chock-full of food for thought. You don't have to agree with any of
it and in fact disagreement is a large part of the site.
"Other than total and complete world domination, the overriding goal is to
encourage and stimulate critical and independent thinking."
------
daniel-levin
From an io9 article on the same research:
>> [Hu] and Dickerson constructed a flight arena consisting of a small acrylic
cage covered with mesh to contain the mosquitoes but permit entry of water
drops. The researchers used a water jet to simulate rain stream velocity while
observing six mosquitoes flying into the stream. Amazingly, all the mosquitoes
lived.
The researchers used _simulated rain drops_ on _six_ mosquitoes. There are
more than six species of mosquitoes. They controlled for wind effects (which
are part and parcel of rain). So they excluded horizontally travelling
raindrops. My immediate reaction to the conclusion that mosquitoes can fly in
rain was "Really? Not always". Here is a methodologically lacking and wholly
unscientific anecdote: I have lived in Johannesburg my entire life, where
mosquitoes are quite prevalent during the summer months. When it is raining
heavily (it is usually quite windy as well), the local species of mosquito
that feeds of humans do not present a problem as the number of airborne
mosquitoes tends to zero.
~~~
joeyspn
^This
I live in a Mediterranean zone near a huge lake and during summer mosquitos
are your every-night companions (especially if you're working during late night
hours). But when a summer storm brews, the mosquitos disappear for two or three
days. Why? This has been a recurrent question for me, and the answer has always
seemed obvious: few of them survive being hit by raindrops.
You can make 1000 theories about how our tiny vampire friends deal with
raindrops, but it's pretty clear that intense rain (>3 hours) wipes out the
mosquito population for several days...
~~~
soneca
I also agree.
> _" And yet (you probably haven’t looked, but trust me), when it’s raining
> those little pains in the neck are happily darting about in the air, getting
> banged—and they don’t seem to care."_
I have looked and I don't trust you. I live in Brazil where mosquitoes are
present all the time, even in the city (obviously, on a smaller scale than
places closer to nature). I do notice that whenever it is raining there is a
sharp drop in the number of mosquitoes flying inside our homes. They don't
completely disappear, but it is noticeable that they are in much smaller
numbers. As this has been common knowledge over years and years, across
basically everyone, I don't consider it an anecdote, but an empirical
observation.
I cannot answer whether that is because raindrops kill them, or they just
shelter in their nests, or they breed less on rainy days, or whatever. But the
article (not sure about the research) is based on a false premise.
~~~
daniel-levin
Well, no, it's not empirical until we design some experiments to test the
theory, make predictions, test them, come up with potentially observable data
that would falsify our hypotheses, publish our results and let them be peer
reviewed, reproduced elsewhere etc... The jump from anecdotes to empiricism is
a large one that is not to be undertaken lightly.
------
nippoo
"Had the raindrop slammed into a bigger, slightly heavier animal, like a
dragonfly, the raindrop would “feel” the collision and lose momentum. The
raindrop might even break apart because of the impact, and force would
transfer from the raindrop to the insect’s exoskeleton, rattling the animal to
death."
Has anyone actually done any research on dragonflies being hit by raindrops,
or is this just speculation?
------
chrismorgan
The drawings in this article tend to be absurdly large, with the outcome that
the document is, transferred, around 23MB, for no good reason. _Sigh._
~~~
Jgrubb
Because editors.
------
Kiro
> In most direct hits, Hu and colleagues write, the insect is carried five to
> 20 body lengths downward
> If you want to see this for yourself, take a look at Hu’s video
What? Nothing like that happens in it.
~~~
dasmoth
Are you confusing wing span with body length?
In the right hand panel of the video, the insect certainly moves several body
lengths, and is still moving downwards at the end of the clip.
~~~
Kiro
No, it says "20 body lengths downward, and then [...] gets up and “walks” to
the side, then steps off into the air". In other words 20 body lengths while
being in the raindrop, which doesn't happen in the video. In fact, the
raindrop barely touches it.
------
ebbv
If it wasn't for the cute child-like drawings this would be a truly terrible
piece of link bait. As it is it's still pretty bad, and I expect better from
NatGeo.
Anyone who lives in a mosquito heavy area knows that mosquitos (like almost
all airborne insects) go into hiding during heavy rain and/or wind.
------
jbert
Does this place a reasonable selection pressure on the kinds of flying
insects we can have?
Big enough to shrug off a raindrop hit, or small enough to surf along the
surface tension until it can slide off?
~~~
baddox
Butterflies just seek shelter.
[http://www.scientificamerican.com/article/what-do-
butterflie...](http://www.scientificamerican.com/article/what-do-butterflies-
do-wh/)
------
theVirginian
It would appear they haven't yet evolved to deal with being hit by cars quite
as gracefully.
~~~
whoopdedo
I think this can be approached the same as the "ants can lift 50 times their
own weight" bit of trivia. It doesn't translate to "if a human were as strong
as an ant he'd be able to lift an elephant" because size doesn't scale that
way. Ants and mosquitoes get away with larger forces relative to their mass
because the skeleton and muscles needed are still within reasonable material
and fuel costs. A human-sized animal that wanted to survive being hit by a car
would need to spend much more energy per mass than the insect does.
~~~
eru
I think theVirginian was commenting about mosquitoes getting smashed on a
car's windshield, not about cars and humans.
~~~
whoopdedo
Oh, right. I thought it was a reference to "the equivalent of a human being
whacked by a school bus" from the article.
------
rokhayakebe
I just realized how making things fun and funny can help to teach anything.
The drawings and the comical tone made this seem so approachable. I wish they
had a series of 1000 of such lessons I could read.
~~~
KnightOfWords
Here's his old blog on NPR:
[http://www.npr.org/sections/krulwich/](http://www.npr.org/sections/krulwich/)
Probably not 1000, but perhaps getting on for it.
~~~
rokhayakebe
Thank you for sharing.
------
jokr004
Not really important but.. "nine gravities _(88 /m/squared)_"
I don't get it, the scientificamerican blog that they are quoting has the
right units, where did they come up with this?
------
mordrax
> But because our mosquito is oh-so-light, the raindrop moves on, unimpeded,
> and hardly any force is transferred.
So if the mosquito's weight is insignificant compared to that of the heavier
and denser water drop and that's what keeps it from having the force
transferred, would this equally apply to hailstorms? (Where our mosquitoes are
pelted by small hail balls the size of raindrops)
~~~
acyacy
You don't really find mosquitoes where you're likely to find hailstorms.
~~~
RBerenguel
In Spain we definitely have mosquitos, and most Augusts we have these summery
storms, sometimes also bringing hailstorms (size varies though, between drop-
sized ice and golfball-sized ice)
~~~
acyacy
You find them in these areas. When it gets cold there tends to be far fewer of
them.
And compared to the equator it's nearly incomparable.
~~~
Dove
Cold isn't required for hailstorms. The ice forms at altitude. We have a lot
of hailstorms in the spring and summer in Colorado, and while it isn't the
mosquitoiest place I've _ever_ lived, there _are_ mosquitoes.
~~~
acyacy
Compared with somewhere that's a mosquito haven, like by the equator?
I suppose raindrop vs hailstone is one of the reasons the density
issues are so different.
~~~
Dove
Yeah. Mosquitoes are densest in the tropics where hailstorms are rare, but
just about everywhere on earth short of Antarctica has _some_ mosquitoes. I'd
think mosquitoes would meet hailstones occasionally, though I can't really see
the mosquito surviving it.
------
mleonhard
The article embedded a short video. Here's longer video with explanations:
[https://www.youtube.com/watch?v=LQ88ny09ruM](https://www.youtube.com/watch?v=LQ88ny09ruM)
------
state
Can't help but immediately notice: "Drawing by Robert Krulwich"
~~~
sohkamyung
Yes, Robert Krulwich has joined the Nat Geo Phenomena blogging platform [
[http://phenomena.nationalgeographic.com/blog/curiously-
krulw...](http://phenomena.nationalgeographic.com/blog/curiously-krulwich/) ]
~~~
k_brother
I think the commenter meant that Krulwich actually illustrated the piece too.
Who knew Krulwich could draw!
~~~
sohkamyung
Ah, I see. My bad.
Yes, Krulwich does draw pretty well.
------
dharma1
if you like watching slo mo videos, recommend this channel:
[https://www.youtube.com/user/theslowmoguys/videos](https://www.youtube.com/user/theslowmoguys/videos)
------
bnolsen
So if mosquitos are oblivious to rain, is there some way to make artificial
rain with different properties that could destroy mosquitos en masse?
~~~
chinathrow
Yes, it's called poison and it's being done a lot.
[http://www.local10.com/news/plane-to-spray-for-mosquitoes-
ov...](http://www.local10.com/news/plane-to-spray-for-mosquitoes-over-south-
fla/27244642)
Ah you mean different mechanical properties ;)
------
stillsut
Send this to Bill Gates, that guy _HATES_ mosquitoes.
~~~
Kluny
A man who thought, "When I'm a billionaire, I'm going to dedicate my life to
getting rid of those nasty fuckers (mosquitoes)" and then _did_ it.
------
cJ0th
very interesting article. It is a pity that his column has no rss feed.
------
blumkvist
A commenter on the site says that some type of mosquitoes (Texas) are used in
oil drilling. I tried googling "texas mosquitoes oil drilling" and variants,
but didn't find anything.
>"Why, one species even secretes an enzyme to dissolve the organic matter in
blood leaving only the iron in haemoglobin. Then another enzyme causes the
iron atoms to join to form biological drill pipe! These structures are known
to be as much as 6 inches in diameter and to extend a mile deep."
Is there something to it, or did he just go on the internet to tell lies?
~~~
coconutrandom
That is a joke that makes more sense once you've been bitten there.
~~~
briandear
In Texas, we'd call that a tall tale.
~~~
dalke
Up north a few winters back the weather was so cold that words froze up as you
talked. People had to stand around a fire to have a conversation. When spring
thaw finally came the sound of all the melting conversations was deafening.
Then there was the time that Pecos Bill lassoed and rode a twister, but that's
a tale for another time.
| {
"pile_set_name": "HackerNews"
} |
The first ever accurate molecular simulation with quantum computing by Google - gri3v3r
http://www.sciencealert.com/google-s-quantum-computer-is-helping-us-understand-quantum-physics
======
selimthegrim
dupe:
[https://news.ycombinator.com/item?id=12132700](https://news.ycombinator.com/item?id=12132700)
| {
"pile_set_name": "HackerNews"
} |
25-Year-Old Textbooks and Holes in the Ceiling: Inside America’s Public Schools - SQL2219
https://www.nytimes.com/2018/04/16/reader-center/us-public-schools-conditions.html
======
awat
I wish I could upvote this more than once. Both my parents were teachers. The
disparities that children face from day one neighborhood to neighborhood are
so large it makes me sigh so hard when I hear people say things like just work
harder.
~~~
mncharity
I'm not sure whether the following thought makes that better, or even worse.
Even the best of pre-college science education is wretched. Chemistry
education research describes chemistry education content as "incoherent".
There's limited evidence that it's possible to do much much better. But it's
hard to create such content, and there's little incentive, so it largely
doesn't exist. Even in expensive private schools in states with the best
public education.
So one perspective is, if by dint of extraordinary societal efforts, the
disparity were eliminated, then science education would achieve... a
uniformity of wretchedness. Useful for student opportunity and mobility, but,
sigh.
Another perspective is, that if changing technology and incentives makes
transformatively better science education possible, it need not follow the
existing pattern of disparity. For illustration, if the best lab experience
becomes a virtual lab experience, then having a well-stocked lab vs a moldy
closet, suddenly matters much less.
Another perspective is, of course, that the pattern could live on (ducktaped
broken obsolete VR?), and thus the barrier grow even larger. :/
But in such a transition, there's at least a hope for piggybacking a change in
disparity.
| {
"pile_set_name": "HackerNews"
} |
As Tesla struggles to exit 'production hell,' buyers complain of delivery limbo - Aloha
http://www.latimes.com/business/autos/la-fi-hy-tesla-sales-delivery-problems-20180912-story.html
======
chmaynard
Is it legal for a company to require full payment before the product has
shipped? Most businesses don't operate that way.
| {
"pile_set_name": "HackerNews"
} |
The Optical Illusion That’s So Good, It Even Fools DanKam - cmrx64
https://dankaminsky.com/2010/12/17/mindless-equals-blown/
======
cmrx64
I'd seen the optical illusion before, it was one of the more impressive ones
in my middle school design class. Very, very interesting to see how it "fools"
a computer program.
I wonder how other computer vision systems compare.
| {
"pile_set_name": "HackerNews"
} |
Firm fat-fingered G Suite and deleted data, escalates support ticket to lawsuit - gilad
https://www.theregister.co.uk/2019/07/05/musey_v_google_lawsuit/
======
markgavalda
So let me get this straight: they deleted their own account and because they
didn't have any backups (because why would they) they're suing Google now.
That's gonna end well.
| {
"pile_set_name": "HackerNews"
} |
Dogescript - RossPenman
http://zachbruggeman.me/dogescript/
======
steveklabnik
What's funny about this is that every comment here is calling this useless,
yet it's something that's very much in the hacker spirit.
Utility is not the end goal of everything.
~~~
victorf
I recall when LOL Cats were actually pretty funny, way back around "I has a
flavor". Then people who didn't understand the language [1] overran the
Internet with cats that had incredible vocabularies and immaculate grammar
(they just didn't know how to spell and were evil).
The only thing that bothers me about Dogescript is that the joke is too
forced. The typical Shibe pictures are just "wow", "such X", "very Y", "wow";
when one adds in the "shh", "plz", and "rly", and (even worse) starts crafting
a coherent sequential story, it removes the humor from the doge meme.
[1]
[http://itre.cis.upenn.edu/~myl/languagelog/archives/004442.h...](http://itre.cis.upenn.edu/~myl/languagelog/archives/004442.html)
~~~
jamesaguilar
Things stop being funny over time. There's no reason to be concerned. There
will always be a next funny thing, and there's really very little you can do
about it.
~~~
victorf
I'm not concerned. This was my response to the claim that it is "very much in
the hacker spirit". I find it neither utile nor humorous. I think compiling it
to Javascript is trivial and not worth our attention.
~~~
girvo
While I may disagree with your conclusion here, I'm quite chuffed that I have
now learnt a new word: "utile". Neat!
------
remixz
Hiya! I'm the creator of this. I did not expect this to be here (nor did I
especially want it to...). If it isn't painfully obvious, this is a joke, so
please don't take it too seriously. Thanks!
~~~
RossPenman
Hey dude. I submitted this link. I'm really sorry if you didn't want it to be
here. I just saw the link on Twitter and thought it would be a cool thing to
share. I feel awful now knowing that you didn't want it to be here.
~~~
remixz
Hey, no problem! _Please_ don't feel bad (makes me feel bad :P). I mostly said
that because of knowing how HN can respond to jokes. I'm totally good with it
though. Thanks for enlightening HN with doge!
~~~
RossPenman
Thanks, that's a relief. :)
Congratulations on getting to number 1, anyway.
------
possibilistic
In case you aren't familiar with this meme, it's the Shiba Inu meme, termed
"Shibe" or "Doge".
* [http://knowyourmeme.com/memes/doge](http://knowyourmeme.com/memes/doge)
* [http://reddit.com/r/shibe](http://reddit.com/r/shibe)
* [http://www.reddit.com/r/supershibe](http://www.reddit.com/r/supershibe)
~~~
ufo
why do they even have two separate subreddits for that?
~~~
theorique
such popularity
so doge
wow
many reddit
~~~
AsymetricCom
This caused me to exhale air sharply through my nostrils.
~~~
Cthulhu_
Welcome to the internet!
------
harel
This is really funny. I quite enjoyed it. I wouldn't take any serious comments
here seriously. Unfortunately the distribution of sense-of-humour in the world
is not even. Some get a bigger chunk of it than others.
The next time anyone mentions CoffeeScript to me, I'll send them here. Much
better.
~~~
eudox
"At least Dogescript has reasonable scoping!"
~~~
DonPellegrino
Sadly true.
------
Lockyy
Yay, looping just for those who're upset.
very mad is true
many mad
plz console.loge with "rawr rawr stop posting amusing/funny things"
wow
------
andrewcooke
[http://knowyourmeme.com/memes/doge](http://knowyourmeme.com/memes/doge)
[https://www.google.com/trends/explore?hl=en-
US#q=doge&cmpt=q](https://www.google.com/trends/explore?hl=en-
US#q=doge&cmpt=q)
~~~
ryeguy
More to the point:
[http://www.reddit.com/r/supershibe](http://www.reddit.com/r/supershibe)
~~~
benatkin
Well, that subreddit is written by fans of supershibe, so someone who isn't
into it might not be interested in what they have to say. Google Trends is
already a trusted source for many.
------
eudox
All it needs now to be perfect is Hindley-Milner type inference.
------
dpcan
I like it. And I always wondered why a programming language couldn't exist to
work as follows:
Start
Run at 60 frames per second and do
Clear the screen
Draw a rectangle at (10,10) with size (100,50) and rotate it 20 degrees
Repeat
End
Or something along those lines - hopefully you get my point.
This would be fun for prototyping. I could just speak to my computer and have
it translate my plain english into a working program :)
~~~
nightpool
[https://en.wikipedia.org/wiki/Natural_language_programming](https://en.wikipedia.org/wiki/Natural_language_programming)
Plenty of things already work like this. For example, check out Inform 7
[http://inform7.com/](http://inform7.com/) or the Robot C natural language
module
[http://www.robotc.net/NaturalLanguage/](http://www.robotc.net/NaturalLanguage/).
Or, more generally, LOGO
[https://en.wikipedia.org/wiki/Logo_%28programming_language%2...](https://en.wikipedia.org/wiki/Logo_%28programming_language%29)
~~~
icelancer
LOGO was exactly what I thought of when I saw his post. 'pen up' and 'pen
down' and so forth jive very well with "natural" language.
------
arvidkahl
The thing that made me crack up was "console.loge" \- love it.
------
code_duck
I like the syntax! Reminiscent of
[http://en.m.wikipedia.org/wiki/LOLCODE](http://en.m.wikipedia.org/wiki/LOLCODE)
but probably more usable.
------
minimaxir
The logical next step would be to write a wordcloud generator in Dogescript.
Which always includes a "wow".
~~~
curiousdannii
A word cloud of all words in all doge images. Wow.
------
ddp
What's funny is that it reads a lot like COBOL.
------
thiderman
such plug
very terminal shibe
so hax
many monads
[https://github.com/thiderman/doge](https://github.com/thiderman/doge)
[https://news.ycombinator.com/item?id=6667414](https://news.ycombinator.com/item?id=6667414)
------
georgeoliver
Am I the only one who read the title as
[http://en.wikipedia.org/wiki/Doge](http://en.wikipedia.org/wiki/Doge) script?
~~~
andrewflnr
At first I got "dodgescript", but yeah, then I was thinking, "what the heck
does this have to do with doges?"
------
robot_
I got a huge laugh out of this. I love how some of the statements I ended up
writing could almost be interpreted as poetry, hilarious poetry.
------
crabasa
This is totally awesome. I was already excited about Zach's upcoming talk at
CascadiaJS [1] but now I can't wait to see what he's got up his sleeve.
[1] [http://2013.cascadiajs.com/speakers/zach-
bruggeman](http://2013.cascadiajs.com/speakers/zach-bruggeman)
------
twodayslate
Are there any tutorial that go about implementing your own compile-to-js
language?
------
daemin
It seems to me that a lot of these toy/joke languages are just thin wrappers
around existing languages. Something that can be accomplished by a few
#defines or regexes to transform it into a runnable language.
------
jiggy2011
Challenge for the month, persuade your boss to use this in production.
------
guerrilla
Well, you made me smile :)
------
agrias
Haha this has made my day
------
davidw
As someone who lives in territory that belonged to the Republic of Venice for
longer than Italy has been a going concern, 'Doge' means one of the leaders of
that Republic.
------
jawerty
I think it's pretty funny. This is firsthand codecomedy.
------
zamnedix
Reminds me of LOLCODE.
[http://en.wikipedia.org/wiki/LOLCODE](http://en.wikipedia.org/wiki/LOLCODE)
------
whalesalad
The console.loge had me laughing pretty hard.
------
gcatalfamo
I want the Kittyscript plugin of Dogescript
~~~
kalleboo
[http://en.wikipedia.org/wiki/LOLCODE](http://en.wikipedia.org/wiki/LOLCODE)
------
piracyde25
Reminds me of LOLCODE [1]
[1][http://lolcode.org/](http://lolcode.org/)
------
10098
BUT DOES THIS WORK WITH NODE.JS?!1
~~~
becojo
Yes. It's even a Node module.
------
Pot
I'm not a JS coder but it's really fun to take a look at Dogescript
------
heyandy
Very funny. Maybe useless but amusing.
------
davexunit
Thank you, doge. I love it.
------
hawleyal
This lang gave me cancer.
------
namuol
Somebody had to do it.
------
nickthemagicman
Is it turing complete?
------
lukehorvat
Comedy is dead.
------
tylerlh
so wow. much smile.
Great job on this. Made my day
------
jk211e
so useless
~~~
smosher
People missed the joke. Have an upvote.
~~~
deoxxa
Unfortunately, I don't think it was a joke. Such negative, very disappoint.
~~~
smosher
He did a better job of aping doge than you did, and not everyone needs to love
it.
------
a8da6b0c91d
Many of these internet memes are genius hilarious. People know the good Monty
Python bits 40 years on. The good doge pics bust my guts just like that stuff.
Will anyone get or remember this in 20 years? Interesting times.
~~~
chrismonsanto
All your base is still funny 20 years later!
~~~
chinpokomon
That can't be 20 years old yet can it? I loved how AYBIBTU was used to torment
many talk shows, like Love Line and Tom Green. Good times.
------
thenerdfiles
Is the point that we need a transpiler for any idiolect to JavaScript?
------
thenerdfiles
I don't even
------
T3RMINATED
pure garbage
------
Option_User_
Absolutely disgusting, may I request that you reddit/manchild honeypot users
please refrain from posting your degenerate garbage on here.
~~~
dannytatom
such anger wow
~~~
becojo
so mad
------
lemiffe
rly?
------
koala_advert
I hate this bullshit.
~~~
lowboy
Wow such angry.
------
jbeja
Ok....why? And please don't reply "Why not?"
~~~
monkeyspaw
Why build it? Because it appealed to the author, perhaps in a way you can't
understand.
Why share it? From the comments, I understand that it was shared by someone
else, and the original creator didn't intend for it to be put on HN.
Why judge it? That's the question I'm trying to figure out while reading this
thread.
~~~
jbeja
That doesn't answer my question :p
~~~
monkeyspaw
My point was that your question wasn't very interesting.
| {
"pile_set_name": "HackerNews"
} |
Payola – A Rails engine for Stripe - zrail
http://www.petekeen.net/introducing-payola
======
aculver
Congrats, Pete! You literally wrote the book on Stripe integrations and it's
awesome seeing you embrace open source as a way of sharing your knowledge. I
_love_ how well thought through everything you've done is. I can't wait to see
what you do with subscriptions!
As the author of the Koudoku gem[1] that Pete mentions, I'd encourage people
familiar with it to keep a close eye on Payola as well. Given Pete's technical
excellence, great implementation choices, and broad experience with different
types of Stripe integrations (my primary interest has been SaaS products,)
Payola could very well supersede Koudoku when he tackles the subscription side
of things.
[1]
[http://github.com/andrewculver/koudoku](http://github.com/andrewculver/koudoku)
~~~
itengelhardt
As a minor contributor to the Koudoku gem, I have to say that the structure
and especially the test coverage of payola are amazing. Well done, Pete!
Looking forward to how you integrate subscriptions into this
------
tarr11
Seems like this is a little bit cleaner than using stripe-ruby + stripe_event
+ checkout.js. I've never tried stripe-rails.
I get the idea behind making this code async. I've never experienced slowness
with Stripe's API, but I'm sure it happens!
One of the pain points for me is keeping all of my Stripe data in sync with my
AR models.
Would would be helpful for me would be a generator for a set of ActiveRecord
models representing all the stripe data, and have all the webhooks populate
those tables(and maybe rake tasks as well to initialize things).
------
pjungwir
This looks great--thank you!
Does anyone use ActiveMerchant? I looked at it several years ago and it seemed
to be just a mess: scanty documentation, hard to install/set up, lots of bugs.
So I've always gone with rolling my own payment code (usually Stripe,
sometimes PayPal), and it'd be nice to stop that, one way or another. It seems
like ActiveMerchant will be Payola's main competitor.
~~~
boucher
ActiveMerchant's entire purpose is about abstracting all the different payment
gateways into one API. If you know for sure you're going to use Stripe, or if
you want to use Stripe specific features (of which there are quite a lot worth
using) ActiveMerchant is less valuable to you.
It's also worth noting that ActiveMerchant hails from a time long before
Stripe or even Braintree, and so it supports gateways that are quite a bit
more complex. (disclosure: I wrote the ActiveMerchant Stripe support).
~~~
pjungwir
Yes. Now that I look over the ActiveMerchant docs again, I have a few
observations/questions:
\- With AM, you have to accept the credit card details (even if you don't
store them), and then send them on the the payment gateway. Even with AM's
Stripe implementation, it's your Rails app sending the info to Stripe. There
is no Javascript sending the details straight from the user's browser. (Or am
I mistaken about that?) So you have a higher PCI burden than if you used
Stripe in the normal way.
\- AM doesn't provide any persistence, just an API to the payment gateway, so
you would still have to roll your own tables/models for payment
success/failure.
\- I don't see support in AM for subscriptions.
Looking at AM again now, perhaps my previous comment was too harsh, although
that's how I remember it when I checked it out long ago. Even now, I'm tempted
to say that Payola's landing page _already_ has better documentation than AM.
~~~
thezoid
ActiveMerchant is just an abstraction layer around various gateways. Its goal
was never to provide a full-stack solution, but to simply making working with
numerous gateways easier because you have a common interface for working
between all of them.
As for subscriptions, that's a feature that could be added but many of the
Gateways that ActiveMerchant supports don't have subscriptions baked into them
(compared to say Stripe).
You could of course have your application store the credit card information
and manage the charges yourself using ActiveMerchant, but that opens a bunch
of PCI compliance and such.
If you just need to accept money and aren't already bound to a specific
merchant account, then plain ole Stripe or Payola are a better option.
------
neurotixz
The main thing preventing me from using stripe directly in my applications is
that it does not calculate sales taxes. The added complexity of handling that
myself makes it hard to integrate.
That aside, thanks for the book, I am buying it as I am sure that the advice
will be relevant for the payment platform I will use (hesitating between
Chargify and Recurly right now).
------
PhilipA
I look forward to when the gem is updated with support for
subscriptions.
~~~
joshmn
Was just about to say the same. Commenting here so I don't forget.
------
dTal
Can't speak to the code, but the name seems unfortunate:
[http://en.wikipedia.org/wiki/Payola](http://en.wikipedia.org/wiki/Payola)
~~~
zrail
It's money related and customers are never going to see it so I thought it
worked well. If people think it's offensive I'll think about changing it.
~~~
aarondf
There's a food product named Soylent... So I wouldn't worry about it too much.
~~~
spacehome
That's a poor choice of name, too.
------
rizzy
Awesome! I'm about a month away from adding Stripe into my app.
Thanks for the work on this.
------
michaelbuckbee
This seems pretty great: like an open source Gumroad or Cargo service.
------
studiofellow
Love that this gem does things the right way, like including background jobs.
As a Rails newbie trying to build billing, all the code/tutorials/gems I could
find weren't nearly this high quality or robust.
~~~
weaksauce
He wrote the book on payments with stripe. Which also happens to be a high
quality book and worthy of a purchase.
------
msie
So, is this only needed if you want to do asynchronous processing with Stripe?
I vaguely recall integration with Stripe was really simple probably because it
was synchronous.
~~~
zrail
A basic Stripe integration is pretty simple, but to do anything more advanced
you end up writing a lot of boilerplate code. Payola (and the other projects
mentioned in the post) attempt to simplify and eliminate the boilerplate that
you'd otherwise have to write.
------
tessierashpool
Like many others, I've read Pete's book, and it's very good.
------
namidark
Does anyone know of something similar for Paypal?
~~~
zrail
Right now things are Stripe-specific for expediency, but there's no reason why
Payola couldn't be extended to multiple payment providers, as long as they
provide the same basic capabilities.
~~~
studiofellow
This would be nice because it's always a good idea to have a fallback payment
processor.
| {
"pile_set_name": "HackerNews"
} |
What we lose by reading 100,000 words every day - pepys
https://www.washingtonpost.com/outlook/what-we-lose-by-reading-100000-words-every-day/2018/10/04/72dea000-b212-11e8-a20b-5f4f84429666_story.html
======
dpark
> _“My grafted, spasmodic, online style, while appropriate for much of my
> day’s ordinary reading, had been transferred indiscriminately to all of my
> reading, rending my former immersion in more difficult texts less and less
> satisfying,” she writes. Wolf soon tried again, forcing herself to start
> with 20-minute intervals, and managed to recover her “former reading self.”_
Translation: I stopped reading novels and then found it difficult to start
again.
The problem isn’t the fluff we read online. The problem is that _when you go
long periods without reading novels, it’s harder to pick up a novel and enjoy
it_. Reading fluff online doesn’t make you stop reading novels, though, any
more than watching TV makes you stop reading novels. It’s entirely possible to
do both, with the caveat that everyone’s time is limited. But there’s nothing
about browsing online that intrinsically makes it hard to read novels.
~~~
Taylor_OD
This. I consume books endlessly and almost always have... Unless I stop. A few
times in my life I've started reading a book and when it didnt hold my
interest I stopped picking it up. 3 months to a year later I would come across
an interesting book and then it was back to constantly reading.
To help avoid this situation I've made it part of my morning routine to read
for 15 minutes every morning (keeps the world/story alive in my head) and if I
don't pick up a book for 7 days I move on to another book.
Using this method I'm at 27 books for the year and I've moved on from 2.
~~~
kbenson
I actually limit my exposure to novels because I find it extremely hard to
stop reading once I start. I'll stay up _way_ too late, and sneak reading in
throughout the day, basically doing whatever I have to to continue and finish
the story. I enjoy this process, as it keeps me immersed, but it's not
healthy.
~~~
WalterSear
Someone should create a book that helps us manage this harmful technology! We
should, at least, keep novels out of the reach of children.
~~~
walshemj
Indeed "Is It a Book That You Would Even Wish Your Wife or Your Servants to
Read?"
This is a Quote from the LadyChatterley’sLover Trial in the 60's
------
daveslash
This reminds me of Ray Bradbury's _Fahrenheit 451_. Many people believe the
story to be about censorship, but Bradbury himself (in his later years)
publicly claimed it was about peoples' increasingly short attention spans. In
_Fahrenheit 451_ , people were afraid of books because the stories, thoughts,
and concepts were more than mere sound-bytes and were thus unintelligible to
the masses with shortened attention spans.
_Radio has contributed to our ‘growing lack of attention.’ … This sort of
hopscotching existence makes it almost impossible for people, myself included,
to sit down and get into a novel again. We have become a short story reading
people, or, worse than that, a QUICK reading people._ ~Bradbury
Source: [https://www.laweekly.com/news/ray-bradbury-
fahrenheit-451-mi...](https://www.laweekly.com/news/ray-bradbury-
fahrenheit-451-misinterpreted-2149125)
[Edit] Punctuation & typos.
~~~
tinalumfoil
I tried reading 451 once, since it seemed up my alley. I got about a chapter
in, put it down and never picked it up again. It wasn't that I didn't like the
book (I don't remember if I did), but I just happened to get really busy right
after that and just forgot that I ever started reading.
Maybe if I had read it I would understand the importance of having an
attention long enough to finish the book. Thing is if I had that attention
span the lesson would be lost on me since I'm already reading the book.
In other words, a book teaching the importance of reading books is teaching a
pointless lesson.
~~~
daveslash
Good point. I enjoyed it, but never picked up on the attention-span lesson at
all; I read it thinking it was about censorship. I know now that the author's
intent was to write a story about attention-span. I now take it to be a social
commentary. Side-note: after reading the book, I rented the movie on DVD. The
DVD had a scratch on it -- a scene faded out, the player hit the scratch and
skipped ahead... it landed on a fade in scene. I didn't even notice. The movie
_I watched_ was only about 20 minutes and I thought " _man, they left out a
lot of stuff_ ".
~~~
bllguo
bradbury's stance on _Fahrenheit 451_ is what convinced me that authorial
intent is irrelevant.
------
sjg007
Hopefully, some aspiring psychologist or neuroscientist will be able to
quantify the effect. My hypothesis is that this is driven by an information
seeking dopamine reward cycle and that for some reason we lose the capacity in
our executive function to regulate it. Much like an addiction. There is so
much digital distraction. Similar to ADD perhaps? You can also see this in
modern movies, shows and cartoons where the pace has quickened. It's hard to
watch old movies or older cartoons etc... Watch old episodes of Sesame St vs
new episodes and you can see the shift. And even then kids get bored of new
episodes with the ultimate addiction being youtube. I actually don't find
youtube too bad in moderation because kids do take ideas and try to play with
them in the real world. You just have to be sure they aren't overexposed and
exposed to ideas that are not healthy. What I am saying is that I do see
imaginative play despite youtube or that incorporates youtube.
So in this hyper digital world it feels like we have less time. Even though
time remains the same. There is always something seeking our attention and for
the most part it is unimportant but we can't ignore it.
------
gdubs
After the 2016 election, during which I spent an inordinate time online,
constantly searching for some 'new' piece of information like a smoker
lighting up a cigarette before the last one had finished burning, I realized
that I was having a harder time than usual getting through the books I was
reading. The effect was two-fold: not only was I spending more time online, I
was fragmenting my attention; when I sat down to read a book, I found my brain
was pausing for interruptions. I was training myself to self-interrupt. The
book, "The Distracted Mind" goes deep into the science behind this phenomena,
and is really worth checking out.
------
Aeolun
It sounds a bit silly to assume that reading a hundred reddit posts a day
would interfere with my reading of a good novel. I haven’t encountered
anything even close to it, but I guess it might be different for others?
~~~
bootsz
I've definitely noticed it myself. I have a much harder time reading novels or
any long-form content these days, and I suspect it's due to consuming lots of
very-short-form content on a daily basis. On HN and elsewhere on the internet
you can consume a huge number of distinct ideas in a very short time. This
causes me to now be impatient with long-form pieces where I find myself
wanting to just "get to the point already". It's a quantity vs. quality
problem. The internet tends to favor quantity.
It's something I'm trying to work on because there's obviously immense value
in books and long-form reads (and a lot you get out of them that you can't get
out of little snippets and quick articles).
~~~
jodrellblank
" _there 's obviously immense value in books and long-form reads_"
Is there? Or are you just saying that because it's expected and you'd feel
embarrassed if you said otherwise?
Humans are pattern matchers, what if we see patterns more easily from many
examples, rather than one? What if we extract patterns more easily seeing them
from many points of view instead of just one author?
Is a neural network better trained on one high detail photo, or a dataset of
many photos?
~~~
jodrellblank
Someone's got to have a better comeback than a downvote. You know where
there's "obviously immense value"? Oil fields. People literally kill to
control them.
Nobody kills to take control of a library.
At best you could say people get into massive debt for education. But at the
same time, education is clamouring for online courses, videos, conferencing,
teachers, interaction, and textbooks are widely considered a problem - low
priority, low quality, a racket - going back at least to Richard Feynman's
famous story about reviewing them.
Books, especially academic books, are increasingly given away for free online
- when people will pay for entertainment.
How many people learn from a teacher, a course, or learn by doing, vs how many
actually learn from books?
People don't treat books the way they treat things they value. There may be
immense value in books, it's not "obvious".
------
jasode
First off, I haven't studied Maryanne Wolf's neuroscientific research on "deep
reading" and its claimed benefits. But, as a person who has read most of the
major thick books like Moby Dick, War & Peace, Les Miserables, Middlemarch,
Proust, and reading _cover-to-cover_ the old computer books like the 3-ring
binders of C Language[1], my lifetime reading experience could offer
counterpoint to why "deep reading" seems to be a lost activity:
_Most of the text out there just isn't of a quality that deserves or rewards
deep reading._
My pet theory is that the rampant skimming or "shallow reading" is basically
the brain performing a hidden Bayesian calculation that any random text put in
front of us isn't worth the effort of deep reading.[2] This is why many of us
go to HN comments first instead of reading the actual article. The Bayesian
priors told us that the "tldr" in the comments got to the point, while the
article all too often had a self-indulgent author who meandered all over the
place and wasted our time. Therefore, "shallow reading" isn't bad for us...
it's our way of optimizing against "information overload".
Even college professors who are used to heavy reading workloads skim new work.
I'd argue this is another manifestation of Bayesian priors.
To go back to my C Language example: I didn't really learn C by reading those
binders cover to cover. (Deep reading.) I _really_ learned it when I did
_shallow reading_ across fragmented sources like USENET comp.lang.c forum and
playing around with toy programs. So maybe deep reading isn't the answer so
much as the restricted attention that comes from not being distracted by
Twitter and Instagram notifications. In other words, maybe we're conflating the
benefits of "uninterrupted study" with "deep reading".
[1]
[https://www.betaarchive.com/forum/viewtopic.php?t=33794](https://www.betaarchive.com/forum/viewtopic.php?t=33794)
[2]
[https://en.wikipedia.org/wiki/Sturgeon%27s_law](https://en.wikipedia.org/wiki/Sturgeon%27s_law)
~~~
EGreg
For anyone who found the above too long, basically she means that deep reading
often wastes too much time and you don’t know ahead of time what to read. She
didn’t mention that you can check other shorter sources before you commit to
reading, like reading this comment before hers.
~~~
jasode
_> , basically she means that deep reading often wastes too much time _
I apologize that you found my text length was too long but I thought the extra
background was necessary to state _why_ deep reading is often a waste of time.
If I _only_ state a 1 sentence punchline in my original post, it can seem like
a cheap hit-and-run comment and therefore not really engaging with the
article's arguments. (Or the extreme brevity would just invite snark such as
_" you probably don't have deep reading skill"_. Therefore, a writer's reflex
is to defensively preempt that with extra words that try to establish street
cred.)
I thought it would be interesting to share that many of us can do "deep
reading" and yet we don't bother with the effort -- _and that behavior is not
a contradiction_. Instead, it's an optimization of limited reading time. This
tradeoff doesn't seem to be reflected in Maryanne Wolf's research.
_> She didn’t mention that you can check other shorter sources before you
commit to reading,_
I actually did and I specifically used "HN comments" as an example of readers
trying to find a tldr summary and why it's a rational strategy.
~~~
StevenRayOrr
> _I apologize that you found my text length was too long but I thought the
> extra background was necessary to state why deep reading is often a
> waste of time._
I read @EGreg as poking a bit of fun, rather than raising a legitimate
complaint: simultaneously supporting your point, but also gently pushing at
the limits of shallow reading.
~~~
ninju
<sarcasm> We need a digest of HN comments for people who don't have _time_ to read
the comments about an article regarding people not having time to read
everything that they come across </sarcasm>
------
mapcars
answer is: time :)
~~~
trukterious
It's more like 'thinkjuice'. The act of making all the perceptual
discriminations required to consume 100,000 words cuts into the cognitive
budget.
------
casper345
Also might just be easier and more convenient to read the articles online. We
have laptops, phones, tablets, emails as mediums to read "100,000" words but a
novel (preferably on canvas) is physical and limited by nature. I can read
hacker news at work but I cannot just whip out Oliver Twist when I'm
'sneaking' a break.
------
hyko
_the average person “consumes about 34 gigabytes across varied devices each
day” — some 100,000 words’ worth of information_ \- seems like an odd and
misleading statistic.
~~~
tw1010
The precise number doesn't really matter. The steelman interpretation is that
we just read and skim too much each day.
~~~
jgtrosh
If so, the title of the article is meaningless clickbait. From reading the
article it seems to me the author takes the phrase at face value as they
imagine skim reading a third of Middlemarch in a day. I'm surprised they
describe different modes of reading as a skill future people will have, since
afaik it's normal for most people to approach different types of texts at
different speeds and paying attention in different ways. I can skim quickly
over comments and articles just building an understanding of the context and
basic information and conclusions people are using, while I will read a
scientific article anywhere between ½ and 3 pages per hour if I'm actually
trying to understand a difficult concept.
------
jillav
Reading that "online content destroys the old way we used to consume
information" kind of observation always remind me of that :
[https://xkcd.com/1601/](https://xkcd.com/1601/)
------
neonate
[https://outline.com/HRfAnU](https://outline.com/HRfAnU)
~~~
sbr464
I was just thinking a service like this would be useful, and the large sum I
would pay to not be inundated with animated ads when reading articles.
~~~
Cthulhu_
If it's just the articles, ad blockers work, as do reader modes. Even AMP
would if they offer it, but AMP is an evil hostile takeover attempt from
Google.
If it's the subscription paywall, just get the subscription. Else, don't read
it on a site infringing on copyright - you're not entitled to the content.
~~~
kd5bjo
How about illegally charging more if you don’t want to consent to their
tracking regime?
~~~
heyyyouu
How is it illegal to charge?
~~~
kd5bjo
See
[https://news.ycombinator.com/item?id=18199132](https://news.ycombinator.com/item?id=18199132)
.
| {
"pile_set_name": "HackerNews"
} |
Verifications.io Leaks Personal Records of 2B Users - cybarrior
https://cybarrior.com/blog/2019/03/28/verifications-io-leaks/
======
jjjjjjjjjjjjjjj
Why are so many MongoDB databases left unsecured? Are they extraordinarily
hard to secure? I imagine the people who are working with these databases must
be aware of the numerous leaks, and pay close attention to securing the data,
no?
~~~
Twirrim
Historically, MongoDB was unauthenticated and insecure by default. Because
_that's_ always a good idea.
You should never assume anyone is going to use your product in a secure
fashion, and make it so that they have to at least make _some_ effort towards
security.
Other than that, writing new features is fun, and you can get so many
developers (that don't think about security) for the same amount of money as a
good security professional, or a developer with even half an ounce of security
sense, commands.
Security is always inconvenient, takes extra effort, and is invisible. So many
companies and managers deprioritise it over more visible feature work,
forgetting that security in and of itself _IS_ a feature.
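For what it's worth, the client side of doing it properly is tiny. A rough
pymongo sketch (hostname and credentials are obviously placeholders; double-check
the exact kwargs against the driver docs for your version):

    from pymongo import MongoClient

    # Assumes access control is actually enabled on the server
    # (security.authorization: enabled in mongod.conf) and that the
    # instance is not bound to a public interface.
    client = MongoClient(
        "db.internal.example.com", 27017,
        username="app_user",
        password="use-a-real-secret",
        authSource="admin",
    )
    print(client.mydb.users.count_documents({}))

The point being that turning authentication on costs a handful of lines, which
makes "it was off by default" all the more painful.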
~~~
jdsully
A lot of databases have this weird idea that there is some secure "internal
network" and it's OK to just pretend it's 1995 in there. Antirez actively blogs
about how "insecure" Redis is, but it's OK because you just don't put it on the
internet [1]. Others just avoid the subject completely. Never mind that
internal networks get infiltrated all the time.
Security in depth is just not a thing a lot of people think about right now.
[1] [http://antirez.com/news/96](http://antirez.com/news/96)
~~~
jchw
Okay, let's be fair, and I'm sure you realize this: having network ACLs that
prevent unauthorized access is absolutely a good idea. "Internal networks" are
not dead - they've become more advanced with "VPC" services and software
defined networking.
Tunnelling Redis protocol over mutual TLS or something like that sounds like a
good idea, but I don't think I've seen anyone doing that :(
Frankly, I would love it if there were a simple, open standard for
authentication so every database didn't have to redo it. Maybe mutual TLS is
that answer, though traditionally getting the infrastructure for that correct
has been difficult.
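FWIW, the client side of exactly that isn't hard if you terminate TLS in front
of Redis with something like stunnel or spiped. A rough redis-py sketch (paths
and hostname are made up; the ssl_* kwargs are worth double-checking against
the redis-py docs):

    import redis

    r = redis.Redis(
        host="redis.internal.example.com",
        port=6380,                            # the TLS listener, not plain 6379
        ssl=True,
        ssl_certfile="/etc/ssl/client.crt",   # presenting a client cert gives mutual TLS
        ssl_keyfile="/etc/ssl/client.key",
        ssl_ca_certs="/etc/ssl/internal-ca.pem",
    )
    r.ping()

The awkward part is, as you say, the certificate infrastructure, not the client
code.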
~~~
viraptor
> I would love it if there were a simple, open standard for authentication so
> every database didn't have to redo it
There is:
[https://en.wikipedia.org/wiki/Simple_Authentication_and_Secu...](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer)
~~~
jchw
I've only ever seen it used with IRC but this most certainly is the closest
thing. Guess I hope for more adoption in the future.
~~~
X-Istence
SASL is also used with Dovecot/Postfix for example.
------
kitotik
“However, after further investigation and examination, DynaRisk updated its
report to state that the combined number of emails leaked is 982,864,972 to be
exact, and not 2 billion as previously reported.”
The headline seems wrong.
------
jjjjjjjjjjjjjjj
Source [https://securitydiscovery.com/800-million-emails-leaked-
onli...](https://securitydiscovery.com/800-million-emails-leaked-online-by-
email-verification-service/)
------
chrisbolt
[https://news.ycombinator.com/item?id=19333600](https://news.ycombinator.com/item?id=19333600)
------
skilled
How exactly did this get pushed to the front page?
This adds _nothing_ new to the conversation and consists mostly of quotes from
another article.
I was expecting an actual follow-up, and this is not it.
~~~
pmoriarty
From the HN Guidelines:
_" Please don't complain that a submission is inappropriate. If a story is
spam or off-topic, flag it. Don't feed egregious comments by replying; flag
them instead. If you flag something, please don't also comment that you did."_
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
~~~
skilled
Oopsie! Thanks for the heads-up.
| {
"pile_set_name": "HackerNews"
} |
What Is Functional Programming? - asp_net
https://thomasbandt.com/what-is-functional-programming
======
rajman187
The author quotes a definition
> A functional language is a language that supports and encourages programming
> in a functional style.
Seems rather circular
~~~
hans1729
> Seems rather circular
* recurrent
seems more appropriate in this context :-)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Can I learn to be a programmer/developer without going to university? - lookitzpancakes
I've been wondering lately if there are any comprehensive resources (outside of a formal college or university) that can take you from being a computer enthusiast to a fully knowledgeable developer/programmer? Figured this'd be the place to ask! Thanks in advance, HN!
======
lumberjack
I'd go as far as saying that if you aren't able to teach yourself
programming, then you can't be a programmer, no matter what classes and
universities you attend. Programming involves a lot of continuous learning
anyway.
~~~
benawabe896
I would agree with this only to a point. Sometimes there is a wall that you
run into that you can't get over by yourself. The little hint, nudge, or push
in the right direction can open up understanding that maybe wasn't attainable
alone.
------
lmm
My personal experience is that the only way to learn is by doing it. "Scratch
your own itch", build programs to do the things you want.
The funny thing is that while few of my formal university classes had anything
to do with programming, it was still a great environment in which to learn -
mostly because of the people I met there rather than anything else.
~~~
printerjam
Agreed. You gotta just start programming things that matter to you. I know
many good programmers who have non-CS degrees but have been hackers since they
were kids. Through trial and error, and a lot of reading on the side, they've
turned out to be phenomenal programmers. And each job they've held along the
way, they've learned a ton from other people on their teams.
------
goldvine
I never took any university courses on programming/computer science, and I'm
working full-time as a product developer at a digital agency.
In fact, when I interviewed at FreshForm, I wasn't asked any questions about
my college/courses. Everything was based on the work I'd done, which came as a
result of learning over about 4 years. I started slow with html, css, etc.
Then moved on to PHP/MySQL, eventually started building crappy web apps. And
now I'm building better web and mobile apps with Ruby, etc.
Year 5 is really when everything clicked for me, but I was going through
school and not focused 110% on it.
I learned mostly from online tutorials, and building side projects that kept
me interested. Books were/are helpful at times, but most of the time you will
learn the most by jumping in over your head and figuring everything out the
hard way. But there are fundamentals that need to be learned up front and
books are a great medium for that.
~~~
lookitzpancakes
Awesome, and this goes for everybody else, too: thanks for the replies!
------
USNetizen
I actually got into Software Engineering this way. I started off working as a
Systems Administrator, which required me to script jobs on servers/clients
and, eventually, I inherited the entire intranet for the company which forced
me to learn PHP and Java. Prior to this, I mostly dabbled in HTML, CSS, etc.
since I was a teenager. Since this, however, I have completed my Computer
Science degree (while still working) and learn new languages, technologies and
methodologies mostly from online documentation and tutorials. I went from
being a low-level sysadmin to a senior software engineer to a program manager
for software engineering within a span of four years by spending every free
moment I had learning, adapting and experimenting with new apps.
------
khyryk
Are you assuming that a university grad is necessarily "a fully knowledgeable
developer/programmer"? ;) Independent study of new concepts, programming
languages, libraries, etc., is almost a requirement in order to be a
successful programmer, but it's ultimately experience that brings one closer
to what you describe, and I think that many people on HN would agree with me
that independent study is sufficient to bring you to a position from which you
can begin to acquire said experience.
Oh, and the customary nod to SICP: <http://mitpress.mit.edu/sicp/>
------
JackpotDen
University was a terrible choice for me.
It boils down to this :
Do you prefer a structured learning experience and being in a meatspace
community away from where you were living?
Do you prefer having a crappy job and tutoring yourself during that time
period in an unstructured fashion?
------
kabuks
Absolutely!
We just graduated our first cohort, and after 8 weeks, they've learned enough
to get entry-level Ruby jobs: techcrunch.com/2012/05/10/dev-boot-camp-is-a-
ruby-success/
There are also less intense courses out there like bloc.io
~~~
SilasX
Or, in my case, entry-level Python/Django jobs :-)
------
franze
books
if you are really a beginner, start with "head first programming" (which is
python), after this go forward with "head first javascript" (if you like the
head first approach). do all tasks. after this choose your language, read the
best books on that topic (go to amazon) front to cover - while coding lots of
really tiny projects (one after the other). try to create one simple script
per day. publish them on github.
two to three years later you will be a "programmer", you will probably be able
to get a job in this area much sooner.
| {
"pile_set_name": "HackerNews"
} |
Music Icon Prince Dead at 57 - aaronbrethorst
http://www.huffingtonpost.com/entry/prince-dead-dies_us_57190013e4b0c9244a7b2a5b
======
kintamanimatt
As tragic as his death may be, HN just isn't the venue for mainstream
celebrity gossip or news.
Oh, and downvoters, is this really what you want HN to become?
~~~
Jaruzel
I've recently finally jumped in with both feet and registered a HN account
because I got sick and tired of what passed for 'news' on other so called
'tech' sites.
I want to be part of an above-average-IQ community that is interested in the same
things I am. If I wanted celeb news or gossip, I'd go to the E! Online site.
Yes it's sad that Prince is dead, and I'll be playing When Doves Cry later on,
but this is not the place for this news.
~~~
Delmania
Both this comment and the parent comment appear in some form when a celebrity-
related article appears. Neither one takes into account the fact that Hacker
News is a site for news of interest to hackers. That means occasionally, there
will be articles that aren't focused on either technology or startups.
Prince was an extremely talented musician, and many people of all walks
enjoyed his music. For many people, his music was a part of their development.
I'd say news of his death is of interest.
~~~
kintamanimatt
I'm sure 30 minute healthy recipes are also of interest to hackers, after all,
what time-starved hacker doesn't want to eat good food? Just because it might
be of interest to hackers doesn't mean it's appropriate for HN.
------
jgrahamc
Thanks for the music, Prince.
RIP
~~~
bcook
He also had a damn good sense of humor (album cover);
[https://en.m.wikipedia.org/wiki/Breakfast_Can_Wait](https://en.m.wikipedia.org/wiki/Breakfast_Can_Wait)
~~~
jmspring
And composed/wrote many songs for others. I think recognizing talented icons
when they pass isn't unfit for here.
------
Delmania
I'll be listening to Purple Rain and 1999 in a few to remember his music.
------
arrpeegeee
R.I.P. Ƭ̵̬̊
------
poorman
How is this relevant to HN?
| {
"pile_set_name": "HackerNews"
} |
Why did Quora choose Python for its development - Alex9762
http://www.quora.com/Quora-Infrastructure/Why-did-Quora-choose-Python-for-its-development
======
waiterZen
Python is a good choice.
| {
"pile_set_name": "HackerNews"
} |
Postmortem from getting kicked out of college for hacking - getbackto
https://medium.com/@wololodev/fdd85b99e0c5?hnattempt=2
======
lun4r
The encryption method is a simple XOR cypher. It uses the key
"581fad87738939".
    <?php
    function encryptSID2($sid) {
        return dechex(0x58 ^ $sid{0}) . dechex(0x1f ^ $sid{1})
             . dechex(0xad ^ $sid{2}) . dechex(0x87 ^ $sid{3})
             . dechex(0x73 ^ $sid{4}) . dechex(0x89 ^ $sid{5})
             . dechex(0x39 ^ $sid{6});
    }
    ?>
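A rough Python translation, for anyone who wants to poke at it (this assumes
the SID is a string of decimal digits, which is what the PHP's loose
string-to-int coercion implies; since XOR with a fixed key is its own inverse,
reversing it is just as easy):

    KEY = [0x58, 0x1F, 0xAD, 0x87, 0x73, 0x89, 0x39]

    def encrypt_sid(sid):
        # XOR each digit's numeric value with the matching key byte and
        # concatenate the hex representations, mirroring the PHP above.
        return "".join(format(k ^ int(c), "x") for k, c in zip(KEY, sid))

    def decrypt_sid(token):
        # Every output byte above ends up as two hex characters, so split
        # into pairs and XOR back with the same key.
        pairs = [token[i:i + 2] for i in range(0, len(token), 2)]
        return "".join(str(int(p, 16) ^ k) for p, k in zip(pairs, KEY))

Which is to say: anyone who saw a couple of (SID, token) pairs could have
recovered the key by hand.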
------
JoshTheGeek
?hnattempt=2 is the query string of the URL...
| {
"pile_set_name": "HackerNews"
} |
Need to list related videos along with their published date in YouTube? - divyumrastogi
https://chrome.google.com/webstore/detail/youtube-video-list-dater/mbaflkdlneldejanggphlhcepncjfaco
======
divyumrastogi
check it on github: [https://github.com/divyum/youtube-
dater](https://github.com/divyum/youtube-dater)
| {
"pile_set_name": "HackerNews"
} |
Norwegian lawyer had visa withdrawn after private chat with client on Facebook - Deestan
http://translate.google.com/translate?sl=no&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.vg.no%2Fnyheter%2Finnenriks%2Fartikkel.php%3Fartid%3D10104089&act=url
======
belorn
Be you a lawyer talking privileged to a client, a priest talking privileged to
a follower, a hot-line worker talking privileged to someone thinking about
suicide, or a social service person talking to a child who has been sexually
assaulted, everyone's communication is equally collected.
This is after all the result of ubiquitous surveillance. When people learn
about it, the reaction is very simple: people stop talking. They do not call
the lawyer. They don't call the priest. The person thinking about suicide
won't call the hot-line, and the sexually assaulted child will stay quiet in
fear of people finding out. After Germany introduced their ubiquitous
surveillance law, this was exactly what the statistics ended up showing. I
wonder, while hoping not, if the same result will happen in the US too after
the current wave of news.
~~~
nikatwork
Bizarrely, this whole scenario is very similar to the privacy issues explored
in Brunner's 1975 book "The Shockwave Rider"[1].
Perhaps, as in the book, we need to setup an independent encrypted
communication service where people can vent their frustrations at pervasive
surveillance.
[1]
[http://en.wikipedia.org/wiki/The_Shockwave_Rider](http://en.wikipedia.org/wiki/The_Shockwave_Rider)
~~~
Zigurd
Never mind PRISM. The week before the PRISM leaks, the news was full of hack
attacks by state actors against US business and government targets. Why are we
emailing and talking in the clear? That's just dumb.
Moreover, the toothpaste can't be put back in the tube. Short of
transformative change in government, how do we know there isn't another PRISM
at another TLA?
The only way to restore confidence in communications is to secure them against
all attacks.
~~~
fnordfnordfnord
You're right, but I'd still like to see us make a giant collective bowel
movement on the spilled toothpaste, and generally make it so undesirable for a
government agency to use the toothpaste that they'll only do so when no other
alternative exists, or only when it's actually very important to do so.
------
Vivtek
Ah. This one is actually kind of credible.
But if the client was already accused of terrorism, then this monitoring was
on his end, and surely covered by a specific warrant. So this isn't
(presumably) the kind of massive data hoovering that is the primary concern;
every country does this kind of thing. (Back when I was running Despammed.com
I'd get requests from various LEOs - one came with a real live subpoena for
information related to an identity theft ring, and one was from Italian
authorities pursuing an insult to Mary.)
Where it gets to be a concern is revoking a guy's visa because he's defending
a terror suspect.
~~~
drrotmos
I know this isn't an opinion shared by the current US administration, but
having a fair trial for one's crimes is a human right. It's a right guaranteed
by articles 10 and 11 of the UN Universal Declaration of Human Rights.
Part of having a fair trial includes having legal representation, and the
ability to communicate with your legal council in confidence.
Eavesdropping on privileged lawyer-client conversations, regardless of
legality is outrageously indecent and _should_ be illegal. Revoking a lawyer's
visa because he is representing a particular client is equally outrageous,
especially due to the chilling effects it causes upon the legal community
making it much more difficult for suspects of serious crimes to find good
legal representation.
~~~
rayiner
It's not the view of any administration. The client was Norwegian-Chilean.
Foreigners not in the US don't have A right to counsel (which is the
constitutional basis of attorney client privilege in the US).
And I'd argue that's the way it should be. Every time courts declare something
unconstitutional, they use up limited political capital. I don't think
defending the "human rights" of non Americans is a valid use of that political
capital.
~~~
meepmorp
> Foreigners not in the US don't have A right to counsel (which is the
> constitutional basis of attorney client privilege in the US).
Do you have a cite for this? I know that there's no right to counsel in civil
trials, and this includes immigration courts (say in a deportation hearing),
but thought that criminal trials do guarantee right to counsel regardless of
citizenship.
Edit: sorry, I misread what you wrote. It's totally reasonable and doesn't
deserve downvotes.
FWIW, web searching does seem to indicate that there's no explicit
constitutional basis for attorney client privilege, and that it's just
provided for by US (and often, state) law.
~~~
DannyBee
Wong Wing v. United States, 163 U.S. 228 (holding that noncitizens charged
with crimes are protected by the Fifth, Sixth, and Fourteenth Amendments)
Fong Yue Ting v. United States, 149 U.S. 698 (concurrence arguing that
noncitizens are protected by the First, Fifth, and Fourteenth Amendments)
Almeida-Sanchez v. United States, 413 U.S. 266
Bridges v. Wixon, 326 U.S. 135, 161
etc. The only holding otherwise is the 4th amendment one of a number of
appeals courts.
~~~
meepmorp
Thanks. I kind of assumed that those protections extended to non-citizens, but
it's nice to have actual case law.
------
anologwintermut
I'm shocked, shocked to find that the NSA is spying on a foreign terror
suspect in a foreign country communicating with another foreign person.*
Actually, I am shocked. Why'd the lawyer use Facebook for privileged
communication? Why does the NSA care about someone who posted a threatening
video in Norway? Hint: they don't. If they looked, it's probably because
Norwegian Intelligence asked them to. (Which might well be a huge legal
problem for Norway.)
In fact, it seems there is little evidence that any of this happened. Marking
messages as spam does not seem like something the NSA would do and as to
denying him entry into the US: if US gov is in the habit of denying visa's to
those who represent a foreign terror suspect, they didn't need Facebook to
establish that.
*Note, attorney client privilege doesn't apply to cases completely out of US jurisdiction with lawyers who are not lawyers in the US
~~~
polemic
It's hard to say without knowing what was said, but the fact that his visa was
withdrawn on the basis of a conversation between a laywer and his client is
alarming.
In other words: did the US government consider him a threat, or was it a
tactic to infringe the alleged terrorist's right to a fair trial? If the
latter, then it's an abuse of surveillance privileges.
~~~
spinlock
It would be alarming if a lawyer and his client were using facebook for
privileged communications. That's your first hint you need a new lawyer. If
they can't understand Facebook's TOS they can't possibly defend you.
But, seriously, these are foreign nationals. We've had a longstanding
distinction between foreign and domestic surveillance. Think of it this way,
would you really want to need permission from Pakistan to surveil Osama bin
Laden? He was an enemy of the USA and he was being harbored by Pakistan.
Different rules apply in that case than in a domestic case.
~~~
cmircea
Horrible example. In the case of Osama the US could have broken each and every
law in Pakistan and nobody would give a shit.
This is about a suspect, at best. Not the world's most wanted terrorist.
------
vidarh
Here's a rough/quick manual translation:
\---
Private Facebook correspondence between John Christian Elden and a client
charged with terror offenses was monitored by American security services
(NSA), the lawyer claims.
Elden was discussing scheduling of the case with the Norwegian-Chilean client
(20), who was charged with publishing a video where he threatened Norwegian
officials and the royal family. Elden says that he has documentation that it
was American authorities that were snooping on his Facebook-profile, TV2
writes.
\- That we as Norwegians are under surveillance by American authorities, I am
not particularly happy about. It is uncomfortable to know that someone
continuously reads what you write when you communicate with other persons via what
one believes is a closed channel, says the lawyer.
The messages of the person in question got deleted on an ongoing basis, and in
the chat-log they are now marked as "identified as offensive or marked as
spam". Four days after the conversation, the well known lawyers visa was
withdrawn.
Elden says his client wished to show up in court, but that he no longer is
able to contact him after the Facebook-profile was deleted.
Facebook is one of the websites mentioned in The Guardian's and the Washington
Post's revelations of the NSA's surveillance of foreign citizens in the PRISM project.
Minister of Justice Grete Faremo has sent a request to the US, where the
justice department requests a clarification about whether or not Norwegian
citizens have been under surveillance.
\---
The main thing to note is that the bit about the deleted Facebook profile was
unclear in the machine translation. It appears quite clear in the original
article that the reason his communication with his client ceased was that the
client used Facebook as his only communications-channel with his lawyer, and
so the deleted Facebook profile means Elden is _unable_ to communicate with
his client.
It is not made clear whether he suspects or claims that American authorities
caused the profile to get deleted too, or if the client got spooked by the
deleted messages.
------
Deestan
Summary: Lawyer conversing with client accused of terrorism, via private
Facebook messages. Client's messages suddenly deleted as "spam", and 4 days
later the lawyer was notified that his US Visa had been revoked.
~~~
smartician
In other words: A Norwegian lawyer notices something weird going on with his
private Facebook messages, and four days after this, his visa gets revoked.
Later, after reading about PRISM in the morning newspaper, he's convinced that
the NSA has been spying on him.
It's obvious! After all, spy rule #1 is "make sure your subject knows he's
being spied on by marking his messages as 'infringing or spam'". And it's
totally impossible that the visa thing coincided with this.
~~~
einhverfr
Twice in my life I have noticed things that made me wonder. The first time I
currently think was in my imagination. This time I am not so sure. I am
noticing for example a cell phone whose battery level drops when connected to
the charger and not in airplane mode. Google chat messages apparently long
delayed. That this started after the Snowden leak makes it even more
suspicious to me. I am an American citizen residing abroad.
I could just be seeing things that aren't there. However as a vocal opponent
of this sort of surveillance, it would make sense that I would be caught up in
some sort of filter especially as the hunt for Snowden continues.
(So note: If you are listening I think you might be. I am a patriot, as I
believe Snowden is. I have not provided any active assistance for him, but I
applaud those who do. My wife thinks I am too political but at some point my
loyalty to my country, the United States, compels me to stand up to this sort
of thing.)
~~~
Filligree
Battery levels will drop when connected to the charger - because of code in
the battery controller. It's bad for the battery to stay at 100% for any
amount of time, so the controller will cycle it in the 95-100% area. Smarter
controllers will hide what they're doing.
Google chat messages can be delayed for any number of reasons, ranging from
internal glitches to "Your network connectivity was bad at the precise moment
the message was attempted to be delivered, thrice, and it retries at
exponentially longer intervals."
~~~
einhverfr
But go from 5% to 0%? I am used to glitches but there are oddities here that
are either hardware issues (battery discharging while low and connected to the
charger), network issues. This is beyond what I am used to. Again, I could be
connecting the dots incorrectly but I would not be surprised if I am right :-P
------
woof
* The lawyer John Christian Elden defends several terror suspects, including Arfan Bhatti (now arrested in Pakistan) who was charged with terror planning against the US embassy in Norway several years ago.
* He discussed a court meeting with another client on Facebook; it was not an attorney–client privileged discussion. Elden was briefed by the FBI on their e-surveillance in 2005 (with a group from the Norwegian Justice dept.) so he probably has a good grasp on how private Facebook really is.
* His US Visa was revoked four days after the conversation, the US embassy in Norway cites "Homeland Security"
* Elden's comments give the impression that he believes he's automatically flagged, while still being a friend of the US.
More facts:
[http://translate.google.com/translate?hl=en&sl=no&tl=en&u=ht...](http://translate.google.com/translate?hl=en&sl=no&tl=en&u=http%3A%2F%2Fwww.dagbladet.no%2F2013%2F06%2F11%2Fnyheter%2Finnenriks%2Fovervakning%2Fusa%2F27658066%2F)
------
werid
This lawyer is a known figure in Norway and not some guy looking for his
fifteen minutes of fame. He has defended people on terrorism charges in Norway
before, and gotten them acquitted on those charges (while other lesser charges
still stuck).
On his twitter, he claims that the US embassy doesn't know why his visa was
revoked, only that "Homeland security's computers" are telling them it's
revoked.
This is then connected to NSA leak by journalists. He is still waiting for a
proper explanation from the US embassy.
------
Zimahl
Isn't the NSA supposed to be for foreign intelligence only? I don't find it
shocking that the US would track the messages of an accused terrorist. What I
find funny is that a lawyer used Facebook for privileged communication.
------
einhverfr
Just remember, if you ever want to visit the US and you are not an American,
you must be much more supportive of American foreign policies than most
Americans are!
------
tropicalmug
Isn't this a bigger deal than just monitoring supposedly private Facebook
communications? This would also violate attorney-client privilege too, right?
EDIT: This is just naïveté on my part.
~~~
saraid216
Why would the not-an-American-citizen lawyer speaking to a not-an-American-
citizen have attorney-client privilege from the perspective of an American
governmental organization?
Edited to add: It's remarkably difficult to quickly find information about
attorney-client privilege in settings other than US, UK, Canada, and
Australia. I found a brief mention that the privilege does not apply to in-
house counsel in the EU, and that Brazil breaches it with a court order, but
that's all. I'd hope I could find more given some more time, but I need to get
back to work.
~~~
anaptdemise
Ha. Also, what kind of attorney would have the kind of conversation covered
under attorney-client privilege on Facebook, PM or otherwise?
~~~
nullc
The same kind that run third party provided spyware on their personal
computers in order to take exams in law school.
(In other words: Practically all newly minted attorneys in the US)
There is no education in law school in the US at least on responsible data
handling, and— in fact— schools often direct students to behave irresponsibly
with respect to data security.
~~~
andreyf
_The same kind that run third party provided spyware on their personal
computers in order to take exams in law school._
Do you have a specific case in mind?
_schools often direct students to behave irresponsibly with respect to data
security_
Why would they do that? Reference?
~~~
nullc
Sure, the practice is ubiquitous
Example software and policies are things like:
[http://www.exam4.com/](http://www.exam4.com/) (used by Harvard, George
Washington, etc)
[http://www.law.wisc.edu/help/for_students/securexam/](http://www.law.wisc.edu/help/for_students/securexam/)
[http://www.law.columbia.edu/academics/registrar/Laptop_Exams](http://www.law.columbia.edu/academics/registrar/Laptop_Exams)
[https://www.law.umich.edu/currentstudents/registration/exams...](https://www.law.umich.edu/currentstudents/registration/exams/Pages/default.aspx)
Most (all?) schools offer students the ability to take their exams on paper,
but doing so is a substantial competitive disadvantage because examinations
are usually timed and writing on paper is much slower, students are marked
down for legibility and copy-editing noise, etc.
I don't have a citation studying it— but by all appearances it's only a small
minority of students that opt out of using their laptops. ("Most Stanford Law
School students take their examinations on laptops")
IIRC the California bar exam now also uses one of these spyware exam packages.
I'm mostly amused that we have a whole information-security critical
profession who is nearly required to behave negligently wrt information
security from day one. :P
~~~
andreyf
Wow, no kidding. Why the heck could it need "Administrator level account
permissions" (both on OSX and Windows [1])? I guess you could run it in a VM
and wipe it afterwards.
1\.
[https://www.examsoft.com/dotnet/Default.aspx?f=mtlaw](https://www.examsoft.com/dotnet/Default.aspx?f=mtlaw)
~~~
nullc
You're prohibited from running it in a VM, and at least some law schools have
the students sign some form under penalty of the school ethical code yadda
yadda that you won't do that.
(And then— some students do it anyways, because that's the only way to use it
on their otherwise non-supported system or because of some other
incompatibility. And nothing comes of it... I guess until something does.
Better not make too many enemies)
~~~
andreyf
A friend in law school explained that this software is used for in-
classroom exams and prevents any other programs from being used while a
student is taking the exam, as well as saving all the work incrementally (in
case the computer crashes).
It's certainly not the most secure thing to do, but they need to focus on
studying law, not securing systems. I imagine that when lawyers are working on
cases, they might end up using more secure devices than their old college
laptops.
------
etchalon
This story reeks. None of it makes any sense (the messages were marked as
SPAM?).
I'm filing this under the same rubric mentally as all those Tea Party loonies
who suddenly swore their legitimate, random audit was caused by their
membership in the Tea Party.
~~~
Filligree
Elden is a top-flight defence lawyer. He's not any good with computers
(clearly..), but I'm sure he told the truth as he understands it.
------
platz
Two Facebook articles on foreign privacy events in one day? Where were these
reports before Snowden hit the news cycle?
~~~
stackedmidgets
Before that, you'd be voted down and hollered at because there would be little
credibility for it among common idiots. This has been the case for years,
because a lot of the information about the NSA published by journalists was
built on anonymous sourcing. Now, there's more documentary evidence available
to support it, so the US government no longer enjoys the benefit of ignorant
doubt.
Now, these stories can gain traction.
~~~
untog
Conversely, these stories were previously ignored because of a lack of
supporting evidence. Now that US surveillance is a talked about topic, these
stories are gaining traction without people going through the critical thought
processes they otherwise would have.
Neither of these options are provably false.
------
XorNot
Ok can anyone who reads Norwegian actually translate this properly? Because
the Google translation certainly doesn't capture the nuance, and their are
some notable inconsistencies in it - namely, why is someone's lawyer "no
longer in contact now that their Facebook profile has been deleted".
------
deshmane
What I am curious about in this and similar stories is whether the officials
actually carry out due diligence in making sure the profile actually belongs
to the person in question. After all, anybody can get an email and spoof a
profile.
------
gcb0
This is the same as a lawyer sending private information via a postcard. Plain
irresponsible.
But then again, which lawyer knows how to send PGP'ed emails?
------
brown9-2
Worth noting that the lawyer says he has evidence but has not presented it,
and until then it's just his word.
------
mariuolo
Just tell me what kind of idiot would use Facebook for a private conversation.
~~~
vidarh
Who are you talking about? Elden or his client? The article implies Facebook
was Elden's only way of reaching his client, so the "idiot" appears to have
been the client. If the client is not very technical it is not unreasonable to
assume the client felt Facebook was easier for him to use to communicate
covertly with Elden and didn't want to give out a phone number or other
details.
~~~
mariuolo
Either. Facebook retains forever anything done or written on their platform
and that's a well known fact.
Why anyone would use it for anything remotely confidential, is beyond me.
------
ttrreeww
This is the generation in which freedom was lost.
~~~
hughes
Or perhaps the generation in which freedom is to be reclaimed? It's too early
to tell.
~~~
TillE
It's extraordinarily difficult not to be pessimistic when you see the abuses
initiated by one party continued and expanded by the other, after bleating on
about their supposed opposition to such programs.
I'm convinced that the Democratic Party is the biggest roadblock to
accomplishing meaningful change in the US. It exemplifies the mushy,
frightened middle in the worst possible way, and should be reviled by anyone
with principles.
For example: [http://www.people-
press.org/files/2013/06/6-10-13-4.png](http://www.people-
press.org/files/2013/06/6-10-13-4.png)
~~~
nikster
It's hard to see any difference between Democrats and Republicans at this
point. The entire system needs to be thrown out.
I remember Ralph Nader was once asked why he was running for president when his
candidacy might take away crucial votes from the Democrats and let the
Republicans win; wouldn't it be better if the lesser of two evils won? His
answer: The difference between the Republicans and Democrats is "the difference
between Humpty and Dumpty".
At the time, I didn't agree with him. But when I see what's going on now; how
the Obama administration is basically run by the CIA and US big business; then
I have to think of this quote and how right he was.
| {
"pile_set_name": "HackerNews"
} |
N2O: Erlang Web Framework - andreygrehov
http://kukuruku.co/hub/erlang/n2o-erlang-web-framework
======
rdtsc
This framework is really out there. It is well ... different and interesting.
Here are some not so conventional things going on:
* Write your page in Erlang.
* Even translates Erlang to JS as a parse transform.
* Websockets (with a fallback mechanism) is the default connection mechanism.
* Don't want to use JSON for some crazy reason? That's alright, use BERT (and ship binary-encoded Erlang terms to the browser!).
* Can render stuff on the server and send the whole thing over the websocket connection.
[https://github.com/5HT/n2o](https://github.com/5HT/n2o)
~~~
hackerboos
>Websockets (with a fallback mechanism)
Not enough frameworks make this as seamless as it should be. So it's
refreshing to see it done here.
I've been looking at Phoenix (Elixir not Erlang) but the problem is that I'll
have to do logic to detect when WS are available and do the fallback myself
which results in more code server side.
I think this is something that should be handled by the framework itself.
~~~
chrismccord
I'm the author of Phoenix. I'm glad to hear you're giving it a look! I spent a
lot of time thinking about the WS layer and fallback support. Ultimately I
settled for standard WS with a small multiplexing layer on top. For those that
need fallback, they can drop in websocket-js (flash fallback, compatible with
native WS api). Have you taken a look at this for your fallback support?
[https://github.com/gimite/web-socket-js](https://github.com/gimite/web-
socket-js)
Phoenix is still far from 1.0, so any and all feedback is welcome.
~~~
findjashua
I'm new to Elixir (and FP in general), but I really like the language. I came
across a couple of YouTube videos on Phoenix, and it looks really nice
(reminds me of Rails). I think something like Sinatra would be even nicer, for
2 reasons: 1. the standard these days is to have a REST API serving JSON to
web/mobile clients, so a Rails-like framework seems overkill; 2. pretty much
all my friends who are learning web dev on their own find Sinatra/Flask way
easier to get started with than Rails/Django.
Regardless, Elixir (and Phoenix) is a great leap forward compared to the other
options for building concurrent, reliable servers. I hope more people try it
out before falling for hype/marketing ( _cough_ node.js _cough_ mongodb _cough
cough_ )
------
davidw
I think there's still room to innovate in the Erlang/Web space. Chicago Boss
was a nice improvement because it ties so much different stuff together. It
does have a few issues though, like not being very 'Erlangy' in places: it
uses lots of parse transform magic. I think it's the right direction though,
in that it's fairly general purpose.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What are some good developer portfolios? - mattm
I'm setting up a personal website and want to include a portfolio page to showcase the work I've done. Does anyone have any good examples of portfolio pages for developers? I'd like to look through some to gather ideas for setting up mine.
I've come across this one - http://thinkcage.com/portfolio/ - but he is a designer, not a developer.
Feel free to promote your own.
======
vorador
I just show my github page.
| {
"pile_set_name": "HackerNews"
} |
How to focus amongst all the noise - mashhoodr
https://medium.com/@mashhoodr/how-to-focus-amongst-all-the-noise-47d75f8dae44#.dpyt3qkva
======
mashhoodr
Focus is hard to come by in our offices these days. We are constantly
badgered by people and apps, and this is my take on how we can control a bit
of the app part and a bit of the people part. This is essentially a movement
towards creating a culture where you can get to focus on the important stuff
for a greater time period.
This was partly inspired by Jason Fried's talk on TED
([http://www.ted.com/talks/jason_fried_why_work_doesn_t_happen...](http://www.ted.com/talks/jason_fried_why_work_doesn_t_happen_at_work#t-760426))
and his amazing book "Remote: Office not required".
| {
"pile_set_name": "HackerNews"
} |
So What If New York Is Unaffordable? That Helps the U.S - linkregister
https://www.bloomberg.com/view/articles/2016-08-22/so-what-if-new-york-is-unaffordable-that-helps-the-u-s
======
linkregister
It might benefit the rest of the U.S., but the rate of job expansion outside
the Bay Area is excruciatingly slow compared to the rate of rent and house
price increases.
| {
"pile_set_name": "HackerNews"
} |
How to Make a Computer Operating System - hitr
https://github.com/SamyPesse/How-to-Make-a-Computer-Operating-System
======
joelg
Another great free OS resource is MIT's 6.828: Operating System Engineering.
"This course studies fundamental design and implementation ideas in the
engineering of operating systems. Lectures are based on a study of UNIX and
research papers. Topics include virtual memory, threads, context switches,
kernels, interrupts, system calls, interprocess communication, coordination,
and the interaction between software and hardware. Individual laboratory
assignments involve implementation of a small operating system in C, with some
x86 assembly."
Lecture notes from 2012: [https://ocw.mit.edu/courses/electrical-engineering-
and-compu...](https://ocw.mit.edu/courses/electrical-engineering-and-computer-
science/6-828-operating-system-engineering-fall-2012/)
Video lectures from 2014:
[https://www.youtube.com/watch?v=kDRHsNauoxk&list=PLfciLKR3Sg...](https://www.youtube.com/watch?v=kDRHsNauoxk&list=PLfciLKR3SgqNJKKIKUliWoNBBH1VHL3AP)
------
Jeaye
Note that this book is half-finished and work on it has been discontinued (as
of 2 years ago). If you want a good resource on OSdev, start here:
[http://wiki.osdev.org/Main_Page](http://wiki.osdev.org/Main_Page)
~~~
dreta
Got another great resource here
[http://www.brokenthorn.com/Resources/](http://www.brokenthorn.com/Resources/)
------
OJFord
> Chapter-1
> Chapter-2
> ...
> Chapter-8
> chapter9
Aaargh!!
------
k_sze
I wish people would stop teaching C/C++. I want a book that teaches writing an
OS using Rust.
~~~
pkaye
And what is a good book on writing an OS using Rust?
~~~
dbaupp
There's [http://intermezzos.github.io/](http://intermezzos.github.io/)
~~~
steveklabnik
Maintainer here! We actually have more developed than the tutorial lets on; at
Rust Belt Rust next week, we're running a six-hour class, so focus has been on
material for that, rather than on writing more book chapters. I hope to get
them out afterwards, though.
There's also some open PRs with more functionality too! Basically, check out
the kernel repo if you finish the book and want more :)
| {
"pile_set_name": "HackerNews"
} |
Famous Fluid Equations Are Incomplete - retupmoc01
https://www.quantamagazine.org/20150721-famous-fluid-equations-are-incomplete/
======
vhffm
If you are interested in some of the details:
The Navier-Stokes equations can be derived from the Boltzmann equation by
applying a slight perturbation, expanding the result as a series, and taking
the moments.
Taking the moments is essentially an integration, which comes with the
implicit assumption that the system you're describing has sufficiently many
particles. When running low on particles, this integration does not make
sense. This is why the resulting equations do not apply at low densities.
The Navier-Stokes equations are the second order expansion of this procedure.
The result of the first order expansion is the Euler equations.
This is called the Chapman-Enskog procedure. It's really quite illuminating
when you see it for the first time. There's a great derivation in [1] if you
can get your hands on it.
[1] [http://www.uscibooks.com/shu3.htm](http://www.uscibooks.com/shu3.htm)
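Very roughly (this is a from-memory sketch of the bookkeeping, not the careful
version in Shu), one writes the distribution function as an expansion in the
Knudsen number:

    f = f^{(0)} \left( 1 + \epsilon\,\phi^{(1)} + \epsilon^{2}\,\phi^{(2)} + \cdots \right),
    \qquad \epsilon \sim \mathrm{Kn}

plugs it into the Boltzmann equation, and takes moments

    \int f\,\psi\,\mathrm{d}^{3}v, \qquad \psi \in \{\, m,\; m\mathbf{v},\; \tfrac{1}{2} m |\mathbf{v}|^{2} \,\}

Truncating at f^(0) (a local Maxwellian) gives the Euler equations; keeping the
O(ε) correction gives Navier-Stokes, with viscosity and heat conduction coming
out of φ^(1); pushing on to O(ε²) gives the problematic Burnett equations.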
~~~
orbifold
When I saw this derivation during a course on Theoretical Astrophysics it was
indeed very enlightening. What is interesting is that it easily generalises to
magnetohydrodynamics and other more complicated situations (mixture of
multiple different fluids, fluids that react with each other etc.). I believe
Landau Lifshitz contains some of them.
------
MyHypatia
The best commentary I have seen on the article comes from a coworker, who took
the time to dissect why the conclusion from this article is not surprising:
The notion of a fluid is more generally related to the concept
of a continuum which allows for the PDE description the Navier-Stokes
equations offer. It is taken for granted that density or velocity are point-
quantities in space, but there can be no such simplifying description in
rarefied situations or more precisely when the Knudsen number is not small.
Batchelor 1967 has a good discussion on this.
In addition, the notion of viscosity, which relies on writing the deviatoric
stress as proportional to the gradient in velocity, depends on dropping the
higher order terms in the velocity gradient Maclaurin series, assuming they are
small (which they usually are for very small Knudsen number). A Boltzmann-like
description will always be more general because it is a pdf-based description,
which is really just fancy counting and doesn't have the Knudsen number
limitation. Therefore calling the Navier-Stokes equations incomplete is a bit
imprecise. It would be more accurate to say that the labels (fluid, material,
continuum) are great simplifications which are incredibly useful when they apply.
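For concreteness, the constitutive assumption being referred to is the standard
Newtonian closure (written here with the usual Stokes hypothesis for the bulk
term; a sketch, not the only possible form):

    \tau_{ij} = \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) - \frac{2}{3}\,\mu\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k}

i.e. stress linear in the first velocity gradient only, which is exactly the
truncation that stops making sense once the Knudsen number is no longer small.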
~~~
vanderZwan
> _Therefore calling the Navier-Stokes equations incomplete is a bit
> imprecise._
Oh, those sloppy mathematicians... ;)
(for the non-physicists/mathematicians: a running gag between mathematicians
and physicists is that the former accuse the latter of being sloppy, because
the latter take a _lot_ of mathematical liberties. Allegedly, in my old
university there was a joint class between physics and mathematics (I never
got that far to see for myself), and the professor would start the first
lesson with "I brought barf bags for the mathematicians. You're going to need
them." I even have a friend who switched from physics to maths because he
claimed to be disgusted by the way physicists "proved" their "theorems".
Luckily he mellowed out a bit after marrying an applied physicist - they even
published a paper together.)
~~~
MyHypatia
Haha, yea. From an engineering perspective... you can spend all day debating
the philosophical implications of taking a derivative and have very
interesting conversations, or you could just take the derivative because it's
useful and go make things.
------
habosa
Fluid dynamics is hard.
"When I meet God, I am going to ask him two questions: Why relativity? And why
turbulence? I really believe he will have an answer for the first." \- Werner
Heisenberg
------
PeterWhittaker
Summary: Navier-Stokes cannot translate to Boltzmann, because Navier-Stokes is
incomplete... ...and even the best candidate to replace it fails at extremely
low pressures.
This is very, very exciting, because it means our theoretical understanding of
fluid dynamics is flawed.
Flawed theory often (usually?) leads to radical rethink and wildly different
perspectives.
~~~
danbruc
I never thought about this before reading the article but now it seems pretty
obvious to me that both descriptions can not yield the same results under all
circumstances. The Navier–Stokes equations are based on quantities like
density and flow velocity which are only really meaningful if you have
sufficiently many particles to average over. In consequence I am hardly
surprised that one gets disagreeing results under extreme conditions like very
low densities.
~~~
semi-extrinsic
I'm also quite surprised that this article tries to spin it as very novel.
We've known this for literally a hundred years. Moreover, there's no mention
of the pioneers in the field - Chapman, Enskog, Burnett, Knudsen, etc - much
to my dismay.
The recommendation is for major revisions including a detailed literature
review.
</grumpy-reviewer-mode>
~~~
tanderson92
I was also dismayed when they referred to KdV (Korteweg de Vries) theory as a
"relatively unheralded" theory. KdV theory is an incredibly well known and
thoroughly studied area of Mathematics.
~~~
vanderZwan
Well, those two statements aren't necessarily mutually exclusive, because it
can still be _relatively_ unheralded. But only because _every_ physicist knows
of Navier-Stokes.
------
dnautics
"The terms in the series quickly become unruly, however; energy, instead of
diminishing at shorter and shorter distances in the gas, seems to amplify."
This sounds a whole lot like the ultraviolet catastrophe. The solution there
was quantization of energy packets and a statistical treatment of the smaller
number of packets that come through.
------
Xcelerate
> He began by rewriting the complicated Boltzmann equation as the sum of a
> series of decreasing terms. Theoretically, this chunky decomposition of the
> equation would be more easily recognizable as a different, but axiomatically
> equivalent, physical description of a gas — perhaps, a fluid description.
> The terms in the series quickly become unruly, however; energy, instead of
> diminishing at shorter and shorter distances in the gas, seems to amplify.
> This prevented Hilbert and others from summing up the series and
> interpreting it. Nonetheless, there was reason for optimism: The leading
> terms of the series looked like the Navier-Stokes equations when a gas
> becomes denser and more fluidlike. “So the physicists were happy, sort of,"
> said Ilya Karlin, a physicist at ETH Zurich in Switzerland. “It’s in all the
> textbooks.”
This reminds me a lot of perturbation theory, a method used to solve the
complicated equations of quantum field theory. The technique basically
involves summing up a bunch of Feynman diagrams (of decreasing significance),
and it has been used to calculate the value of the gyromagnetic ratio of an
isolated electron to within 10 decimal places of its experimentally measured
value (which is absolutely amazing, both from a theoretical and experimental
standpoint).
However, what's peculiar about this summation is that it _fails to converge_.
You would think that by adding up smaller and smaller terms, the series would
eventually reach some limiting value, but that doesn't occur. So the most
predictive theory that mankind has ever created (quantum electrodynamics)
works only as long as you don't keep adding up more terms.
(*Technically speaking, this isn't a failure of QED, but of the method used to
solve its equations. There are other solution techniques that don't have this
problem.)
~~~
plus
The issue isn't that an infinite sum of tiny terms doesn't converge -- the issue
is that individual terms of perturbation theory diverge. An example can be
found in J. Chem. Phys. 112, 2000, 9736-9748 "Divergence in Moller--Plesset
Theory: A Simple Explanation Based on a Two-State Model" DOI 10.1063/1.481611
(Note that this is specifically in reference to Moller--Plesset Perturbation
Theory, but the divergence is a general phenomenon)
I'm not saying that _all_ perturbation theories diverge. Moller--Plesset
perturbation theory doesn't even always diverge. But the divergence behaviour
is not in the form of an infinite sum of tiny terms being infinite, but rather
the individual terms of the perturbation theory increasing without bound (and
oscillating in sign).
Also note that it is possible for truncations of perturbation theory to
diverge with increasing order, but for the infinite sum of all (divergent) PT
terms to converge and be finite.
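Schematically, in the cases that do diverge the pathology is the standard
asymptotic-series one (general lore, e.g. Dyson's argument for QED, not
something taken from the paper cited above):

    E(\lambda) \approx \sum_{n=0}^{N} c_n \lambda^n,
    \qquad |c_n| \sim A\, n!\, b^{\,n} \ \text{for large } n

The coefficients eventually grow roughly factorially, so the partial sums
improve up to some optimal truncation order and then blow up, even though the
low orders can be extremely accurate.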
------
GregBuchholz
The linked article is not related to the quest to determine whether the
Navier-Stokes equations are capable of supporting Turing machine-like
computation:
[https://www.quantamagazine.org/20140224-a-fluid-new-path-
in-...](https://www.quantamagazine.org/20140224-a-fluid-new-path-in-grand-
math-challenge/)
------
amelius
If one says "X equations are incomplete", that means that there is more than
one solution to X. However, somehow I suspect that is not what is meant
here...
------
kunstmord
Some thoughts: expansion-in-series-based methods (including Hilbert's, which
is not used in practice) and the Chapman-Enskog method work only for
moderately rarefied gas flows (where you can neglect higher-order collisions;
this can be derived explicitly using the BBGKY hierarchy). Also, since the
Chapman-Enskog method is asymptotic, it is not guaranteed that higher-order
equations (inviscid Euler equations being the zero-order equations and Navier-
Stokes equations being the first-order equations) will provide an accurate
description of flows. Indeed, the second-order equations (Burnett and super-
Burnett equations) seem to fail in some cases, while providing more correct
results in others. But given the complexity of the equations themselves and
the complexity of the boundary conditions, no one really uses them. The cool
thing about the Chapman-Enskog method is that it gives a closed set of
equations, so you don't need empirical models for heat conductivity,
viscosity, etc.
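As a rough sketch of the expansion being described (standard textbook form,
written here from memory rather than quoted from any particular source), the
distribution function is expanded in powers of the Knudsen number,

    f = f^{(0)} + \mathrm{Kn}\, f^{(1)} + \mathrm{Kn}^2 f^{(2)} + \dots

and taking moments of f^{(0)} gives the inviscid Euler equations, the
first-order correction gives the Navier-Stokes equations, and the second order
gives the Burnett equations mentioned above.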
That's the first point – that methods depending on series decomposition might
never guarantee a solution that's accurate in all cases. There are also
moment-based methods (Grad's method, for example, being one of the most
famous), which have additional equations for parts of the stress tensor (I
think; never really read much about them). The second point is that the
equations correspond to conservation laws: mass, linear momentum, energy. The
equation corresponding to the conservation of angular momentum is usually
neglected: the terms related to internal angular momenta of particles are
considered to cancel each other out (which seems logical, since unless there's
some magnetization happening, the particles will be chaotically oriented and
the average of the angular momentum will be 0), and in that case, the equation
is satisfied since it just follows from the equation corresponding to the
conservation of linear momentum. However, there's been some research recently
on whether this equation can actually be neglected and what implications it
carries, whether it's connected to turbulence or some other effects.
The third point is that in high-altitude hypersonic flows, there are far more
complex effects going on in flows that just simple collisions between
particles – there are transitions of internal energy (which is a quantity
described by quantum mechanics), chemical reactions (dissociation, exchange
reactions), and this all complicates the Navier-Stokes equations – additional
terms appear (bulk viscosity, relaxation terms, relaxation pressure). And
correct modelling of these terms requires solving large linear systems with
quite complex coefficients, and to complicate things further, for many of the
processes mentioned, there aren't any easy or even correct models (to take
into account dissociation, for example, you need to know the cross-section of
the reaction for each vibrational level of each molecular species involved in
the flow), since these models are either computed via quantum mechanics (which
takes enormous amounts of computational power) or are obtained experimentally
(which limits the range of conditions under which the results are obtained).
DSMC methods have being increasingly popular as of late, but of course, they
can't provide theoretical results, while it is possible to observe some
interesting effects even in theory using the Chapman-Enskog method.
So the problem is not only getting more "correct" equations, it's also being
able to correctly model everything that goes into the equations we currently
have, and then being able to solve them (for a simple flow of a N2/N mixture,
if you use a detailed description of the flow, you get a system of 51 PDEs).
And in engineering applications drastically over-simplified models are often
used, and yet it's not like every high-altitude air/space-craft has burned to
a crisp because of this. While new, "more correct" equations are interesting,
of course, there's enough work to be done with the current ones.
Source: I do theoretical research and numeric computations of rarefied gas
flows for a living (at the Saint-Petersburg State University).
------
sizzzzlerz
No wonder I had such a tough time in my Fluid Dynamics class. The material was
incomplete! Do over!
~~~
pdonis
lol -- I should go back and demand a recount for all those exams I sweated
through...
HN Jobs: NYC startup Minus (min.us) is hiring - mindotus
Minus is hiring! We are on a mission to simplify sharing and to create the simplest universal sharing platform. We are seeking tech fanatics, passionate enthusiasts and self-driven individuals in our New York City midtown office.
Positions include full-time, part-time, and interns in design and software engineering.
- Our stack is built on python, django, javascript, jquery, css, and html.
- For designers, Adobe PS, AI, CSS/JS and UX experience is essential.
We're an all-star team with the founders being Carl and myself.
Carl is a serial entrepreneur and ex-principal engineer at Amazon.
John Xie is the founder of Cirtex.com, a leading web hosting provider.
Interested?
Shoot us an email at [email protected] with your info, work experiences and let’s get started!
======
mindotus
Looking forward to hearing from everyone, preferably in NYC area :)
Skype IP Lookup - lobovkin
http://skype-ip-finder.tk/
======
zhovner
Ok, so I developed this.
It's based on a deobfuscated Skypekit runtime that writes a clear debug log.
The wrapper just makes a vcard refresh via the p2p Skype network and then
parses the debug log.
Here is the sources of python wrapper <https://github.com/zhovner/Skype-
iplookup/>
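Schematically, the parsing side just scans that debug log for the peer's
address. A rough illustration in Python (the log line format below is a made-up
placeholder, not the real Skypekit output; the actual code is in the repo
above):

    import re

    # Hypothetical log format -- the real Skypekit debug output looks different.
    LINE_RE = re.compile(r"vcard refresh .* from (\d{1,3}(?:\.\d{1,3}){3}):(\d+)")

    def find_peer_address(log_path):
        """Return the last (ip, port) pair seen in the debug log, or None."""
        found = None
        with open(log_path) as log:
            for line in log:
                match = LINE_RE.search(line)
                if match:
                    found = (match.group(1), int(match.group(2)))
        return found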
~~~
zhovner
Lol, skype banned my account.
~~~
dennisgorelik
Why banned?
~~~
ashconnor
Why do you think? This probably violates a _few_ terms of use.
------
zhovner
Does it work for you?
~~~
bryanlarsen
Please don't downvote this. This is the actual developer asking for failure
reports etc. English is not his first language, either, so please don't
downvote because of brevity or poor grammar, either.
~~~
artursapek
English may not be his first language but he seems to know the basics.
<http://i.imgur.com/ADxK3.jpg>
------
Wilya
Skype is at its core a p2p idea, so this is to be expected. That's sort of the
same thing that was done for bittorrent users, except with a single centralized
authority.
The interesting thing is that they do this without making a call. They only
request contact information. This could be avoided.
Skype can mitigate this, but in the end, there is little more to be done. If
you want a p2p network where anyone can be reached, at some point, you _will_
need ips.
~~~
corin_
What they could do is have contact requests go through Skype master servers,
not p2p, that way you could only look up the IPs of people you are connected
to. But is it a big enough issue that they will make such a big change? I
doubt it - and I'm not sure they ought to have to do it, either.
~~~
acqq
Yes there would have to be master servers to close this hole, but I can't
imagine how it can be done without everybody upgrading to the new client, so
we can assume that every Skype user's ip is known or will soon be known. The
current state will last for a while.
You don't have to be even logged in for this to work(!) according to some
already published research.
------
JohnnyFlash
Really scary.
I wanted to see if i could find someone. Went onto twitch.tv. Picked a random
stream. Got email. Looked up Skype id from email. Searched for skype id which
gave me the IP and the small town where they currently reside.
It's worrying how easy this makes it to find someone.
~~~
TomGullen
Honest question, why is it scary?
My IP resolves to a location ~20 miles away. I don't see why having a Skype
contact and knowing a 20 mile radius where they live is anything to worry
about?
~~~
jeff18
Most residential internet connections don't have any sort of DDOS protection,
so privacy issues aside, at the very least you are open to a simple denial-of-
service attack. This was a huge problem for the popular progamer "Destiny" in
the Starcraft 2 community.
~~~
TomGullen
So is it also really scary that the mods/admins on the Starcraft 2 forum could
also see his IP address?
The risk of being DDOSed when you share a contact on Skype and they find out
your IP address is hyperbole.
~~~
jeff18
There is a pretty substantial difference between a few Blizzard employees
knowing your IP address and the entire public knowing your IP address.
------
hanbam
Here [1] is an interesting paper regarding P2P networks and privacy ---
"Exploiting P2P Communications to Invade Users’ Privacy"
[1] <http://cis.poly.edu/~ross/papers/skypeIMC2011.pdf>
------
Mizza
Not sure why people are surprised by this.. what did you think P2P meant?
~~~
aw3c2
that calls/communication would be p2p (direct connections) but not that
looking up my nickname would disclose my current ip.
------
bemmu
Could you somehow scrape all users and get an IP address -> skype name
mapping? You could then know the Skype usernames of all visitors to your
website.
~~~
zhovner
No, this is not possible. Only skypename -> IP, and only email -> skypename. You
can crawl the whole Skype network and store all the IPs if you can handle that
much data.
------
vsviridov
Cool, my router lacks decent DynDNS support, but I have skype signed in at
home, so I can always check what my IP is and VNC myself in :D
------
driverdan
If you're not currently logged in it still discloses the last IP you used. I
can't think of any good reason for it to do that.
~~~
TazeTSchnitzel
It doesn't work if you're not logged in.
~~~
driverdan
I was logged out for over 5 hours when I tested it and it showed my IP.
------
aw3c2
[http://skype-open-source.blogspot.de/2012/04/skype-user-
ip-a...](http://skype-open-source.blogspot.de/2012/04/skype-user-ip-address-
disclosure.html)
------
rjsamson
So yeah, this has me more than a little perturbed. I generally don't have a
problem sacrificing some privacy in return for functionality (the terms of
service of several popular social networks come to mind), but this... is a bit
of a different situation.
Does anybody have a good short-list of Skype alternatives? I don't know that
its possible for me to stop using it altogether, but I'd certainly consider
cutting back...
~~~
18pfsmt
I would point you toward Jitsi: <http://en.wikipedia.org/wiki/Jitsi>
But, it doesn't support the Skype protocol, and it runs on Java, with which
some people have an issue (but also allows for cross-platform compatibility).
------
ilya2
should be easy to do file sharing over skype when you have the receiver's ip
and an open udp port through the firewall. maybe someone will release an app.
can the mpaa sue microsoft?
------
option_greek
Something worth 8.5 billion has got to be a little more secure.
------
ajross
Any insights into the exploit? Obviously the bug here is that they got the IP
without any confirmation from me; ideally Skype should be popping up the "new
buddy request" dialog, but it's not.
So is this a fixable leak, or something core to the protocol (i.e. do you
request a buddy P2P too?)
------
myared
It's interesting that I can look up people at my company who are behind the
same connection that I am, but my account doesn't give away my IP. They also
seem to get a lot more SPAM calls whereas I get fewer. I wonder if it's a
privacy setting that I setup in the past or just the fact that my account is
older.
Either way, it's great to know that this is possible.
------
alexchamberlain
Reasonably impressive and scary.
~~~
mcs
Yeah, now you can obtain an IP by name by searching for their name in the
contact search of skype to get the username, then using this tool.
~~~
zhovner
Search by email also works.
~~~
mcs
This isn't exactly patchable by skype, is it? Obviously skype could turn off
some printfs from the log, but the fact the client needs the IPs and Ports to
attempt connecting locally, and then over WAN, makes me think that a tool like
this can exist forever.
------
sek
That's why Google didn't buy Skype; their P2P is not state of the art. Your
client is also a server for someone else, they obviously need your IP address
and a proxy would not reduce traffic for Skype.
Why the heck did MS pay so much for it?
~~~
bdonlan
> Why the heck did MS pay so much for it?
Skype has a huge userbase. They can always migrate that userbase to a
different technology later if they think it's worth it.
------
tutre
it even shows my local 192.168... weird
BUT HOW?
~~~
zhovner
Skype announces both of your IPs to the network.
~~~
TazeTSchnitzel
Presumably for LAN efficiency? If you have two people on LAN using Skype it
goes via LAN IP?
------
skypeopensource
This is more informative description.
[http://nickfurneaux.blogspot.com/2012/04/skype-ip-
addresses-...](http://nickfurneaux.blogspot.com/2012/04/skype-ip-addresses-in-
clear.html)
------
antirez
Using the IP is for instance possible to locate, roughly, where the user is,
that is already a big privacy concern...
~~~
revelation
Skype is P2P. No way to fix it, you can only hope to mitigate it.
------
kevinpacheco
"This domain and website have been suspended because of abuse or copyright
reasons."
------
tdr
Can it be used like the invisible scanner for Yahoo Messenger? (see who's
invisible)
~~~
zhovner
No, after disconnecting it still shows the IP for a few hours
------
ilya2
this is not an "exploit". as the man says, your IP is being sent out to the
network. others on the network are using your machine's resources. that's how
skype works. he's just showing you this fact.
------
mikelnight5l
technikboy04
------
gitarr
Well this is scary for Skype users and very embarrassing for Skype
developers/owners aka. Microsoft.
I sure hope they fix this before they get sued into oblivion for this blatant
privacy breach.
~~~
viraptor
Why is that? You get the same thing with emails / IRC / some IM protocols /
VoIP. What's so "scary" about someone knowing your current IP?
I mean - it's one thing if Skype was advertising itself as a privacy
protecting, identity hiding service... but they don't. They provide convenient
A/V connections.
~~~
rhplus
Let's say A wants to find B's IP address. In the case of email, A would need
to trick B into replying to an email (and also use an email service that adds
the client IP header). In the case of most IM servces, B would need to accept
a friend request federated from a server. If I'm understanding this correctly,
with Skype, A merely has to query B's status to get B's IP address.
~~~
daeken
In the case of email, the easiest way to get a user's IP is to have them load
an external image.
~~~
michaelhart
Not true if you use a secure/intelligent email client, like Gmail. It will
prompt you with a yellow bar above the email before loading any images.
It also implies that they'll open the email, which most average people won't
do unless they know the sender or are otherwise expecting an email.
------
AnonCIO
I am firing our security consultant for not telling us about this. Our entire
organization is exposed. We have just learned that the man behind Skype is the
same person who was behind Kazaa. And he knew this all along.
~~~
steve918
Or maybe you could resign for being an uninformed CIO. P2P is 1990s
technology.
Ask HN: What do you use Google Sheet or Excel for? - daolf
It is often said that where there is a spreadsheet, there is a product waiting to be built.
The more complex the spreadsheet, the more needed is the product.
======
plumsempy
I use Google Sheets a lot and I love it. From the common uses like task
management to using it as mvp for a web app or a database for an mvp.
What's interesting about Sheets is that the tables can be thought of as a
database with columns and records, or a CSS grid and you can put buttons and
everything on it.
Currently I made a task queue for myself in Sheets. There are a lot of things
I want to do, and I find out about these interesting things while in the middle
of something else; so I just put the new thing on the queue and forget about
it. Works great. I also tag these tasks in case I want to analyze them later.
Finally, for fun, and also shameless plug, I made a Tetris using it. It even
has animation when the tiles disappear. See it here:
[https://plumsempy.com/2018/09/17/tetris-on-google-
sheets/](https://plumsempy.com/2018/09/17/tetris-on-google-sheets/)
It is a very powerful tool.
------
DoreenMichele
The insurance industry probably needs new products, but when I worked at
Aflac, they did a lot in house, so I don't know if you could readily market
it. They did tons in spreadsheets and I kept angling to improve stuff and
actually got an award for a thing I did, and then had to harangue people to
actually get it made available department-wide and then someone else took it
over and promptly screwed up the formatting with the very first update.
I left insurance years ago. My knowledge is not current. But if you want
product ideas, insurance is an industry drowning in information overload and
if you could figure out how to throw them a life preserver and get them to pay
for it, you could potentially make a killing.
------
ploika
I think you'd find some fascinating answers on accounting, finance and
actuarial forums, that might not turn up on HN.
A few years ago I worked on a couple of different financial services projects
that involved porting massive Excel-based jobs over to sturdier setups. Even
the relatively simple spreadsheets (for people with a finance background) were
long, complex projects that needed an RDBMS, an R or Python program, and a web
app to do what Excel was just about handling on its own.
~~~
huhnmonster
This. The stuff I am working on is mostly glorified ETL. Some sheets of course
are user facing and will have a nice sheet inside the workbook where
everything is summed up.
Generally it looks like this (a rough pandas sketch follows the list):
1\. Get data from a central source (data warehouse/other departments)
2\. Transform/combine different sources etc. (mostly with pivot tables)
3\. output to a sheet
4\. add some sort of automation so it runs on its own the next time
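For anyone who wants the same flow outside of Excel, a minimal sketch in
Python/pandas; the file and column names are placeholders, not anything from a
real workbook:

    import pandas as pd

    # 1. Get data from a central source (placeholder exports).
    sales = pd.read_csv("warehouse_export.csv")
    targets = pd.read_csv("targets_from_finance.csv")

    # 2. Transform/combine the sources; pivot_table is the analogue of a pivot table.
    combined = sales.merge(targets, on=["region", "month"], how="left")
    summary = combined.pivot_table(index="region", columns="month",
                                   values="revenue", aggfunc="sum")

    # 3. Output to a sheet.
    summary.to_excel("summary.xlsx", sheet_name="Summary")

    # 4. Automation is then a scheduled job (cron / Task Scheduler) running this script.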
------
nocubicles
I use Excel when i'm developing SSAS analytics Cubes to test them. Also using
it sometimes to get data from Odata feed. Maybe also some times to make some
quick calculations of something.
Show HN: Free Cardiac Monitor Simulator (with embeddable plugin) - th3o6a1d
https://monitorsim.com
======
th3o6a1d
Hey everyone,
I'm a physician who likes to code. For a while, I've thought it would be cool
to be able to pull up a free cardiac monitor simulator in a browser for
teaching purposes. Would be even better if you could somehow control a monitor
window remotely using your phone to simulate a case in realtime. Some of the
existing sim software is ridiculously expensive (like everything in
healthcare), or requires a software download. With that in mind, I give you:
The Site: [https://monitorsim.com](https://monitorsim.com)
About It:
[https://jt.netlify.app/posts/monitor/](https://jt.netlify.app/posts/monitor/)
Tech:
* React, Firebase (for remote control), D3.js, Netlify
Features:
* control simulated scenario remotely using phone or other browser window (uses Firebase)
* supports a variety of tracings (needs more work)
* embeddable plugin for blog posts - just pass a JSON object
Future directions:
* add more waveforms (heart blocks) and finetune the existing ones
* add arterial blood pressure waveform and styling
Collaboration/Feedback:
* Feel free to fork on github: [https://github.com/th3o6a1d/monitor](https://github.com/th3o6a1d/monitor)
* Hit me up on twitter for feedback: [https://twitter.com/th3o6a1d](https://twitter.com/th3o6a1d)
Disclaimer:
* Some of the wave forms are the best approximations I can make at this time, and I've tried to keep them to scale. There's plenty of room for improvement. Go easy on me!
The Man Trap - SirensOfTitan
https://www.1843magazine.com/features/the-man-trap
======
dnautics
> Women who behave like their male colleagues may be disliked for being
> “pushy” or “bitchy”, but these penalties are offset by the fact that they
> are also likely to enjoy more power and greater financial rewards. When men
> adopt the jobs and behaviours associated with women, however, they typically
> experience a loss of status with fewer perks and more social sanctions,
> especially from other men.
I think older (straight) men do not care about loss of status or social
sanctions from other men. They care about loss of status from women.
Especially when in long term relationships, there's the fear of an incorrect
narrative where their partner often _says_ that adopting feminine roles is
appreciated, but it subconsciously leads to less attraction. (with
acknowledgement that the reverse is sometimes true for women's perceptions in
the eyes of men)
[edit: it's actually in the article too ~3/4 of the way down, I should have
been more patient]
------
socrates1998
Yes, I agree. Men are often told they need to be everything:
Successful yet don't work too much. Help with the kids, yet don't be feminine.
Be strong, yet sensitive. Have hard employable skills, but also be an artist.
Don't play video games, but be fun to be around.
All of these are difficult to balance.
The scary part is that a majority of divorces are ended by women.
I don't know the answer, but as long as we acknowledge the fact that both
modern men and modern women have a long list of conflicting demands on them, I
think we can show everyone more compassion.
~~~
dionidium
_" Don't play video games, but be fun to be around."_
I was nodding along until I got to this one. What in the world do video games
have to do with being a fun person to be around?
~~~
greglindahl
I think the point here is about interacting with a computer vs interacting
with the people in the room.
~~~
dionidium
Yes, except _those two_ ideas are strikingly compatible :)
------
scarface74
Why the focus on married women earning less than single women instead of
focusing on household income?
Even if the single woman earns more than a married woman, for the most part a
married couple should have more disposable income after living expenses than
her single counterpart. Married couples have a greater per capita net worth
than their single counterpart.
In our case, because of choices we made together, my wife earns a little over
10K less than she did when she was single 5 years ago, but because I could
depend on her to carry health insurance and she has more flexible schedule,
that helps our family, I was able to switch jobs aggressively and I earn
roughly $50,000 more than I did before we got married. We are both better off.
[https://www.forbes.com/2006/07/25/singles-marriage-money-
cx_...](https://www.forbes.com/2006/07/25/singles-marriage-money-
cx_tvr_06singles_0725costs.html)
[https://www.google.com/amp/www.today.com/amp/money/why-
marri...](https://www.google.com/amp/www.today.com/amp/money/why-married-
people-tend-be-wealthier-its-complicated-1C8364877)
_Once they are married, the couples also are able to take advantage of
economies of scale – anything from buying just one dishwasher to relying on
one another’s health insurance. That allows them to build wealth more quickly
than their peers who are single, divorced or living together romantically._
_For example, a married man may be able to work 12 hours a day to please his
bosses and get promoted, because he and his wife can divide household duties
so he can get ahead. That’s not as much of an option for a single parent._
~~~
watwut
I think that money are simple and crude measure of achievement and social
approval. E.g. If part of your motivation is competitiveness or passion for
your profession, she is losing after each child. My husbands higher salary
won't make up for my missed promotion or less interesting project, it won't
make me proud of my skills, basically.
The article however was about men primary. Which I found interesting, because
perspective of male who would prefer different tradeoff is rarely available.
~~~
scarface74
Isn't that a lifestyle choice that spouses make together - whether to have
kids, devote more time to each other, pursue their careers, etc.?
~~~
Mz
When you are married AND have kids, one thing that can happen is that the
parent who is most concerned about the welfare of the kids has no choice but
to simply buck up and do what is necessary for their sake. Quibbling with
another grown adult about "fairness" and "equality" can be a great way to see
the kids get shortchanged.
So, once there are children involved, it is not unusual for the wife to just
suck it up and do what the kids need at personal cost to her. It often makes
little sense to try to hold hubby's feet to the fire and insist on him doing
his fair share at home or whatever.
This is not man-bashing. I am a woman and former homemaker. I do a lot of
even-handed writing that tries to consider both sides of the picture. But this
is a reality in many marriages. I do not self-identify as a "feminist" in part
because, to me, "feminism" is about women wanting careers and to hell with any
other considerations. This ongoing argument about equality of the sexes seems
to mostly leave out the critical detail of the welfare of the children.
Unless and until we start talking about what works for the family as a whole,
including kids, and society as a whole, including families with legal minors,
this entire argument about men vs. women and so-called equality will continue
to be sick and twisted and will tend to continue to crap on any parent that
actually cares deeply about the welfare of their kids, regardless of gender.
~~~
lactau
>to me, "feminism" is about women wanting careers
Second-wave feminism lasted from 1960s to 1980s.
~~~
Mz
I don't know what relevance that has to anything I said. Aside from the cherry
picking aspect of how you (mis)quoted me, I was a homemaker for a lot of
years. I am 51 years old. I deal routinely with women who look down upon me
because they chose to put their careers first. In some cases, they chose to
not have children at all. In their eyes, I absolutely am not their equal and
unequivocally not deserving of any real respect.
Dealing with such women is usually a worse experience for me than dealing with
most men. Such women are typically pretty toxic.
~~~
watwut
Because these choices are very personal and people are very insecure about
them. Does you staying home and being alright with it really mean she is a bad,
egoistic mother? Does some women liking to stay home signify that the world is
moving back toward the homemaker model? Does me staying at home (and having to
fight the changes the situation pushes on you) really mean I am naturally lazy
or less capable, as conservatives like to suggest?
In a sense, no one talks about these considerations openly, ever. So it comes
out indirectly through attitudes.
Everyone is supposed to be motivated only by positive things: you are supposed
to stay because you are caring and loving, not because you are sucking it up.
That idea insults people. You are supposed to work because you love your career,
not because you don't want to be the lazy nagging stereotype - which you are
pretty sure you would turn into if forced to stay at home.
~~~
Mz
I have plenty of hypotheses of my own as to why other women do this sort of
crap to me. In the end, I don't think it matters. If you want to talk about
making the world a better place and "equality for all," then shitting all over
me because I made different lifestyle choices from you and this hits some
nerve of yours -- well, get therapy and quit making it my problem that you
aren't actually happy with the lifestyle choices you made.
If you want to call yourself a feminist and talk about getting equal rights
_for women_ , then I don't want to hear your crap about how your ideals only
actually apply to _women like you_ but still exclude large groups of women.
I think these are just bitter people who felt "It's a man's world and the
least worst option for a woman is to not have children."
That's not an idealistic solution. That is not about making the world a better
place. That is not about expecting more of the world. That is basically saying
"No point in fighting evil. You can't win."
Turning around and shitting on me because you gave up years ago makes you part
of the problem, not part of the solution.
------
Pxtl
I'm always iffy about articles like this because so much of the ground has
been utterly salted by online "men's rights" activists that are more motivated
by misogyny and antifeminism than actually righting wrongs.
It seems like a good read, but I would loved more data to crunch. The numerics
of working long hours was a great example of how the mathematics of the
situation make this stuff happen.
Anyhow, my wife and I actually do the fully-equal thing - she makes a bit more
than me (teaching in Ontario is paid well and I'm not working in one of the
Big Tech Market cities) so we have get to split things 50/50.
This actually presented a great opportunity: She wanted to go back to work
early from her parental leave after the birth of our 3rd kid. So I did
something new: I took the other half of the leave. 7 months off with my kids.
If you have the opportunity, do this. It actually made _more_ money for us, in
terms of strict income. Both of our employers provided a several weeks of top-
up parental leave in which the government unemployment-insurance benefit is
supplemented up to your full weekly paycheck. So by both of us taking time
off, we got to double-dip on this.
And the experience with the kids was fantastic. I got to play boardgames with
my son and teach him to ride his bike, and took my two daughters jogging in a
double-stroller every other day. I practically _lived_ in a ring-sling, even
getting the stink-eye from greasy guys on a family trip to Manhattan. I got to
properly get to know the parents of all my son's best friends and we're all
still close. I was in the best shape of my life and had a great time, and my
wife likes her job so she was happy to be at work.
I'd considered doing it on our 2nd kid, but friends and family had talked me
out of it because of worries about my career. I don't even _work_ in the same
place I did when my 2nd was born. I quit that job later on anyways, so I
missed out on that time for nothing.
To me, the biggest tragedy is the ever-upwards climb of working hours for the
household. There was a time when a family would live on a single 40 hour
workweek. And what have we gained? I mean, for people who work retail, does
the fact that they work on Sundays and every night to 10PM mean they actually
sell more goods? Or does it just mean that retailers need to pay staff less
per-hour because they're selling for more hours?
~~~
emsy
>"men's rights" activists that are more motivated by misogyny and antifeminism
than actually righting wrongs.
What you think or are told they are motivated by and their actual motivations are
different things.
~~~
vacri
If MRAs were primarily motivated by actual concern for men's lot in life,
they'd do more proactive work. Instead they are mostly reactive, coming out of
the woodwork when something is mentioned about women. They bitch about how bad
men have it in order to deflate the issue du jour, then are not to be seen
until the next time someone talks about women.
The OP is utterly right in saying that the ground has been salted by these
man-children. Men do face problems, but MRAs do little work and mostly just
armchair whine. How many of them create working groups or petition politicians
or similar? Compare to feminism, which, while it has a share of whiners, is
mostly pro-active; organising events, talking to stakeholders, starting
discussions instead of derailing them. You see it here on HN, where a topic is
about women, and the whiners come out to derail... yet these same MRAs don't
post their own articles about men.
~~~
Pxtl
I always like to point at Movember. A solid men's-issue foundation with no
whining, no association with MRAs, and it generates a pantsload of money for
a good men's-issue cause with relentless positivity.
MRAs complain while the real men concerned by men's issues are actually doing
something. Worried about the discrepancy between funding for breast cancer vs
prostate cancer? Well you can moan about the nasty feminists and how society
cares about women's suffering more than men's, or you can make a difference.
~~~
ar15saveslives
How to be "proactive" with widely accepted discriminatory practices and
affirmative action?
------
wcummings
>Chase, a father in his late 40s who is a partner at an international law firm
in Chicago. “When I see a woman who has children and I know she and her
husband are working like crazy, that concerns me for the sake of the kids,” he
says. “But when I see stay-at-home dads, I don’t think very highly of them.
Call it sexist, call it whatever you want, but I think it’s kind of wimpy to
do that. It’s checking out, not being in the game, not fighting for success.
Those are the traits I value.”
Wow, Chase sounds like a real turd, I feel bad for his kids.
------
phd514
>> Coltrane has found that after controlling for variables like age and
education, married American men earn significantly more than their unmarried
or divorced male peers, and their earnings go up with every child they have.
Marriage seems to make men more productive at work because it allows them to
outsource much of the housekeeping to their wives.<<
I don't see how that last sentence makes any sense. Since when do one's
earnings have anything to do with whether the housekeeping for one child or
three children is "outsourced" to wives? I think it's far more likely that
being a father to more children is correlated with being older and more
experienced and therefore more highly compensated.
~~~
icewater
"...after controlling for variables like age and education."
~~~
phd514
Hmm, fair point. It just doesn't make any sense then that more children
correlate to higher income because the housekeeping responsibilities fall to
the wife.
------
gozur88
>Coltrane has found that after controlling for variables like age and
education, married American men earn significantly more than their unmarried
or divorced male peers, and their earnings go up with every child they have.
Marriage seems to make men more productive at work because it allows them to
outsource much of the housekeeping to their wives.
That's an assumption. IMO it's more likely to be that men with families out-
earn their single and divorced counterparts because they're motivated to do
so.
Particularly as compared to divorced men. There's no point in killing yourself
at work if the ex is going to take half the money.
~~~
micahbright
Luckily, in Texas, child support is capped and alimony is politically
unpopular. So there is a motivation for divorced men to make more.
------
jondubois
I think that in a couple, there is always some form of resentment towards the
partner which does not work (is not pulling their weight) but I think that the
level of resentment is many times higher if the non-working partner happens to
be a man.
~~~
killjoywashere
When I was in med school, married with two kids, every member of my wife's
family told her to divorce me, not once, but on a regular basis. They all
think I'm great now and wonder why we don't visit them much.
------
samirillian
I think a big part of reckoning with these social pressures may be simply
expressing them out loud. It seems to be a truism of therapy that "you should
never be ashamed of your feelings," but the very real benefits that white men
continue to reap on one level tend to preclude an emotionally honest reckoning
on another level.
A lot of the negative effects documented in this essay seem to purely relate
to cognitive dissonance. For example, the essay said that men would be more
willing to accept certain "child-friendly policies" if they believe that it
would not decrease their socially-perceived masculinity.
I can only believe that it would also help men to openly express these fears,
to state out loud the dissonance in their self conception that (largely
positive) social change has wrought.
------
monksy
> misogyny and antifeminism
What's wrong with anti-feminism? Also, misogyny is one of those catch-alls
that's being abused to mean: "Does not complement women" as of late.
~~~
dang
Please don't post like this. The unsubstantive+ragey combo forms an inflection
point at which threads go from bad to much worse.
When a discussion turns into "yay label" vs. "boo label", there's no
information left in it, and flamewars are the only thing left to do.
We detached this subthread from
[https://news.ycombinator.com/item?id=14337198](https://news.ycombinator.com/item?id=14337198)
and marked it off-topic.
~~~
monksy
I disagree with your view of what I posted. (it was not done with a
unsubstantive+ragey stance) What I posted was a disagreement with the
exaggeration of the alternative point of view they had. (Any article that
doesn't agree with the popular ultra liberal women's positive view is "salted
by mens rights [insert derogative terms etc]")
Being anti-feminist is not a bad thing. It is a conversation that could be had
(is it good, is it bad, etc); the topic of the original post leads to a
conversation about this.
My concern with Hacker News is that it claims to be technology focused, and
founder focused. However, articles dealing with gender issues, like this, get
promoted and kept up.
I completely respect and understand your stance that those topics tend to go
badly.
For sale: an Enigma machine - epo
http://www.christies.com/lotfinder/lot_details.aspx?from=salesummary&intObjectID=5370959&sid=5d471a41-553e-4a2d-b9ee-cf27e36133b8
======
Robin_Message
Also, the next lot is even more exciting: Some offprints of Turing's papers
and manuscripts, formed by Prof. Maxwell Newman, guide price _300 to 500
thousand pounds!_ Apparently these are extremely rare; none have appeared in
auction for 35 years!
[http://www.christies.com/lotfinder/lot_details.aspx?from=sal...](http://www.christies.com/lotfinder/lot_details.aspx?from=salesummary&pos=10&intObjectID=5370960&sid=5d471a41-553e-4a2d-b9ee-
cf27e36133b8)
~~~
KoZeN
[http://www.christies.com/lotfinder/lot_details.aspx?from=sal...](http://www.christies.com/lotfinder/lot_details.aspx?from=salesummary&pos=5&intObjectID=5370965&sid=5d471a41-553e-4a2d-b9ee-
cf27e36133b8)
I'm surprised this hasn't had more attention here!
_APPLE-1 -- Personal Computer. An Apple-1 motherboard, number 82, printed
label to reverse, with a few slightly later additions including a 6502
microprocessor, labeled R6502P R6502-11 8145, printed circuit board with 4
rows A-D and columns 1-18, three capacitors, heatsink, cassette board
connector, 8K bytes of RAM, keyboard interface, firmware in PROMS, low-profile
sockets on all integrated circuits, video terminal, breadboard area with
slightly later connector, with later soldering, wires and electrical tape to
reverse, printed to obverse Apple Computer 1 Palo Alto. Ca. Copyright 1976_
~~~
asmithmd1
Wow! The Apple-1 is estimated to go for £100,000 - £150,000
I wonder if there are any artifacts from today's companies that we should be
grabbing up
~~~
asmithmd1
Now I see why - it comes with the optional cassette interface and BASIC on a
tape :)
Seriously it is an exceptional artifact: original invoice (Salesperson:
STEVEN) and a typed note from Steven Jobs explaining how to hook-up a TV and
keyboard:
[http://www.christies.com/lotfinder/ZoomImage.aspx?image=/lot...](http://www.christies.com/lotfinder/ZoomImage.aspx?image=/lotfinderimages/D53709/d5370965)
------
user24
I hope a museum gets it, but I think it will probably go for much more than
the estimate.
By the way, any UK HNers should definitely try to get down to the museum at
Bletchley Park and the National Computing Museum. Geek heaven :)
edit: wow, they also have the first published ENIAC patents:
[http://www.christies.com/lotfinder/lot_details.aspx?from=sal...](http://www.christies.com/lotfinder/lot_details.aspx?from=salesummary&intObjectID=5370963&sid=b1077a41-474f-47b4-8f48-25f5c24fca97)
~~~
shrikant
Visitors might want to be a bit patient on the guided tour - largely seems a
waste of time initially, with the guide talking a lot about the history of the
land/park itself, and the WW2/code-breaking info being somewhat superficial.
Then he takes you into the National Museum of Computing and demonstrates the
machines, and sometimes lets you touch and feel as well - awesome! The guided
tour ends on quite the high!
~~~
user24
depends on the guide I guess, I've been there about 4 times (used to live just
down the road, and the ticket is for a whole year!) and took the tour twice,
the code-breaking content wasn't highly technical, but it was covered in a
decent amount of depth I felt.
Riddle from the tour: What must you add to nine to get six? (and no, it's not
-3)
~~~
user24
replying in case someone years from now reads this:
Gur nafjre vf f. avar va ebzna ahzrenyf vf vk, nqq na f naq lbh trg fvk ;)
------
cromulent
One day, I'd like to have a library like Jay Walker's to add this to. He's
even got a Sputnik in there, along with his Enigma.
[http://www.wired.com/techbiz/people/magazine/16-10/ff_walker...](http://www.wired.com/techbiz/people/magazine/16-10/ff_walker?currentPage=all)
------
wgrover
Bay Area folks who've read down this far, you'll absolutely love the Computer
History Museum, <http://www.computerhistory.org>
------
Luc
That would look nice on the living room cupboard, but you can't beat this one
for glamour: <http://www.tatjavanvark.nl/tvv1/pht10.html>
Perhaps someone here will be able to decrypt that encoded Haiku...
------
pbhjpbhj
I was interested in the many manuscripts in that sale. I wonder if Google
would buy them, scan them and resell them ... they could buy through a third
party/anonymous bid and only release the scanned copy after the resale to
avoid a negative effect on price.
------
ljf
Amazing piece of kit that would be great to own - but what would /you/ do with
one?
~~~
brk
You could probably gut it and put an Arduino inside of it that played MP3's.
~~~
astine
With all due respect, wouldn't that be a little like upholstering your couch
with the Bayeux Tapestry? While the Enigma machine isn't exactly one of a
kind, it is quite rare and has a great deal of historical significance.
~~~
brk
Sorry, I had a feeling the sarcasm in my initial post wouldn't fully come
through :)
I probably should have gone with the steampunk-themed comment I was originally
planning.
------
tomjen3
30-50k pounds. Shit, that's a high price.
~~~
user24
You think? I wouldn't have been surprised to see it fetch twice the high
estimate. It's got appeal to people interested in:
Computing
Codes/Ciphers
WW2
That's pretty broad appeal. I mean even if it was only of interest to Turing
fans that's still a huge market, and Turing fans are only a small subset of
those larger markets.
Just my opinion, I've no idea if these things come up fairly often or not.
~~~
tomjen3
It may still fetch more, but honestly that doesn't change that it is a very
large amount of money.
~~~
shabda
> that it is a very large amount of money.
Compared to what?
People pay 100K$ for rocks which have no intrinsic value.
Ask HN: Did posting your startup in HN give you users or only competitors? - thisuseristaken
I can't understand why anyone would want to post their startup in an early stage, when their codebase is probably quite small and easily clonable. Unless, of course, they want the professional feedback and brainstorming that this forum provides.
Did any of you who posted your startup here gain a decent amount of real users (clients)?
======
iamwithnail
I've mentioned mine on here a few times, and it's picked me up a few users,
although I've never done a 'Show HN' type thing. Most startups, I'd hazard,
are specific enough in implementation, market and niche that you need to find
someone who cares enough about all of those same things _and has the skills,
time and wherewithal to land it_. Lots of people have ideas, few land them -
there are probably 50 teams working on the same idea as you, but most won't
land them. (On my example - the site's in beta, it's a soccer stats site so
probably barred in a lot of non-UK domains as gambling related, of niche
interest in any case, etc. My site's a clone/improvement, in many respects, on
others that I've used in the past - fixing my own problems.)
If you've got a world-changing, hugely scalable, easily copyable concept, then
yeah, probably don't post it here. But otherwise, it's all in the
implementation, and posting it on HN probably won't change that.
~~~
Mankhool
I echo this completely and I have done a "Show HN" and a "Share Your Startup"
on Reddit.
------
yellow_and_gray
Speaking openly about a problem is a sign of strength, not of weakness. It's a
sign of weakness to avoid showing signs of weakness.
You want to be educating people of what you are doing. Copying an idea has
little to do with the codebase being either small or clonable and more with
the people behind the idea. And I don't mean just about having courage. Ideas
by themselves are roughly worthless. There's no market for them. There's no
place where one can go and buy an idea.
Describing your idea in detail doesn't mean other people will copy it. First
they'll have to be convinced it's a good idea. If you ever tried to change
anyone else's mind you know by now how hard that is. Not even founders
themselves can predict how well their own ideas will do. Larry and Sergey
originally tried to sell Google to Yahoo for $1m.
And even if people are convinced your idea is a good idea, they'll still have
to compare it to the existing idea they are already working on and see which
one they're more likely to do well with. A better, more ambitious idea might
seem frightening. A simpler idea might seem more tangible. It could be at
least a year before one can convince themselves it's ok to let an old idea
die, and at least two years to pursue an ambitious one. Ambitious ideas really
are that frightening.
If you are not convinced choosing between two ideas like this is hard, here's
a simpler test that doesn't even involve a good idea. When you have only a bad
idea and no good ones, how long does it take you to stop working on it?
Regardless, good ideas will have competition anyway. You can't avoid it. So
actively working on the next step of getting feedback on what you have is a
sign you are strong enough to take the next steps, however small they seem, as
opposed to hiding to avoid competition.
Dropbox launched on HN
([https://news.ycombinator.com/item?id=8863](https://news.ycombinator.com/item?id=8863))
and their biggest gain wasn't the number of users they got from HN. Their
biggest gain was probably that they became less frightened by the idea of one
day evolving into a startup with 300m users.
------
satvik1985
I think the biggest disservice you can do to yourself as a startup is to be
protective about your idea and afraid of getting cloned.
Ideas by themselves don't mean anything. It's the execution of the idea that
makes for a good venture. I personally meet so many people who don't speak or
talk about their idea because they are afraid it will be copied. It just stops
them from getting help from others.
On the other hand I have been open about our startup idea and what we are
developing, and an amazing thing it has done for us is the feedback it has
gotten us and, more importantly, connections to the right places and people.
Extinct Startups Tees - signaler
http://extinctstartups.com/
======
pstevesy
Clever. I'd like to see an Enron or Compaq tee.
Hacker News Chicago meetup Wednesday 3/31 at 8pm - ccg
Chicago hackers: Please join us for the next Hacker News Chicago meetup on Wednesday, 3/31/2010, at 8:00pm at the Hophaus (646 N. Franklin, 312-280-8832, http://www.thehophaus.com/). Please join our mailing list (http://groups.google.com/group/hn-chicago) for announcements and hacker discussions, and follow us on twitter or identi.ca (@hnchicago).
======
tptacek
Wow, maybe a little more than 20 hours notice next time? Glad you're getting
this together and all, but can you all figure out what the April date will be
and announce it this week too?
~~~
j053003
Agreed. Wish there was a little more notice.
~~~
danielzarick
There is a Google group that we all use; the date for this event was discussed
there a few weeks ago. We must have all just forgotten to post it on HN. Join
the group though if you want to help choose the next date.
<http://groups.google.com/group/hn-chicago>
A wiki for anecdotal or useless information - lawyearsdw
http://www.bt-wiki.net/Main_Page
======
ende
How is this different from Wikipedia?
Bringing back the PC - ibrad
http://idiallo.com/blog/2014/05/bringing-back-the-pc
======
ivan_ah
The personal cloud built on FOSS is a very nice idea. We need this real bad.
I think the technological complexity of implementing this is quite serious---
setting this up for the average non-technical person would be an impossible
task. If we can get past the Dynamic DNS + opening ports on the home router,
this will be immediately useful. Then again maybe "my personal cloud" could be
on AWS to begin with, see [1].
Are there any efforts at "personal cloud" platforms that have traction?
I'm particularly interested in easy-to-use ones---possibly focussing on a
single application, e.g., share pictures with your family from the old PC in
your closet.
[1] [http://minireference.com/blog/a-scriptable-future-for-the-
we...](http://minireference.com/blog/a-scriptable-future-for-the-web-and-home-
servers/)
~~~
buckbova
For non-technical folks this is pretty difficult.
The average person can purchase an off the shelf personal cloud and probably
get somewhere with storing and accessing files but going beyond that requires
help.
As a test I set up owncloud and personal email on a digital ocean droplet just
last night on their lowest tier. So far so good. But it needs some help on the
user friendly aspects, like sharing a photo gallery.
~~~
scarecrowbob
I'm pretty non-savvy about servers, but I have been trying to learn ansible,
and at the same time I wanted to consolidate some personal servers, so I used
this to deploy more-or-less the same thing on DO the other day:
[https://github.com/al3x/sovereign](https://github.com/al3x/sovereign)
My experience was that it wasn't impossible, and faster than spinning up a
postfix/dovecot server by hand... but it was really buggy, with lots of little
problems. As far as I could tell, there were some problems using the encfs
with the kernel DO uses, and that took a lot of troubleshooting.
I am thinking that at some point there will be a setup that is as easy as,
say, spinning out a wordpress site on shared hosting.
------
danelectro
Back in the late '90's I thought lots of aspiring computer scientists were
already using Windows or Linux as they VPN'd from their remote laptop back to
home so they could access their personal files and full desktop through VNC.
Not much differently than people would do on a commercial scale to their
company network when they were away. Mainly dial-up except for the few who had
broadband.
I was too preoccupied with natural science, but by 2003 I got a cellphone
containing a regular USB GSM modem and would use that plugged in to my laptop
to log in to my own desktop PC network using dial-up my dang self. From
anywhere having cellular service, no need for a wireless data plan which was
not available in most coverage areas anyway. Was good to have a nationwide
calling plan which most people did not have either, and it still used up
minutes of your monthly allowance.
No FOSS on the laptop back then for me usually, but if you had broadband at a
remote location too, Windows XP had everything you needed to VPN back to a
regular home Linksys router which normally contained its own VPN endpoint in
those days with a new service called DynDNS already preconfigured in the
router's firmware. Too bad DynDNS is not free any more but with the router
handling VPN, you could still access a home network that was barebones
Windows9x, Linux, even DOS.
No need for software, just common hardware and regular Windows features. If
you wanted the automated "Assistant" type stuff like in the article, then
you would need software; regular users would never have called it "apps".
Later once 3G wireless arrived, I got a phone supporting that and could get
better speeds (when available) than dial-up, and without even needing the
laptop when I just wanted convenient recreational use on the small-screen.
Never did want to lose the regular dial-up cell modem from my toolbox, but a
number of years ago T-Mobile walled it off. So much for Plan B when there
is no 4G, which is still not everywhere. Plus, no faxing for you[1] directly
from a laptop through a cellphone any more, without having to go through a web
service. Clouds got in my way.
[1] I realize now the '80's called and they want their facsimile machine back,
but I was out of tape on my telephone answering machine ;-)
------
mmphosis
I like the ideas in this article.
Rather than the big-footprint whitebox/blackbox from PC 1.0, I imagine PC
2.0 being a very small board: no fans, no spinning drives, but very fast
CPUs and GPUs driving big monitors.
[http://en.wikipedia.org/wiki/Nettop](http://en.wikipedia.org/wiki/Nettop) Or,
the very fast CPUs and GPUs with or without fans would hide quietly away in a
closet but with a connector hub on my desk. It could draw a lot of power if
required. It would be like an iMac, but with an open and modular PC 2.0 board
that is separate from the big dumb monitor(s).
Rather than UEFI or UEFI-like so-called "secure" boot, PC 2.0 would be
instantly on, and support virtual "smart" bootloaders as an option. Without
a config card (swipe or whatever), it would default to turning on instantly.
With a previously used config card it would turn on instantly using that
previous configuration. For any new config card it would actually "boot" up
the new configuration and create a new "instant on" configuration. There
would be options to back up and remove old configurations, and to set the
default instant-on configuration. Sort of like VirtualBox snapshots, but
using hardware for the snapshots so the computer really does turn on
"instantly".
------
jacquesm
Super nice article. In some ways a next step compared to 'The Mother Of All
Demos', in some ways a step back. But still quite neat.
One thing all those 'always on' devices could use their unused cycles for is
to create things like federated search engines, peer-to-peer encrypted backups
(for instance, seed a torrent of your own encrypted data with a key only you
know, boot your 'assistant' afresh and the first thing it could ask you is to
restore from some torrent).
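The encryption half of that idea is nearly free on any of these boxes. A
naive, untested Node sketch (the passphrase-to-key step is deliberately
crude, with no salt or KDF, and actually creating and seeding the torrent is
left to whatever client the 'assistant' already runs; file names are just
examples):

    // encrypt-backup.js - turn a tarball into a blob that is safe to seed
    // publicly, because only the passphrase holder can read it.
    var crypto = require('crypto');
    var fs = require('fs');
    var zlib = require('zlib');

    var passphrase = process.env.BACKUP_PASSPHRASE; // the key only you know
    var key = crypto.createHash('sha256').update(passphrase).digest();
    var iv = crypto.randomBytes(16);

    var cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    var out = fs.createWriteStream('backup.tar.gz.enc');

    out.write(iv); // prepend the IV so the restore side can find it

    fs.createReadStream('data.tar')   // e.g. produced by tar beforehand
      .pipe(zlib.createGzip())
      .pipe(cipher)
      .pipe(out);

Restore is the same pipeline in reverse; the genuinely hard part is the
bookkeeping (which torrent is mine, who seeds it), not the crypto.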
------
computerslol
I am totally behind the spirit of your article. I also believe it's a travesty
that we have so much power that can be attained so cheaply, yet we aren't
using it at home.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Suggested backend(s) for JavaScript MVC tutorial? - eliot_sykes
I'm considering writing a tutorial on JavaScript MVC frameworks, and a large part of it would ideally use a backend for server-dependent demos like signing a user in and making RESTful API calls.

What do you suggest as ways of making this backend easy to run locally for a range of developer experience and operating systems? The only thing the developers would have in common is that they know some JavaScript.
======
facorreia
PHP is known for being widely supported and easy to get running. It does
require integration with a web server, though, which can present some
complexities for a novice installing it in random environments (which may
already have a web server running, so configuration instructions become
complex).
Node.js would be an interesting alternative because you could keep it
JavaScript-only across the tiers. It is supported across operating systems and
it's easy to install. For instance, on Windows it can be installed as "cinst
nodejs".
For the database I recommend SQLite, since it avoids a lot of the friction
associated with database servers (ports, permissions, etc.).
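A minimal sketch of that combination, using Express, body-parser and the
sqlite3 module (untested; the /api/todos routes and schema are invented
purely for illustration, not taken from any particular tutorial):

    // server.js - tiny REST backend for tutorial demos.
    // npm install express body-parser sqlite3
    var express = require('express');
    var bodyParser = require('body-parser');
    var sqlite3 = require('sqlite3');

    var app = express();
    app.use(bodyParser.json());       // parse JSON request bodies

    var db = new sqlite3.Database('tutorial.db');
    db.run('CREATE TABLE IF NOT EXISTS todos ' +
           '(id INTEGER PRIMARY KEY, title TEXT, done INTEGER)');

    // list all todos
    app.get('/api/todos', function (req, res) {
      db.all('SELECT * FROM todos', function (err, rows) {
        if (err) return res.status(500).json({ error: err.message });
        res.json(rows);
      });
    });

    // create a todo
    app.post('/api/todos', function (req, res) {
      db.run('INSERT INTO todos (title, done) VALUES (?, 0)',
             [req.body.title], function (err) {
        if (err) return res.status(500).json({ error: err.message });
        res.status(201).json({ id: this.lastID, title: req.body.title });
      });
    });

    app.listen(3000, function () {
      console.log('demo API on http://localhost:3000');
    });

Sign-in could be faked the same way with a sessions table, which keeps the
whole tutorial backend in one file that runs with a plain "node server.js".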
------
poissonpie
Try [http://www.redbeanphp.com/](http://www.redbeanphp.com/) and
[http://redbeanphp.com/extra/beancan_server](http://redbeanphp.com/extra/beancan_server)
specifically - it's a minimal bit of PHP with some magic that will probably
mean you won't have to get too much into the server side of things with your
tutorial.
------
bennyp101
For quickly testing REST APIs I use [http://sailsjs.org/](http://sailsjs.org/)
------
kissmd
If static data is enough, you can simply place it in files and serve them
over plain HTTP.
~~~
eliot_sykes
Is there a particular server you'd recommend?
~~~
kissmd
Sorry for the late answer.
You don't even need to run Apache or a Node.js file server for this. Just
put your responses into files on a relative path matching your request,
e.g.: api/product/31.json, api/product?orderby=name,
api/product?orderby=price.
So if you have a demo webapp, just package the static data with the app and
redirect/configure the API calls to the real service in the live app.
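A toy client-side shim can make that switch invisible to the rest of the
tutorial code (untested; the paths just follow the example layout above):

    // api.js - in demo mode, API calls are rewritten to the static JSON
    // files packaged with the app; in the live app they hit the real API.
    var DEMO = true;

    function apiUrl(path) {
      if (!DEMO) return '/api/' + path;
      // api/product/31 -> api/product/31.json ; query-string variants
      // need a filename-safe mapping, e.g. '?', '&', '=' become '_'
      return 'api/' + path.replace(/[?&=]/g, '_') + '.json';
    }

    function getJSON(path, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', apiUrl(path));
      xhr.onload = function () {
        callback(JSON.parse(xhr.responseText));
      };
      xhr.send();
    }

    // usage: getJSON('product/31', function (product) { ... });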
| {
"pile_set_name": "HackerNews"
} |
Could the iPad make computer science obsolete? - Mongoose
http://geomblog.blogspot.com/2010/02/could-ipad-make-computer-science.html
======
gprisament
This guy completely misses what computer science is about. Sure, nobody
studies "Toaster Science" just like nobody will ever study "iPad Science". But
plenty of people study mechanics, electrical engineering, thermodynamics and
other fundamental academic fields that have enabled humans to design and mass-
produce toasters.
At its core, Computer Science is the mathematical study of computation and
algorithms. Some of the most important results in CS were discovered before
computers even existed (like the Church-Turing thesis). A new device with a
slick form factor and usable interface will not at all make CS obsolete.
If it could efficiently solve NP problems... THEN perhaps some computer
scientists would be out of work ;)
| {
"pile_set_name": "HackerNews"
} |
Yahoo Should Buy Microsoft - naish
http://www.cringely.com/2009/02/yahoo-should-buy-microsoft/
======
gjm11
What he's actually claiming is that Yahoo! should buy _MSN_ , so that MSN gets
the benefit of Yahoo!'s alleged knowledge of how to make money online, and
Yahoo! gets the benefit of having MS as a significant minority owner, which
supposedly would stop them doing silly things that, er, stop them making money
online. The whole thing seems to depend on two contradictory ideas about the
relative cluefulness of Yahoo! and MS.
The nearest I can come to making sense of this is: Cringely thinks that Yahoo!
understands _how_ to make money by doing business online, but that the
management of Yahoo! doesn't really care whether they make money or not and
therefore doesn't bother to do the right things rather than the wrong things.
In that case, having MS as a substantial shareholder might enable MS to
pressure Yahoo! into trying to make money. But this strikes me as very silly
indeed.
Anyway. Can some kind person with the necessary awesome powers please change
the HN title to "Yahoo should buy MSN"?
------
jacquesm
What total nonsense. MSN is losing money, sure. But the combined power of
MSN and MSNBC gives Microsoft mindshare that you could only dream of if you
had to go and buy advertising to get the same effect.
Last I checked it was Microsoft looking to buy Yahoo, not the other way
around. Between MSN and live.com they have traffic galore, and it would
certainly help to solidify Microsoft's position on the internet.
Especially given that long term they will have to go head to head with
Google, every little bit will help them. The blatant anti-trust violations
that Microsoft practiced in the past will no longer be tolerated, and the
opponent is actually 'qualified' this time, and has a very solid business
model.
------
jyothi
Random ramble. First, a company's assets (let alone possible creative deal
terms where Yahoo doesn't have to spend much) should at least make it look
like it could buy the other company. Which in this case is a clear no.
Secondly, given the way Yahoo screwed up Overture and many other great
acquired products and people, they definitely cannot run a serious OS or
software business, not even half as well as MSFT runs it today.
The only good thing that could happen is someone really focusing on the huge
content and community portals that Yahoo has long been a leader in.
------
tom_rath
Wait a second... Cringely is concerned about Microsoft screwing up Yahoo?
Yahoo. The company which has been incapable of implementing business-friendly
search advertising after more than a decade of trying? _That_ Yahoo?!?
Heck, if Yahoo's Sponsored Search provided the same limited functionality
Microsoft Live delivers today, we'd be happy to shovel buckets of advertising
dollars their way. Yahoo is definitely not the zombie one would want in charge
of that business partnership.
------
bdfh42
The post title is "link bait" to front up the begging letter at the bottom of
the item.
------
lionhearted
I read a great article about an airplane manufacturer that had tight
operations and grew at a solid, relatively slow rate. They had a really solid
engineering corps, but weren't hiring many new engineers in certain parts of
it. Then they looked around and realized that in a decade everyone was going
to retire and they'd lose all the knowledge that wasn't handed down. All
that expertise, know-how, common sense, and hard lessons learned were going
to evaporate if they didn't have young people working on it, and veterans
handing down the valuable lessons.
Microsoft absolutely needs to be developing and growing online business for
the innovation and expertise that come from it. Even if they lose money on
most of their online ventures, they _still_ need to do it. Who thinks the
desktop OS and business software is going to be a huge cash cow in 20 years?
MS needs smart people learning, building, experimenting, and doing cool stuff
online to keep going.
| {
"pile_set_name": "HackerNews"
} |
Facebook is not worth $33B - abhi3
https://signalvnoise.com/posts/2585-facebook-is-not-worth-33000000000
======
mikro2nd
Missing [2010] flair.
------
enjoyitasus
Great look-back. I love reading these and putting things into context. You
can only connect the dots looking backward.
| {
"pile_set_name": "HackerNews"
} |
Pokémon Go proves investors were clueless about augmented reality - jflowers45
http://venturebeat.com/2016/07/12/pokemon-go-proves-investors-were-clueless-about-augmented-reality/
======
beat
I remember a friend of mine talking about a horror game working on the same
principle as Pokémon Go, three or four years ago. Unfortunately, he's not
technical and lacks resources, so it never went anywhere. But the idea is
perfectly sound.
The industry's focus on VR displays completely missed the point. It can be
done with just GPS and a camera phone. VR displays actually get in the way
by making it hard to perceive your non-VR surroundings.
| {
"pile_set_name": "HackerNews"
} |
Animals can inherit traumatic experiences, study shows - 001sky
http://www.washingtonpost.com/national/health-science/study-finds-that-fear-can-travel-quickly-through-generations-of-mice-dna/2013/12/07/94dc97f2-5e8e-11e3-bc56-c6ca94801fac_story.html
======
anigbrowl
This is pretty mind-boggling stuff. Also, an impressively well-written science
article for a daily newspaper.
------
aneeshm
If this turns out to be true for human beings (and I can't think of a reason,
a priori, why it shouldn't be), then it has massive implications for both
practical ethics as well as more abstract ideas of morality.
If what you do to someone affects not just him, but also all his descendants
up to N generations....
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How are you keeping your kids occupied at these times? - imalolz
Dear HN parents,

With the current situation, our entire region is under strict lockdown.

Both my spouse and I are now working from home, and since there's no school and people are discouraged from going outdoors, we have to find solutions to keep our kids - aged 3rd grade & kindergarten - occupied throughout the day.

We're really trying hard not to have them watch TV or use a tablet/phone/computer all the time; we bought plenty of arts and crafts and the teachers emailed some worksheets and assignments, but it's VERY difficult for them to be so socially isolated, constantly indoors, without their friends and teachers, and sitting down and working through take-home tasks all day just doesn't work. Both sides are frustrated, and with good cause.

We find that we're constantly giving up and letting them use screens since we need some time to get work done (meetings, calls, writing docs and code, etc.). Afterwards we feel terrible, saying we have to come up with a solution. I thought about starting my workday after they go to sleep (9 PM), but that doesn't scale well unless I sleep 2 hrs/night. I realize this is a new reality for many people, and we have to adjust.

How are you dealing with this situation - keeping your kids engaged, doing something positive, and still making time to work?
======
kinj28
The surprising thing is that the kids have adjusted to staying at home
without access to parks and the outdoors.
I have made sure from the start that they won't get more than 3 episodes or
1 movie a day.
Mostly they are busy with:
- Lego (ask them to build something new, like a park one day, a mall
  another, a water play area, etc.)
- The picture-painting event of the day (a topic is assigned)
- A few rounds of cycling / skipping
- A 500-piece puzzle laid out
- Lots of games dug out of the attic
- Worksheets on math & English
- Most importantly, getting them involved in household chores
At times I take a break and hand over the assignments, check their paintings,
show some patterns to copy, introduce a new game.
Hope you find some of it useful in your situation.
Let me know your tips.
| {
"pile_set_name": "HackerNews"
} |