| column | dtype | range / classes |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 distinct values |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 distinct values |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 values (always null) |
| categories | string | 0 values (always null) |
- id: 28,924,591
- url: https://www.permaculturenews.org/2013/07/25/the-dawn-of-cybernetic-civilization/
- title: null, author: null, markdown: null
- downloaded: false, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 6,463,915
- url: http://www.followletter.com
- title: null, author: null, markdown: null
- downloaded: false, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 10,722,526
- url: http://motherboard.vice.com/read/satoshis-pgp-keys-are-probably-backdated-and-point-to-a-hoax
- title: null, author: null, markdown: null
- downloaded: false, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 26,559,622
- url: https://timesmachine.nytimes.com/timesmachine/1860/03/28/91453742.html
- title: TimesMachine: Wednesday March 28, 1860
- author: null
- markdown:
ARTICLES ARE CLICKABLE Click on an article to see information related to the article. The full text of articles is available as a PDF. DRAG AND ZOOM TO EXPLORE TimesMachine works like an online map. Click, drag and zoom the paper to focus on interesting areas. CHANGE THE DATE Click on the date to choose any issue of The New York Times, from September 18, 1851 to December 31, 1980.
- downloaded: true, meta_extracted: true, parsed: true
- description: null
- filedate: 2024-10-12 00:00:00
- date: 2002-12-31 00:00:00
- image: http://tiles.nytimes.com.s3.amazonaws.com/issue/thumbs/2/1860/03/28/91453742_360.png
- pagetype: article
- hostname: nytimes.com
- sitename: NYTimes.com
- tags: null, categories: null
- id: 4,972,251
- url: http://joncairns.com/2012/08/vdebug-a-dbgp-debugger-client-for-vim-supporting-php-python-perl-and-ruby/
- title: Jon Cairns Blog
- author: Jon Cairns
- markdown:
## Vdebug: a DBGP debugger client for Vim supporting PHP, Python, Perl and Ruby

*To download Vdebug, go to https://github.com/joonty/vdebug*

Until recently, for debugging my programs in Vim I've been using my own fork of the PHP & Xdebug Vim plugin originally created by Seung Woo Shin. This worked well, although it sometimes ran slowly and ended up breaking under certain, apparently random, circumstances. On top of that, I'd done some extending to include new features, but the source code was getting too convoluted and difficult to maintain. I also knew that there were other languages that had debugger engines using the DBGp protocol, but this script was specifically aimed at PHP and Xdebug, and it wouldn't work with these other languages and engines.

## D.I.Y.

I knew that it would mean a total rewrite to fix these issues, but I was feeling hardy that day. Over the course of the next couple of months I created Vdebug, which is an attempt to bring all your debugging needs under one roof. Like the old script, it's written largely in Python, but I made more of an attempt to modularise it. Therefore, there's a package-like structure to the code. I also wanted to write unit tests, which is something I've never done before with Python, so there are some of these tests covering the more critical parts of the code. I'm pretty pleased with what I came up with in the end: it's much more of a full package than any of the other Vim plugins I've written, as it contains a very extensive Vim help file that covers every part of the plugin. If you want to try it out, go to the Github repository mentioned at the top of the page and read the README on the front page to get installation instructions. After installing, run `:help Vdebug` to get help on how to use it.

## There's life outside of PHP

As I said, I wanted to add support for other languages and their debuggers. ActiveState (the makers of Komodo Edit/IDE) helpfully provide debugger engine scripts for Python, Perl and Ruby, which can be used in conjunction with Vdebug to debug your own scripts (see `:help VdebugSetUp` for instructions on setting up each language's debugger engine).

## See how it works

Part of the modularisation of the code means that it's possible to run some of Vdebug from the command line. I did this at the early stage to make development easier - I didn't need the Vim GUI to test the connection and data transfer between the client and engine, and it was easier to test separating the two. I don't know how useful this is, but for people who like to do random stuff for the sake of it, try something like this (works on Linux):

Now, start up your script with the debugger engine active (e.g. set the IDE key/GET variable that you normally use) to create the connection. The prompt will return, and you can now access all the methods on the `vdebug.dbgp.Api` class:

Etcetera. Happy debugging!
- downloaded: true, meta_extracted: true, parsed: true
- description: Software development blog on ruby, git, unix and vim
- filedate: 2024-10-12 00:00:00
- date: 2012-08-15 00:00:00
- image: http://www.gravatar.com/avatar/d436756dba6b642937ce602aef83e4e1.png?size=220
- pagetype: website
- hostname: null
- sitename: Jon Cairns Blog
- tags: null, categories: null
- id: 19,872,603
- url: http://xvilka.me/h2hc2014-reversing-firmware-radare-slides.pdf
- title: null, author: null, markdown: null
- downloaded: true, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 35,335,726
- url: https://nolanlawson.com/2021/08/01/why-its-okay-for-web-components-to-use-frameworks/
- title: Why it’s okay for web components to use frameworks
- author: null
- markdown:
Should standalone web components be written in vanilla JavaScript? Or is it okay if they use (or even bundle) their own framework? With Vue 3 announcing built-in support for building web components, and with frameworks like Svelte and Lit having offered this functionality for some time, it seems like a good time to revisit the question.

First off, I should state my own bias. When I released `emoji-picker-element`, I made the decision to bundle its framework (Svelte) directly into the component. Clearly I don’t think this is a bad idea (despite my reputation as a perf guy!), so I’d like to explain why it doesn’t shock me for a web component to rely on a framework.

## Size concerns

Many web developers might bristle at the idea of a standalone web component relying on its own framework. If I want a date picker, or a modal dialog, or some other utility component, why should I pay the tax of including its entire framework in my bundle? But I think this is the wrong way to look at things.

First off, JavaScript frameworks have come a long way from the days when they were huge, kitchen-sink monoliths. Today’s frameworks like Svelte, Lit, Preact, Vue, and others tend to be smaller, more focused, and more tree-shakeable. A Svelte “hello world” is 1.18 kB (minified and compressed), a Lit “hello world” is 5.7 kB, and petite-vue aims for a 5.8 kB compressed size. These are not huge by any stretch of the imagination.

If you dig deeper, the situation gets even more interesting. As Evan You points out, some frameworks (such as Vue) have a relatively high baseline cost that is amortized by a small per-component size, whereas other frameworks (such as Svelte) have a lower baseline cost but a higher per-component size. The days when you could confidently say “Framework X costs Y kilobytes” are over – the conversation has become much more complex and nuanced.

Second, with code-splitting becoming more common, the individual cost of a dependency has become less important than whether it can be lazy-loaded. For instance, if you use a date picker or modal dialog that bundles its own framework, why not dynamically `import()` it when it actually needs to be shown? There’s no reason to pay the cost on initial page load for a component that the user may never even need.

Third, bundle size is not the only performance metric that matters. There are also considerations like runtime cost, memory overhead, and energy usage that web developers rarely consider. Looking at runtime cost, a framework can be small, but that’s not necessarily the same thing as being fast. Sometimes it takes *more* code to make an algorithm faster! For example, Inferno aims for faster runtime performance at the cost of a higher bundle size when compared to something like Preact. So it’s worth considering whether a component is fast in other metrics besides bundle size.

## Caveats

That said, I don’t think “bring your own framework” is without its downsides. So let’s go over some problems you may run into when you mix-and-match frameworks. You can imagine that, if every web component came with its own framework, then you might end up with multiple copies of the same framework on the same page. And this is definitely a concern! But assuming that the component externalizes its framework dependency (e.g. `import 'my-framework'`), then multiple components should be able to share the same framework code under the hood. I used this technique in my own `emoji-picker-element`.
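As a concrete sketch of that lazy-loading pattern, assuming a trigger element with the hypothetical id `emoji-button` (importing `emoji-picker-element` registers the `<emoji-picker>` custom element as a side effect):

```
// TypeScript sketch: fetch the component, bundled framework and all,
// on first click only, rather than on initial page load.
const button = document.querySelector<HTMLButtonElement>('#emoji-button');

button?.addEventListener('click', async () => {
  await import('emoji-picker-element'); // registers <emoji-picker> on demand
  document.body.appendChild(document.createElement('emoji-picker'));
}, { once: true });
```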
If you’re already using Svelte in your project, then you can `import 'emoji-picker-element/svelte'` and get a version that doesn’t bundle its own framework, ensuring de-duplication. This saves a paltry 1.4 kB out of 13.9 kB total (compressed), but hey, it’s there. (Potentially I could make this the default behavior, but I like the bundled version for the benefit of folks who use `<script>` tags instead of bundlers. Maybe something like Skypack could make this simpler in the future.)

Another potential downside of bring-your-own-framework is when frameworks mutate global state, which can lead to conflicts between frameworks. For instance, React has historically attached global event listeners to the `document` (although thankfully this changed in React v17). Also, Angular’s Zone.js overrides the global `Object.defineProperty` (although there is a workaround). When mixing-and-matching frameworks, it’s best to avoid frameworks that mutate global state, or to carefully ensure that they don’t conflict with one another. If you look at the compiled output for a framework like Svelte, though, you’ll see that it’s basically just a collection of pure functions that don’t modify the global state. Combining such frameworks in the same codebase is no more harmful than bundling different versions of Lodash or Underscore.

Now, to be clear: in an ideal world, your web app would only contain one framework. Otherwise it’s shipping duplicate code that essentially does the same thing. But web development is all about tradeoffs, and I don’t believe that it’s worth rejecting a component out-of-hand just to avoid a few extra kBs from a tiny framework like Preact or Lit. (Of course, for a larger framework, this may be a different story. But this is true of any component dependency, not just a framework.)

## Framework chauvinism

In general, I don’t think the question should be whether a component uses its own framework or not. Instead, the question should be: Is this component small enough/fast enough for my use case? After all, a component can be huge without using a framework, and it can be slow even when written in vanilla JS. The framework is part of the story, but it’s not the whole story.

I also think that focusing too much on frameworks plays against the strengths of web components. The whole point of web components is to have a standard, interoperable way to add a component to a page without worrying about what framework it’s using under the hood (or if it’s using a framework at all). Web components also serve as a fantastic glue layer between frameworks. If there’s a great React component out there that you want to use in your Vue codebase, why not wrap it in Remount (2.4 kB) and Preact (4 kB) and call it a day? Even if you spent the time to laboriously create your own Vue version of the component, are you really sure you’ll improve upon the battle-tested version that already exists on npm?

Part of the reason I wrote `emoji-picker-element` as a web component (and not, for instance, as a Svelte component) is that I think it’s silly to re-implement something like an emoji picker in multiple frameworks. The core business logic of an emoji picker has nothing to do with frameworks – in fact, I think my main contribution to the emoji picker landscape was in innovating around IndexedDB, accessibility, and data loading. Should we really re-implement all of those things just to satisfy developers who want their codebase to be pure Vue, or pure Lit, or pure React, or pure whatever?
Do we need an entirely new ecosystem every time a new framework comes out? The belief that it’s unacceptable for a web app to contain more than one framework is something I might call “framework chauvinism.” And honestly, if you feel this way, then you may as well choose the framework that has the most market share and biggest ecosystem – i.e. you may as well choose React. After all, if you chose Vue or Svelte or some other less-popular framework, then you might find that when you reach for some utility component on npm, nobody has written it in your framework of choice.

Now, if you like living in a React-only world: that’s great. You can definitely do so, given how enormous the React ecosystem is. But personally, I like playing around with different frameworks, comparing their strengths and weaknesses, and letting developers use whichever one tickles their fancy. The vision of a React-only future fills me with a deep boredom. I would much rather see frameworks continue to compete and innovate and push the boundaries of what’s possible in web development than to see one framework “solve” web development forever. (Or to see frameworks locked in a perpetual ecosystem race against each other.)

To me, the main benefit of web components is that they liberate us from the tyranny of frameworks. Rather than focusing on cosmetic questions of how a component is written (did you use React? did you use Vue? who cares!), we can focus on more important questions of performance, accessibility, correctness, and things that have nothing to do with whether you use HTML templates or a `render()` function. Balking at web components that use frameworks is, in my opinion, missing the entire point of web components.

*Thanks to Thomas Steiner and Thomas Wilburn for their thoughtful feedback on a draft of this blog post.*
- downloaded: true, meta_extracted: true, parsed: true
- description: Should standalone web components be written in vanilla JavaScript? Or is it okay if they use (or even bundle) their own framework? With Vue 3 announcing built-in support for building web components…
- filedate: 2024-10-12 00:00:00
- date: 2021-08-01 00:00:00
- image: https://secure.gravatar.com/blavatar/86a4db4d496aa2fad7e47b11a865e80cfbbbac38285b65ff518b9c98aa47f7d7?s=200&ts=1728763439
- pagetype: article
- hostname: nolanlawson.com
- sitename: Read the Tea Leaves
- tags: null, categories: null
- id: 11,158,357
- url: https://leejo.github.io/2016/02/22/all_software_is_legacy/
- title: All Software is Legacy
- author: null
- markdown:
In what may be judged in years to come as a moment of madness, I have volunteered to be the primary maintainer of the Perl CGI module (CGI.pm). For the non-technical readers of this post: CGI.pm is a few thousand lines of code that in the mid to late nineties, and even some years later, was helping many websites function. Ever visited a website and seen ‘cgi-bin’ in the URL? Yep, that was *probably* running Perl scripts and those were almost certainly using CGI.pm.

I actually volunteered to be the primary maintainer back in April 2014. The reason I’ve taken so long to write this post is that I’ve been busy, er, maintaining the module. I’ve fixed the bulk of existing issues[1], written and given a talk on the plan for the module[2], released an extra module to point people at better resources[3], and occasionally been responding to questions about the module[4], oh and of course the usual reason that it takes posts several months to get out of my drafts folder.

Despite having used the module frequently over the years, and even volunteering to be the primary maintainer, I do not like it. It was an important and useful module early on, but it has no place in a modern [perl] web stack and hasn’t deserved a place in at least a decade. This is not a criticism of the original author(s) or the original implementation, it’s simply down to the fact that the web development field has progressed and lessons have been learnt.

An important point to make is the difference between CGI and CGI.pm. CGI is the Common Gateway Interface protocol, or specification if you like, whereas CGI.pm is an implementation of that specification. CGI is still a reasonable protocol for doing web programming in some cases, whereas CGI.pm is not.[5] CGI.pm wasn’t the first implementation, but it was widely adopted after being included with the Perl core:

```
/Users/leejo/working/CGI.pm > corelist CGI

Data for 2013-08-12
CGI was first released with perl 5.004
```

And when was perl 5.004 released? 15th May 1997, almost twenty years ago.

**The Past**

Up until that point if you wanted to do CGI programming with Perl you had to install CGI.pm manually, write your own implementation, or install scripts that did it for you. A well known example is cgi-lib.pl.[6] In fact, it would probably be fair to say cgi-lib.pl was commonly used as CGI.pm included functions to make porting scripts from cgi-lib.pl easy. Over time CGI.pm grew and grew, and grew some more, until it had implemented most (if not all) of the CGI protocol specification and beyond: https://tools.ietf.org/html/rfc3875

Take a look at that RFC and see if anything stands out. I’ll give you a clue: it’s to do with the date… Got it? Yes, RFC 3875 was finalised in October 2004, some seven years after CGI.pm was released with Perl and at least a decade after the original NCSA informal specification was released. Work on RFC 3875 didn’t start until 1997, by then there were already many different implementations of a specification that had no official formal definition. The first official draft of the CGI specification was not released until May 1998.
By then there were several large sites already running on Perl and even with CGI.pm: eBay, IMDb, cPanel, Slashdot, Craigslist, Ticketmaster, Booking.com, several payment processors, and many many others.[7] Before that the CGI protocol was very much a work in progress, its history looking something like this[8]:

- 02 Jun ‘93: Dave Raggett updates his HTML+ DTD to include support for “INPUT and SELECT form interaction elements”[0]
- 19 Jul ‘93: Nathan Torkington adds an executable shell script ability to the standard CERN (2.06) daemon[8]
- 05 Sep ‘93: Marc Andreessen says NCSA Mosaic 2.0 will submit form parameters as: “name=value&name=value&name=value”[1]
- 13 Sep ‘93: Rob McCool announces NCSA httpd 1.0a1[2], which supports “server scripts, which are executable programs which the server runs to generate documents on the fly. They are easy to write and can be written in your favorite language, whether it be C, PERL, or even the Bourne shell” [3]
- 14 Nov ‘93: Rob McCool complains that his users are avoiding writing code because they think the interface will change, and throws open a bunch of open-issues he wants fixing in what he calls the “gateway”
- 17 Nov ‘93: Rob McCool releases “CGP/1.0 specification”[7], renamed to CGI two days later
- 13 Dec ‘93: NCSA httpd 1.0 released, with “CGI” support[6]

As the RFC drafts were expanded more sites and software were released that used Perl and CGI.pm: TWiki, Bugzilla, Movable Type, LiveJournal, and thousands of others. Even the Internet Pinball Database.

**So What Happened?**

Time passed, more and more features were added, scope crept.[9] After a few years it turned out that some of the implementation decisions didn’t fit well into modern requirements, and others could lead to nasty vulnerabilities if not used with care. Some workarounds could be made: fastcgi for persistence, mod_perl for speed and plugging into apache, but they required adapting of scripts using CGI.pm. Often they came with a cost - mod_perl’s propensity to segfault being one of them. This wasn’t unusual, the web was immature back then and development around it reflected that. There’s also the consideration that you can’t predict the future and it’s incredibly difficult to make accurate estimates in software development. Sometimes you just stick a TODO or FIXME in the code and worry about it later - Y2K anyone? IPv4?

CGI.pm grew to a point that it could be used for many different functions of early web development, and could be used in different ways within each of those different functions. The result was that CGI.pm had to include an awful lot of code to deal with these different uses, its dual interface, edge cases, and existing bugs. Simple functions that should be a couple of lines long accumulated cruft.

**Cruft**

```
1  '_maybe_escapeHTML' => <<'END_OF_FUNC',
2  sub _maybe_escapeHTML {
3      # hack to work around earlier hacks
4      push @_,$_[0] if @_==1 && $_[0] eq 'CGI';
5      my ($self,$toencode,$newlinestoo) = CGI::self_or_default(@_);
6      return undef unless defined($toencode);
7      return $toencode if ref($self) && !$self->{'escape'};
8      return $self->escapeHTML($toencode, $newlinestoo);
9  }
10 END_OF_FUNC
```

The above chunk of code is one of the smaller functions from CGI.pm and demonstrates how much history the code has accumulated over the years. What does it do? It sanitises text input to prevent html tags (and such) being injected into output. It will, for example, turn the < character into `&lt;`. This is used to prevent cross-site scripting attacks.
The thing is, minus all the boilerplate and with a clean interface, to do this requires just three lines of code[10]:

```
1 sub _maybe_escapeHTML {
2     return shift->{escape} ? escapeHTML( @_ ) : @_;
3 }
```

So what’s all the other code doing? Let’s break it down.

```
1  '_maybe_escapeHTML' => <<'END_OF_FUNC',
...
10 END_OF_FUNC
```

Lines 1 and 10 were added to defer the compilation of this function to runtime. Twenty years or so ago, when CGI.pm was just getting popular, hardware wasn’t optimal for compiling several thousand lines of code for every request. To speed up the load of the module a large number of functions were wrapped as strings, to then be compiled only when they were called - so called “lazy loading”.

```
3     # hack to work around earlier hacks
4     push @_,$_[0] if @_==1 && $_[0] eq 'CGI';
5     my ($self,$toencode,$newlinestoo) = CGI::self_or_default(@_);
```

Lines 3 to 5 get around the fact that CGI.pm has a dual interface - its methods can be called procedurally or as method calls on an instance of the class (an “object”). To provide this dual interface requires every single method within CGI.pm to check how it was called and then create a default object when it was called procedurally.

```
6     return undef unless defined($toencode);
7     return $toencode if ref($self) && !$self->{'escape'};
```

Lines 6 and 7 are sanity checks and a short circuit. Line 6 says “if we weren’t given any input then don’t continue” and line 7 says “if we decided earlier that we don’t want to sanitise input then don’t continue”. It’s arguable whether or not these belong in this function; in the case of 6 I would say “yes, but calling the method with no input is probably a bug in the calling code”, and in the case of 7 I would say “yes, but only if this cleans up the code elsewhere” (it turns out this function is called 39 times internally so it *is* cleaner to have the check here).

This is one small function in, what was, an eight thousand line module. CGI.pm currently has over 150 functions, excluding the private functions and auto generated ones; each function has the same or similar code to deal with the old code calling it and handle the implementation decisions taken in the module’s early life.

**Standing Still, Moving Forwards**

When I tell non-perl developers, or sometimes ex-perl developers, that I primarily code in Perl they often express shock that Perl is still used. They don’t know the modern ecosystem, or that Perl is still actively used in many places for new development[11], or the dozens of Perl conferences attended by thousands of developers every year, or that Perl is maintained with yearly major releases. Most strange is that they seem to think that the hundreds of millions of lines of Perl code just evaporated the moment they switched to their new language of choice.

Many software systems are like snakes shedding their skin, going through constant maintenance and every few years having their components replaced completely. However their core functionality remains the same during this process, and to many users they can appear to go for years without any change at all. Sometimes the snake is replaced with a newer, more shiny snake, but users aren’t compelled to use it or don’t even know if it’s the right snake.[12] Some of the most successful sites on the web have gone through multiple rewrites over the years.[7] Many more sites and applications haven’t. Because of the have-nots, CGI.pm must retain backwards compatibility.
So whilst I have removed most of its dead skin there is little else I can do to improve its remaining state. Much software is like this, parts of the ecosystem must stand still whilst other parts move forward. Besides, there is no value in developing CGI.pm further anyway. There are much better alternatives available within the Perl ecosystem that will not only handle all of what CGI.pm can do but also expand the functionality to work with modern code and requirements. These are the parts of the ecosystem that have moved forward. CGI.pm is very much legacy software.

**“Legacy”**

If you see or hear the term “legacy” used in reference to software, code, an application, or a device, you can be sure the usage is pejorative.[13] However the reality is that all software is legacy and we often just substitute the term “legacy code” for “technical debt”. Whilst they’re not exactly the same thing the presence of one almost certainly suggests the other. “Technical debt”[14] is a more useful term anyway as it implies a route to legacy software. If we do not pay off the debt we have to declare bankruptcy, we have to abandon the code. Complete abandonment rarely happens in software; even decades-old code requires support and maintenance. But technical debt is a useful concept as speed to market is often more important than future proof code.[15] Future proof code is an oxymoron anyway, the idea that code can forever work when the foundations shift beneath it and expectations change.[16] So when writing code we worry about the short term, the long term can look after itself. Do you think Google, Facebook, et al, would have grown to their size and domination had they anticipated the size of their business a year, five years, a decade down the line and spent extra time designing the systems to deal with that? “We can’t use php, mysql, whatever, because it won’t scale to a billion users?” No, they *make* it scale and if they can’t they will replace it with something else that can, later.

**Future Legacy**

It’s easy to forget that there is legacy software still in use that predates the internet. Heck, the web has become so dominant that sometimes it’s easy to forget that software existed before it. The next time you take a flight see if you can peek at the check-in software, something probably written in the sixties and running in an emulator. Or how about your bank[17], or your doctor’s office?[18] These are the systems that are like cement.[19]

The browser you’re reading this in has a history going back twenty years. The high level protocol the data was delivered by predates that by another five. The low level protocol beats that again by ten or twenty depending on which spec you assume as the basis. It’s legacy all the way down, and as we build on top of that we create legacy all the way up. All of this creates constraints on what we can produce, and sometimes we find dangerous cracks in the foundations.[20]

The correct way to build that legacy is to correctly abstract the internals away and provide clean and sane interfaces. The cleaner the interface and the looser the coupling the easier it is to unplug the upstream code from it and into something else. But paradoxically the cleaner and saner your interface the more likely it is to succeed and thus more likely to become constrained by its users, to solidify.
Legacy pervades the software we use from the trivial small utilities included with the operating systems (coreutils) to the encryption mechanisms that we rely on (openssh) and the software that synchronizes your clocks (ntp[21]). Apple still ship their OS with an insane filesystem because there is so much legacy software that will break should they fix it.[22]

Every line of new code you write will inevitably become legacy software. The language or idiom you choose will fall out of favour[23]; the framework will be superseded; the libraries it uses will need updating due to bugs and vulnerabilities or some dependencies will be abandoned. Just look at github, the most popular hosting site for open source software projects, quickly becoming the world’s largest software graveyard. Even the shoulders you stand upon are sagging.[24] Lest you think you’re immune to this, ask yourself: have you ever written code for more than one company? Ever provided a patch or bug fix for a library? Replied to a question on stackoverflow? You, me, every software developer out there, right now, is creating legacy software.

**Legacy**

If the above gives the impression that the end of software is nigh, let me reassure you that it’s not - this is just simply the way things are. There’s a thought process that goes with software development which reads something like “if you can’t look back at your old code and see how bad it is then you’re not improving”. Substitute “software development” for whatever your trade. Everyone writes bad code, makes bad implementation decisions, poor estimates, and so on. Everyone starts out as a novice.[25] Just as all software contains bugs, all software is legacy. Software development is about solving specific problems that exist now and not about solving non-specific or future problems.

Every few years, once a decade perhaps, there is a paradigm shift. Given time parts of the technological bedrock will have changed enough to render previous patterns obsolete. This is perhaps why new developers tend to push back against older code[26], sometimes drawing assumptions from the implementation and using it to ridicule the choir. The shiny new can be attractive, however the newer you are the quicker your software becomes legacy because you don’t yet know what you don’t know. The older code still exists because the cost gains in replacing it are not yet compelling enough, and it just works.[27] The new code you write will find itself in the same situation sooner than you expect.

The important thing is that we learn from that and we make sure the future legacy is easy to understand. Write clean interfaces, good tests, descriptive commit messages.[28] Another important thing is to understand the lineage; in CGI.pm’s case the module was in part responsible for Perl’s huge popularity early on, and even propelled Perl forwards. When the limitations of CGI.pm’s implementation were hit alternative code was written in response, again and again, leading to where we are today. Where we are today is pretty interesting and lots of fun, tomorrow will be more interesting and even more fun. So this will be the last I write, talk, present, or ramble on about CGI.pm, even though it’s likely that the module will still be with us for a while. I’d rather concentrate on current and future legacy code, not the past.

- Eh, there’s a few bits and pieces in various places. Perlmonks, LinkedIn, Github, etc.
- http://cgi-lib.berkeley.edu/ - and of course Matt’s script archive.
- https://en.wikipedia.org/wiki/Perl#Applications https://news.ycombinator.com/item?id=10590612
- Thanks to Pete for putting this together, the links to various sources: 0 1 2 3 4 5 6 7 8
- The function actually calls *another* function and, guess what, that called function has all the legacy boilerplate code as well.
- https://news.ycombinator.com/item?id=11100251 https://hynek.me/articles/python3-2016/
- How You Can Leverage Technical Debt & Why You Should / https://news.ycombinator.com/item?id=1092514
- How about rewriting the inflight software systems for something over 4 billion kilometers away?
- http://www.theguardian.com/commentisfree/joris-luyendijk-banking-blog/2012/may/30/former-it-salesman-voices-of-finance
- https://blogs.harvard.edu/philg/2015/12/07/brain-surgeon-tortured-by-software-developers-and-hospital-bureaucrats/
- Description taken from a very interesting metafilter thread (start there and keep reading).
- http://www.informationweek.com/it-life/ntps-fate-hinges-on-father-time/d/d-id/1319432
- https://news.ycombinator.com/item?id=8876319 / http://arstechnica.com/apple/2011/07/mac-os-x-10-7/12/#hfs-problems
- https://www.mnot.net/blog/2014/06/07/rfc2616_is_dead / http://www.washingtonpost.com/sf/business/2015/05/31/net-of-insecurity-part-2/
- http://www.kitchensoap.com/2012/10/25/on-being-a-senior-engineer/
- https://medium.com/@landongn/12-years-later-what-i-ve-learned-about-being-a-software-engineer-d6e334d6e8a3#.szqirf714
- https://leejo.github.io/2013/11/03/please_use_verbose_commits/
- downloaded: true, meta_extracted: true, parsed: true
- description: null
- filedate: 2024-10-12 00:00:00
- date: 2016-02-22 00:00:00
- image, pagetype, hostname, sitename, tags, categories: all null
- id: 29,347,835
- url: https://www.creativelog.io/?ref=hackernews
- title: null, author: null, markdown: null
- downloaded: false, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 10,089,574
- url: http://www.cnbc.com/2015/08/19/you-cant-get-this-from-ubereats-or-instacart.html
- title: You can't get this from UberEATS or Instacart
- author: Ari Levy
- markdown:
On-demand food delivery has made for a funding bonanza. From Instacart's latest $220 million round and Blue Apron's $135 million financing in June to the brand new UberEATS, there's a seemingly infinite number of ways to get cash-rich start-ups to bring you food. One company is trying to prove that there's a leaner way.

ZeroCater, which acts as a matchmaker between local food vendors and businesses that want to feed their employees, has raised a mere $1.5 million in its four years on the market.

*Read more: Why food delivery apps is a tasty business*

With that, the San Francisco-based company said on Wednesday that it's facilitated $100 million in sales and is serving tens of thousands of meals per day. ZeroCater doesn't have its own fleet of delivery people. Rather it takes orders from companies including Google, Nissan and Salesforce.com, and has the taco shop, food truck or barbecue joint on the other end do all the cooking and delivery. ZeroCater takes a cut of every transaction.

Arram Sabeti, who bootstrapped the business for two years before officially launching in 2011, said he gets cold calls from venture capitalists every week looking to invest. He just hasn't found a compelling reason to say yes. "One school says if the money is there you should take it," said Sabeti. "We'd need a specific plan to convert that money into growth." According to CB Insights, one-third of venture-backed companies in the food-delivery industry took in their first round of funding in the past year.

ZeroCater got its start in the Bay Area, largely working with other start-ups that Sabeti met while in Y Combinator. The company has since started serving in New York City, Chicago and Washington, D.C.

*Read more: Blue Apron makes cooking easy and fun*

Sabeti said the company is being very deliberate in its geographic growth. Good Eggs, a start-up designed to bring farmers market food to your door, shuttered its New York operation this month, acknowledging in a blog post that it "made a mistake in expanding it as quickly as we did without perfecting the model first." ZeroCater has "historically been very conservative opening new markets," Sabeti said.

While Sabeti is by no means ruling out raising a bigger sum of money, he said that cash isn't the biggest constraint right now. Rather, it's finding the right people in a market with so many emerging companies. Sabeti said he's interviewed 40 candidates for the role of vice president of marketing, but hasn't met the right fit. "The really tricky thing is not getting money anymore," Sabeti said. "The really tricky thing is getting great talent."
- downloaded: true, meta_extracted: true, parsed: true
- description: Amid a food-delivery funding blitz that's valued Instacart and Blue Apron in the billions, a start-up called ZeroCater is taking a leaner approach.
- filedate: 2024-10-12 00:00:00
- date: 2015-08-19 00:00:00
- image: https://image.cnbcfm.com…35&w=1920&h=1080
- pagetype: article
- hostname: cnbc.com
- sitename: CNBC
- tags: null, categories: null
- id: 5,990,344
- url: http://blog.gibbon.co/posts/2013-07-03-users-are-your-eyes.html
- title: null, author: null, markdown: null
- downloaded: false, meta_extracted: false, parsed: false
- description, filedate, date, image, pagetype, hostname, sitename, tags, categories: all null
- id: 32,079,664
- url: https://bigthink.com/starts-with-a-bang/james-webb-spikes/
- title: Where do James Webb's unique "spikes" come from?
- author: Ethan Siegel
- markdown:
# Where do James Webb’s unique “spikes” come from?

- Since its launch in December 2021, NASA’s James Webb Space Telescope has successfully reached and surpassed a number of important milestones in launch, arrival, and configuration.
- The mirrors and instruments are being cooled and calibrated over the telescope’s first six months in space in preparation for science operations, and it’s either right on or even ahead of schedule for everything.
- But when you look at the first aligned image, you might notice an unusual pattern of spikes coming from their very first star. Here’s why at least six of those spikes will always persist.

Less than three months after its initial launch, it finally happened: the James Webb Space Telescope revealed its first telescope alignment evaluation image. For the first time, humanity’s latest, greatest flagship observatory has successfully aligned, focused, and finely phased the light from all 18 of its primary mirror segments to successfully produce a single image of a “test star.” The results are so good that even in this preliminary image, faint stars and galaxies located in the distant background can also be seen, resolved, and even examined in detail. It’s a remarkable triumph, and as NASA themselves announced, “Every optical parameter that has been checked and tested is performing at, or above, expectations.”

Still, what you’ll notice, even on a cursory glance at the image, is that the main “star” imaged has a series of spikes coming out of it: six large ones and two smaller ones, all visible even zoomed out and at low resolution. For comparison, the Hubble Space Telescope only had four spikes attached to every star, and an upcoming 25-meter ground-based telescope, the Giant Magellan Telescope, will be the first observatory of its kind to have no spikes on its stars at all. Here’s what we can expect, as far as spikes go, from the James Webb Space Telescope, even at full, maximum power.

The first thing you have to realize about the James Webb Space Telescope is that, unlike Hubble, it is *not* a single-dish mirror. Instead, there are 18 individual segments to its primary mirror, and the goal of an optimally-configured James Webb will be to have all 18 of those segments function as though they are a single mirror, in a single plane, with optical perfection. What does “optical perfection” mean in this instance? It means that each one of the 18 segments will make up a section of a perfect mirror, all designed to have the cumulative light striking it from a distant, observed target focus down to a single point precisely in the telescope’s instruments. This is a tremendously ambitious task, requiring that we compensate for:

- the spacing in between each segment,
- the edges, and in particular, the sharp corners, of each segment,
- the optical imperfections induced by the trusses that hold the secondary mirror in front of the primary mirror,
- and the individual variations both across each segment and from segment-to-segment.

Each one of the individual segments themselves, back at various points in the 20th century, would have made a spectacular, cutting-edge observatory all by itself. If you want to focus your light properly, you need to avoid the problem that initially plagued the Hubble Space Telescope: spherical aberration. Spherical shapes, for both lenses and mirrors, are easier to manufacture than other curves, like parabolas, and the shapes of large lenses and mirrors can be distorted by gravity.
As a result, when Webb, which was built on Earth but operates in space, was launched, there was a worry that everything might not fall properly into place. Even though the mirrors were checked and re-checked and re-re-checked again and again, there was always the worry that something would be off with the optics. If that was the case, then the mirrors would be unable to focus the distant starlight into the single image that was desired, and we’d have to find some way to compensate for the blurriness that would arise.

For the 18 mirror segments that James Webb possesses, there are three individual designs that needed to be manufactured six times each:

- the “A” segments, which are for the interior segments, where five of the six hexagonal edges will border another mirror segment, but the innermost one will leave a gap for light to be reflected inside, onto the instruments,
- the “B” segments, which are at the outside corners of the hexagon-shaped honeycomb, each have three edges that border another mirror segment, but three edges that comprise the outside border of the primary mirror,
- and the “C” segments, which go between the “B” segments and possess four edges that border another mirror segment, but two edges that, along with the “B” segments, define the outside border of the primary mirror.

As a result, the shape of the James Webb Space Telescope’s primary mirror makes a shape known as a tricontagon, or a 30-sided polygon. This is a very, very complicated geometrical shape to deal with, and the technical achievements needed to produce a quality data product are literally astronomical. In its unfocused state, the James Webb Space Telescope would simply be made up of 18 individual mirrors, each with their own shape, their own plane of focus, and each one would produce their own image for whatever object we were attempting to observe.

The goal is to have each of these 18 segments form a single plane, together, that has a parabolic shape. At some 6.5 meters (around 21 feet) across, the variations in the plane, both across each segment and from segment-to-segment, should be right around ~20 nanometers for optimal performance. That’s an incredible precision, by the way; if the surface of the entire Earth were as smooth as Webb’s precision needs to be for its optics, then the highest mountain and the deepest ocean trench would only depart from sea level by about 2 centimeters (less than one inch), total.

When Webb took its very first image of a star, attempting to see what sort of image the 18 segments produced, it was clear that a lot of work remained ahead of the team. You’ll notice, when you view the initial (above) image, you see 18 different sources of light: one corresponding to each segment of the primary mirror. You’ll also notice that these sources appear all over the place, rather than in the desired “honeycomb” configuration that the mirror segments themselves take on. Finally, you’ll notice that each one of these sources doesn’t correspond to a single point-like source, which is what you’d expect for a star, but that each one appears distorted and spread out over a volume of space.

When you take a true point source of light and image it through any sort of optical system, you’re not going to get a “point” back again. Instead, you’ll get a shape unique to your equipment, which can be described by a mathematical equation known as a point-spread function.
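As a rough check of the smoothness analogy above, assuming Earth's mean radius of about 6,371 km, the same fractional tolerance works out to:

$$
\frac{20\ \mathrm{nm}}{6.5\ \mathrm{m}} \approx 3.1\times 10^{-9},
\qquad
3.1\times 10^{-9} \times 6.37\times 10^{6}\ \mathrm{m} \approx 2\ \mathrm{cm}.
$$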
We can know that we’re looking at a star, and that the star ought to appear as a single point of light, but that’s not what we see. For James Webb, with its 18 hexagonal segments configured to make a tricontagon overall, it results in an incredibly complicated point-spread function that astronomers working on the telescope simply call, “the nightmare snowflake.”

Even with the substantial progress that was made to focus and align the individual mirror segments into a single plane, and then to combine those 18 individual images into a single one that best represents the true point source we’re observing, the “nightmare snowflake” clearly rears its ugly head. The individual star you see at the end, above, represents what happens when all of the mirrors are focused and phased together. But everywhere you have an edge, a gap, or something that blocks a portion of the light from coming into your primary mirror, you’re going to get an image artifact, and that’s something we must be able to successfully correct for.

That involves tweaking the shape of each individual mirror through the use of actuators, and doing so in a wavelength-dependent way. It involves ensuring that each individual mirror not only makes its own perfectly parabolic shape, but that the shapes between mirrors all correspond to a different portion of the same parabola. And, even with gaps that are approximately ~4 millimeters between the individual mirror segments, that one single parabola needs to be perfect down to a tolerance of ~20 nanometers.

Even at that, there are still the edges to reckon with, and the fact that there are supports for holding the secondary mirror in place that cross the plane of the primary mirror. In the case of James Webb, specifically, there are three axes that the supports rest on, and they cause three sets of inevitable spikes that will always appear in images of point sources, such as stars.

This isn’t unique to the James Webb Space Telescope, by the way. Any reflecting telescope where the light is reflected back in front of the telescope and into a secondary mirror, where the secondary mirror then reflects the light into a “hole” located in the primary mirror itself (and then, into the instruments), needs that secondary mirror held in place in front of the primary mirror by *something*. However those supports are configured determines the shape of the spikes, with each unique support creating its own diffraction spikes that are perpendicular to that support structure itself.

The Hubble Space Telescope, whose images are arguably the most ubiquitous and recognizable of all telescopes in human history, is configured in a typical way for a reflecting telescope: with supports for the secondary mirror shaped like the “+” sign. Its perpendicular supports ensure that there will be heavy diffraction spikes making a “+” shape coming off of every source that qualifies as point-like: the individual stars it can see.

Other sources, however, such as distant galaxies and nebulae, are what we know of as extended sources, as their light is spread out over a larger area on the sky. As a result, these spikes are non-existent, since light arrives from more than just a point, and that optical effect is effectively washed out over the large angular area that the extended object provides.
In the (Hubble) image below, for example, you can easily identify the points of light that are stars contained within our own Milky Way by their diffraction spikes, whereas the fainter, more distant, extended objects definitively do not possess them. As the fine phasing of the James Webb Space Telescope continues, we’ll continue to see the “nightmare snowflake” evolve closer and closer toward its desired shape: of simply six spikes coming off of every star, and of the more distant, extended objects looking more and more pristine.

On March 11, 2022, the James Webb Space Telescope team at NASA produced what’s been labeled as a Telescope Alignment Evaluation Image, and the results are immediately apparent as spectacular. Instead of a snowflake with all sorts of artifacts emanating from it, and instead of an extended point where the light is clearly spread out over a large area, the star itself looks crisp, collimated, and has six major diffraction spikes in the expected directions. Taken only with the NIRCam instrument, it’s already well-enough aligned to reveal background stars and galaxies, with many of the background stars displaying their own diffraction spikes as well.

This isn’t, however, “as good as it gets” for James Webb. If you look at this with a very careful eye and pay attention to details, you’ll notice many things about this star and the rays that come off of it. For example:

- Each of the six major diffraction spikes has a set of perhaps five-to-seven major streaks, rather than all being aligned into a single spike.
- Between the spikes, there are smaller rays of light that come off and prevent us from viewing objects that are too close to the luminous star.
- In the horizontal plane, where there should be no diffraction spikes, we have an extra, fainter, but still substantial set of spikes: a 7th and 8th spike, both of which should be eliminated.
- And that if you examine the other stars or galaxies revealed in the image, you can see that they’re not pristine either, but rather have distortions consistent with Webb not yet being optimally aligned and configured.

In the image below, I’ve highlighted some of the details that the team will work to improve over the coming months. Even though progress has been spectacular over the first three months of Webb’s commissioning, you should be heartened to know that there are still a few months before science operations begin, giving the team an opportunity to iron out as many of these details as possible before we start using the observatory’s capabilities to teach us as much as possible about the Universe.

**Update (3/23/2022)**: One of the fun things that’s available, historically, is the original prediction of what the James Webb Space Telescope should see when it looks at a very bright point source. (For reference, this “calibration star” seen in the above image is much brighter than anything that Webb will ever observe on purpose.) Way back in 2007, four scientists working on a technical report for the James Webb Space Telescope calculated the expected effects of Webb’s unique configuration on the point-spread function of a bright star that was observed with the telescope. As you can see, from the image below, there are four major factors that create the point-spread function of the JWST.
They are:

- the fact that the overall shape of the mirror is hexagonal, rather than circular,
- the fact that it’s not a single, solid mirror, but rather is a series of 18 tiled hexagons,
- the fact that there are small (about ~4 mm) gaps between each of the hexagonal tiles,
- and then, finally, the fact that the support struts exist to hold the secondary mirror in place.

It looks like, based on this analysis, the only way to remove the six large (plus two smaller) diffraction spikes will be with software, after-the-fact. However, you shouldn’t assume that every telescope, or even every reflecting telescope, will always be stuck with this “diffraction spike” problem. Right now, on the first sets of images we’re seeing from James Webb, there are many more spikes and features than we should see when calibration is complete. At that point, there should be only the six major spikes and nothing else; the additional features should be absolutely minimized. The only reason a star should appear larger than a single point, excepting the spikes, should be if it’s bright enough to saturate the detector itself.

Moreover, there’s already a world-class telescope under construction that should be the first of its kind to produce images without any diffraction spikes. The Giant Magellan Telescope, slated for completion perhaps toward the end of the 2020s, is going to be approximately 25 meters in diameter, making it the second-largest optical telescope in the world behind the (also under construction) 39 meter European Extremely Large Telescope. But unlike its larger counterpart, which, like Webb, will be made up of large numbers of hexagonal segments tiled together, the Giant Magellan Telescope will only be made up of seven large, circular mirrors, all installed upon the same telescope mount.

As a result of the Giant Magellan Telescope’s unique configuration, the three support struts that will hold the secondary mirrors in place will exist in the gaps between the primary mirror segments; they will not obstruct the light that reaches and reflects off of the telescope mirrors at all! Although there will be other image artifacts that arise, in particular a set of circular beads that appear along ring-like paths (Airy rings), simply observing the same object for about 15 minutes or longer will fill those beads in, creating our first cutting-edge images of stars using a reflecting telescope without any diffraction spikes at all.

The six spikes coming off of James Webb’s best configuration image to date will improve and narrow with time, and the remaining spikes and image artifacts should be improved upon in the coming months. Although Webb has often been called the successor to Hubble, it will be observing primarily in the infrared: the same sets of wavelengths previously viewed by another six-spiked space observatory: NASA’s Spitzer. Sure, Webb will always possess these spikes, but thanks to clever engineering, there will be other telescopes that won’t have them at all. We’re going to be getting not only a whole new view of the Universe, but an entirely novel experience in visualizing it. With each passing day, the potential of science and discovery with James Webb only gets more and more exciting.
- downloaded: true, meta_extracted: true, parsed: true
- description: When we started imaging the Universe with Hubble, every star had four "spikes" coming from it. Here's why Webb will have more.
- filedate: 2024-10-12 00:00:00
- date: 2022-03-23 00:00:00
- image: https://bigthink.com/wp-…?resize=1200,630
- pagetype: article
- hostname: bigthink.com
- sitename: Big Think
- tags: null, categories: null
- id: 13,483,586
- url: http://www.theverge.com/2017/1/25/14371450/indus-valley-civilization-ancient-seals-symbols-language-algorithms-ai
- title: Machine learning could finally crack the 4,000-year-old Indus script
- author: Mallory Locklear
- markdown:
In 1872 a British general named Alexander Cunningham, excavating an area in what was then British-controlled northern India, came across something peculiar. Buried in some ruins, he uncovered a small, one inch by one inch square piece of what he described as smooth, black, unpolished stone engraved with strange symbols — lines, interlocking ovals, something resembling a fish — and what looked like a bull etched underneath. The general, not recognizing the symbols and finding the bull to be unlike other Indian animals, assumed the artifact wasn’t Indian at all but some misplaced foreign token. The stone, along with similar ones found over the next few years, ended up in the British Museum.

In the 1920s many more of these artifacts, by then known as seals, were found and identified as evidence of a 4,000-year-old culture now known as the Indus Valley Civilization, the oldest known Indian civilization to date. Since then, thousands more of these tiny seals have been uncovered. Most of them feature one line of symbols at the top with a picture, usually of an animal, carved below. The animals pictured include bulls, rhinoceros, elephants, and puzzlingly, unicorns. They’ve been found in a swath of territory that covers present-day India and Pakistan and along trade routes, with seals being found as far as present-day Iraq. And the symbols, which range from geometric designs to representations of fish or jars, have also been found on signs, tablets, copper plates, tools, and pottery.

Though we now have thousands of examples of these symbols, we have very little idea what they mean. Over a century after Cunningham’s discovery, the seals remain undeciphered, their messages lost to us. Are they the letters of an ancient language? Or are they just religious, familial, or political symbols? Those hotly contested questions have sparked infighting among scholars and exacerbated cultural rivalries over who can claim the script as their heritage. But new work from researchers using sophisticated algorithms, machine learning, and even cognitive science is finally helping push us to the edge of cracking the Indus script.

Spanning from 2600 to 1900 BC, the Indus Valley Civilization was larger than the Egyptian and Mesopotamian civilizations, encompassing over 1 million square kilometers that stretched over present-day India and Pakistan. It featured sophisticated infrastructure including advanced water management and drainage systems, well-organized cities with street planning, and some of the first known toilets. The Indus people also hosted a massive trade network, traveling as far as the Persian Gulf. In fact, the first traces of the Indus people were rediscovered in the mid-19th century, when construction workers tasked with connecting two cities in modern-day Pakistan came across a massive supply of bricks among some old ruins. The workers used them to construct nearly 100 miles of railroad tracks. It would be some time before archaeologists realized those bricks came from the Indus Valley Civilization.

Archeological digs revealed precious little: oddly and rather inconsistently with other Bronze Age civilizations, there is no evidence of powerful rulers or religious icons. We haven’t found any palaces or large statues, nothing like the ziggurats of Mesopotamia or the pyramids in Egypt. And we have very little indication of warfare, save for some excavated spearheads and arrowheads. In fact, we know almost nothing.
“If you were to ask an archaeologist, they would not be able to tell you where the Indus Civilization came from with certainty, or how it ended, or what they were doing when they were around,” says epigrapher Bryan Wells. To us, the Indus Civilization is as mysterious as its symbols. The Indus symbols are part of a slowly shrinking list of undeciphered ancient scripts. Scholars are still working on a number of writing systems found all over the world, including Linear A and Cretan hieroglyphs (two scripts from ancient Greece), Proto-Elamite (writing from the oldest known Iranian civilization), a handful of Mesoamerican scripts, and the Rongorongo script of Easter Island. Some Neolithic symbols, with no known linguistic descendants, may never be deciphered. Other ancient scripts, such as Linear B, an early precursor to Greek, were eventually deciphered by charting out the signs, figuring out which marked the start of a phrase and which marked the end, how different syllables changed the meaning of a word, and how consonants and vowels were structured within a sentence. It’s not unlike what’s depicted in the alien sci-fi film *Arrival* — searching for patterns, testing out theories, and lots and lots of trial and error. There is slightly less pressure on Indus scholars than on *Arrival*’s linguist, though; people aren’t quite as worried about ancient civilizations as they are about invading aliens. In the past, much of this work was done by hand. For Linear B, painstakingly assembled phonetic charts eventually led to that language’s decipherment. Similar approaches have been tried with the Indus script as well. In the 1930s, the scholar G.R. Hunter worked out sign clusters that enabled him to figure out some of the structure embedded in the script. But Hunter failed to unlock the code. “There are several reasons why it’s been too difficult to decipher this script,” says Nisha Yadav, a researcher in the Department of Astronomy and Astrophysics at the Tata Institute of Fundamental Research in Mumbai, India. “The first one is that the texts are really short.” An average artifact only has five symbols. The longest example excavated so far has 17. Such short texts make uncovering the writing’s structure difficult. “Complicating the problem is the fact that we don’t know the underlying language,” says Rajesh Rao, director of the National Science Foundation’s Center for Sensorimotor Neural Engineering and a professor in the Computer Science and Engineering Department at the University of Washington. “We don’t even know the language family that was spoken by people in that region at that time.” And once the civilization ended, it appears that its culture and writing system did, too. “We do not have any continuing cultural tradition,” says Yadav. Archaeologists have yet to find a multilingual text like the Rosetta Stone, which was key to deciphering Egyptian hieroglyphs. While our understanding of the Indus script remains minimal, it’s certainly not for lack of trying. “It’s often called the most deciphered script because there are around 100 decipherments,” says Wells, “but of course nobody likes any of them.” Many people have claimed to have cracked the script, often asserting it’s a precursor to a later language, but none of the decodings have held up.
“I suppose the wackiest one is a tantric guru who meditated and got in touch with the great beyond, which told him what the script said,” says Wells. In order to decipher the Indus script, it’s important to ascertain what we’re looking at — whether the symbols stand for a language, or, like totem poles or coats of arms, just representations of things like family names or gods. “Given the amount of data we have, we cannot make any firm statement regarding the content of the script,” says Yadav. “I think what we’ve done is try to piece together whatever evidence we have to see if it leads us one way or the other,” says Rao. “And I think, at least from the work we’ve done, it seems like it’s more tilted towards the language hypothesis than not.” Most scholars tend to agree. In 2009, Rao published a study that examined the sequential structure of the Indus script, or how likely it is that particular symbols follow or precede other symbols. In most linguistic systems, words or symbols follow each other in a semi-predictable manner. There are certain rules dictating sentence structure, but also a fair amount of flexibility. Researchers call this semi-predictability “conditional entropy.” Rao and his colleagues calculated how likely it was that one symbol followed another in an intentional order. “What we were interested in was if we could deduce some statistical regularities or structure,” says Rao, “basically ruling out that these symbols were just juxtapositions of symbols and that there were actually some rules or patterns.” They compared the conditional entropy of the Indus script to known linguistic systems, like Vedic Sanskrit, and known nonlinguistic systems, like human DNA sequences, and found that the Indus script was much more similar to the linguistic systems. “So, it’s not proof that the symbols are encoding a language but it’s additional evidence hinting that these symbols are not just random juxtapositions of arbitrary symbols,” says Rao, “and they follow patterns that are consistent with those you would expect to find if the symbols are encoding language.” In a subsequent paper, Rao and his colleagues took all of Indus’ known symbols and looked at where they fell within the inscriptions they were found in. This statistical technique, known as a Markov model, was able to pinpoint specifics like which symbols were most likely to begin a text, which were most likely to end it, which symbols were likely to repeat, which symbols often pair together, and which symbols tend to precede or follow a particular symbol. The Markov model is also useful when it comes to incomplete inscriptions. Many artifacts are found damaged, with parts of the inscription missing or unreadable, and a Markov model can help fill in those gaps. “You can try to complete missing symbols based on the statistics of other sequences that are complete,” explains Rao. Yadav performed a similar analysis using a different type of Markov model known as an *n*-gram analysis. An example of an *n*-gram at work is the Google search bar. As you start typing a query the search bar fills in suggestions based on what you’ve typed, and as you type more words the suggestions change to fit the entered text.
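Rao’s conditional-entropy test lends itself to a short illustration. The sketch below (a minimal toy with invented sign names, not the published analysis or the real corpus) estimates how predictable the next symbol is given the current one: a completely rigid sign system scores near zero, random juxtaposition scores near the maximum, and natural languages fall somewhere in between.

```python
import math
import random
from collections import Counter

def conditional_entropy(sequences):
    """Estimate H(next symbol | current symbol), in bits, from a
    corpus of symbol sequences (each sequence is a list of tokens)."""
    pair_counts = Counter()
    context_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            context_counts[a] += 1
    total = sum(pair_counts.values())
    entropy = 0.0
    for (a, b), n in pair_counts.items():
        p_pair = n / total              # joint probability P(a, b)
        p_next = n / context_counts[a]  # conditional probability P(b | a)
        entropy -= p_pair * math.log2(p_next)
    return entropy

# A completely rigid system: every text repeats the same sign order.
rigid = [["jar", "fish", "arch", "stroke"]] * 100

# Random juxtaposition: signs carry no sequential structure at all.
random.seed(0)
signs = ["jar", "fish", "arch", "stroke"]
scrambled = [[random.choice(signs) for _ in range(4)] for _ in range(100)]

print(conditional_entropy(rigid))      # ~0 bits: fully predictable
print(conditional_entropy(scrambled))  # ~2 bits: close to random
```

A language-like corpus lands between those two extremes, and Rao and colleagues reported that the Indus sequences pattern with the linguistic systems they tested.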
Yadav and her colleagues looked at both the probability of a particular symbol given the symbol preceding it — a bigram — and the probability of a particular symbol given the two symbols preceding it — a trigram. The resulting patterns suggested the script had a syntax, supporting the idea that it’s linguistic. And like the Markov model, it was also able to fill in probable symbols when inscriptions were missing portions of their text. These two techniques also uncovered something unexpected: artifacts found in different regions depicted distinctly different symbol sequences. So seals found in what is now Iraq have symbol sequences that tend to be different from others found in India and Pakistan. “This suggests that maybe the same symbols were being used to encode the local language there,” says Rao. “It’s like they were experimenting with the script,” says Yadav. “They were using the same script to write some other language or some other content maybe.” Providing anthropological and archaeological context to the artifacts we do have would also help further our understanding of the script. Gabriel Recchia, a research associate at the Cambridge Centre for Digital Knowledge at the University of Cambridge, published a method that aimed to do just that. In previous cognitive science studies, he and his colleagues showed that you can estimate the distances between cities by how often they’re mentioned together in writing. This was true for US cities based on their co-occurrences in national newspapers, Middle Eastern and Chinese cities based on Arabic and Chinese texts, and even cities in *The Lord of the Rings*. Recchia applied that idea to the Indus script, taking symbols from artifacts whose origins were known and using them to predict where artifacts of unknown origin with similar symbols came from. Recchia explains that a version of this method that takes into account much more detailed information could be very useful. “There are significant differences between artifacts that appear in different sublocations within a site and this is what is much more frequently unknown and in many cases, could provide more useful information,” says Recchia. “Was this found in a garbage heap along with a number of other seals or was this something that was imported from elsewhere?” Meanwhile, Ronojoy Adhikari, a physics professor at The Institute of Mathematical Sciences in Chennai, India, and his research associate Satish Palaniappan are working on a program that can accurately extract symbols from a photo of an Indus artifact. “If an archaeologist goes to an Indus site and finds a new seal, it takes a lot of time for those seals to actually be mapped and added to a database if it’s done manually,” says Palaniappan. “In our case the ultimate aim is just with a photograph of a particular seal to be able to extract out the text regions automatically.” He and Adhikari are working on building an app that archaeologists can bring to a site on a mobile device that will extract new inscriptions instantly. But not everyone agrees that the script is a language. In 2004, a paper written by cultural neurobiologist and comparative historian Steve Farmer, computational theorist Richard Sproat, and philologist Michael Witzel claimed that the Indus script was not a language. The authors even went so far as to offer a $10,000 reward to anyone who finds a lengthy Indus inscription. 
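The gap-filling idea Rao describes, and the bigram counts Yadav used, reduce to ranking candidate symbols by how often they follow the surviving context in intact texts. Here is a minimal sketch, again with invented sign names rather than the real corpus:

```python
from collections import Counter, defaultdict

def train_bigrams(sequences):
    """For each symbol (or a start-of-text marker), count what follows it."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(["<START>"] + seq, seq):
            follows[a][b] += 1
    return follows

def complete(follows, context, k=3):
    """Rank the k most likely next symbols after `context`."""
    return [sym for sym, _ in follows[context].most_common(k)]

# Invented stand-ins for intact seal inscriptions.
corpus = [
    ["unicorn", "jar", "fish"],
    ["unicorn", "jar", "arch"],
    ["bull", "jar", "fish"],
    ["unicorn", "stroke", "fish"],
]
model = train_bigrams(corpus)
print(complete(model, "jar"))      # ['fish', 'arch']: candidates for a damaged seal
print(complete(model, "<START>"))  # ['unicorn', 'bull']: likely text openers
```

A trigram version conditions on the two preceding symbols instead of one, as in Yadav’s analysis, at the cost of needing more data for each context.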
“To view the Indus symbols as part of an ‘undeciphered script’ isn’t a view anyone outside the highly politicized world of India believes,” Farmer said in an email. After their position on the script was published, Sproat wrote two papers that examined the conditional entropy techniques used by Rao and colleagues as well as similar techniques used by a different group examining Pictish symbols, another ancient writing system. In them, Sproat concludes that the conditional entropy measure isn’t a useful technique. “What does it tell you? It tells you that it’s not completely rigid. It tells you that it’s not completely random. We knew that already. It’s just not informative,” says Sproat. “It doesn’t tell you anything.” “Just finding structure in a bunch of symbols certainly doesn’t mean you’ve found evidence that those symbols encode language. Even heraldic symbols or astrological signs or strings of Boy Scout medals have structure in them,” says Farmer. In response to Sproat’s papers, both Rao and colleagues and the authors of the Pictish symbols study challenged by Sproat wrote replies that addressed his concerns. Sproat, in turn, wrote a response to the response. “You would be better off getting medical advice from your garbage man than you would getting ideas about the Indus script from listening to Steve Farmer,” says Wells. “None of the three authors have a degree in archaeology, epigraphy, or anything to do with ancient writing. Their underlying subtext is, ‘We’re all so brilliant and we can’t decipher it so it can’t be writing.’ It’s ludicrous.” Wells compares fact-checking Farmer to fact-checking Donald Trump. “You have to fact-check every single thing he says because it’s mostly wrong.” And Wells’ beef with Witzel goes all the way back to his PhD dissertation on the Indus script, which Witzel tried to block, according to Wells. Later, while escorting Witzel through India, Wells would show him a PowerPoint presentation entitled “Ten reasons you don’t know what you’re talking about” while in the back of a cab. One thing Rao and Sproat do agree on is that if the Indus script turns out not to encode a language, that might end up being even more interesting. “We know a lot about ancient civilizations that had writing but we know a lot less about civilizations that lacked writing,” says Sproat. “And if this was some kind of general nonlinguistic system, in a sense, that would be much more interesting than if it was just some kind of script.” Rao also thinks there were some nuances of his work that were lost in the debate. “It was an interesting intellectual debate with them and hopefully we’ve now reached a truce,” Rao says, laughing. “Hopefully it’s not going to be a continued lifelong debate, but I think we’ve done our best so far on either side. I’m definitely an optimist and I think we will have a much better understanding of the Indus script one way or the other, linguistic or not.” Outside of this debate, decipherment progress is also threatened by modern-day politics. Within India, different factions are fighting over whose language and culture descended from the Indus Valley Civilization. There’s the Sanskrit region in the north, the Dravidian region in the south, and those speaking tribal languages in the middle.
“They’re arguing that whoever is descended from the people who wrote the Indus script are the true inheritors of India,” says Wells. “So, they’re arguing about this from a modern political point of view. I know people who have received death threats for saying it’s not Sanskrit or saying it’s not Dravidian.” And because the Indus Valley Civilization spanned across present-day India and Pakistan, modern tensions between the two countries bleed into the Indus studies. The photographic collections of the Indus artifacts are published in two separate volumes — one for the artifacts found in India and another for those found in Pakistan. Another challenge to the script’s decipherment is a classic one: money. Wells believes that until universities and funding agencies make a concerted effort to foster the study of the Indus script, little headway will be made. “It has to be a cooperative effort, it has to be funded, and it has to have a home,” says Wells. For his part in fostering a collaborative effort, Wells is hosting a second annual meeting on the Indus script to take place this March in British Columbia. And if nothing else, that $10,000 reward is on the table for as long as Farmer is alive. We don’t have a decipherment yet, but Rao believes that until we find longer samples or a multilingual text, these statistical strategies are our best bet. And Wells says progress will hinge on cooperation. “I think all of the pieces to decipher the script are there,” he says, “teamwork — interdisciplinary, multigenerational probably — the more we work on it the more progress we make.” Wells and his colleagues have made some progress and plan to present it at the meeting this March. Their findings and other work presented at the meeting should be available to the public in April, published as the *Proceedings of the Second International Meeting on Indus Epigraphy*. In the meantime, anyone working on the script is welcome to contribute to Wells’ collaborative website, which features all of the known symbols and various analytical tools. When asked about *Arrival* and whether being able to decipher scripts might one day save the world, Rao laughs. “Well,” he says, “[it] depends on the situation.”
true
true
true
After a century of failing to crack an ancient script, linguists turn to machines.
2024-10-12 00:00:00
2017-01-25 00:00:00
https://cdn.vox-cdn.com/…15_lede_v3_2.jpg
article
theverge.com
The Verge
null
null
28,722,923
https://agora-game.club
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,178,653
https://futurism.com/the-byte/machine-learning-ai-outcome-ovarian-cancer
This AI can predict the survival rate of ovarian cancer patients
Victor Tangermann
## Predicting Outcomes Figuring out the survival rate of cancer patients relies on a number of tests, and it can be difficult for clinicians to determine the prognosis. But a newly developed AI could give them a big leg up. Scientists at Imperial College London and the University of Melbourne developed a piece of machine learning software that can predict the prognosis of ovarian cancer patients — and with higher accuracy than conventional methods. Their research and the results of an initial trial were published in the journal *Nature Communications* yesterday. The researchers noted that the survival rate of epithelial ovarian cancer is approximately 35-40 percent despite the existence of a number of treatment options. Some 6,000 new cases appear in the UK every year. But developing a treatment that is personalized to the patient is critical — and the earlier the better. ## "Radiomic Prognostic Vector" The researchers developed a "radiomic prognostic vector" (RPV) — a piece of software that examines four biological characteristics of tumors in CT scans: structure, shape, size, and genetic makeup. In an initial trial that examined samples from 364 women, the RPV turned out to be four times as accurate at predicting outcomes as conventional methods. The RPV also "reliably identifies" the five percent of patients who normally have only two years to live. By identifying these patients early on, clinicians could improve their prognosis and optimize their treatment plans. ## Transforming Healthcare "Artificial intelligence has the potential to transform the way healthcare is delivered and improve patient outcomes," co-author of the study and radiologist at the Imperial College Healthcare NHS Trust Andrea Rockall said in a press release. "Our software is an example of this and we hope that it can be used as a tool to help clinicians with how to best manage and treat patients with ovarian cancer." **READ MORE:** Artificial intelligence can predict survival of ovarian cancer patients [*EurekaAlert*] **More on machine learning and cancer: Microsoft Wants to Use AI and Machine Learning to Discover a Cure for Cancer**
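As a rough illustration of the idea (not the published RPV pipeline; the features, labels, and numbers below are entirely synthetic), here is how one might fit a prognostic classifier on four per-patient tumor features:

```python
# Illustrative sketch only: the real RPV was derived from CT-based
# radiomic analysis; everything here is a synthetic stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Four tumor descriptors per patient (structure, shape, size, and
# genetic-makeup scores), for a cohort the size of the study's.
X = rng.normal(size=(364, 4))

# Hypothetical outcome (survived beyond two years: 1, otherwise 0),
# generated so that it loosely depends on the features.
risk = X @ np.array([0.8, -0.5, 1.2, 0.3]) + rng.normal(size=364)
y = (risk > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# clf.predict_proba(X_test)[:, 1] yields a per-patient risk score,
# the kind of single number a "prognostic vector" is reduced to.
```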
true
true
true
The AI could help clinicians administer the best treatment plans.
2024-10-12 00:00:00
2019-02-16 00:00:00
https://wordpress-assets…arian-cancer.jpg
article
futurism.com
Futurism
null
null
12,665,869
https://www.youtube.com/watch?v=EH6UVQZgvJE
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,644,371
http://www.telegraph.co.uk/science/2016/10/05/fitness-trackers-unlikely-to-make-you-healthier-say-scientists/
Fitness trackers unlikely to make you healthier, say scientists
Sarah Knapton
Wearable exercise trackers or pedometers do not increase activity levels enough to benefit health, scientists have concluded. A study carried out by an international team of researchers tracked 800 people for a year to see what impact the devices, in this case a Fitbit Zip, had on their fitness levels. Some were given cash rewards if they stuck with the devices, while others were told that money would be given to charity if they hit their 10,000 steps a day target. Others had no incentive. In addition to monitoring steps taken daily, the scientists measured participants' weekly levels of moderate-to-vigorous physical activity (MVPA), as well as their weight, blood pressure and cardio-respiratory fitness at the start of the study and six and 12 months later. They found that during the first six months of the study, only participants in the cash incentive group recorded increases in physical activity.
true
true
true
Wearable exercise trackers or pedometers do not increase activity levels enough to benefit health, scientists have concluded.
2024-10-12 00:00:00
2016-10-05 00:00:00
https://www.telegraph.co…licy=OG-Standard
article
telegraph.co.uk
The Telegraph
null
null
26,784,365
https://www.reuters.com/article/uk-hsbc-cryptocurrency-idUSKBN2BZ225
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,711,648
http://www.theatlantic.com/business/archive/2016/05/car-alarms-dont-work-why-so-common/482769/?single_page=true
Car Alarms Don't Work. Why Are They So Common?
Ilana E Strauss
# The Alarming Truth Car alarms don't deter criminals, and they're a public nuisance. Why are they still so common? In the animated TV show *Rick and Morty*, there is a car-security system that keeps passengers safe by inflicting psychological warfare on, and then destroying, anyone who so much as approaches the vehicle. That fictional system, as anyone who’s ever woken up to a car alarm blasting at 3 a.m. knows, is only slightly more irritating than the real ones. Car alarms, it turns out, do very little of what they’re intended to do. For one thing, they are supposed to sniff out thieves, but plenty go off when a leaf floats down onto a windshield or a gust of wind blows. If two analyses done in the 1990s still hold, 95 to 99 percent of all car-alarm triggerings are literally false alarms. “Frankly, I think they’re a waste of money,” said Dr. Peter Frise, the director of AUTO21, a Canadian government-funded research group on the auto industry. Perhaps because of that, car-security experts say, people rarely pay them any mind, rendering them even less effective. Since blaring alarms usually mean someone accidentally bumped into a vehicle, or even just happened to play loud music down the street, an alarm rarely means an actual theft is taking place. Besides, if a thief really is trying to steal a vehicle, who wants to approach a potentially dangerous criminal? “You have a car thief attacking your car. You’re going to run out, and you’re going to do… what?” asked Reg Phillips, a vehicle-security expert who works with the International Association of Auto Theft Investigators. “What is in that car that’s worth getting hurt over?” (Of course, one *could* call the police instead.) Moreover, a blaring alarm might scare off a first-time joyrider, but they’re a non-issue for most professional thieves, who can clip a few wires and silence an alarm with ease. Indeed, one 1997 analysis found that cars with alarms “show no overall reduction in theft losses.” Worse, car alarms may be affecting the health of the people around them when they go off. A report from Transportation Alternatives, a bicycle-advocacy organization, estimated that New York’s car alarms lead to about $400 to $500 million per year in “public-health costs, lost productivity, decreased property value, and diminished quality of life.” An estimate from an organization whose stated goal is “to reclaim New York City's streets from the automobile” should be taken with a grain of salt, but the point still stands that car-alarm sounds are stress-inducing and sleep-interrupting. So if car alarms don’t work, how did they become ubiquitous in the first place? While the first car alarm was developed about a century ago and more or less resembles the alarms still in use today, the technology didn’t become truly widespread until the ‘60s and ‘70s, a time when, according to Phillips, neighbors might have been more likely to come to the aid of a besieged vehicle. Over time, though, people grew to ignore them, especially in the ‘80s and ‘90s, when urban crime rates increased and even more drivers started installing them. From around that time, there is even an account of a thief using a siren to cover up the sound of a car’s windows getting shattered. Nowadays, the streets are still teeming with older cars whose loud, obnoxious anti-theft measures reflect the eras during which they were built. Today, very few cars roll out of factories equipped with car alarms. 
For example, a spokesperson for Toyota, currently the world’s largest automaker, says that alarms are not standard in their cars. (Currently, the federal government doesn’t require carmakers to take many anti-theft precautions, other than mandating that they differentiate the key combinations for different vehicles and mark various parts of their cars with vehicle identification numbers.) That said, many drivers today still elect to purchase car alarms separately from aftermarket vendors, so they are not exactly dying off. Further, one of the car-security experts I spoke to told me that most alarms are purchased aftermarket, only to email me shortly afterward saying that he actually wasn’t sure if they were being installed in factories—demonstrating a confusion about alarms’ provenance that does not bode well for ridding the world of them. Another thing keeping them around is that they’re cheap. At their cheapest they cost about $20 to $30, which is far less expensive than the more-sophisticated immobilization systems that are built into most cars today. Those can cost around $500, and make sure that the engine won’t start unless a vehicle’s key is inserted. (Digital chips are embedded into these keys, which is why many car keys are so expensive to replace.) Given all this, Frise says that companies that sell loud car alarms almost certainly know that their products are ineffective. “If nobody bought them,” he said, “they’d stop making them. It’s the old supply-and-demand thing.” For a time, the Canadian city of Winnipeg had the highest rate of auto thefts per capita in North America—higher than those of Los Angeles, New York, and Chicago. Between 2003 and 2008, vehicle thefts cost Winnipeg $40 million a year, and auto thieves, who were typically reckless, were a serious danger to residents. “Everybody in the city had either had their car stolen or knew someone who’d had their car stolen,” said Rick Linden, a University of Manitoba sociology professor. Car theft was deeply embedded in Winnipeg’s teen culture. Peers pressured each other to steal cars, usually for joyriding. Thieves were quite young, stealing their first cars, on average, at age 13. There were even reports of 10-year-olds stealing cars with screwdrivers. “Auto theft is an activity for them,” one government informant explained in a report funded by Canada’s National Crime Prevention Centre. “Somehow it got started, and now it’s just what they do. In one neighborhood, they play pickup hockey. In this neighborhood, they steal cars.” According to Linden, some families had three generations of auto thieves, and the problem was especially acute among young men from indigenous families. “They are by far the most disadvantaged residents of our city,” explained Linden. The police started addressing the auto-theft problem in 2001, but nothing they did seemed to work. In 2005, a group of sociologists, led by Linden, stepped in with a set of recommendations and advised the city as it wrote up a plan called the Winnipeg Auto Theft Suppression Strategy. The plan addressed auto theft with several different techniques. First, the city required the owners of at-risk vehicles to get immobilization systems, which insurance companies paid for. These were more expensive than standard car alarms, but much less expensive than reimbursing customers for stolen vehicles. 
Second, the plan initiated the monitoring of what it called “high-rate offenders”—probation officers were to do what they could to keep repeat offenders in check, sometimes calling them as often as every few hours. The last piece of the puzzle was youth programming. The task force worked with organizations such as Big Brothers to develop counseling and substance-abuse programs. It worked. Auto thefts in Winnipeg dropped by 84 percent over five years, according to a National Crime Prevention Centre report, and the city started saving $30 million per year. The program paid for itself; as of 2010, the government had invested $52 million in the programs and saved $90 million. While the incessant phone calls were probably pretty irritating for those teens, Linden found that high-rate offenders committed fewer crimes when they were monitored. Linden estimates this tactic was responsible for about 40 to 45 percent of the decrease in thefts. Immobilizers were estimated to be responsible for another 40 to 45 percent, and youth programs helped a little bit, but Linden and his fellow researchers concluded that the offenders needed longer-term programs to stay out of trouble. Linden was quick to point out that what worked in Winnipeg wouldn’t work everywhere. An area ridden with professional thieves needs a very different solution than one full of impoverished teens who pressure each other to go joyriding. Still, Winnipeg’s success might offer lessons to other cities trying to cut down on auto theft. Immobilizers, for instance, seem like a pretty good all-around solution, and every security expert I spoke to agreed. (They’re now more or less standard in cars made today.) Frise added that silent car alarms, which give owners the GPS locations of their vehicles without tipping off thieves, might even be more effective, since an owner can use it to find a car even after it’s driven away. Additionally, it’s telling that Winnipeg was able to address the problem by identifying its source and creating a custom solution, rather than just going at it with brute force tactics like longer sentences and loud alarms. The city found it both cheaper and more effective to treat the problem exactingly and artfully. Instead of just labeling car thieves as bad eggs, Winnipeg’s task force took steps to try and change their habits. Which brings up a larger point: The city with the highest rate of auto thefts in North America didn’t fix its problem by being “tough on crime” and enacting harsher punishments. Instead, it realized that people tend to think more about short-term gains than long-term consequences—when Winnipeg’s would-be thieves got calls reminding them they’d almost certainly get caught, they stopped stealing. Even beyond auto theft, it’s worth considering if such savvy interventions could reduce all sort of crimes and keep more people out of prison. After all, one of the best ways to cut down on crime is to persuade people not to become criminals in the first place.
true
true
true
They don't deter criminals, they're too easily triggered, and they're a public nuisance.
2024-10-12 00:00:00
2016-05-16 00:00:00
https://cdn.theatlantic.…sdv/original.gif
article
theatlantic.com
The Atlantic
null
null
6,622,394
http://www.avc.com/a_vc/2013/10/they-dont-make-any-money.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,333,656
http://www.theatlantic.com/magazine/archive/2016/04/the-obama-doctrine/471525/?single_page=true
The Obama Doctrine
Jeffrey Goldberg
# The Obama Doctrine The U.S. president talks through his hardest decisions about America’s role in the world. Friday, August 30, 2013, the day the feckless Barack Obama brought to a premature end America’s reign as the world’s sole indispensable superpower—or, alternatively, the day the sagacious Barack Obama peered into the Middle Eastern abyss and stepped back from the consuming void—began with a thundering speech given on Obama’s behalf by his secretary of state, John Kerry, in Washington, D.C. The subject of Kerry’s uncharacteristically Churchillian remarks, delivered in the Treaty Room at the State Department, was the gassing of civilians by the president of Syria, Bashar al-Assad. Obama, in whose Cabinet Kerry serves faithfully, but with some exasperation, is himself given to vaulting oratory, but not usually of the martial sort associated with Churchill. Obama believes that the Manichaeanism, and eloquently rendered bellicosity, commonly associated with Churchill were justified by Hitler’s rise, and were at times defensible in the struggle against the Soviet Union. But he also thinks rhetoric should be weaponized sparingly, if at all, in today’s more ambiguous and complicated international arena. The president believes that Churchillian rhetoric and, more to the point, Churchillian habits of thought, helped bring his predecessor, George W. Bush, to ruinous war in Iraq. Obama entered the White House bent on getting out of Iraq and Afghanistan; he was not seeking new dragons to slay. And he was particularly mindful of promising victory in conflicts he believed to be unwinnable. “If you were to say, for instance, that we’re going to rid Afghanistan of the Taliban and build a prosperous democracy instead, the president is aware that someone, seven years later, is going to hold you to that promise,” Ben Rhodes, Obama’s deputy national-security adviser, and his foreign-policy amanuensis, told me not long ago. But Kerry’s rousing remarks on that August day, which had been drafted in part by Rhodes, were threaded with righteous anger and bold promises, including the barely concealed threat of imminent attack. Kerry, like Obama himself, was horrified by the sins committed by the Syrian regime in its attempt to put down a two-year-old rebellion. In the Damascus suburb of Ghouta nine days earlier, Assad’s army had murdered more than 1,400 civilians with sarin gas. The strong sentiment inside the Obama administration was that Assad had earned dire punishment. In Situation Room meetings that followed the attack on Ghouta, only the White House chief of staff, Denis McDonough, cautioned explicitly about the perils of intervention. John Kerry argued vociferously for action. “As previous storms in history have gathered, when unspeakable crimes were within our power to stop them, we have been warned against the temptations of looking the other way,” Kerry said in his speech. “History is full of leaders who have warned against inaction, indifference, and especially against silence when it mattered most.” Kerry counted President Obama among those leaders. A year earlier, when the administration suspected that the Assad regime was contemplating the use of chemical weapons, Obama had declared: “We have been very clear to the Assad regime … that a red line for us is we start seeing a whole bunch of chemical weapons moving around or being utilized. That would change my calculus. 
That would change my equation.” Despite this threat, Obama seemed to many critics to be coldly detached from the suffering of innocent Syrians. Late in the summer of 2011, he had called for Assad’s departure. “For the sake of the Syrian people,” Obama said, “the time has come for President Assad to step aside.” But Obama initially did little to bring about Assad’s end. He resisted demands to act in part because he assumed, based on the analysis of U.S. intelligence, that Assad would fall without his help. “He thought Assad would go the way Mubarak went,” Dennis Ross, a former Middle East adviser to Obama, told me, referring to the quick departure of Egyptian President Hosni Mubarak in early 2011, a moment that represented the acme of the Arab Spring. But as Assad clung to power, Obama’s resistance to direct intervention only grew. After several months of deliberation, he authorized the CIA to train and fund Syrian rebels, but he also shared the outlook of his former defense secretary, Robert Gates, who had routinely asked in meetings, “Shouldn’t we finish up the two wars we have before we look for another?” The current U.S. ambassador to the United Nations, Samantha Power, who is the most dispositionally interventionist among Obama’s senior advisers, had argued early for arming Syria’s rebels. Power, who during this period served on the National Security Council staff, is the author of a celebrated book excoriating a succession of U.S. presidents for their failures to prevent genocide. The book, *A Problem From Hell*, published in 2002, drew Obama to Power while he was in the U.S. Senate, though the two were not an obvious ideological match. Power is a partisan of the doctrine known as “responsibility to protect,” which holds that sovereignty should not be considered inviolate when a country is slaughtering its own citizens. She lobbied him to endorse this doctrine in the speech he delivered when he accepted the Nobel Peace Prize in 2009, but he declined. Obama generally does not believe a president should place American soldiers at great risk in order to prevent humanitarian disasters, unless those disasters pose a direct security threat to the United States. Power sometimes argued with Obama in front of other National Security Council officials, to the point where he could no longer conceal his frustration. “Samantha, enough, I’ve already read your book,” he once snapped. Obama, unlike liberal interventionists, is an admirer of the foreign-policy realism of President George H. W. Bush and, in particular, of Bush’s national-security adviser, Brent Scowcroft (“I love that guy,” Obama once told me). Bush and Scowcroft removed Saddam Hussein’s army from Kuwait in 1991, and they deftly managed the disintegration of the Soviet Union; Scowcroft also, on Bush’s behalf, toasted the leaders of China shortly after the slaughter in Tiananmen Square. As Obama was writing his campaign manifesto, *The Audacity of Hope*, in 2006, Susan Rice, then an informal adviser, felt it necessary to remind him to include at least one line of praise for the foreign policy of President Bill Clinton, to partially balance the praise he showered on Bush and Scowcroft. At the outset of the Syrian uprising, in early 2011, Power argued that the rebels, drawn from the ranks of ordinary citizens, deserved America’s enthusiastic support. Others noted that the rebels were farmers and doctors and carpenters, comparing these revolutionaries to the men who won America’s war for independence. Obama flipped this plea on its head. 
“When you have a professional army,” he once told me, “that is well armed and sponsored by two large states”—Iran and Russia—“who have huge stakes in this, and they are fighting against a farmer, a carpenter, an engineer who started out as protesters and suddenly now see themselves in the midst of a civil conflict …” He paused. “The notion that we could have—in a clean way that didn’t commit U.S. military forces—changed the equation on the ground there was never true.” The message Obama telegraphed in speeches and interviews was clear: He would not end up like the second President Bush—a president who became tragically overextended in the Middle East, whose decisions filled the wards of Walter Reed with grievously wounded soldiers, who was helpless to stop the obliteration of his reputation, even when he recalibrated his policies in his second term. Obama would say privately that the first task of an American president in the post-Bush international arena was “Don’t do stupid shit.” Obama’s reticence frustrated Power and others on his national-security team who had a preference for action. Hillary Clinton, when she was Obama’s secretary of state, argued for an early and assertive response to Assad’s violence. In 2014, after she left office, Clinton told me that “the failure to help build up a credible fighting force of the people who were the originators of the protests against Assad … left a big vacuum, which the jihadists have now filled.” When *The Atlantic* published this statement, and also published Clinton’s assessment that “great nations need organizing principles, and ‘Don’t do stupid stuff’ is not an organizing principle,” Obama became “rip-shit angry,” according to one of his senior advisers. The president did not understand how “Don’t do stupid shit” could be considered a controversial slogan. Ben Rhodes recalls that “the questions we were asking in the White House were ‘Who exactly is in the stupid-shit caucus? Who is pro–stupid shit?’ ” The Iraq invasion, Obama believed, should have taught Democratic interventionists like Clinton, who had voted for its authorization, the dangers of doing stupid shit. (Clinton quickly apologized to Obama for her comments, and a Clinton spokesman announced that the two would “hug it out” on Martha’s Vineyard when they crossed paths there later.) Syria, for Obama, represented a slope potentially as slippery as Iraq. In his first term, he came to believe that only a handful of threats in the Middle East conceivably warranted direct U.S. military intervention. These included the threat posed by al-Qaeda; threats to the continued existence of Israel (“It would be a moral failing for me as president of the United States” not to defend Israel, he once told me); and, not unrelated to Israel’s security, the threat posed by a nuclear-armed Iran. The danger to the United States posed by the Assad regime did not rise to the level of these challenges. Given Obama’s reticence about intervention, the bright-red line he drew for Assad in the summer of 2012 was striking. Even his own advisers were surprised. “I didn’t know it was coming,” his secretary of defense at the time, Leon Panetta, told me. I was told that Vice President Joe Biden repeatedly warned Obama against drawing a red line on chemical weapons, fearing that it would one day have to be enforced.
Kerry, in his remarks on August 30, 2013, suggested that Assad should be punished in part because the “credibility and the future interests of the United States of America and our allies” were at stake. “It is directly related to our credibility and whether countries still believe the United States when it says something. They are watching to see if Syria can get away with it, because then maybe they too can put the world at greater risk.” Ninety minutes later, at the White House, Obama reinforced Kerry’s message in a public statement: “It’s important for us to recognize that when over 1,000 people are killed, including hundreds of innocent children, through the use of a weapon that 98 or 99 percent of humanity says should not be used even in war, and there is no action, then we’re sending a signal that that international norm doesn’t mean much. And that is a danger to our national security.” It appeared as though Obama had drawn the conclusion that damage to American credibility in one region of the world would bleed into others, and that U.S. deterrent credibility was indeed at stake in Syria. Assad, it seemed, had succeeded in pushing the president to a place he never thought he would have to go. Obama generally believes that the Washington foreign-policy establishment, which he secretly disdains, makes a fetish of “credibility”—particularly the sort of credibility purchased with force. The preservation of credibility, he says, led to Vietnam. Within the White House, Obama would argue that “dropping bombs on someone to prove that you’re willing to drop bombs on someone is just about the worst reason to use force.” American national-security credibility, as it is conventionally understood in the Pentagon, the State Department, and the cluster of think tanks headquartered within walking distance of the White House, is an intangible yet potent force—one that, when properly nurtured, keeps America’s friends feeling secure and keeps the international order stable. In White House meetings that crucial week in August, Biden, who ordinarily shared Obama’s worries about American overreach, argued passionately that “big nations don’t bluff.” America’s closest allies in Europe and across the Middle East believed Obama was threatening military action, and his own advisers did as well. At a joint press conference with Obama at the White House the previous May, David Cameron, the British prime minister, had said, “Syria’s history is being written in the blood of her people, and it is happening on our watch.” Cameron’s statement, one of his advisers told me, was meant to encourage Obama toward more-decisive action. “The prime minister was certainly under the impression that the president would enforce the red line,” the adviser told me. The Saudi ambassador in Washington at the time, Adel al-Jubeir, told friends, and his superiors in Riyadh, that the president was finally ready to strike. Obama “figured out how important this is,” Jubeir, who is now the Saudi foreign minister, told one interlocutor. “He will definitely strike.” Obama had already ordered the Pentagon to develop target lists. Five Arleigh Burke–class destroyers were in the Mediterranean, ready to fire cruise missiles at regime targets. French President François Hollande, the most enthusiastically pro-intervention among Europe’s leaders, was preparing to strike as well. All week, White House officials had publicly built the case that Assad had committed a crime against humanity. Kerry’s speech would mark the culmination of this campaign. 
But the president had grown queasy. In the days after the gassing of Ghouta, Obama would later tell me, he found himself recoiling from the idea of an attack unsanctioned by international law or by Congress. The American people seemed unenthusiastic about a Syria intervention; so too did one of the few foreign leaders Obama respects, Angela Merkel, the German chancellor. She told him that her country would not participate in a Syria campaign. And in a stunning development, on Thursday, August 29, the British Parliament denied David Cameron its blessing for an attack. John Kerry later told me that when he heard that, “internally, I went, *Oops*.” Obama was also unsettled by a surprise visit early in the week from James Clapper, his director of national intelligence, who interrupted the President’s Daily Brief, the threat report Obama receives each morning from Clapper’s analysts, to make clear that the intelligence on Syria’s use of sarin gas, while robust, was not a “slam dunk.” He chose the term carefully. Clapper, the chief of an intelligence community traumatized by its failures in the run-up to the Iraq War, was not going to overpromise, in the manner of the onetime CIA director George Tenet, who famously guaranteed George W. Bush a “slam dunk” in Iraq. While the Pentagon and the White House’s national-security apparatuses were still moving toward war (John Kerry told me he was expecting a strike the day after his speech), the president had come to believe that he was walking into a trap—one laid both by allies and by adversaries, and by conventional expectations of what an American president is supposed to do. Many of his advisers did not grasp the depth of the president’s misgivings; his Cabinet and his allies were certainly unaware of them. But his doubts were growing. Late on Friday afternoon, Obama determined that he was simply not prepared to authorize a strike. He asked McDonough, his chief of staff, to take a walk with him on the South Lawn of the White House. Obama did not choose McDonough randomly: He is the Obama aide most averse to U.S. military intervention, and someone who, in the words of one of his colleagues, “thinks in terms of traps.” Obama, ordinarily a preternaturally confident man, was looking for validation, and trying to devise ways to explain his change of heart, both to his own aides and to the public. He and McDonough stayed outside for an hour. Obama told him he was worried that Assad would place civilians as “human shields” around obvious targets. He also pointed out an underlying flaw in the proposed strike: U.S. missiles would not be fired at chemical-weapons depots, for fear of sending plumes of poison into the air. A strike would target military units that had delivered these weapons, but not the weapons themselves. Obama also shared with McDonough a long-standing resentment: He was tired of watching Washington unthinkingly drift toward war in Muslim countries. Four years earlier, the president believed, the Pentagon had “jammed” him on a troop surge for Afghanistan. Now, on Syria, he was beginning to feel jammed again. When the two men came back to the Oval Office, the president told his national-security aides that he planned to stand down. There would be no attack the next day; he wanted to refer the matter to Congress for a vote. Aides in the room were shocked. Susan Rice, now Obama’s national-security adviser, argued that the damage to America’s credibility would be serious and lasting. 
Others had difficulty fathoming how the president could reverse himself the day before a planned strike. Obama, however, was completely calm. “If you’ve been around him, you know when he’s ambivalent about something, when it’s a 51–49 decision,” Ben Rhodes told me. “But he was completely at ease.” Not long ago, I asked Obama to describe his thinking on that day. He listed the practical worries that had preoccupied him. “We had UN inspectors on the ground who were completing their work, and we could not risk taking a shot while they were there. A second major factor was the failure of Cameron to obtain the consent of his parliament.” The third, and most important, factor, he told me, was “our assessment that while we could inflict some damage on Assad, we could not, through a missile strike, eliminate the chemical weapons themselves, and what I would then face was the prospect of Assad having survived the strike and claiming he had successfully defied the United States, that the United States had acted unlawfully in the absence of a UN mandate, and that that would have potentially strengthened his hand rather than weakened it.” The fourth factor, he said, was of deeper philosophical importance. “This falls in the category of something that I had been brooding on for some time,” he said. “I had come into office with the strong belief that the scope of executive power in national-security issues is very broad, but not limitless.” Obama knew his decision not to bomb Syria would likely upset America’s allies. It did. The prime minister of France, Manuel Valls, told me that his government was already worried about the consequences of earlier inaction in Syria when word came of the stand-down. “By not intervening early, we have created a monster,” Valls told me. “We were absolutely certain that the U.S. administration would say yes. Working with the Americans, we had already seen the targets. It was a great surprise. If we had bombed as was planned, I think things would be different today.” The crown prince of Abu Dhabi, Mohammed bin Zayed al-Nahyan, who was already upset with Obama for “abandoning” Hosni Mubarak, the former president of Egypt, fumed to American visitors that the U.S. was led by an “untrustworthy” president. The king of Jordan, Abdullah II—already dismayed by what he saw as Obama’s illogical desire to distance the U.S. from its traditional Sunni Arab allies and create a new alliance with Iran, Assad’s Shia sponsor—complained privately, “I think I believe in American power more than Obama does.” The Saudis, too, were infuriated. They had never trusted Obama—he had, long before he became president, referred to them as a “so-called ally” of the U.S. “Iran is the new great power of the Middle East, and the U.S. is the old,” Jubeir, the Saudi ambassador in Washington, told his superiors in Riyadh. Obama’s decision caused tremors across Washington as well. John McCain and Lindsey Graham, the two leading Republican hawks in the Senate, had met with Obama in the White House earlier in the week and had been promised an attack. They were angered by the about-face. Damage was done even inside the administration. Neither Chuck Hagel, then the secretary of defense, nor John Kerry was in the Oval Office when the president informed his team of his thinking. Kerry would not learn about the change until later that evening. “I just got fucked over,” he told a friend shortly after talking to the president that night. 
(When I asked Kerry recently about that tumultuous night, he said, “I didn’t stop to analyze it. I figured the president had a reason to make a decision and, honestly, I understood his notion.”) The next few days were chaotic. The president asked Congress to authorize the use of force—the irrepressible Kerry served as chief lobbyist—and it quickly became apparent in the White House that Congress had little interest in a strike. When I spoke with Biden recently about the red-line decision, he made special note of this fact. “It matters to have Congress with you, in terms of your ability to sustain what you set out to do,” he said. Obama “didn’t go to Congress to get himself off the hook. He had his doubts at that point, but he knew that if he was going to do anything, he better damn well have the public with him, or it would be a very short ride.” Congress’s clear ambivalence convinced Biden that Obama was correct to fear the slippery slope. “What happens when we get a plane shot down? Do we not go in and rescue?,” Biden asked. “You need the support of the American people.” Amid the confusion, a deus ex machina appeared in the form of the Russian president, Vladimir Putin. At the G20 summit in St. Petersburg, which was held the week after the Syria reversal, Obama pulled Putin aside, he recalled to me, and told the Russian president “that if he forced Assad to get rid of the chemical weapons, that that would eliminate the need for us taking a military strike.” Within weeks, Kerry, working with his Russian counterpart, Sergey Lavrov, would engineer the removal of most of Syria’s chemical-weapons arsenal—a program whose existence Assad until then had refused to even acknowledge. The arrangement won the president praise from, of all people, Benjamin Netanyahu, the Israeli prime minister, with whom he has had a consistently contentious relationship. The removal of Syria’s chemical-weapons stockpiles represented “the one ray of light in a very dark region,” Netanyahu told me not long after the deal was announced. John Kerry today expresses no patience for those who argue, as he himself once did, that Obama should have bombed Assad-regime sites in order to buttress America’s deterrent capability. “You’d still have the weapons there, and you’d probably be fighting ISIL” for control of the weapons, he said, referring to the Islamic State, the terror group also known as ISIS. “It just doesn’t make sense. But I can’t deny to you that this notion about the red line being crossed and [Obama’s] not doing anything gained a life of its own.” Obama understands that the decision he made to step back from air strikes, and to allow the violation of a red line he himself had drawn to go unpunished, will be interrogated mercilessly by historians. But today that decision is a source of deep satisfaction for him. “I’m very proud of this moment,” he told me. “The overwhelming weight of conventional wisdom and the machinery of our national-security apparatus had gone fairly far. The perception was that my credibility was at stake, that America’s credibility was at stake. And so for me to press the pause button at that moment, I knew, would cost me politically.
And the fact that I was able to pull back from the immediate pressures and think through in my own mind what was in America’s interest, not only with respect to Syria but also with respect to our democracy, was as tough a decision as I’ve made—and I believe ultimately it was the right decision to make.” This was the moment the president believes he finally broke with what he calls, derisively, the “Washington playbook.” “Where am I controversial? When it comes to the use of military power,” he said. “That is the source of the controversy. There’s a playbook in Washington that presidents are supposed to follow. It’s a playbook that comes out of the foreign-policy establishment. And the playbook prescribes responses to different events, and these responses tend to be militarized responses. Where America is directly threatened, the playbook works. But the playbook can also be a trap that can lead to bad decisions. In the midst of an international challenge like Syria, you get judged harshly if you don’t follow the playbook, even if there are good reasons why it does not apply.” I have come to believe that, in Obama’s mind, August 30, 2013, was his liberation day, the day he defied not only the foreign-policy establishment and its cruise-missile playbook, but also the demands of America’s frustrating, high-maintenance allies in the Middle East—countries, he complains privately to friends and advisers, that seek to exploit American “muscle” for their own narrow and sectarian ends. By 2013, Obama’s resentments were well developed. He resented military leaders who believed they could fix any problem if the commander in chief would simply give them what they wanted, and he resented the foreign-policy think-tank complex. A widely held sentiment inside the White House is that many of the most prominent foreign-policy think tanks in Washington are doing the bidding of their Arab and pro-Israel funders. I’ve heard one administration official refer to Massachusetts Avenue, the home of many of these think tanks, as “Arab-occupied territory.” For some foreign-policy experts, even within his own administration, Obama’s about-face on enforcing the red line was a dispiriting moment in which he displayed irresolution and naïveté, and did lasting damage to America’s standing in the world. “Once the commander in chief draws that red line,” Leon Panetta, who served as CIA director and then as secretary of defense in Obama’s first term, told me recently, “then I think the credibility of the commander in chief and this nation is at stake if he doesn’t enforce it.” Right after Obama’s reversal, Hillary Clinton said privately, “If you say you’re going to strike, you have to strike. There’s no choice.” “Assad is effectively being rewarded for the use of chemical weapons, rather than ‘punished’ as originally planned,” Shadi Hamid, a scholar at the Brookings Institution, wrote for *The Atlantic* at the time. “He has managed to remove the threat of U.S. military action while giving very little up in return.” Even commentators who have been broadly sympathetic to Obama’s policies saw this episode as calamitous. Gideon Rose, the editor of *Foreign Affairs*, wrote recently that Obama’s handling of this crisis—“first casually announcing a major commitment, then dithering about living up to it, then frantically tossing the ball to Congress for a decision—was a case study in embarrassingly amateurish improvisation.” Obama’s defenders, however, argue that he did no damage to U.S.
credibility, citing Assad’s subsequent agreement to have his chemical weapons removed. “The threat of force was credible enough for them to give up their chemical weapons,” Tim Kaine, a Democratic senator from Virginia, told me. “We threatened military action and they responded. That’s deterrent credibility.” History may record August 30, 2013, as the day Obama prevented the U.S. from entering yet another disastrous Muslim civil war, and the day he removed the threat of a chemical attack on Israel, Turkey, or Jordan. Or it could be remembered as the day he let the Middle East slip from America’s grasp, into the hands of Russia, Iran, and ISIS. I first spoke with Obama about foreign policy when he was a U.S. senator, in 2006. At the time, I was familiar mainly with the text of a speech he had delivered four years earlier, at a Chicago antiwar rally. It was an unusual speech for an antiwar rally in that it was not antiwar; Obama, who was then an Illinois state senator, argued only against one specific and, at the time, still theoretical, war. “I suffer no illusions about Saddam Hussein,” he said. “He is a brutal man. A ruthless man … But I also know that Saddam poses no imminent and direct threat to the United States or to his neighbors.” He added, “I know that an invasion of Iraq without a clear rationale and without strong international support will only fan the flames of the Middle East, and encourage the worst, rather than best, impulses of the Arab world, and strengthen the recruitment arm of al-Qaeda.” This speech had made me curious about its author. I wanted to learn how an Illinois state senator, a part-time law professor who spent his days traveling between Chicago and Springfield, had come to a more prescient understanding of the coming quagmire than the most experienced foreign-policy thinkers of his party, including such figures as Hillary Clinton, Joe Biden, and John Kerry, not to mention, of course, most Republicans and many foreign-policy analysts and writers, including me. Since that first meeting in 2006, I’ve interviewed Obama periodically, mainly on matters related to the Middle East. But over the past few months, I’ve spent several hours talking with him about the broadest themes of his “long game” foreign policy, including the themes he is most eager to discuss—namely, the ones that have nothing to do with the Middle East. “ISIS is not an existential threat to the United States,” he told me in one of these conversations. “Climate change is a potential existential threat to the entire world if we don’t do something about it.” Obama explained that climate change worries him in particular because “it is a political problem perfectly designed to repel government intervention. It involves every single country, and it is a comparatively slow-moving emergency, so there is always something seemingly more urgent on the agenda.” At the moment, of course, the most urgent of the “seemingly more urgent” issues is Syria. But at any given moment, Obama’s entire presidency could be upended by North Korean aggression, or an assault by Russia on a member of NATO, or an ISIS-planned attack on U.S. soil. Few presidents have faced such diverse tests on the international stage as Obama has, and the challenge for him, as for all presidents, has been to distinguish the merely urgent from the truly important, and to focus on the important. My goal in our recent conversations was to see the world through Obama’s eyes, and to understand what he believes America’s role in the world should be.
This article is informed by our recent series of conversations, which took place in the Oval Office; over lunch in his dining room; aboard *Air Force One*; and in Kuala Lumpur during his most recent visit to Asia, in November. It is also informed by my previous interviews with him and by his speeches and prolific public ruminations, as well as by conversations with his top foreign-policy and national-security advisers, foreign leaders and their ambassadors in Washington, friends of the president and others who have spoken with him about his policies and decisions, and his adversaries and critics. Over the course of our conversations, I came to see Obama as a president who has grown steadily more fatalistic about the constraints on America’s ability to direct global events, even as he has, late in his presidency, accumulated a set of potentially historic foreign-policy achievements—controversial, provisional achievements, to be sure, but achievements nonetheless: the opening to Cuba, the Paris climate-change accord, the Trans-Pacific Partnership trade agreement, and, of course, the Iran nuclear deal. These he accomplished despite his growing sense that larger forces—the riptide of tribal feeling in a world that should have already shed its atavism; the resilience of small men who rule large countries in ways contrary to their own best interests; the persistence of fear as a governing human emotion—frequently conspire against the best of America’s intentions. But he also has come to learn, he told me, that very little is accomplished in international affairs without U.S. leadership. Obama talked me through this apparent contradiction. “I want a president who has the sense that you can’t fix everything,” he said. But on the other hand, “if we don’t set the agenda, it doesn’t happen.” He explained what he meant. “The fact is, there is not a summit I’ve attended since I’ve been president where we are not setting the agenda, where we are not responsible for the key results,” he said. “That’s true whether you’re talking about nuclear security, whether you’re talking about saving the world financial system, whether you’re talking about climate.” One day, over lunch in the Oval Office dining room, I asked the president how he thought his foreign policy might be understood by historians. He started by describing for me a four-box grid representing the main schools of American foreign-policy thought. One box he called isolationism, which he dismissed out of hand. “The world is ever-shrinking,” he said. “Withdrawal is untenable.” The other boxes he labeled realism, liberal interventionism, and internationalism. “I suppose you could call me a realist in believing we can’t, at any given moment, relieve all the world’s misery,” he said. “We have to choose where we can make a real impact.” He also noted that he was quite obviously an internationalist, devoted as he is to strengthening multilateral organizations and international norms. I told him my impression was that the various traumas of the past seven years have, if anything, intensified his commitment to realist-driven restraint. Had nearly two full terms in the White House soured him on interventionism? “For all of our warts, the United States has clearly been a force for good in the world,” he said. “If you compare us to previous superpowers, we act less on the basis of naked self-interest, and have been interested in establishing norms that benefit everyone. 
If it is possible to do good at a bearable cost, to save lives, we will do it.” If a crisis, or a humanitarian catastrophe, does not meet his stringent standard for what constitutes a direct national-security threat, Obama said, he doesn’t believe that he should be forced into silence. He is not so much the realist, he suggested, that he won’t pass judgment on other leaders. Though he has so far ruled out the use of direct American power to depose Assad, he was not wrong, he argued, to call on Assad to go. “Oftentimes when you get critics of our Syria policy, one of the things that they’ll point out is ‘You called for Assad to go, but you didn’t force him to go. You did not invade.’ And the notion is that if you weren’t going to overthrow the regime, you shouldn’t have said anything. That’s a weird argument to me, the notion that if we use our moral authority to say ‘This is a brutal regime, and this is not how a leader should treat his people,’ once you do that, you are obliged to invade the country and install a government you prefer.” “I am very much the internationalist,” Obama said in a later conversation. “And I am also an idealist insofar as I believe that we should be promoting values, like democracy and human rights and norms and values, because not only do they serve our interests the more people adopt values that we share—in the same way that, economically, if people adopt rule of law and property rights and so forth, that is to our advantage—but because it makes the world a better place. And I’m willing to say that in a very corny way, and in a way that probably Brent Scowcroft would not say. “Having said that,” he continued, “I also believe that the world is a tough, complicated, messy, mean place, and full of hardship and tragedy. And in order to advance both our security interests and those ideals and values that we care about, we’ve got to be hardheaded at the same time as we’re bighearted, and pick and choose our spots, and recognize that there are going to be times where the best that we can do is to shine a spotlight on something that’s terrible, but not believe that we can automatically solve it. There are going to be times where our security interests conflict with our concerns about human rights. There are going to be times where we can do something about innocent people being killed, but there are going to be times where we can’t.” If Obama ever questioned whether America really is the world’s one indispensable nation, he no longer does so. But he is the rare president who seems at times to resent indispensability, rather than embrace it. “Free riders aggravate me,” he told me. Recently, Obama warned that Great Britain would no longer be able to claim a “special relationship” with the United States if it did not commit to spending at least 2 percent of its GDP on defense. “You have to pay your fair share,” Obama told David Cameron, who subsequently met the 2 percent threshold. Part of his mission as president, Obama explained, is to spur other countries to take action for themselves, rather than wait for the U.S. to lead. The defense of the liberal international order against jihadist terror, Russian adventurism, and Chinese bullying depends in part, he believes, on the willingness of other nations to share the burden with the U.S. This is why the controversy surrounding the assertion—made by an anonymous administration official to *The New Yorker* during the Libya crisis of 2011—that his policy consisted of “leading from behind” perturbed him. 
“We don’t have to always be the ones who are up front,” he told me. “Sometimes we’re going to get what we want precisely because we are sharing in the agenda. The irony is that it was precisely in order to prevent the Europeans and the Arab states from holding our coats while we did all the fighting that we, by design, insisted” that they lead during the mission to remove Muammar Qaddafi from power in Libya. “It was part of the anti–free rider campaign.” The president also seems to believe that sharing leadership with other countries is a way to check America’s more unruly impulses. “One of the reasons I am so focused on taking action multilaterally where our direct interests are not at stake is that multilateralism regulates hubris,” he explained. He consistently invokes what he understands to be America’s past failures overseas as a means of checking American self-righteousness. “We have history,” he said. “We have history in Iran, we have history in Indonesia and Central America. So we have to be mindful of our history when we start talking about intervening, and understand the source of other people’s suspicions.” In his efforts to off-load some of America’s foreign-policy responsibilities to its allies, Obama appears to be a classic retrenchment president in the manner of Dwight D. Eisenhower and Richard Nixon. Retrenchment, in this context, is defined as “pulling back, spending less, cutting risk, and shifting burdens to allies,” Stephen Sestanovich, an expert on presidential foreign policy at the Council on Foreign Relations, explained to me. “If John McCain had been elected in 2008, you would still have seen some degree of retrenchment,” Sestanovich said. “It’s what the country wanted. If you come into office in the middle of a war that is not going well, you’re convinced that the American people have hired you to do less.” One difference between Eisenhower and Nixon, on the one hand, and Obama, on the other, Sestanovich said, is that Obama “appears to have had a personal, ideological commitment to the idea that foreign policy had consumed too much of the nation’s attention and resources.” I asked Obama about retrenchment. “Almost every great world power has succumbed” to overextension, he said. “What I think is not smart is the idea that every time there is a problem, we send in our military to impose order. We just can’t do that.” But once he decides that a particular challenge represents a direct national-security threat, he has shown a willingness to act unilaterally. This is one of the larger ironies of the Obama presidency: He has relentlessly questioned the efficacy of force, but he has also become the most successful terrorist-hunter in the history of the presidency, one who will hand to his successor a set of tools an accomplished assassin would envy. “He applies different standards to direct threats to the U.S.,” Ben Rhodes says. “For instance, despite his misgivings about Syria, he has not had a second thought about drones.” Some critics argue he should have had a few second thoughts about what they see as the overuse of drones. But John Brennan, Obama’s CIA director, told me recently that he and the president “have similar views. One of them is that sometimes you have to take a life to save even more lives. We have a similar view of just-war theory. The president requires near-certainty of no collateral damage. 
But if he believes it is necessary to act, he doesn’t hesitate.” Those who speak with Obama about jihadist thought say that he possesses a no-illusions understanding of the forces that drive apocalyptic violence among radical Muslims, but he has been careful about articulating that publicly, out of concern that he will exacerbate anti-Muslim xenophobia. He has a tragic realist’s understanding of sin, cowardice, and corruption, and a Hobbesian appreciation of how fear shapes human behavior. And yet he consistently, and with apparent sincerity, professes optimism that the world is bending toward justice. He is, in a way, a Hobbesian optimist.

The contradictions do not end there. Though he has a reputation for prudence, he has also been eager to question some of the long-standing assumptions undergirding traditional U.S. foreign-policy thinking. To a remarkable degree, he is willing to question why America’s enemies are its enemies, or why some of its friends are its friends. He overthrew half a century of bipartisan consensus in order to reestablish ties with Cuba. He questioned why the U.S. should avoid sending its forces into Pakistan to kill al-Qaeda leaders, and he privately questions why Pakistan, which he believes is a disastrously dysfunctional country, should be considered an ally of the U.S. at all. According to Leon Panetta, he has questioned why the U.S. should maintain Israel’s so-called qualitative military edge, which grants it access to more sophisticated weapons systems than America’s Arab allies receive; but he has also questioned, often harshly, the role that America’s Sunni Arab allies play in fomenting anti-American terrorism. He is clearly irritated that foreign-policy orthodoxy compels him to treat Saudi Arabia as an ally. And of course he decided early on, in the face of great criticism, that he wanted to reach out to America’s most ardent Middle Eastern foe, Iran. The nuclear deal he struck with Iran proves, if nothing else, that Obama is not risk-averse. He has bet global security and his own legacy that one of the world’s leading state sponsors of terrorism will adhere to an agreement to curtail its nuclear program. It is assumed, at least among his critics, that Obama sought the Iran deal because he has a vision of a historic American-Persian rapprochement. But his desire for the nuclear agreement was born of pessimism as much as it was of optimism. “The Iran deal was never primarily about trying to open a new era of relations between the U.S. and Iran,” Susan Rice told me. “It was far more pragmatic and minimalist. The aim was very simply to make a dangerous country substantially less dangerous. No one had any expectation that Iran would be a more benign actor.” I once mentioned to Obama a scene from *The Godfather: Part III*, in which Michael Corleone complains angrily about his failure to escape the grasp of organized crime. I told Obama that the Middle East is to his presidency what the Mob is to Corleone, and I started to quote the Al Pacino line: “Just when I thought I was out—” “It pulls you back in,” Obama said, completing the thought. The story of Obama’s encounter with the Middle East follows an arc of disenchantment. In his first extended spree of fame, as a presidential candidate in 2008, Obama often spoke with hope about the region.
In Berlin that summer, in a speech to 200,000 adoring Germans, he said, “This is the moment we must help answer the call for a new dawn in the Middle East.” The next year, as president, he gave a speech in Cairo meant to reset U.S. relations with the world’s Muslims. He spoke about Muslims in his own family, and his childhood years in Indonesia, and confessed America’s sins even as he criticized those in the Muslim world who demonized the U.S. What drew the most attention, though, was his promise to address the Israeli-Palestinian conflict, which was then thought to be the central animating concern of Arab Muslims. His sympathy for the Palestinians moved the audience, but complicated his relations with Benjamin Netanyahu, the Israeli prime minister—especially because Obama had also decided to bypass Jerusalem on his first presidential visit to the Middle East. When I asked Obama recently what he had hoped to accomplish with his Cairo reset speech, he said that he had been trying—unsuccessfully, he acknowledged—to persuade Muslims to more closely examine the roots of their unhappiness. “My argument was this: Let’s all stop pretending that the cause of the Middle East’s problems is Israel,” he told me. “We want to work to help achieve statehood and dignity for the Palestinians, but I was hoping that my speech could trigger a discussion, could create space for Muslims to address the real problems they are confronting—problems of governance, and the fact that some currents of Islam have not gone through a reformation that would help people adapt their religious doctrines to modernity. My thought was, I would communicate that the U.S. is not standing in the way of this progress, that we would help, in whatever way possible, to advance the goals of a practical, successful Arab agenda that provided a better life for ordinary people.” Through the first flush of the Arab Spring, in 2011, Obama continued to speak optimistically about the Middle East’s future, coming as close as he ever would to embracing the so-called freedom agenda of George W. Bush, which was characterized in part by the belief that democratic values could be implanted in the Middle East. He equated protesters in Tunisia and Tahrir Square with Rosa Parks and the “patriots of Boston.” “After decades of accepting the world as it is in the region, we have a chance to pursue the world as it should be,” he said in a speech at the time. “The United States supports a set of universal rights. And these rights include free speech, the freedom of peaceful assembly, the freedom of religion, equality for men and women under the rule of law, and the right to choose your own leaders … Our support for these principles is not a secondary interest.” But over the next three years, as the Arab Spring gave up its early promise, and brutality and dysfunction overwhelmed the Middle East, the president grew disillusioned. Some of his deepest disappointments concern Middle Eastern leaders themselves. Benjamin Netanyahu is in his own category: Obama has long believed that Netanyahu could bring about a two-state solution that would protect Israel’s status as a Jewish-majority democracy, but is too fearful and politically paralyzed to do so. Obama has also not had much patience for Netanyahu and other Middle Eastern leaders who question his understanding of the region. 
In one of Netanyahu’s meetings with the president, the Israeli prime minister launched into something of a lecture about the dangers of the brutal region in which he lives, and Obama felt that Netanyahu was behaving in a condescending fashion, and was also avoiding the subject at hand: peace negotiations. Finally, the president interrupted the prime minister: “Bibi, you have to understand something,” he said. “I’m the African American son of a single mother, and I live here, in this house. I live in the White House. I managed to get elected president of the United States. You think I don’t understand what you’re talking about, but I do.” Other leaders also frustrate him immensely. Early on, Obama saw Recep Tayyip Erdoğan, the president of Turkey, as the sort of moderate Muslim leader who would bridge the divide between East and West—but Obama now considers him a failure and an authoritarian, one who refuses to use his enormous army to bring stability to Syria. And on the sidelines of a NATO summit in Wales in 2014, Obama pulled aside King Abdullah II of Jordan. Obama said he had heard that Abdullah had complained to friends in the U.S. Congress about Obama’s leadership, and told the king that if he had complaints, he should raise them directly. The king denied that he had spoken ill of him. In recent days, the president has taken to joking privately, “All I need in the Middle East is a few smart autocrats.” Obama has always had a fondness for pragmatic, emotionally contained technocrats, telling aides, “If only everyone could be like the Scandinavians, this would all be easy.” The unraveling of the Arab Spring darkened the president’s view of what the U.S. could achieve in the Middle East, and made him realize how much the chaos there was distracting from other priorities. “The president recognized during the course of the Arab Spring that the Middle East was consuming us,” John Brennan, who served in Obama’s first term as his chief counterterrorism adviser, told me recently. But what sealed Obama’s fatalistic view was the failure of his administration’s intervention in Libya, in 2011. That intervention was meant to prevent the country’s then-dictator, Muammar Qaddafi, from slaughtering the people of Benghazi, as he was threatening to do. Obama did not want to join the fight; he was counseled by Joe Biden and his first-term secretary of defense Robert Gates, among others, to steer clear. But a strong faction within the national-security team—Secretary of State Hillary Clinton and Susan Rice, who was then the ambassador to the United Nations, along with Samantha Power, Ben Rhodes, and Antony Blinken, who was then Biden’s national-security adviser—lobbied hard to protect Benghazi, and prevailed. (Biden, who is acerbic about Clinton’s foreign-policy judgment, has said privately, “Hillary just wants to be Golda Meir.”) American bombs fell, the people of Benghazi were spared from what may or may not have been a massacre, and Qaddafi was captured and executed. But Obama says today of the intervention, “It didn’t work.” The U.S., he believes, planned the Libya operation carefully—and yet the country is still a disaster. Why, given what seems to be the president’s natural reluctance to get militarily entangled where American national security is not directly at stake, did he accept the recommendation of his more activist advisers to intervene? “The social order in Libya has broken down,” Obama said, explaining his thinking at the time. “You have massive protests against Qaddafi.
You’ve got tribal divisions inside of Libya. Benghazi is a focal point for the opposition regime. And Qaddafi is marching his army toward Benghazi, and he has said, ‘We will kill them like rats.’ “Now, option one would be to do nothing, and there were some in my administration who said, as tragic as the Libyan situation may be, it’s not our problem. The way I looked at it was that it would be our problem if, in fact, complete chaos and civil war broke out in Libya. But this is not so at the core of U.S. interests that it makes sense for us to unilaterally strike against the Qaddafi regime. At that point, you’ve got Europe and a number of Gulf countries who despise Qaddafi, or are concerned on a humanitarian basis, who are calling for action. But what has been a habit over the last several decades in these circumstances is people pushing us to act but then showing an unwillingness to put any skin in the game.” “Free riders?,” I interjected. “Free riders,” he said, and continued. “So what I said at that point was, we should act as part of an international coalition. But because this is not at the core of our interests, we need to get a UN mandate; we need Europeans and Gulf countries to be actively involved in the coalition; we will apply the military capabilities that are unique to us, but we expect others to carry their weight. And we worked with our defense teams to ensure that we could execute a strategy without putting boots on the ground and without a long-term military commitment in Libya. “So we actually executed this plan as well as I could have expected: We got a UN mandate, we built a coalition, it cost us $1 billion—which, when it comes to military operations, is very cheap. We averted large-scale civilian casualties, we prevented what almost surely would have been a prolonged and bloody civil conflict. And despite all that, Libya is a mess.” *Mess* is the president’s diplomatic term; privately, he calls Libya a “shit show,” in part because it’s subsequently become an ISIS haven—one that he has already targeted with air strikes. It became a shit show, Obama believes, for reasons that had less to do with American incompetence than with the passivity of America’s allies and with the obdurate power of tribalism. “When I go back and I ask myself what went wrong,” Obama said, “there’s room for criticism, because I had more faith in the Europeans, given Libya’s proximity, being invested in the follow-up.” He noted that Nicolas Sarkozy, the French president, lost his job the following year. And he said that British Prime Minister David Cameron soon stopped paying attention, becoming “distracted by a range of other things.” Of France, he said, “Sarkozy wanted to trumpet the flights he was taking in the air campaign, despite the fact that we had wiped out all the air defenses and essentially set up the entire infrastructure” for the intervention. This sort of bragging was fine, Obama said, because it allowed the U.S. to “purchase France’s involvement in a way that made it less expensive for us and less risky for us.” In other words, giving France extra credit in exchange for less risk and cost to the United States was a useful trade-off—except that “from the perspective of a lot of the folks in the foreign-policy establishment, well, that was terrible. If we’re going to do something, obviously we’ve got to be up front, and nobody else is sharing in the spotlight.” Obama also blamed internal Libyan dynamics.
“The degree of tribal division in Libya was greater than our analysts had expected. And our ability to have any kind of structure there that we could interact with and start training and start providing resources broke down very quickly.” Libya proved to him that the Middle East was best avoided. “There is no way we should commit to governing the Middle East and North Africa,” he recently told a former colleague from the Senate. “That would be a basic, fundamental mistake.” President Obama did not come into office preoccupied by the Middle East. He is the first child of the Pacific to become president—born in Hawaii, raised there and, for four years, in Indonesia—and he is fixated on turning America’s attention to Asia. For Obama, Asia represents the future. Africa and Latin America, in his view, deserve far more U.S. attention than they receive. Europe, about which he is unromantic, is a source of global stability that requires, to his occasional annoyance, American hand-holding. And the Middle East is a region to be avoided—one that, thanks to America’s energy revolution, will soon be of negligible relevance to the U.S. economy. It is not oil but another of the Middle East’s exports, terrorism, that shapes Obama’s understanding of his responsibilities there. Early in 2014, Obama’s intelligence advisers told him that ISIS was of marginal importance. According to administration officials, General Lloyd Austin, then the commander of Central Command, which oversees U.S. military operations in the Middle East, told the White House that the Islamic State was “a flash in the pan.” This analysis led Obama, in an interview with *The New Yorker*, to describe the constellation of jihadist groups in Iraq and Syria as terrorism’s “jayvee team.” (A spokesman for Austin told me, “At no time has General Austin ever considered ISIL a ‘flash in the pan’ phenomenon.”) But by late spring of 2014, after ISIS took the northern-Iraq city of Mosul, he came to believe that U.S. intelligence had failed to appreciate the severity of the threat and the inadequacies of the Iraqi army, and his view shifted. After ISIS beheaded three American civilians in Syria, it became obvious to Obama that defeating the group was of more immediate urgency to the U.S. than overthrowing Bashar al-Assad. Advisers recall that Obama would cite a pivotal moment in *The Dark Knight*, the 2008 Batman movie, to help explain not only how he understood the role of ISIS, but how he understood the larger ecosystem in which it grew. “There’s a scene in the beginning in which the gang leaders of Gotham are meeting,” the president would say. “These are men who had the city divided up. They were thugs, but there was a kind of order. Everyone had his turf. And then the Joker comes in and lights the whole city on fire. ISIL is the Joker. It has the capacity to set the whole region on fire. That’s why we have to fight it.” The rise of the Islamic State deepened Obama’s conviction that the Middle East could not be fixed—not on his watch, and not for a generation to come. On a rainy Wednesday in mid-November, President Obama appeared on a stage at the Asia-Pacific Economic Cooperation (APEC) summit in Manila with Jack Ma, the founder of the Chinese e-commerce company Alibaba, and a 31-year-old Filipina inventor named Aisa Mijeno. The ballroom was crowded with Asian CEOs, American business leaders, and government officials from across the region.
Obama, who was greeted warmly, first delivered informal remarks from behind a podium, mainly about the threat of climate change. Obama made no mention of the subject preoccupying much of the rest of the world—the ISIS attacks in Paris five days earlier, which had killed 130 people. Obama had arrived in Manila the day before from a G20 summit held in Antalya, Turkey. The Paris attacks had been a main topic of conversation in Antalya, where Obama held a particularly contentious press conference on the subject. The traveling White House press corps was unrelenting: “Isn’t it time for your strategy to change?” one reporter asked. This was followed by “Could I ask you to address your critics who say that your reluctance to enter another Middle East war, and your preference of diplomacy over using the military, makes the United States weaker and emboldens our enemies?” And then came this imperishable question, from a CNN reporter: “If you’ll forgive the language—why can’t we take out these bastards?” Which was followed by “Do you think you really understand this enemy well enough to defeat them and to protect the homeland?” As the questions unspooled, Obama became progressively more irritated. He described his ISIS strategy at length, but the only time he exhibited an emotion other than disdain was when he addressed an emerging controversy about America’s refugee policy. Republican governors and presidential candidates had suddenly taken to demanding that the United States block Syrian refugees from coming to America. Ted Cruz had proposed accepting only Christian Syrians. Chris Christie had said that all refugees, including “orphans under 5,” should be banned from entry until proper vetting procedures had been put in place. This rhetoric appeared to frustrate Obama immensely. “When I hear folks say that, well, maybe we should just admit the Christians but not the Muslims; when I hear political leaders suggesting that there would be a religious test for which person who’s fleeing from a war-torn country is admitted,” Obama told the assembled reporters, “that’s not American. That’s not who we are. We don’t have religious tests to our compassion.” *Air Force One* departed Antalya and arrived 10 hours later in Manila. That’s when the president’s advisers came to understand, in the words of one official, that “everyone back home had lost their minds.” Susan Rice, trying to comprehend the rising anxiety, searched her hotel television in vain for CNN, finding only the BBC and Fox News. She toggled between the two, looking for the mean, she told people on the trip. Later, the president would say that he had failed to fully appreciate the fear many Americans were experiencing about the possibility of a Paris-style attack in the U.S. Great distance, a frantic schedule, and the jet-lag haze that envelops a globe-spanning presidential trip were working against him. But he has never believed that terrorism poses a threat to America commensurate with the fear it generates. Even during the period in 2014 when ISIS was executing its American captives in Syria, his emotions were in check. Valerie Jarrett, Obama’s closest adviser, told him people were worried that the group would soon take its beheading campaign to the U.S. “They’re not coming here to chop our heads off,” he reassured her. Obama frequently reminds his staff that terrorism takes far fewer lives in America than handguns, car accidents, and falls in bathtubs do.
Several years ago, he expressed to me his admiration for Israelis’ “resilience” in the face of constant terrorism, and it is clear that he would like to see resilience replace panic in American society. Nevertheless, his advisers are fighting a constant rearguard action to keep Obama from placing terrorism in what he considers its “proper” perspective, out of concern that he will seem insensitive to the fears of the American people. The frustration among Obama’s advisers spills over into the Pentagon and the State Department. John Kerry, for one, seems more alarmed about ISIS than the president does. Recently, when I asked the secretary of state a general question—is the Middle East still important to the U.S.?—he answered by talking exclusively about ISIS. “This is a threat to everybody in the world,” he said, a group “overtly committed to destroying people in the West and in the Middle East. Imagine what would happen if we don’t stand and fight them, if we don’t lead a coalition—as we are doing, by the way. If we didn’t do that, you could have allies and friends of ours fall. You could have a massive migration into Europe that destroys Europe, leads to the pure destruction of Europe, ends the European project, and everyone runs for cover and you’ve got the 1930s all over again, with nationalism and fascism and other things breaking out. Of course we have an interest in this, a huge interest in this.” When I noted to Kerry that the president’s rhetoric doesn’t match his, he said, “President Obama sees all of this, but he doesn’t gin it up into this kind of—he thinks we are on track. He has escalated his efforts. But he’s not trying to create hysteria … I think the president is always inclined to try to keep things on an appropriate equilibrium. I respect that.” Obama modulates his discussion of terrorism for several reasons: He is, by nature, Spockian. And he believes that a misplaced word, or a frightened look, or an ill-considered hyperbolic claim, could tip the country into panic. The sort of panic he worries about most is the type that would manifest itself in anti-Muslim xenophobia or in a challenge to American openness and to the constitutional order. The president also gets frustrated that terrorism keeps swamping his larger agenda, particularly as it relates to rebalancing America’s global priorities. For years, the “pivot to Asia” has been a paramount priority of his. America’s economic future lies in Asia, he believes, and the challenge posed by China’s rise requires constant attention. From his earliest days in office, Obama has been focused on rebuilding the sometimes-threadbare ties between the U.S. and its Asian treaty partners, and he is perpetually on the hunt for opportunities to draw other Asian nations into the U.S. orbit. His dramatic opening to Burma was one such opportunity; Vietnam and the entire constellation of Southeast Asian countries fearful of Chinese domination presented others. In Manila, at APEC, Obama was determined to keep the conversation focused on this agenda, and not on what he viewed as the containable challenge presented by ISIS. Obama’s secretary of defense, Ashton Carter, told me not long ago that Obama has maintained his focus on Asia even as Syria and other Middle Eastern conflicts continue to flare.
Obama believes, Carter said, that Asia “is the part of the world of greatest consequence to the American future, and that no president can take his eye off of this.” He added, “He consistently asks, even in the midst of everything else that’s going on, ‘Where are we in the Asia-Pacific rebalance? Where are we in terms of resources?’ He’s been extremely consistent about that, even in times of Middle East tension.” After Obama finished his presentation on climate change, he joined Ma and Mijeno, who had seated themselves on nearby armchairs, where Obama was preparing to interview them in the manner of a daytime talk-show host—an approach that seemed to induce a momentary bout of status-inversion vertigo in an audience not accustomed to such behavior in their own leaders. Obama began by asking Ma a question about climate change. Ma, unsurprisingly, agreed with Obama that it was a very important issue. Then Obama turned to Mijeno. A laboratory operating in the hidden recesses of the West Wing could not have fashioned a person more expertly designed to appeal to Obama’s wonkish enthusiasms than Mijeno, a young engineer who, with her brother, had invented a lamp that is somehow powered by salt water. “Just to be clear, Aisa, so with some salt water, the device that you’ve set up can provide—am I right?—about eight hours of lighting?,” Obama asked. “Eight hours of lighting,” she responded. Obama: “And the lamp is $20—” Mijeno: “Around $20.” “I think Aisa is a perfect example of what we’re seeing in a lot of countries—young entrepreneurs coming up with leapfrog technologies, in the same ways that in large portions of Asia and Africa, the old landline phones never got set up,” Obama said, because those areas jumped straight to mobile phones. Obama encouraged Jack Ma to fund her work. “She’s won, by the way, a lot of prizes and gotten a lot of attention, so this is not like one of those infomercials where you order it, and you can’t make the thing work,” he said, to laughter. The next day, aboard *Air Force One* en route to Kuala Lumpur, I mentioned to Obama that he seemed genuinely happy to be onstage with Ma and Mijeno, and then I pivoted away from Asia, asking him if anything about the Middle East makes him happy. “Right now, I don’t think that anybody can be feeling good about the situation in the Middle East,” he said. “You have countries that are failing to provide prosperity and opportunity for their people. You’ve got a violent, extremist ideology, or ideologies, that are turbocharged through social media. You’ve got countries that have very few civic traditions, so that as autocratic regimes start fraying, the only organizing principles are sectarian.” He went on, “Contrast that with Southeast Asia, which still has huge problems—enormous poverty, corruption—but is filled with striving, ambitious, energetic people who are every single day scratching and clawing to build businesses and get education and find jobs and build infrastructure. The contrast is pretty stark.” In Asia, as well as in Latin America and Africa, Obama says, he sees young people yearning for self-improvement, modernity, education, and material wealth. “They are not thinking about how to kill Americans,” he says. “What they’re thinking about is *How do I get a better education? 
How do I create something of value?*” He then made an observation that I came to realize was representative of his bleakest, most visceral understanding of the Middle East today—not the sort of understanding that a White House still oriented around themes of hope and change might choose to advertise. “If we’re not talking to them,” he said, referring to young Asians and Africans and Latin Americans, “because the only thing we’re doing is figuring out how to destroy or cordon off or control the malicious, nihilistic, violent parts of humanity, then we’re missing the boat.” Obama’s critics argue that he is ineffective in cordoning off the violent nihilists of radical Islam because he doesn’t understand the threat. He does resist refracting radical Islam through the “clash of civilizations” prism popularized by the late political scientist Samuel Huntington. But this is because, he and his advisers argue, he does not want to enlarge the ranks of the enemy. “The goal is not to force a Huntington template onto this conflict,” said John Brennan, the CIA director. Both François Hollande and David Cameron have spoken about the threat of radical Islam in more Huntingtonesque terms, and I’ve heard that both men wish Obama would use more-direct language in discussing the threat. When I mentioned this to Obama he said, “Hollande and Cameron have used phrases, like *radical Islam*, that we have not used on a regular basis as our way of targeting terrorism. But I’ve never had a conversation when they said, ‘Man, how come you’re not using this phrase the way you hear Republicans say it?’ ” Obama says he has demanded that Muslim leaders do more to eliminate the threat of violent fundamentalism. “It is very clear what I mean,” he told me, “which is that there is a violent, radical, fanatical, nihilistic interpretation of Islam by a faction—a tiny faction—within the Muslim community that is our enemy, and that has to be defeated.” He then offered a critique that sounded more in line with the rhetoric of Cameron and Hollande. “There is also the need for Islam as a whole to challenge that interpretation of Islam, to isolate it, and to undergo a vigorous discussion within their community about how Islam works as part of a peaceful, modern society,” he said. But he added, “I do not persuade peaceful, tolerant Muslims to engage in that debate if I’m not sensitive to their concern that they are being tagged with a broad brush.” In private encounters with other world leaders, Obama has argued that there will be no comprehensive solution to Islamist terrorism until Islam reconciles itself to modernity and undergoes some of the reforms that have changed Christianity. Though he has argued, controversially, that the Middle East’s conflicts “date back millennia,” he also believes that the intensified Muslim fury of recent years was encouraged by countries considered friends of the U.S. In a meeting during APEC with Malcolm Turnbull, the new prime minister of Australia, Obama described how he has watched Indonesia gradually move from a relaxed, syncretistic Islam to a more fundamentalist, unforgiving interpretation; large numbers of Indonesian women, he observed, have now adopted the hijab, the Muslim head covering. Why, Turnbull asked, was this happening? Because, Obama answered, the Saudis and other Gulf Arabs have funneled money, and large numbers of imams and teachers, into the country.
In the 1990s, the Saudis heavily funded Wahhabist madrassas, seminaries that teach the fundamentalist version of Islam favored by the Saudi ruling family, Obama told Turnbull. Today, Islam in Indonesia is much more Arab in orientation than it was when he lived there, he said. “Aren’t the Saudis your friends?,” Turnbull asked. Obama smiled. “It’s complicated,” he said. Obama’s patience with Saudi Arabia has always been limited. In his first foreign-policy commentary of note, that 2002 speech at the antiwar rally in Chicago, he said, “You want a fight, President Bush? Let’s fight to make sure our so-called allies in the Middle East—the Saudis and the Egyptians—stop oppressing their own people, and suppressing dissent, and tolerating corruption and inequality.” In the White House these days, one occasionally hears Obama’s National Security Council officials pointedly reminding visitors that the large majority of 9/11 hijackers were not Iranian, but Saudi—and Obama himself rails against Saudi Arabia’s state-sanctioned misogyny, arguing in private that “a country cannot function in the modern world when it is repressing half of its population.” In meetings with foreign leaders, Obama has said, “You can gauge the success of a society by how it treats its women.” His frustration with the Saudis informs his analysis of Middle Eastern power politics. At one point I observed to him that he is less likely than previous presidents to axiomatically side with Saudi Arabia in its dispute with its archrival, Iran. He didn’t disagree. “Iran, since 1979, has been an enemy of the United States, and has engaged in state-sponsored terrorism, is a genuine threat to Israel and many of our allies, and engages in all kinds of destructive behavior,” the president said. “And my view has never been that we should throw our traditional allies”—the Saudis—“overboard in favor of Iran.” But he went on to say that the Saudis need to “share” the Middle East with their Iranian foes. “The competition between the Saudis and the Iranians—which has helped to feed proxy wars and chaos in Syria and Iraq and Yemen—requires us to say to our friends as well as to the Iranians that they need to find an effective way to share the neighborhood and institute some sort of cold peace,” he said. “An approach that said to our friends ‘You are right, Iran is the source of all problems, and we will support you in dealing with Iran’ would essentially mean that as these sectarian conflicts continue to rage and our Gulf partners, our traditional friends, do not have the ability to put out the flames on their own or decisively win on their own, and would mean that we have to start coming in and using our military power to settle scores. And that would be in the interest neither of the United States nor of the Middle East.” One of the most destructive forces in the Middle East, Obama believes, is tribalism—a force no president can neutralize. Tribalism, made manifest in the reversion to sect, creed, clan, and village by the desperate citizens of failing states, is the source of much of the Muslim Middle East’s problems, and it is another source of his fatalism. Obama has deep respect for the destructive resilience of tribalism—part of his memoir, *Dreams From My Father*, concerns the way in which tribalism in post-colonial Kenya helped ruin his father’s life—which goes some distance in explaining why he is so fastidious about avoiding entanglements in tribal conflicts. “It is literally in my DNA to be suspicious of tribalism,” he told me. 
“I understand the tribal impulse, and acknowledge the power of tribal division. I’ve been navigating tribal divisions my whole life. In the end, it’s the source of a lot of destructive acts.” While flying to Kuala Lumpur with the president, I recalled a passing reference he had once made to me about the Hobbesian argument for strong government as an antidote to the unforgiving state of nature. When Obama looks at swathes of the Middle East, Hobbes’s “war of all against all” is what he sees. “I have a recognition that us serving as the Leviathan clamps down and tames some of these impulses,” Obama had said. So I tried to reopen this conversation with an unfortunately prolix question about, among other things, “the Hobbesian notion that people organize themselves into collectives to stave off their supreme fear, which is death.” Ben Rhodes and Joshua Earnest, the White House spokesman, who were seated on a couch to the side of Obama’s desk on *Air Force One*, could barely suppress their amusement at my discursiveness. I paused and said, “I bet if I asked that in a press conference my colleagues would just throw me out of the room.” “I would be really into it,” Obama said, “but everybody else would be rolling their eyes.” Rhodes interjected: “Why can’t we get the bastards?” That question, the one put to the president by the CNN reporter at the press conference in Turkey, had become a topic of sardonic conversation during the trip. I turned to the president: “Well, yeah, and also, why can’t we get the bastards?” He took the first question. “Look, I am not of the view that human beings are inherently evil,” he said. “I believe that there’s more good than bad in humanity. And if you look at the trajectory of history, I am optimistic. “I believe that overall, humanity has become less violent, more tolerant, healthier, better fed, more empathetic, more able to manage difference. But it’s hugely uneven. And what has been clear throughout the 20th and 21st centuries is that the progress we make in social order and taming our baser impulses and steadying our fears can be reversed very quickly. Social order starts breaking down if people are under profound stress. Then the default position is tribe—us/them, a hostility toward the unfamiliar or the unknown.” He continued, “Right now, across the globe, you’re seeing places that are undergoing severe stress because of globalization, because of the collision of cultures brought about by the Internet and social media, because of scarcities—some of which will be attributable to climate change over the next several decades—because of population growth. And in those places, the Middle East being Exhibit A, the default position for a lot of folks is to organize tightly in the tribe and to push back or strike out against those who are different. “A group like ISIL is the distillation of every worst impulse along these lines. The notion that we are a small group that defines ourselves primarily by the degree to which we can kill others who are not like us, and attempting to impose a rigid orthodoxy that produces nothing, that celebrates nothing, that really is contrary to every bit of human progress—it indicates the degree to which that kind of mentality can still take root and gain adherents in the 21st century.” So your appreciation for tribalism’s power makes you want to stay away?, I asked.
“In other words, when people say ‘Why don’t you just go get the bastards?,’ you step back?” “We have to determine the best tools to roll back those kinds of attitudes,” he said. “There are going to be times where either because it’s not a direct threat to us or because we just don’t have the tools in our toolkit to have a huge impact that, tragically, we have to refrain from jumping in with both feet.” I asked Obama whether he would have sent the Marines to Rwanda in 1994 to stop the genocide as it was happening, had he been president at the time. “Given the speed with which the killing took place, and how long it takes to crank up the machinery of the U.S. government, I understand why we did not act fast enough,” he said. “Now, we should learn from that. I actually think that Rwanda is an interesting test case because it’s possible—not guaranteed, but it’s possible—that this was a situation where the quick application of force might have been enough.” He related this to Syria: “Ironically, it’s probably easier to make an argument that a relatively small force inserted quickly with international support would have resulted in averting genocide [more successfully in Rwanda] than in Syria right now, where the degree to which the various groups are armed and hardened fighters and are supported by a whole host of external actors with a lot of resources requires a much larger commitment of forces.” Obama-administration officials argue that he has a comprehensible approach to fighting terrorism: a drone air force, Special Forces raids, a clandestine CIA-aided army of 10,000 rebels battling in Syria. So why does Obama stumble when explaining to the American people that he, too, cares about terrorism? The Turkey press conference, I told him, “was a moment for you as a politician to say, ‘Yeah, I hate the bastards too, and by the way, I *am* taking out the bastards.’ ” The easy thing to do would have been to reassure Americans in visceral terms that he will kill the people who want to kill them. Does he fear a knee-jerk reaction in the direction of another Middle East invasion? Or is he just inalterably Spockian? “Every president has strengths and weaknesses,” he answered. “And there is no doubt that there are times where I have not been attentive enough to feelings and emotions and politics in communicating what we’re doing and how we’re doing it.” But for America to be successful in leading the world, he continued, “I believe that we have to avoid being simplistic. I think we have to build resilience and make sure that our political debates are grounded in reality. It’s not that I don’t appreciate the value of theater in political communications; it’s that the habits we—the media, politicians—have gotten into, and how we talk about these issues, are so detached so often from what we need to be doing that for me to satisfy the cable news hype-fest would lead to us making worse and worse decisions over time.” As *Air Force One* began its descent toward Kuala Lumpur, the president mentioned the successful U.S.-led effort to stop the Ebola epidemic in West Africa as a positive example of steady, nonhysterical management of a terrifying crisis. 
“During the couple of months in which everybody was sure Ebola was going to destroy the Earth and there was 24/7 coverage of Ebola, if I had fed the panic or in any way strayed from ‘Here are the facts, here’s what needs to be done, here’s how we’re handling it, the likelihood of you getting Ebola is very slim, and here’s what we need to do both domestically and overseas to stamp out this epidemic,’ ” then “maybe people would have said ‘Obama is taking this as seriously as he needs to be.’ ” But feeding the panic by overreacting could have shut down travel to and from three African countries that were already cripplingly poor, in ways that might have destroyed their economies—which would likely have meant, among other things, a recurrence of Ebola. He added, “It would have also meant that we might have wasted a huge amount of resources in our public-health systems that need to be devoted to flu vaccinations and other things that actually kill people” in large numbers in America. The plane landed. The president, leaning back in his office chair with his jacket off and his tie askew, did not seem to notice. Outside, on the tarmac, I could see that what appeared to be a large portion of the Malaysian Armed Forces had assembled to welcome him. As he continued talking, I began to worry that the waiting soldiers and dignitaries would get hot. “I think we’re in Malaysia,” I said. “It seems to be outside this plane.” He conceded that this was true, but seemed to be in no rush, so I pressed him about his public reaction to terrorism: If he showed more emotion, wouldn’t that calm people down rather than rile them up? “I have friends who have kids in Paris right now,” he said. “And you and I and a whole bunch of people who are writing about what happened in Paris have strolled along the same streets where people were gunned down. And it’s right to feel fearful. And it’s important for us not to ever get complacent. There’s a difference between resilience and complacency.” He went on to describe another difference—between making considered decisions and making rash, emotional ones. “What it means, actually, is that you care so much that you want to get it right and you’re not going to indulge in either impetuous or, in some cases, manufactured responses that make good sound bites but don’t produce results. The stakes are too high to play those games.” With that, Obama stood up and said, “Okay, gotta go.” He headed out of his office and down the stairs, to the red carpet and the honor guard and the cluster of Malaysian officials waiting to greet him, and then to his armored limousine, flown to Kuala Lumpur ahead of him. (Early in his first term, still unaccustomed to the massive military operation it takes to move a president from one place to another, he noted ruefully to aides, “I have the world’s largest carbon footprint.”) The president’s first stop was another event designed to highlight his turn to Asia, this one a town-hall meeting with students and entrepreneurs participating in the administration’s Young Southeast Asian Leaders Initiative. Obama entered the lecture hall at Taylor’s University to huge applause. He made some opening remarks, then charmed his audience in an extended Q&A session. But those of us watching from the press section became distracted by news coming across our phones about a new jihadist attack, this one in Mali. Obama, busily mesmerizing adoring Asian entrepreneurs, had no idea. Only when he got into his limousine with Susan Rice did he get the news. 
Later that evening, I visited the president in his suite at the Ritz-Carlton hotel in downtown Kuala Lumpur. The streets around the hotel had been sealed. Armored vehicles ringed the building; the lobby was filled with SWAT teams. I took the elevator to a floor crowded with Secret Service agents, who pointed me to a staircase; the elevator to Obama’s floor was disabled for security reasons. Up two flights, to a hallway with more agents. A moment’s wait, and then Obama opened the door. His two-story suite was outlandish: Tara-like drapes, overstuffed couches. It was enormous and lonely and claustrophobic all at once. “It’s like the Hearst Castle,” I observed. “Well, it’s a long way from the Hampton Inn in Des Moines,” Obama said. ESPN was playing in the background.

When we sat down, I pointed out to the president a central challenge of his pivot to Asia. Earlier in the day, at the moment he was trying to inspire a group of gifted and eager hijab-wearing Indonesian entrepreneurs and Burmese innovators, attention was diverted by the latest Islamist terror attack. A writer at heart, he had a suggestion: “It’s probably a pretty easy way to start the story,” he said, referring to this article. Possibly, I said, but it’s kind of a cheap trick. “It’s cheap, but it works,” Obama said. “We’re talking to these kids, and then there’s this attack going on.”

The split-screen quality of the day prompted a conversation about two recent meetings he’d held, one that generated major international controversy and headlines, and one that did not. The one that drew so much attention, I suggested, would ultimately be judged less consequential. This was the Gulf summit in May of 2015 at Camp David, meant to mollify a crowd of visiting sheikhs and princes who feared the impending Iran deal. The other meeting took place two months later, in the Oval Office, between Obama and the general secretary of the Vietnamese Communist Party, Nguyen Phu Trong. This meeting took place only because John Kerry had pushed the White House to violate protocol, since the general secretary was not a head of state. But the goals trumped decorum: Obama wanted to lobby the Vietnamese on the Trans-Pacific Partnership—his negotiators soon extracted a promise from the Vietnamese that they would legalize independent labor unions—and he wanted to deepen cooperation on strategic issues. Administration officials have repeatedly hinted to me that Vietnam may one day soon host a permanent U.S. military presence, to check the ambitions of the country it now fears most, China. The U.S. Navy’s return to Cam Ranh Bay would count as one of the more improbable developments in recent American history.

“We just moved the Vietnamese Communist Party to recognize labor rights in a way that we could never do by bullying them or scaring them,” Obama told me, calling this a key victory in his campaign to replace stick-waving with diplomatic persuasion. I noted that the 200 or so young Southeast Asians in the room earlier that day—including citizens of Communist-ruled countries—seemed to love America. “They do,” Obama said. “In Vietnam right now, America polls at 80 percent.” The resurgent popularity of America throughout Southeast Asia means that “we can do really big, important stuff—which, by the way, then has ramifications across the board,” he said, “because when Malaysia joins the anti-ISIL campaign, that helps us leverage resources and credibility in our fight against terrorism.
When we have strong relations with Indonesia, that helps us when we are going to Paris and trying to negotiate a climate treaty, where the temptation of a Russia or some of these other countries may be to skew the deal in a way that is unhelpful.”

Obama then cited America’s increased influence in Latin America—increased, he said, in part by his removal of a region-wide stumbling block when he reestablished ties with Cuba—as proof that his deliberate, nonthreatening, diplomacy-centered approach to foreign relations is working. The ALBA movement, a group of Latin American governments oriented around anti-Americanism, has significantly weakened during his time as president. “When I came into office, at the first Summit of the Americas that I attended, Hugo Chávez”—the late anti-American Venezuelan dictator—“was still the dominant figure in the conversation,” he said. “We made a very strategic decision early on, which was, rather than blow him up as this 10-foot giant adversary, to right-size the problem and say, ‘We don’t like what’s going on in Venezuela, but it’s not a threat to the United States.’ ”

Obama said that to achieve this rebalancing, the U.S. had to absorb the diatribes and insults of superannuated Castro manqués. “When I saw Chávez, I shook his hand and he handed me a Marxist critique of the U.S.–Latin America relationship,” Obama recalled. “And I had to sit there and listen to Ortega”—Daniel Ortega, the radical leftist president of Nicaragua—“make an hour-long rant against the United States. But us being there, not taking all that stuff seriously—because it really wasn’t a threat to us”—helped neutralize the region’s anti-Americanism.

The president’s unwillingness to counter the baiting by American adversaries can feel emotionally unsatisfying, I said, and I told him that every so often, I’d like to see him give Vladimir Putin the finger. It’s atavistic, I said, understanding my audience. “It is,” the president responded coolly. “This is what they’re looking for.”

He described a relationship with Putin that doesn’t quite conform to common perceptions. I had been under the impression that Obama viewed Putin as nasty, brutish, and short. But, Obama told me, Putin is not particularly nasty. “The truth is, actually, Putin, in all of our meetings, is scrupulously polite, very frank. Our meetings are very businesslike. He never keeps me waiting two hours like he does a bunch of these other folks.” Obama said that Putin believes his relationship with the U.S. is more important than Americans tend to think. “He’s constantly interested in being seen as our peer and as working with us, because he’s not completely stupid. He understands that Russia’s overall position in the world is significantly diminished. And the fact that he invades Crimea or is trying to prop up Assad doesn’t suddenly make him a player. You don’t see him in any of these meetings out here helping to shape the agenda. For that matter, there’s not a G20 meeting where the Russians set the agenda around any of the issues that are important.”

Russia’s invasion of Crimea in early 2014, and its decision to use force to buttress the rule of its client Bashar al-Assad, have been cited by Obama’s critics as proof that the post-red-line world no longer fears America. So when I talked with the president in the Oval Office in late January, I again raised this question of deterrent credibility.
“The argument is made,” I said, “that Vladimir Putin watched you in Syria and thought, *He’s too logical, he’s too rational, he’s too into retrenchment. I’m going to push him a little bit further in Ukraine*.”

Obama didn’t much like my line of inquiry. “Look, this theory is so easily disposed of that I’m always puzzled by how people make the argument. I don’t think anybody thought that George W. Bush was overly rational or cautious in his use of military force. And as I recall, because apparently nobody in this town does, Putin went into Georgia on Bush’s watch, right smack dab in the middle of us having over 100,000 troops deployed in Iraq.” Obama was referring to Putin’s 2008 invasion of Georgia, a former Soviet republic, which was undertaken for many of the same reasons Putin later invaded Ukraine—to keep an ex–Soviet republic in Russia’s sphere of influence.

“Putin acted in Ukraine in response to a client state that was about to slip out of his grasp. And he improvised in a way to hang on to his control there,” he said. “He’s done the exact same thing in Syria, at enormous cost to the well-being of his own country. And the notion that somehow Russia is in a stronger position now, in Syria or in Ukraine, than they were before they invaded Ukraine or before he had to deploy military forces to Syria is to fundamentally misunderstand the nature of power in foreign affairs or in the world generally. Real power means you can get what you want without having to exert violence. Russia was much more powerful when Ukraine looked like an independent country but was a kleptocracy that he could pull the strings on.”

Obama’s theory here is simple: Ukraine is a core Russian interest but not an American one, so Russia will always be able to maintain escalatory dominance there. “The fact is that Ukraine, which is a non-NATO country, is going to be vulnerable to military domination by Russia no matter what we do,” he said.

I asked Obama whether his position on Ukraine was realistic or fatalistic. “It’s realistic,” he said. “But this is an example of where we have to be very clear about what our core interests are and what we are willing to go to war for. And at the end of the day, there’s always going to be some ambiguity.” He then offered up a critique he had heard directed against him, in order to knock it down. “I think that the best argument you can make on the side of those who are critics of my foreign policy is that the president doesn’t exploit ambiguity enough. He doesn’t maybe react in ways that might cause people to think, *Wow, this guy might be a little crazy*.”

“The ‘crazy Nixon’ approach,” I said: Confuse and frighten your enemies by making them think you’re capable of committing irrational acts.

“But let’s examine the Nixon theory,” he said. “So we dropped more ordnance on Cambodia and Laos than on Europe in World War II, and yet, ultimately, Nixon withdrew, Kissinger went to Paris, and all we left behind was chaos, slaughter, and authoritarian governments that finally, over time, have emerged from that hell. When I go to visit those countries, I’m going to be trying to figure out how we can, today, help them remove bombs that are still blowing off the legs of little kids. In what way did that strategy promote our interests?”

But what if Putin were threatening to move against, say, Moldova—another vulnerable post-Soviet state? Wouldn’t it be helpful for Putin to believe that Obama might get angry and irrational about that?
### Video: Jeffrey Goldberg speaks with James Bennet about “The Obama Doctrine.” “There is no evidence in modern American foreign policy that that’s how people respond. People respond based on what their imperatives are, and if it’s really important to somebody, and it’s not that important to us, they know that, and we know that,” he said. “There are ways to deter, but it requires you to be very clear ahead of time about what is worth going to war for and what is not. Now, if there is somebody in this town that would claim that we would consider going to war with Russia over Crimea and eastern Ukraine, they should speak up and be very clear about it. The idea that talking tough or engaging in some military action that is tangential to that particular area is somehow going to influence the decision making of Russia or China is contrary to all the evidence we have seen over the last 50 years.” Obama went on to say that the belief in the possibilities of projected toughness is rooted in “mythologies” about Ronald Reagan’s foreign policy. “If you think about, let’s say, the Iran hostage crisis, there is a narrative that has been promoted today by some of the Republican candidates that the day Reagan was elected, because he looked tough, the Iranians decided, ‘We better turn over these hostages,’ ” he said. “In fact what had happened was that there was a long negotiation with the Iranians and because they so disliked Carter—even though the negotiations had been completed—they held those hostages until the day Reagan got elected. Reagan’s posture, his rhetoric, etc., had nothing to do with their release. When you think of the military actions that Reagan took, you have Grenada—which is hard to argue helped our ability to shape world events, although it was good politics for him back home. You have the Iran-Contra affair, in which we supported right-wing paramilitaries and did nothing to enhance our image in Central America, and it wasn’t successful at all.” He reminded me that Reagan’s great foe, Daniel Ortega, is today the unrepentant president of Nicaragua. Obama also cited Reagan’s decision to almost immediately pull U.S. forces from Lebanon after 241 servicemen were killed in a Hezbollah attack in 1983. “Apparently all these things really helped us gain credibility with the Russians and the Chinese,” because “that’s the narrative that is told,” he said sarcastically. “Now, I actually think that Ronald Reagan had a great success in foreign policy, which was to recognize the opportunity that Gorbachev presented and to engage in extensive diplomacy—which was roundly criticized by some of the same people who now use Ronald Reagan to promote the notion that we should go around bombing people.” In a conversation at the end of January, I asked the president to describe for me the threats he worries about most as he prepares, in the coming months, to hand off power to his successor. “As I survey the next 20 years, climate change worries me profoundly because of the effects that it has on all the other problems that we face,” he said. “If you start seeing more severe drought; more significant famine; more displacement from the Indian subcontinent and coastal regions in Africa and Asia; the continuing problems of scarcity, refugees, poverty, disease—this makes every other problem we’ve got worse. 
That’s above and beyond just the existential issues of a planet that starts getting into a bad feedback loop.” Terrorism, he said, is also a long-term problem “when combined with the problem of failed states.” What country does he consider the greatest challenge to America in the coming decades? “In terms of traditional great-state relations, I do believe that the relationship between the United States and China is going to be the most critical,” he said. “If we get that right and China continues on a peaceful rise, then we have a partner that is growing in capability and sharing with us the burdens and responsibilities of maintaining an international order. If China fails; if it is not able to maintain a trajectory that satisfies its population and has to resort to nationalism as an organizing principle; if it feels so overwhelmed that it never takes on the responsibilities of a country its size in maintaining the international order; if it views the world only in terms of regional spheres of influence—then not only do we see the potential for conflict with China, but we will find ourselves having more difficulty dealing with these other challenges that are going to come.” Many people, I noted, want the president to be more forceful in confronting China, especially in the South China Sea. Hillary Clinton, for one, has been heard to say in private settings, “I don’t want my grandchildren to live in a world dominated by the Chinese.” “I’ve been very explicit in saying that we have more to fear from a weakened, threatened China than a successful, rising China,” Obama said. “I think we have to be firm where China’s actions are undermining international interests, and if you look at how we’ve operated in the South China Sea, we have been able to mobilize most of Asia to isolate China in ways that have surprised China, frankly, and have very much served our interest in strengthening our alliances.” A weak, flailing Russia constitutes a threat as well, though not quite a top-tier threat. “Unlike China, they have demographic problems, economic structural problems, that would require not only vision but a generation to overcome,” Obama said. “The path that Putin is taking is not going to help them overcome those challenges. But in that environment, the temptation to project military force to show greatness is strong, and that’s what Putin’s inclination is. So I don’t underestimate the dangers there.” Obama returned to a point he had made repeatedly to me, one that he hopes the country, and the next president, absorbs: “You know, the notion that diplomacy and technocrats and bureaucrats somehow are helping to keep America safe and secure, most people think, *Eh, that’s nonsense*. But it’s true. And by the way, it’s the element of American power that the rest of the world appreciates unambiguously. When we deploy troops, there’s always a sense on the part of other countries that, even where necessary, sovereignty is being violated.” Over the past year, John Kerry has visited the White House regularly to ask Obama to violate Syria’s sovereignty. On several occasions, Kerry has asked Obama to launch missiles at specific regime targets, under cover of night, to “send a message” to the regime. The goal, Kerry has said, is not to overthrow Assad but to encourage him, and Iran and Russia, to negotiate peace. When the Assad alliance has had the upper hand on the battlefield, as it has these past several months, it has shown no inclination to take seriously Kerry’s entreaties to negotiate in good faith. 
A few cruise missiles, Kerry has argued, might concentrate the attention of Assad and his backers. “Kerry’s looking like a chump with the Russians, because he has no leverage,” a senior administration official told me. The U.S. wouldn’t have to claim credit for the attacks, Kerry has told Obama—but Assad would surely know the missiles’ return address. Obama has steadfastly resisted Kerry’s requests, and seems to have grown impatient with his lobbying. Recently, when Kerry handed Obama a written outline of new steps to bring more pressure to bear on Assad, Obama said, “Oh, another proposal?” Administration officials have told me that Vice President Biden, too, has become frustrated with Kerry’s demands for action. He has said privately to the secretary of state, “John, remember Vietnam? Remember how that started?” At a National Security Council meeting held at the Pentagon in December, Obama announced that no one except the secretary of defense should bring him proposals for military action. Pentagon officials understood Obama’s announcement to be a brushback pitch directed at Kerry. One day in January, in Kerry’s office at the State Department, I expressed the obvious: He has more of a bias toward action than the president does. “I do, probably,” Kerry acknowledged. “Look, the final say on these things is in his hands … I’d say that I think we’ve had a very symbiotic, synergistic, whatever you call it, relationship, which works very effectively. Because I’ll come in with the bias toward ‘Let’s try to do this, let’s try to do that, let’s get this done.’ ” Obama’s caution on Syria has vexed those in the administration who have seen opportunities, at different moments over the past four years, to tilt the battlefield against Assad. Some thought that Putin’s decision to fight on behalf of Assad would prompt Obama to intensify American efforts to help anti-regime rebels. But Obama, at least as of this writing, would not be moved, in part because he believed that it was not his business to stop Russia from making what he thought was a terrible mistake. “They are overextended. They’re bleeding,” he told me. “And their economy has contracted for three years in a row, drastically.” In recent National Security Council meetings, Obama’s strategy was occasionally referred to as the “Tom Sawyer approach.” Obama’s view was that if Putin wanted to expend his regime’s resources by painting the fence in Syria, the U.S. should let him. By late winter, though, when it appeared that Russia was making advances in its campaign to solidify Assad’s rule, the White House began discussing ways to deepen support for the rebels, though the president’s ambivalence about more-extensive engagement remained. In conversations I had with National Security Council officials over the past couple of months, I sensed a foreboding that an event—another San Bernardino–style attack, for instance—would compel the United States to take new and direct action in Syria. For Obama, this would be a nightmare. If there had been no Iraq, no Afghanistan, and no Libya, Obama told me, he might be more apt to take risks in Syria. “A president does not make decisions in a vacuum. He does not have a blank slate. 
Any president who was thoughtful, I believe, would recognize that after over a decade of war, with obligations that are still to this day requiring great amounts of resources and attention in Afghanistan, with the experience of Iraq, with the strains that it’s placed on our military—any thoughtful president would hesitate about making a renewed commitment in the exact same region of the world with some of the exact same dynamics and the same probability of an unsatisfactory outcome.” Are you too cautious?, I asked. “No,” he said. “Do I think that had we not invaded Iraq and were we not still involved in sending billions of dollars and a number of military trainers and advisers into Afghanistan, would I potentially have thought about taking on some additional risk to help try to shape the Syria situation? I don’t know.” What has struck me is that, even as his secretary of state warns about a dire, Syria-fueled European apocalypse, Obama has not recategorized the country’s civil war as a top-tier security threat. Obama’s hesitation to join the battle for Syria is held out as proof by his critics that he is too naive; his decision in 2013 not to fire missiles is proof, they argue, that he is a bluffer. This critique frustrates the president. “Nobody remembers bin Laden anymore,” he says. “Nobody talks about me ordering 30,000 more troops into Afghanistan.” The red-line crisis, he said, “is the point of the inverted pyramid upon which all other theories rest.” One afternoon in late January, as I was leaving the Oval Office, I mentioned to Obama a moment from an interview in 2012 when he told me that he would not allow Iran to gain possession of a nuclear weapon. “You said, ‘I’m the president of the United States, I don’t bluff.’ ” He said, “I don’t.” Shortly after that interview four years ago, Ehud Barak, who was then the defense minister of Israel, asked me whether I thought Obama’s no-bluff promise was itself a bluff. I answered that I found it difficult to imagine that the leader of the United States would bluff about something so consequential. But Barak’s question had stayed with me. So as I stood in the doorway with the president, I asked: “Was it a bluff?” I told him that few people now believe he actually would have attacked Iran to keep it from getting a nuclear weapon. “That’s interesting,” he said, noncommittally. I started to talk: “Do you—” He interrupted. “I actually would have,” he said, meaning that he would have struck Iran’s nuclear facilities. “If I saw them break out.” He added, “Now, the argument that can’t be resolved, because it’s entirely situational, was what constitutes them getting” the bomb. “This was the argument I was having with Bibi Netanyahu.” Netanyahu wanted Obama to prevent Iran from being capable of building a bomb, not merely from possessing a bomb. “You were right to believe it,” the president said. And then he made his key point. “This was in the category of an American interest.” I was reminded then of something Derek Chollet, a former National Security Council official, told me: “Obama is a gambler, not a bluffer.” The president has placed some huge bets. Last May, as he was trying to move the Iran nuclear deal through Congress, I told him that the agreement was making me nervous. His response was telling. “Look, 20 years from now, I’m still going to be around, God willing. If Iran has a nuclear weapon, it’s my name on this,” he said. 
“I think it’s fair to say that in addition to our profound national-security interests, I have a personal interest in locking this down.”

In the matter of the Syrian regime and its Iranian and Russian sponsors, Obama has bet, and seems prepared to continue betting, that the price of direct U.S. action would be higher than the price of inaction. And he is sanguine enough to live with the perilous ambiguities of his decisions. Though in his Nobel Peace Prize speech in 2009, Obama said, “Inaction tears at our conscience and can lead to more costly intervention later,” today the opinions of humanitarian interventionists do not seem to move him, at least not publicly. He undoubtedly knows that a next-generation Samantha Power will write critically of his unwillingness to do more to prevent the continuing slaughter in Syria. (For that matter, Samantha Power will also be the subject of criticism from the next Samantha Power.) As he comes to the end of his presidency, Obama believes he has done his country a large favor by keeping it out of the maelstrom—and he believes, I suspect, that historians will one day judge him wise for having done so.

Inside the West Wing, officials say that Obama, as a president who inherited a financial crisis and two active wars from his predecessor, is keen to leave “a clean barn” to whoever succeeds him. This is why the fight against ISIS, a group he considers to be a direct, though not existential, threat to the U.S., is his most urgent priority for the remainder of his presidency; killing the so-called caliph of the Islamic State, Abu Bakr al-Baghdadi, is one of the top goals of the American national-security apparatus in Obama’s last year. Of course, ISIS was midwifed into existence, in part, by the Assad regime. Yet by Obama’s stringent standards, Assad’s continued rule for the moment still doesn’t rise to the level of direct challenge to America’s national security. This is what is so controversial about the president’s approach, and what will be controversial for years to come—the standard he has used to define what, exactly, constitutes a direct threat.

Obama has come to a number of dovetailing conclusions about the world, and about America’s role in it. The first is that the Middle East is no longer terribly important to American interests. The second is that even if the Middle East were surpassingly important, there would still be little an American president could do to make it a better place. The third is that the innate American desire to fix the sorts of problems that manifest themselves most drastically in the Middle East inevitably leads to warfare, to the deaths of U.S. soldiers, and to the eventual hemorrhaging of U.S. credibility and power. The fourth is that the world cannot afford to see the diminishment of U.S. power. Just as the leaders of several American allies have found Obama’s leadership inadequate to the tasks before him, he himself has found world leadership wanting: global partners who often lack the vision and the will to spend political capital in pursuit of broad, progressive goals, and adversaries who are not, in his mind, as rational as he is. Obama believes that history has sides, and that America’s adversaries—and some of its putative allies—have situated themselves on the wrong one, a place where tribalism, fundamentalism, sectarianism, and militarism still flourish. What they don’t understand is that history is bending in his direction.
“The central argument is that by keeping America from immersing itself in the crises of the Middle East, the foreign-policy establishment believes that the president is precipitating our decline,” Ben Rhodes told me. “But the president himself takes the opposite view, which is that overextension in the Middle East will ultimately harm our economy, harm our ability to look for other opportunities and to deal with other challenges, and, most important, endanger the lives of American service members for reasons that are not in the direct American national-security interest.” If you are a supporter of the president, his strategy makes eminent sense: Double down in those parts of the world where success is plausible, and limit America’s exposure to the rest. His critics believe, however, that problems like those presented by the Middle East don’t solve themselves—that, without American intervention, they metastasize. At the moment, Syria, where history appears to be bending toward greater chaos, poses the most direct challenge to the president’s worldview. George W. Bush was also a gambler, not a bluffer. He will be remembered harshly for the things he did in the Middle East. Barack Obama is gambling that he will be judged well for the things he didn’t do.
true
true
true
The U.S. president talks through his hardest decisions about America’s role in the world.
2024-10-12 00:00:00
2016-03-10 00:00:00
https://cdn.theatlantic.…NER/original.jpg
article
theatlantic.com
The Atlantic
null
null
9,339,637
http://blog.freecharge.in/2015/04/08/freecharge-snapdeal-exciting-times-ahead/
Freecharge Blog - Latest Financial Blogs on most Trending Topics
null
Latest Articles
- Money management: Buying or renting - what is better
- Personal loans: Personal loan - guide to know everything about personal loan
- Money management: Saving money is not rocket science, but smart art
- Personal loans: Use savings or take a loan - what is better | Freecharge Blog
- Money management: How to analyze & check your financial strength
true
true
true
Freecharge Blog - Find articles/ Information on Pay Later, UPI Payments, Recharge Mobile & Pay Bills And Latest Financial & Trending Topics
2024-10-12 00:00:00
2022-07-01 00:00:00
https://pwa-cdn.freechar…-og-option-2.png
null
freecharge.in
Freecharge
null
null
11,493,559
http://wayback.archive.org/web/19990125084553/http://alpha.google.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,083,342
https://dochive.com.au
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,344,156
http://www.allinmobile.co/android-screen-sizes-resolutions-why-screen-size-doesnt-matter/
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
14,927,209
https://levels.io/facebook-city/
Facebook and Google are building their own cities: the inevitable future of private tech worker towns
Levelsio
We’re entering a future of tech workers living luxury lives in private cities isolated from the rest of the world, with corporations reaching the levels of power until now only held by cities and nation states. Whether we like that or not, economics states it’s inevitable. (TL;DR). This month, Facebook announced they’re expanding their campus in Menlo Park. In corporate lingo they call it “investing in Menlo Park and our community”. But if you read between the lines, it’s a lot more than that. They’re planning to build 1,500 houses, with restaurants, a retail shopping area, a giant grocery store, parks, roads and a police station. Of course, Google followed suit with their own expansion in San Jose. You can call that a campus as much as you want; to me that sounds an awful lot like a city. “Don’t call it a city though”, Google and Facebook would say. It’s like Uber not wanting to be called a taxi and being regulated. Being called a city means the same. Regulation. Don’t. Call. It. A. City. But it’s a city.

## Trends

This is the start of a trend that’s in turn part of a bigger trend that started about 12 years ago with two things: tech worker campuses and coworking spaces. Famously, Google was the first to introduce playful offices optimized to keep workers happy:

### The “serviced” Google office (2005)

They introduced free services for their workers to keep them in the office longer, like free breakfast, lunch, and dinner, free laundry service and free childcare facilities. Suddenly, working for Google became a lot better than working for a regular company. Not just in pay, but in working standards.

### Coworking (2005)

At the same time we saw regular companies change too. After the Financial Crisis of 2008 and the economic downturn that took years, massive layoffs happened. More importantly, a cultural shift from employment towards freelancing happened. A lot of people were fired and re-hired as contractors. It was promoted as “flexible work” but the dark side of it was that flexible work has no labor benefits whatsoever. Now outside the office, these freelancers needed places to work: cue coworking spaces:

### Nomads (2014)

Then around 2014, we had the explosion of remote work and digital nomads as a big trend. For nomads, coworking spaces offered a place to work but also community away from home:

### Coliving (2016)

I said “community”. These remote workers needed places to sleep, so why not sleep near where you work: cue coliving spaces: In 2016, WeWork, a big coworking provider, introduced WeLive: coliving spaces to live with your “coworkers”. Just like at Google and Facebook, work and private life were now mixing in housing facilities. Meanwhile, with rents in the San Francisco Bay Area rising to unaffordable levels, we saw Google and Facebook start subsidizing rent for their employees. But obviously this was unsustainable.

### Private tech worker towns (2021)

It was inevitable. To get better work output, you improve the lives of your workers, well, if you can afford to. And Facebook obviously could by creating its own city. The first part of development will be finished in Menlo Park around 2021. With Facebook being the biggest employer and investor in Menlo Park in absolute dollar amounts, it’s impossible for Menlo Park to say “no” to these plans. Facebook would simply move to another place, and take its dollars with it.
San Jose didn’t even try to stop it and already said overwhelmingly “yes” to Google’s plans: “Google’s vision of an integrated development in San Jose aligns with the aspirations of the City, transit agencies, surrounding neighborhoods, and downtown businesses for extraordinary architecture, urban design, environmental sustainability, retail amenities, transit ridership and vibrant public spaces,” San Jose Mayor Sam Liccardo, the city’s vice mayor and three other council members wrote in a letter to the San Jose Mercury News daily.

## Coliving cities (2020?)

Now what about the freelance tech workers outside Google and Facebook working in coworking spaces and living in coliving spaces? Real estate investors and the hospitality industry are already building coworking features into their new projects:

From my inbox: Hotels are opening up in-house coworking spaces (in this case a big luxury chain). Coworking is now a feature, not a product! pic.twitter.com/UokGBLcjrn — Pieter Levels @ 🇺🇸 (@levelsio) July 20, 2017

And many coworking space owners that I’ve talked to have plans to build coliving towns starting with tens of units, and if successful, quickly expand to hundreds, while attracting local business owners to open up shops, cafes and restaurants in the towns. Dojo, a coworking space, is rumored to start building the first co-living village in Bali soon. You can predict this with basic economics, and I’ve discussed it before. Coworking and coliving margins are so tight, the only way to make real money is with vertical integration. That means providing the entire chain of services and products for your user, from breakfast to coworking to coliving to shopping to leisure activities; take 10% from each part in the chain and you have a big business. We’ll see this happen.

## History repeats

We’ve seen this before. Actually, a century ago. Philips (and many other industrial giants back then) created its own towns for employees near its factories because it couldn’t get enough (affordable) housing for them. Ironically, the remnants of that town are now used as a space for tech workers. There are countless more examples: In 1893, the chocolate giant Cadbury built a 313-unit village for its factory workers called Bournville. It was built to: “alleviate the evils of modern, more cramped living conditions”. Now doesn’t that sound like living conditions in San Francisco and other big cities in 2017? That’s eerily familiar. History repeats if you wait 120 years.

## Here’s where it gets scary

The modern tech worker flies from major city hub to major city hub and works for big tech companies (or as a freelancer). And it means the culture of the 1st-tier cities (NYC, Shanghai) is converging. Meanwhile, the 2nd-tier cities (Dayton, Northampton, Munich) will slowly die out unless they’re near a top-tier city. The top talent, work (and pay!) are in the big cities. This is the new reality of urbanism, whether you like it or not. This means that major cities like New York, Los Angeles, London and Shanghai now essentially have more power in many parts of governing than governments. They’re so big they’ve essentially become small countries of their own. Amsterdam now offers its own expat visa for tech workers to live and work there legally and get a discount on tax. Note: I said Amsterdam, I didn’t say Netherlands. We’re talking a city here. My point exactly. With corporations starting small towns for their 100,000s of employees (Amazon has 300,000 employees), these may turn into million-people cities soon. How?
Well, one tech worker probably has a partner at some point, maybe a few kids. That’s an average household of 4 people per tech worker. 300,000 Amazon employees * 4 people per household = 1,200,000 people in a city. Obviously, this is hypothetical. But we might not be far off. Let’s continue, hypothetically, with corporations running million-people cities. And we know big cities are in many ways more powerful than the government. And we know big tech corporations themselves are very powerful: Facebook has 2 billion users; that’s 600 million more people than China. Apple holds $256,000,000,000 in cash, or about twice as much cash as the US government. Then combined that might mean big tech corporations will, in the future, have the power of nation states. And will operate like that. Google already has its own top-level domain: .google. I’m making some big shortcuts here and it’s hypothetical. But if it’s true, I’m not sure we want private entities to run our world.

### There’s more to be scared of

Let’s get back to the tech worker cities. Facebook’s plan includes statements like: We hope to contribute significantly to the housing supply by building 1,500 units of housing on the campus, 15% of which will be offered at below market rates. This added on-site housing should also mitigate traffic impacts from growth. These efforts complement our ongoing work to address the issue, including the Catalyst Housing Fund for affordable housing we established in partnership with community groups to fund affordable housing for our local area. The fund was initiated last year with an initial investment of $18.5 million that we hope will grow.

Obviously, that’s a good step. But it looks more like saving face: you’ll have 85% rich tech workers and 15% “non-tech people that can’t afford it”. The reality is that these cities will radically create a division between the rich tech workers and “the rest”. The reality is that the majority of the world will be left out. Life quality will rise inside the tech worker cities (whether Google/Facebook-run, or for freelancers), but it will be stagnant or decreasing outside them. That’s quite dystopian.

### Why dystopian?

Because put simply: a government’s objective is to increase the life quality of its citizens (at least in theory). A company’s objective is to increase its shareholders’ returns (aka profits). As we know, profits don’t necessarily align with the life quality of (all) people. As much as we can criticize, that won’t change things. These are inevitable macro-economic forces: fueled by low interest rates that pushed housing prices to unimaginable levels worldwide, failing governments that didn’t build enough housing (and change zoning laws) for all the people moving to the cities, and the automation of work by, well, tech workers, leaving most people with stagnant wages. Nobody will be able to stop any of this. It’s just economics. Nobody, except governments. But if tech companies are more powerful than governments, then, well, what? Exactly. Basic income? Yes, basic income might help. But whether or not you get $2,000/month, the division will happen anyway.

### The future

The future is a tech elite living in their own private tech worker cities. Whether we like it or not. P.S. I’m on Twitter too if you’d like to follow more of my stories. And I wrote a book called MAKE about building startups without funding. See a list of my stories or contact me.
true
true
true
We’re entering a future of tech workers living luxury lives in private cities isolated from the rest of the world, with corporations reaching the levels of power until now only held by cities and nation states. Whether we like that or not, economics states it’s inevitable. (TL;DR). This
2024-10-12 00:00:00
2017-07-24 00:00:00
https://levels.io/conten…017-22-30-02.gif
article
levels.io
Levelsio (Pieter Levels)
null
null
5,538,107
http://tirania.org/blog/archive/2013/Apr-12.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
27,838,596
https://www.youtube.com/watch?v=SRUrB7ruh-8
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,242,111
http://ideas.4brad.com/perils-long-range-electric-car
Perils of the long range electric car
null
# Perils of the long range electric car

You've probably seen the battle going on between Elon Musk of Tesla and the New York Times over the strongly negative review the NYT made of a long road trip in a Model S. The reviewer ran out of charge and had a very rough trip with lots of range anxiety. The data logs published by Tesla show he made a number of mistakes, didn't follow some instructions on speed and heat and could have pulled off the road trip if he had done it right.

Both sides are right, though. Tesla has made it possible to do the road trip in the Model S, but they haven't made it easy. It's possible to screw it up, and instructions to go slow and keep the heater low are not ones people want to take. 40-minute supercharges are still pretty long; they are not good for the battery, and it's hard to believe that they scale, since they take so long. While Better Place's battery swap provides a tolerable 5-minute swap, it also presents scaling issues -- you don't want to show up at a station that does 5-minute swaps and be 6th in line.

The Tesla Model S is an amazing car, hugely fun to drive and zippy, cool on the inside and high tech. Driving around a large metro area can be done without range anxiety, which is great. I would love to have one -- I just love $85K more. But a long road trip, particularly on a cold day? There are better choices. (And in the Robocar world when you can get cars delivered, you will get the right car for your trip delivered.)

Electric cars have a number of worthwhile advantages, and as battery technologies improve they will come into their own. But let's consider the economics of a long range electric. The Tesla Model S comes in 3 levels, and there is a $20,000 difference between the 40kwh 160-mile version and the 85kwh 300-mile version. It's a $35K difference if you want the performance package.

The unspoken secret of electric cars is that while you can get the electricity for the Model S for just 3 cents/mile at national grid average prices (compared to 12 cents/mile for gasoline in a 30mpg car and 7 cents/mile in a 50mpg hybrid) this is not the full story. You also pay, as you can see, a lot for the battery. There are conflicting reports on how long a battery pack will last you (and that in turn varies with how you use and abuse it). If we take the battery lifetime at 150,000 miles -- which is more than most give it -- you can see that the extra 45kwh add-on in the Tesla for $20K is costing about 13 cents/mile. The whole battery pack in the 85kwh Tesla, at $42K estimated, is costing a whopping 28 cents/mile for depreciation.

Here's a yikes. At a 5% interest rate, you're paying $2,100 a year in interest on the $42,000 Tesla S 85kwh battery pack. If you go the national average 12,000 miles/year that's 17.5 cents/mile *just for interest on the battery*. Not counting vehicle or battery life. Add interest, depreciation and electricity and it's just under 40 cents/mile (that total works out if the interest is figured on the average balance over the pack's life, about half the first-year figure) -- similar to a 10mpg Hummer H2. (I bet most Tesla Model S owners do more than that average 12K miles/year, which improves this.)

In other words, the cost of the battery dwarfs the cost of the electricity, and sadly it also dwarfs the cost of gasoline in most cars. **With an electric car, you are effectively paying most of your fuel costs up front**. You may also be adding home charging station costs. This helps us learn how much cheaper we must make the battery.

It's a bit easier in the Nissan LEAF, whose 24kwh battery pack is estimated to cost about $15,000.
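Before running the LEAF numbers, here is a minimal sketch of the per-mile cost model used above (battery depreciation plus financing interest plus electricity), so both sets of figures can be checked the same way. The pack prices, 150,000-mile battery life, 5% rate, 12,000 miles/year and 3 cents/mile electricity are the figures quoted in the text; printing the interest both on the full pack price and on the average declining balance is an assumption about how the "just under 40 cents/mile" total is reached.

```python
# Sketch of the per-mile battery economics described above.
# Dollar figures and rates come from the article's text; the two
# interest conventions bracket the totals it quotes.

def cost_per_mile(pack_cost, battery_life_miles=150_000, rate=0.05,
                  miles_per_year=12_000, electricity=0.03):
    depreciation = pack_cost / battery_life_miles
    interest_first_year = rate * pack_cost / miles_per_year   # full balance
    interest_avg_balance = interest_first_year / 2            # declining balance
    return depreciation, interest_avg_balance, interest_first_year, electricity

for name, pack in [("Tesla 85kwh", 42_000), ("Nissan LEAF 24kwh", 15_000)]:
    dep, i_lo, i_hi, elec = cost_per_mile(pack)
    print(f"{name}: depreciation {100 * dep:.1f} c/mi, "
          f"interest {100 * i_lo:.1f}-{100 * i_hi:.1f} c/mi, "
          f"total {100 * (dep + i_lo + elec):.1f}-{100 * (dep + i_hi + elec):.1f} c/mi")

# Tesla 85kwh: 28 c/mi depreciation; interest ~9-17.5 c/mi; the totals
# bracket the article's "just under 40 cents/mile".
# LEAF 24kwh: 10 c/mi depreciation; 13 c/mi with electricity and no
# interest, rising toward ~19 c/mi with full first-year interest,
# matching the figures in the next paragraph.
```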
Here, if it lasts 150K miles, we have 10 cents/mile plus the electricity, for a total cost of 13 cents/mile, which competes with gasoline cars, though adding interest it's 19 cents/mile -- which does not compete. As a plus, the electric car is simpler and should need less maintenance. (Of course with as much as $10,000 in tax credits, that battery pack can be a reasonable purchase, at taxpayer expense.) A typical gasoline car spends about 5 cents/mile on non-tire maintenance.

This math changes a lot with the actual battery life; some people estimate that battery lives will be worse than 150K miles, and others estimate better. The larger your battery pack and the less often you fully use it, the longer it lasts. The average car doesn't last a lot more than 150k miles, at least outside of California.

The problem with range anxiety becomes clearer. The 85kwh Tesla lets you do your daily driving around your city with no range anxiety. That's great. But to get it you buy a huge battery pack whose extra range you use only rarely. Most trips can actually be handled by the 70-mile-range Leaf, though with some anxiety. You only need all that extra battery for those occasional longer trips. You spend a lot of extra money just to use the range from time to time.

This is part of the justification of the plug-in hybrid, though the Volt and Prius Plug-in are not selling super well. With a PHEV, you really make full use of that battery. Almost every trip exploits all its range, then the gasoline engine kicks in on those medium and longer trips. The cost, however, also turns out to be high -- you have to build a full electric car and include a capable (but smaller) gasoline engine. Most PHEVs have very short electric range -- like just 10 miles -- and they also pay a lot (a whole gasoline engine system) to do those medium and longer trips.

To top it all off, when you use batteries, you carry around all their weight whether they are empty or full. If you want that extra long range, you pay for it with a lot of weight you carry all the time, not just on those few trips which are longer. We now see some of gasoline's huge advantages. It has a really great energy density, can be refueled quickly, and as you burn it up its weight goes away too -- though of course this is also one of gasoline's big problems; the weight goes into the atmosphere as emissions.

This discussion is on the cost of the system to the owner (or taxpayer). Electric cars can be greener (if the local grid is not coal-based) and particularly good if they are small and light. The Tesla, Leaf and others are not particularly small and light so far. McKinsey forecasts L-Ion batteries will be $200/kwh by 2020, making a 40kwh pack cost just $8,000. Put into a small, light car, that would produce a cost-effective car with no metro-area range anxiety when the decade ends.

## Can a trailer be the answer?

This makes me more enamoured with the concept of a temporary "range extending trailer." I discussed versions of the trailer concept in an earlier post. There are a few variations of such a plan. Some have the trailer just contain extra batteries and a thick power connection -- here you pay to rent the extra range only when you need it. This is mainly of appeal to those who want to be pure electric. Other options include a modest generator in the trailer. I think the physically simplest plan is a "pusher trailer" that just has a conventional motor that gives you a push from behind.
You can even recharge your batteries this way by engaging the car's regeneration mode, with some cost in efficiency over a direct link, but you don't need to do this if you use the trailer well. Such pushing is unstable and needs computer assist on the steering. It would only be done on long stretches of flat highway, and be halted in the event of steering or braking. Alternatively, the trailer could be attached at two points on the rear, able to bounce up and down but not articulated laterally. It could also not be a trailer but something designed to bolt into a special mount (if it's under a few hundred pounds). Such designs are more likely to be proprietary -- the advantage of trailers is it's easier to develop a standard that many vehicles can use, and allow innovation in the trailer design.

Trailers are not speculation. In addition to the company cited above, EV pioneer Alan Cocconi's firm AC Propulsion actually built some for their small EVs.

Unlike Tesla's fancy rapid charge stations or Better Place's even fancier battery swap stations, trailers could be located anywhere there is some spare parking space. Gas stations. Rental depots. They could be self-service. They could be robotized with independent steering, allowing you to steer naturally and back up more easily. They could automate a lot of the docking. Instead of pulling over to one of Tesla's rare rapid-charge stations and waiting 30-40 minutes (if there is not a line) your car could automatically plot a short detour to a rental trailer station and you could get hooked up in a couple of minutes, perhaps faster than a gasoline fill-up.

Rather than limping up to a trailer with your battery low, your car's computer would plan the trip for maximum efficiency, planning to use electric power for all urban stop-and-go activity and only use the trailer's engine in its most efficient cycle on the highway, with electric assist where needed. This highly efficient use requires more advance planning than a gasoline trip, but you could also be spontaneous at the cost of burning a bit more fossil fuel or getting a more powerful trailer on those trips.

You don't want to be fetching (and returning) such a trailer every day. You would size your electric battery perhaps to need the trailer just a few times a month. However, if there were trailer stations along your route to the highway, or in easy-on/easy-off highway stations, you might find it quite acceptable to use one more often, especially if it can attach itself to your special receiver.

The trailer, to be cheap, would be quite simple -- a stock car engine, drivetrain and wheels, with perhaps even no transmission. Its fanciest features would be computer control, fancy steering and perhaps a liftable 3rd wheel for docking operations and assist when reversing. It could be quite compact, and with a smaller car fit into parking spaces by moving up the tow bar -- which is just a metal bar, no electronics. And like the original proposer suggests, some models could come with extra trunk space.

You could make the trailer just extra batteries. This has a number of advantages and disadvantages:

- It's a 100% electric solution
- You would need a high-power electrical connection between vehicles. But none of the instability of pushing.
- The only motors needed in the trailer would be steering motors (for easy reverse) and possibly a lower-power motor for docking operations.
- It's more expensive and possibly worth stealing. There's $15K of batteries in it.
- You need charging at all the rental locations (though possibly just L1) and each unit has to go out of service for recharge.
- You still probably only get another 100 miles range out of the unit, but you can stop and swap easily.

You could also own a trailer and hook it up at home when you knew you were taking a longer trip. Owning the battery trailer is not that useful an idea; it only saves you some weight: just get the bigger-battery car.

## Comments

François Mon, 2013-02-18 22:14 Permalink
## The problem here is that you
The problem here is that you are looking at price only. The true cost to society is different. I don't have the answer here - it is higher for both gasoline-powered and electric vehicles - but just using the price as a proxy for the true value is the thinking that is getting us into the mess we are in and it will surely not get us out. For more on value vs price, read Raj Patel's "The Value of Nothing".

brad Mon, 2013-02-18 22:22 Permalink
## Oh I did mention that
In noting where the weight of the gasoline goes. But I was meaning to expand on that -- I have written more extensively in other areas. Read the rest of the blog and the sites and you'll see a lot of writing about how the answer is to get people into small, light cars meant for 1-2 people. (Not, as it turns out, into non-rush-hour transit -- that's less energy efficient.)

Lunatic Esex Tue, 2013-02-19 03:14 Permalink
## After-the-fact engineering
Ladies and gentlemen, allow me to introduce the latest in transportation systems of the future... The U.S. Space Shuttle orbiter! The similarities are uncanny: This is not the (long range) vehicle customers are looking for. I love electric cars. Before I was old enough to drive I wanted my first car to be an electric one. Years later I walked into a GM EV1 dealership and walked out with a pamphlet, sure that I would soon be able to finally "go electric." (I came across that pamphlet in a box a few years ago and still have it, packed back away in that box of memorabilia, to be rediscovered again later.) If GM hadn't crushed that dream for a decade and a half along with the majority of EV-1s in existence we MIGHT now have the infrastructure in place to make electric cars viable as an "only car" solution for people at least in suburban as well as urban areas. The reality though is that getting electric cars to go long distances right now requires a bunch of awkward compromises, much like the development of the space shuttle required. Luckily electric vehicles have the opportunity to get gradually better and better range while still fulfilling real-world purposes, whereas a launch vehicle that can't make it all the way to space is only good for doing experiments.

joel w upchurch Tue, 2013-02-19 03:26 Permalink
## electric car technology just isn't practical for long trips yet.
The trouble is that electric cars are just toys for rich people with the current technology, just like automobiles themselves at the start of the 20th century. They make more sense as a family second car, which is only used for commuting. When the Lithium-Air battery is developed, then electric cars will be much more practical. Currently natural gas is a much more practical power source for cars than straight electrical systems.

James Tue, 2013-02-19 06:28 Permalink
## Not practical for long trips yet
I agree with you that electric cars have a long way to go before they are practical for long range trips. I consider my family typical. My wife and I have three children.
Each of us drives about 15-20 miles round trip to work, with 5 or 6 extra trips during the week for various extracurricular activities and shopping. A practical electric car that can seat 5 (to include car seats) would serve us very well, until you factor in those long trips, every 2-3 weeks, to visit family and friends, 120-200 miles one way. (Let's set aside the annual vacation trip of 300-500+ miles.) There is just no electric or hybrid car that can get us to grandma's. Even the 4-seaters have very limited cargo capacity. This is where I feel that the trailer concept begins to show some merit. I might just consider a smaller car, if I know that I can quickly attach a 100-mile range extender battery and perhaps also some cargo capacity. But I also want to own it. I feel that rental fees, every few weeks, would just bleed me dry. Plus there is the hassle of pick up and return, on their schedule.

John Tue, 2013-02-19 05:12 Permalink
## Correction for the battery cost calculation
Tesla offers a battery pack replacement for the 85kWh battery for $12K if purchased up front and installed after 8 years.

John Tue, 2013-02-19 05:16 Permalink
## Follow up on price
And one more thing when comparing battery cost to gasoline. One is going to go up in the future, and one is going to go down. If they're even remotely close to break even today, which do you think is the better long term deal/investment?

Frank Ch. Eigler Tue, 2013-02-19 06:19 Permalink
## investment?
"which do you think is the better long term deal/investment?" Considering the limited residual value of exhausted batteries, buying now could be closer to a sunk cost than an investment.

brad Tue, 2013-02-19 11:21 Permalink
## That doesn't make sense
Batteries are indeed likely to drop in price in the future -- which will make electric cars much more cost-effective, and make longer range cost effective. But that proves the opposite of what you're saying. If batteries are going to drop in the future, why would you want to spend a bucketload of cash on them now? It means they will depreciate fast, even if they don't wear out. It's hard to get reliable figures on just how fast they will wear out. After 150,000 miles (a common car lifetime outside of California, here it's about 190K) how much money is that battery pack going to be worth? The fact that there are newer battery technologies will eat into that value of course. But will those old batteries, no longer able to hold as much charge as they used to, be of any value at all other than the metals? How much will the metals be worth?

John Wed, 2013-02-20 06:38 Permalink
## Sorry, my previous post was
Sorry, my previous post was not clear. From the perspective of 'better deal' I meant buying an electric car overall, not investing in buying the battery pack now instead of waiting. I have a Tesla, and I'm not buying the battery pack for 3 reasons: 1) I believe the cost will be lower in 10 years, 2) I believe my 85kWh battery might still have enough capacity to meet my needs then (only time will tell here) 3) I believe that the batteries will hold some resale value for other uses before recycling, so I'm likely to get some money back there.

brad Wed, 2013-02-20 10:44 Permalink
## Lifetime
One thing I've been hoping to see more collected research on is not the usual battery stats like energy density and cost per kwh, but lifetime watt-hours under different duty cycles. Lifetime wh is tricky because some batteries just decline slowly at the end of life, and some have more of a cliff.
When a battery has a cliff, it's clearly gone and not for re-use. For slow decline, you see your car's range dropping just a little bit every day -- when is it time to do the expensive pack replacement? brad Tue, 2013-02-19 12:06 Permalink ## What is the price It is generally accepted that the price of the pack is nowhere close to $12K. This is more a warranty; they clearly don't expect most people to do it. In addition, $12K today is $20K in 10 years' time, and Tesla may be making a bet that battery packs will cost half as much in 10 years' time. Some risk, but not an unreasonable supposition. All other evidence suggests the 85kwh pack costs in the range of $30K to $40K, including warranty and other factors. In fact, that's lower than the commonly cited price of $500 to $600 per kwh today. However, a recent McKinsey study forecast $200/kwh by 2020. If that turns out to be true, Tesla can replace that pack in 8 years for under $17,000, so getting $12K from you today is a money-maker for them even if everybody does it. Miss Ann Thrope Tue, 2013-02-19 08:14 Permalink ## "As a plus, the electric car is simpler ..." "...and should need less maintenance." Brad, thanks for a fascinating article, especially the battery economics and trailer discussion. However, as an article that purports to cover the "economics of a long range electric", it seems to gloss over the lower maintenance costs -- no oil changes, no tuneups, no plugs, etc., etc., not to mention far fewer moving parts to break (and personal time devoted to all that). I'd like to see a lifecycle cost for equivalent vehicles -- one electric and the other gas... and, ideally, also a hybrid. brad Tue, 2013-02-19 11:30 Permalink ## Maintenance I added the figures on that. Gasoline cars average about 5 cents/mile on non-tire maintenance and early-life repairs. So this doesn't move the needle much, even if the electric cars can cut that in half. Perhaps more interesting will be later-life repairs. Will they be better than the gas car, and by how much? With many cars, it is the rising cost of repair that ends the life of the car. One day you find that fixing the radiator is going to cost as much as the book value of the car, and you junk it. With electric cars it may be the degradation of battery performance. If keeping an electric car running is an easier proposition, that will change the equation. Perhaps you will be able to put in a newer, cheaper battery? On the other hand, will people support a 15-year-old car and make parts for it? Will all the other technologies in cars (including robocars) make the car obsolete even before it wears out? Jay Tue, 2013-02-26 06:44 Permalink ## EV long-term repair costs? I have my doubts about long-term EV repair costs, not including the battery. ICE autos cost an arm and a leg if anything electronic goes bad, so how much do you think GM will charge for the 100kw power controller in the Volt if/when it goes bad? And it will go bad, or some other multi-thousand-dollar part will. Think about all the electronics in your life: how many are still working after 15 years? How many are repairable at any point in their lifecycle? (PS. I love driving my Volt. The EV powertrain is a real joy to drive, but what makes it feasible is the tax credit and use of the HOV lanes on my daily commute.)
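The lifecycle numbers traded in this thread are easy to combine into the comparison Miss Ann Thrope asks for. A back-of-envelope sketch in Python, using figures quoted above (a roughly $450/kwh pack today, McKinsey's $200/kwh forecast, 5 cents/mile of gas-car maintenance, a 150,000-mile life); the zero residual value and the assumption that an EV halves maintenance are illustrative guesses, not claims from the article:

```python
# Back-of-envelope battery cost per mile, using figures quoted in this thread.
# Assumptions (not from the article): the pack is amortized over the car's
# whole life with zero residual value, and the EV cuts the gas car's
# maintenance bill in half. Treat the output as illustrative only.

PACK_KWH = 85
LIFE_MILES = 150_000          # common car lifetime cited above
GAS_MAINT_PER_MILE = 0.05     # non-tire maintenance, gas car

for label, per_kwh in [("today, ~$450/kwh", 450), ("McKinsey 2020, $200/kwh", 200)]:
    pack_cost = PACK_KWH * per_kwh
    battery_per_mile = pack_cost / LIFE_MILES
    maint_saving = GAS_MAINT_PER_MILE / 2     # assumed EV maintenance advantage
    net = battery_per_mile - maint_saving
    print(f"{label}: pack ${pack_cost:,.0f} -> {100 * battery_per_mile:.1f} c/mile "
          f"battery, {100 * net:.1f} c/mile net of maintenance savings")
```

At today's prices the pack alone runs about 25 cents/mile, dwarfing the 5 cents/mile maintenance line; at $200/kwh it falls to about 11 cents/mile, which is why the thread keeps circling back to the forecast price.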
brad Tue, 2013-02-26 11:21 Permalink ## It's a good question Sadly, car vendors don't really optimize for total lifetime cost of a vehicle, in part because car buyers don't pay as much attention to it as they should -- in fact it should be the main number they pay attention to. What this means is that they don't design vehicles to be easy to maintain -- in fact, making service expensive is something the dealerships like. They also don't always design them to be cheap to insure -- they will put in expensive headlights and sensors in bumpers which make fender benders cost a lot of money. And thus, there is reason to suspect that the electric car makers have not put in as much care as they should into studying how to make the maintenance optimal. In particular, if you are an electric (or hybrid) buyer, you are a little more aware of TCO, because you are paying up front to have lower fuel costs down the road. Many web sites -- KBB, Consumer Reports, Edmunds, etc. -- have 5-year total cost calculators. I would love for them to extend this, letting you input your annual miles, and allow you to go beyond 5 years. With gas cars, a lot of the maintenance/repair cost is not parts, but labour. I would hope that on the electrics, the layouts are simpler and thus labour is lower. Steve Sat, 2013-03-02 19:32 Permalink ## Petrol vs Electric Good article, Brad; you have given a good picture of the overall costs of all-electric vehicles. Sadly it looks like they are still a long way off pushing aside the internal combustion engine. (The company "Better Place" has just pulled out of the Australian market due to the low take-up rate.) To me the most effective way to reduce the polluting effects of cars is still to keep increasing taxes on the fuels and let the market sort it out. Not a very popular idea, but at least it works. joel w upchurch Sun, 2013-03-03 14:01 Permalink ## Natural Gas Powered Vehicles Natural gas is a technology that is more environmentally sound and cheaper than gasoline right now. The biggest problem is that there is a limited number of places where you can fill up your tank with natural gas in most parts of the country. If your house is plumbed for gas, then you can buy the equipment to fill up at home. http://nextbigfuture.com/2013/03/natural-gas-at-230-gallon-versus-4-per.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2Fadvancednano+%28nextbigfuture%29 Electricity is the fuel of the future, but we can use natural gas right now. brad Mon, 2013-03-04 10:26 Permalink ## CNG power CNG is indeed the cheapest fuel available, and it's actually even cheaper than $2.30/gallon. In fact the price at the wellhead, uncompressed, is just 43 cents per gallon equivalent, but you must pay to ship it to your car, and for the equipment and energy to compress it. A large fleet pays about 75 cents/gge for the gas, and ends up about $1.20/gge with all the other costs. Retail it's around $2/gge. You can buy it at your house for about $1.20/gge, and the compressed cost depends on how much use you get out of the $5,000 home compressor. If you have a 30mpg car, you average 400 gge a year, so that's a tough slog to compete with the retail (also add the electrical cost of compression). Home compression is also an overnight thing, but retail fill-up is fast. But the other kicker on CNG is the tank. Today's tank is large and eats up a lot of your trunk. The CNG Honda Civic, the only CNG car on the market, costs $5K more than the other Civic and has a very small trunk.
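A quick payback check on those home-compression numbers (a sketch; the 12,000 miles/year figure and the decision to ignore compression electricity are assumptions, not from the comment):

```python
# Payback period for a $5,000 home CNG compressor, using the prices above.
# Assumes 12,000 miles/year in a 30 mpg-equivalent car (400 gge/year) and
# ignores the electricity used for compression, which only lengthens payback.

COMPRESSOR_COST = 5_000        # home compression station
HOME_GAS_PER_GGE = 1.20        # piped gas, per gallon-of-gasoline-equivalent
RETAIL_PER_GGE = 2.00          # retail CNG
GGE_PER_YEAR = 12_000 / 30     # 400 gge/year

saving_per_year = (RETAIL_PER_GGE - HOME_GAS_PER_GGE) * GGE_PER_YEAR
print(f"annual saving vs retail: ${saving_per_year:.0f}")
print(f"years to pay off the compressor: {COMPRESSOR_COST / saving_per_year:.1f}")
```

At roughly $320 saved per year, the compressor takes over 15 years to pay off for a single 30mpg car, which is exactly the tough slog described above.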
(One thing the Tesla Model S shines at is trunk space, fore and aft.) And the range is not great -- the tanks only hold about 8 gge -- but the refill is fast, unlike electric. However, you need to travel a lot of miles to justify the extra $5K for the car, especially if you want to also justify a compression station. However, for fleet ops (where all vehicles come home each night) it's easy to justify, and the main push now is trucks for long haul. Joel N. Weber II Sun, 2013-03-17 07:04 Permalink ## CNG vs electric refill infrastructure Even if electric recharging is slow, you can generally refill an electric vehicle anywhere there is a 15A 120V outlet, which translates to pretty much everywhere. And even if it takes 57 hours to refill an 85 kwh battery pack in a Tesla with such a pedestrian outlet, there may be times you happen to be visiting someone for at least 57 hours when that wait might not be an inconvenience. CNG filling stations are going to be harder to bootstrap. brad Sun, 2013-03-17 12:28 Permalink ## Electric refill It will be interesting to see in practice how much refill Tesla owners get from 15A circuits. I believe Tesla has a kit to plug into dryer plugs, which is what you would want to do and can in theory offer a pretty good rate. They don't do it, but another cute idea would be to let you plug an electric car into two plugs on two phases in the house -- you would need two heavy duty extension cords. (Multiple plugs on different breakers would also work but is harder to make safe because you have to trust the homeowner to have picked the two plugs and you will blow a breaker and take some risk if they do it wrong. However, you can detect for sure if you have two phases.) But for now the charge rate is so slow that I don't know how many people will use it. They have to be going somewhere and not using their car there. The plug-in hybrids get away with using the 15A plug since their batteries are so small you can refill them during the day or overnight. Annoyingly, the Tesla only takes 12 amps from the regular plug, for safety. As far as I know it does not even have a mode to use 20 amp plugs to get the 16 to 17 amps you would safely get from them. CNG stations are pretty common in some areas, but there are also many places where a CNG roadtrip would be extremely difficult. Though these are areas where electric car roadtrips aren't really practical either -- though almost everywhere has RV parks that can do 50A x 1 phase charging if you are willing to sit for many hours in an RV park. This may change if the plan to use CNG for long haul trucking is realized. Joel N. Weber II Sun, 2013-03-17 21:17 Permalink ## Tesla recharging Tesla draws only 12 amps from 15 amp outlets because code requires an 80% derating for loads that are continuous for more than three hours. http://shop.teslamotors.com/products/universal-mobile-connector-adapters indicates that they have NEMA 5-20 and 6-20 adapters for Roadster, but not Model S. Second to last comment of http://www.teslamotors.com/forum/forums/using-dryer-plug claims that one Tesla service center has been selling the dryer adapters. I think I've seen a rumor that RV parks in places like the Midwest that experience winter tend to close for the winter, so the RV park charging option isn't quite so universal. And I think that's 10ish hours for a full charge, which might be OK if you find an RV park that makes cabins available for Tesla owners to sleep.
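The outlet arithmetic traded in this exchange is easy to reproduce (a sketch; it ignores charger and battery losses, which push real-world times perhaps 10-15% longer):

```python
# Hours to fill an 85 kWh pack from common North American outlets.
# Current draws reflect the 80% continuous-load derating discussed above;
# charger and battery losses are ignored here, so real times run longer.

PACK_KWH = 85

outlets = [
    ("15A 120V wall outlet (12A draw)",  120, 12),
    ("30A 240V dryer outlet (24A draw)", 240, 24),
    ("50A 240V RV hookup (40A draw)",    240, 40),
]

for name, volts, amps in outlets:
    kw = volts * amps / 1000
    print(f"{name}: {kw:.2f} kW -> {PACK_KWH / kw:.0f} hours")
```

That gives roughly 59, 15, and 9 hours respectively, in line with the 57-hour and 10ish-hour figures quoted above.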
I think the real key to making Teslas viable for road trips is a combination of getting hotels to install 30A or better 240V charging infrastructure (which I suspect is pretty cheap in the grand scheme of operating a hotel) and getting the Supercharger network built out. Very little of the stimulus money that went to EV charging seems to have gone to public charging stations at places where people will want to sleep. brad Mon, 2013-03-18 01:14 Permalink ## Building out the network Oh, I think the charging networks will get built out, and even more superchargers, though Tesla having chosen to use a different supercharger than what was supposed to be the industry standard isn't helping. And while people are finding it's not unreasonable to stop at a supercharger if you plan to stop for a bite to eat, and this works for certain types of trips, I am not sure the supercharge approach really will work for a serious subset. Perhaps I imagine others are like me, but my roadtrips have a highly unscheduled nature to them. I like to just go where I want to go, stop where I want to stop, and having to go through certain places, stay at certain motels, pause at times the car likes rather than times I like -- it's not going to sit well. And the superchargers had better be highly oversupplied, because the first time somebody pulls into a supercharging station to find they have to wait 30 minutes or an hour to get a slot, and then wait for charging, they are going to rethink how well it works. The answer, I believe, is not to try to turn electric cars into long range roadtrip cars, because that's not their nature. Don't try to beat gasoline at its game -- it has a 100 year head start and a supremely good energy density. This is part of why robocars are the answer for electrics. Use your electric around the city, where you drive the most. When you want a roadtrip, get a liquid fuel car. It can even be biofuel some day, rather than gasoline. The ability to just conveniently switch cars, which robocars can give, lets us use the right power train to match our desires for a given trip, not try to force one power train to do everything. Anonymous Mon, 2013-03-18 20:57 Permalink ## long range electrics Some of the commentary on the Tesla Motors discussion forums seems to think that the market is showing that Tesla's long range cars are selling better than Nissan's short range LEAF. I'm sure we can sit back and watch the market continue that debate. The Infiniti version of the LEAF should provide more data there, since it's expected to be a short range luxury car, although Tesla data on different battery pack sizes, if it gets published, might also be interesting. (Right now, there are probably some people who would be happy with 40 kwh who bought a bigger pack to get their car faster, and I think we might still be a couple months away from that effect vanishing.) The Harris Ranch Supercharger already has the congestion you describe; it was built as a single charging station as an early prototype, and it looks like they're a couple months off from finishing the 8 bay Supercharger station there. But this is a great failure mode to have compared to, say, Solyndra. Tesla is having minor growing pains, but there is nothing that is fundamentally hard to get right in the long run here. I do think it would be good for Tesla to work on serving more locations instead of having huge numbers of charging stations in a small number of locations, but perhaps that will get better as time goes on. 
My suspicion is that a restaurant operator being offered 8 Supercharger bays of customers for their food might find that hard to turn down, whereas 2 Supercharger bays may not have enough impact on the bottom line to be worth thinking about the contract involved. Maybe the cutoffs aren't exactly there, but we may not yet have enough Teslas on the road for the critical mass that the typical business will find persuasive. Are you saying you think it's unfortunate that Tesla elected to not permanently cut their Supercharger rate in half (which would have happened if they just adopted the Japanese CHAdeMO standard)? Your concerns about freedom may mean that you may not want a Tesla in the next 6 months, but I think within a year or two the vaguely high-end hotels are all going to decide they need to support overnight Tesla charging, at least in parts of the world where Teslas are popular. It probably doesn't take that many nights per year of losing Tesla owners' business to competitors before the $2k-$3k that the installation to charge one car will probably cost starts to look like a bargain. And there are probably starting to be enough hotels in California with charging stations that the ones without charging stations will start losing a bit of business. If you want to drive a car to Burning Man, though, that may get more challenging. One other thing to put in perspective with the Superchargers is that I've seen a rumor that you can get four Supercharger locations for a million dollars. If that's true, building 500 Supercharger stations ought to cost about half what Massachusetts is going to spend to rehabilitate the Longfellow Bridge, which carries the MBTA Red Line, pedestrian, bicycle, and automobile traffic across the Charles River between Boston and Cambridge. 500 Supercharger stations works out to just over 1 per hundred miles of Interstate highway over that whole system, which provides a minimal level of coverage, even if it's not quite the density you want. And Tesla might be able to eventually spend more than half the cost of a single bridge rehabilitation project on Superchargers, too. If the typical gas station can cope with three or four grades of gas and sometimes also diesel, why do people assume electric vehicle charging can't have as many standards as fossil fuels? brad Tue, 2013-03-19 10:29 Permalink ## Supercharging Those Tesla superchargers are 90kw, so a bank of 10 is almost a megawatt -- hard to get in many places, and hard to provide reliably with solar. If there were thousands of electrics on I-5 it would be an immense project to supercharge them all in the middle of the Central Valley. Yes, I understand how the giant battery of the Tesla wanted 90kw rather than 45kw (or in theory 60kw) from CHAdeMO; I just wish that work had started earlier to design a standard that could be shared. There is a version of J1772 that is supposed to come out doing 90kw, and presumably Tesla will have an adapter, but a single standard would help the network effect here. I would love a Tesla once battery prices make more economic sense, but I would not use it for road trips, I think. It is a nice car, fun to drive, so I would regret that, but I don't think it's the end of the world. I think the mistake is thinking we need to have one car that can do all our trips. I would not expect to tow my Burning Man trailer in the Model S either.
(Though many camps at Burning Man have big generators that can offer as much as 18kw in a single plug if you have the adapter, but even 1.5kw will recharge you in a week. Not particularly green, of course, though my own camp has often been biodiesel.) I think the better design for an electric car is a small, light car, capable of doing less than 100 wh/mile. (The Model S is around 300 wh/mile.) Put a 20kwh battery in that and you have 200 miles of range, so no anxiety in driving around a large city, but still not for road trips. When the batteries get to $300/kwh in a few years, that's a reasonably affordable $6K battery, and if you do want to take it on road trips, the 60kw fast charger could get you an extra 100 miles in 10 minutes, not too much longer than a refuel and pee break. But I still think the better answer is to just switch to a liquid fuel car for the road trips. Use biofuel if you want to be lower emissions -- in fact some biofuels would be lower emissions than the electricity. Joel N. Weber II Thu, 2013-03-21 20:46 Permalink ## Supercharging My understanding is that, for a pair of cables to charge two cars, the Supercharger setup has 120 kw of rectifiers / voltage regulators, with hardware to switch each 10 kw charger module between the two cables. While currently limited to using 9 or 10 of the rectifier modules for one car, there seems to be some speculation that the hardware may support using all 12 to charge one car, and that Tesla is trying to look at more long term data to make sure that doing so won't hurt the long term life of the battery. (Hawthorne and Harris Ranch may have single cable charging station designs that predate this.) North America has not had very much of a CHAdeMO network anyway; I won't be surprised if by the end of 2013, North America has more Supercharger locations than CHAdeMO locations. And Superchargers target larger batteries so they don't need to be spaced as closely together; if you want to minimally cover 1000 miles of highway, you may need twice as many CHAdeMO locations as Supercharger locations. The 90 kw SAE combo plug standard seems to be enthusiastically supported by companies who seem like they may prefer that electric cars never take off. It's not clear to me who's ever going to spend money building out that network. If we could get batteries down to $75/kwh, would you still be opposed to using an electric car for road trips? I'm wondering how the Smart Electric car compares to your 100 wh/mile, but given that it's not that much over 100 MPGe and thus not much better than a LEAF, I bet it doesn't come anywhere near that. Is Burning Man going to abandon the fossil fuel generators and switch to solar / batteries / wind one of these years? brad Thu, 2013-03-21 22:56 Permalink ## Generators etc. Burning Man is not one thing; it is whatever generators people bring for their camps. Cheaper batteries are great, though they also should get lighter if they can. One question I am curious about is just what effect supercharging has on battery life. In fact, I am keen to see more real study of battery life, and how it is affected not just by chemistry but by the various duty cycles of usage and recharge. How much life are you eating up by charging to 100%? How much by draining to zero? How much by jackrabbiting? Not that there can't be good answers to these questions, but they don't even exist for the liquid fuel cars. There is a big slope with electric cars if you can make them lightweight. Heavy electric cars require heavy batteries; a toy model of that feedback is sketched below.
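A toy model of that feedback, with every coefficient invented for illustration (roughly 6 kg per kwh of pack and 0.05 wh/mile of extra draw per kg carried; neither number comes from this thread):

```python
# Toy model of the weight spiral: a heavier pack raises wh/mile, which in
# turn demands a bigger pack for the same range. All coefficients below are
# invented for illustration, not measurements.

def pack_size(base_wh_per_mile, range_miles, wh_per_kg=0.05, kg_per_kwh=6.0):
    """Iterate to the pack size that also covers hauling its own weight."""
    kwh = base_wh_per_mile * range_miles / 1000  # naive guess: weightless pack
    for _ in range(100):
        wh_mile = base_wh_per_mile + wh_per_kg * (kwh * kg_per_kwh)
        kwh = wh_mile * range_miles / 1000
    return kwh

for miles in (100, 200, 400, 800):
    naive = 250 * miles / 1000                   # heavy sedan at 250 wh/mile
    real = pack_size(250, miles)
    print(f"{miles:4d} miles: {real:5.1f} kwh needed vs {naive:5.1f} naive "
          f"({100 * (real - naive) / real:.0f}% of the pack hauls the pack)")
```

In this toy model the overhead fraction grows with the target range, and a 100 wh/mile city car pays it on a far smaller base, which is the slope being described.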
Some of the battery capacity is going to just moving the battery, so returns diminish. Likewise, a light car with a light battery is hugely more efficient. Perhaps the road trips I have taken are atypical, but I just don't see how many of them could have been done by an electric, without a massive, massive increase in charging infrastructure in some very remote and rural places. I've had road trips where I was concerned about whether I would easily find gasoline. But as I have said, the whole point is you don't have to do that or want to do that. Those road trips are just a fraction of my car use. There is no great need to do them in my city car. The Tesla charger is better than the CHAdeMO or others, but I also think that the amount of charging infrastructure is small and gasoline's 100-year head start is very challenging to overcome. It would be nice if all could work together, even if it means everybody switches to Tesla's standard or a 90kw version of CHAdeMO or combo plug or something. Right now, nobody writes a story about an electric road trip that isn't a story about how much they worried and wondered about recharging. I want to see a story of an electric road trip that was about the scenery. But again, we're a long way from getting rid of gasoline, and it's a mistake to go nuts on trying to make the long tail of trips electric. Far better to put our energy (the metaphorical kind) into improving electric use and effectiveness where electric is most practical, and accepting that gasoline will still dominate the long road trips for many years. I don't understand the need to spend a lot of effort on making worrisome electric roadtrips when there is so much to be done in the city to get adoption. Joel N. Weber II Sat, 2013-03-23 13:00 Permalink ## future of batteries If enough people start bringing batteries and solar panels to Burning Man, will there eventually be pressure to ban the relatively few remaining diesel generators? The key point that keeps being repeated on the Tesla forums, though perhaps with less supporting evidence than one might like, is that Supercharging does not seem to put any more wear and tear on the batteries than 240V charging at 30A-70A for the same number of charge / discharge cycles. The thing that does have a large impact on battery life is apparently how often the batteries are charged at the range setting of 100% full, instead of the standard charge setting at something like 85% full. I think the road trips you've taken may be different than the road trips I've taken over the years, but that doesn't answer the question of which pattern is more typical. I'm sure I've been in an automobile more than 100 miles from the nearest Interstate Highway on the Big Island of Hawaii, and possibly on a few other Hawaiian islands, but I'm not sure if I've ever been in an automobile more than 100 miles from the nearest Interstate Highway anywhere outside Hawaii. Are your rural road trips in places where people don't even have outlets for electric clothes dryers, or is the issue just that you don't want to wait the amount of time that would be required to charge with such an outlet? A Supercharger station consisting of a small solar array and a fixed 85 kwh battery ought to cost less than $100k, maybe even less than $50k even at today's prices, if you're looking for rural coverage in places where you know the station will be used less than once a day and if a 120 kw grid connection is prohibitively expensive.
I think installing an underground gas tank costs more like $250,000, and refilling that gas tank is also more labor intensive once the initial installation is done. I think somewhere around 10-15 years ago, the safety standards for underground tanks in the US became more rigorous, and I doubt there are many gas stations in the US with tanks much more than about 20 years old at this point, so I think in some ways your 100 years may be misleading. I think a lot of consumers are determined to have one car that they can use for as close to everything as possible. If you want to change the world, finding a way to work within that constraint is likely to be most effective. (See also how solar panels are rapidly reaching the point of compelling economics for some usage patterns, even though sufficient support for a carbon tax has never materialized.) Battery energy density per unit of mass certainly would need to improve to make battery powered air travel practical, and I think that will encourage that research, but I also think the current energy density is already adequate for automobile use, though better energy density would lead to a small energy efficiency improvement for automobiles. brad Sun, 2013-03-24 10:12 Permalink ## Off grid solar Off-grid solar is not green generally, so it's not a good answer at Burning Man -- or for a Tesla charging station. Off-grid solar ends up discarding most of the power generated by the panels. My road trips are characterized by things like: Yes, gasoline is expensive too. If we didn't already have a giant gasoline system, this would be a different question. But we do. Joel N. Weber II Sun, 2013-03-24 21:02 Permalink ## Off Grid Solar If Burning Man were to make sure that the nameplate capacity of the solar panels was less than what was actually going to be consumed, I don't see how that power would be wasted, although then you might need diesel generators to fill in the gaps if people aren't willing to adjust the loads to match the power produced. And if you had enough batteries to store power to cover nighttime usage and solar panels to provide power for charging that much, again, I don't see where the waste is. There might be some concerns about cloudy vs sunny days, but I also think I've heard Elon Musk claiming that that's not really a big deal (without really providing substantiating evidence that I've seen). But then, is getting to Burning Man instead of staying home a green activity? I wasn't necessarily thinking that a minimal, low usage Supercharger station would have to be completely off grid; if you assume that a grid connection is available, but it may be a lot less than 120 kw, then a relatively small solar array can feed power to the grid when the fixed battery is full, and the battery can exist as a way to feed power into the car at a rate that exceeds what the grid can support. You might even consider recharging the fixed battery from the grid to supplement the solar. Do the scenic roads that you take tend to count as US highways or state highways? I'm wondering if we can get any estimate of how many Superchargers would be needed to cover the routes you like to take. IIRC, rebuilding a mile of single track railroad where the right of way had previously had a railroad which had since been abandoned and track completely removed tends to cost at least a million dollars a mile. I doubt building a two lane road is any cheaper. 
If that's the case, then building a Supercharger even every 50 miles is probably absurdly cheap compared to the cost of road construction. Perhaps we have a 100 year head start on the road construction in some places, but I doubt there are too many usable roads that were last paved prior to WWII... brad Mon, 2013-03-25 11:18 Permalink ## Camping solar Off-grid solar normally stores to batteries. When the batteries are not in significant discharge and there is light load, the power of the panels is thrown away. That's fairly often, because people building off-grid solar want to make sure they have enough battery to deal with a stream of cloudy days; they want the batteries topped up at the end of a sunny day to last the night and the possible coming cloudy days. When batteries are not just full, but anywhere near full, they can't take the output of the panels; it must be discarded if there is no load to use it on. You are correct: if somebody makes a mini-grid, with generator power and solar, then they can make sure they use all the solar energy. Sadly, most people who want to put up off-grid solar have philosophical objections to the generator. Because of that, they do irrational things, like overprovision the solar or other green gear, wasting money (and embodied non-green energy) which could be used to make the world much greener in other places. In extreme cases, if you factor in the embodied energy and the total life-cycle considerations on the battery bank, it's not out of the question that some installations could be more negative than the generator. One example of that is what is sometimes done for camping solar power. While such installations are usually small, so not too bad, I have seen people who purchase solar panels for use only on camping trips and only at Burning Man. These panels go up for a few weeks a year and sit in a garage or on top of a bus or RV the rest of the year. Panels tend to have one year's output of embodied grid energy in them, up to 4 years for the older polycrystalline panels. Using them just for camping is the opposite of green. A camping solar panel needs to be one that is part of a grid-tied installation, normally mounted in a sunny place, tilted to the latitude and facing south. It must then be taken down for the camping trip for a short time, then installed at the campsite, and then quickly put back in place in the grid-tie system when done. This is not the common situation though. Going to Burning Man is in no way a green activity, especially for those who live outside of the Reno area. 99% of Burning Man's emissions are the travel to get there. The generators there and the fires are a minor part. Yes, a grid-tied charging station with storage to speed up the supercharge is a workable plan. Though the solar panel is orthogonal, it does not need to be at the station; it can be anywhere on the planet. (Ideally in fact it's in a place like New Mexico or a sunny part of China.) Well, almost. You might declare that supercharging is faster on a sunny day and slower at night, and thus have less battery if the panel is local and you are willing to tolerate the variability. Of course, such a station also has the issue that once somebody has used it, it can't be re-used for some significant amount of time. Perhaps OK for rare stations in a world of few electric cars. Where is the $1M/mile number for rail-rebuild? Road lane-mile costs can be quite low for chip-sealed rural roads.
It varies a lot based on the state, the terrain and urban/rural, but a median cost of $1.5M/lane-mile is commonly cited for highway-level roads, with occasional interchanges etc. If you had lots and lots of electric cars on rural roads, you need a lot of supercharging stations. Every 50 miles would probably suit the lone Tesla driver pretty well, but what would it cost to put in the gigawatts needed if 1/3 the cars passing Harris Ranch on I-5 were electric needing a supercharge? Joel N. Weber II Sat, 2013-04-06 11:51 Permalink ## Solar, Batteries, Etc If 1/3 of the cars passing Harris Ranch need Supercharging, you ought to be thinking about that in terms of cost per car and not cost per mile of road. Tesla seems to think that a $2k/car charge for the 60 kwh version for Supercharging will cover the cost of building out the Supercharger network. I'm not sure that, long term, putting solar panels in less sunny places is such a bad thing. Obviously if there is a finite number of solar panels the world is going to produce each year, and global carbon emissions are the only thing we want to optimize for, putting solar panels only in sunny places seems like the right answer. However, people's willingness to overpay for a kilowatt hour of solar relative to natural gas has the potential to drive up solar panel production volumes, and the thing that maximizes solar panel production over the long run may be what best reduces global carbon emissions. Are you arguing that Germany's massive investment in solar has been an environmental disaster since they get less sun than pretty much all of the 48 states? If you wanted Germany to import hydrofracked natural gas from the US and offset that by having Germany subsidize solar panels in the US, you'll probably end up burning more dirty fossil fuel shipping the natural gas across the Atlantic. Also, Germany is getting some long term economic stability out of owning the solar panels, and I'm not sure there's an alternative way for Germany to get that if they put the solar panels in the US and import the natural gas. Massachusetts has had some actual cases in the last several months of people losing grid power for several days after a snowstorm, and that got me thinking that a vertically mounted solar panel on a south-facing wall might be a good thing to have, in conjunction with batteries and a system that can isolate from the grid and keep the solar panel feeding the house through a power outage. (I assume normal rooftop solar panel mounting risks being useless if the panel is covered in snow.) I'm mostly thinking about this from the perspective of likely lower maintenance than a generator, the ability to be constantly testing the vertical solar panel so you don't have to worry about whether it will start when you need it, and eliminating the carbon monoxide risks. At that point, does it matter if it's environmentally optimal? And how much efficiency do you really lose in Massachusetts from the vertical mounting instead of the optimal fixed angle? (I might even be willing to mount it in such a way that every spring and fall it could be manually moved so that it would spend the summer not in the vertical position.) brad Sat, 2013-04-06 12:50 Permalink ## Not a disaster To point out that people are not following the best course to their goal is not to say that the lesser courses are disasters -- unless they are really bad. I do believe that human energy and money should be mostly applied to the most effective methods though.
The issue with supercharging for electric cars is not the cost of the stations; it's the peak-load power distribution. The cost of our entire electrical grid -- and thus our electricity -- is, to a large degree, based on the peak capacity of the grid, which can be even more than the fuel cost of the fuel-based generators. A highway supercharging station will be mostly about mid-day charging. Most electric cars will wake up full, charged off of cheap baseload in the middle of the night. And that's great, as they drive around town all day on electricity. Nobody will need a charge, even a supercharge, until mid-day, and few will need a supercharge after dark because traffic is light. So the supercharging will largely come at the worst possible time for the grid, and the most expensive time if on ToD metering. Putting in 100 megawatts of peak capacity, allowing perhaps 1,600 cars/hour, is a pretty grand effort. On the plus side, the supercharge time is the peak sun time, so solar has some merit, but you can't be down just because it's cloudy or 5pm in the winter. So you probably need to put in a decent-sized gas-fired plant at the supercharging center. Not impossible, but expensive combined with the panels. Grid-tie is the only way to have green solar. If you want backup power during a power outage, buying batteries just for insurance seems a poor choice, but I would have to work out the math. Vertical probably does decently in Mass, about 85% of the proper tilt. A steep tilt would do about 90% and still keep the snow off. I've often wondered if solar arrays might benefit by having a hand-crank for their tilt that you can adjust every couple of weeks to get that 2nd axis. Of course don't forget that Arizona is 50-60% better than Massachusetts, and that's no minor difference. You might want to do the math on getting the generator and putting up panels there. Joel N. Weber II Sat, 2013-04-06 14:46 Permalink ## Solar and Clouds It does seem that Tesla plans to grid-tie the Superchargers and use the grid as a battery unless getting enough grid capacity to a reasonable location is exceptionally difficult. If there ends up being a big solar array next to a big Supercharger installation, chances are that on the cloudy days, the restaurant next to the Supercharger station will not be running the air conditioning nearly as hard as on bright sunny days. Tesla claims the goal is that over a year the solar will put more power into the grid than the Superchargers pull out, which implies that on cloudless days, the solar should on average exceed the Supercharger power use and help to cover some air conditioning load that might now be covered by a fossil fuel plant; this suggests that the Superchargers should reduce the peak strain on the grid if the solar ends up near the Superchargers (or in other locations that turn out to be equally good for the grid). Elon has talked about a GE combined cycle natural gas plant being 60% efficient, vs 20% efficiency burning natural gas in an internal combustion engine (presumably in a truck or automobile or bus), so a natural gas turbine next to a Supercharger station may still be better than the non-EV alternatives. brad Sat, 2013-04-06 16:37 Permalink ## Natural gas plant Yes, NG plants are getting very efficient. I believe you can do a fair bit better than 20% in the car, more like 30% -- but the electric plant is indeed a winner (though it drops a bit due to transmission and battery charger losses.)
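Making that comparison explicit takes only a few lines. The 60% and 20-30% figures come from this exchange; the transmission, charging, and motor efficiencies below are assumed round numbers:

```python
# Natural gas to wheels by two pathways. The CCGT and ICE figures are from
# the comments above; the three loss factors are assumed round numbers.

CCGT = 0.60          # combined-cycle gas turbine
TRANSMISSION = 0.93  # assumed grid transmission efficiency
CHARGING = 0.90      # assumed charger plus battery round trip
MOTOR = 0.90         # assumed battery-to-wheels electric drivetrain

electric = CCGT * TRANSMISSION * CHARGING * MOTOR
print(f"gas -> turbine -> grid -> battery -> wheels: {electric:.0%}")
print("gas -> internal combustion -> wheels:         20% to 30%")
```

Roughly 45% versus 20-30%: the turbine pathway stays comfortably ahead even after the transmission and charger losses brad mentions are stacked on.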
Even so, the Honda GX CNG model for many years routinely won the top ranking as greenest vehicle in the USA for emissions, though recently it was surpassed by the lightweight Mitsubishi i-MiEV -- but not by the other electrics. However, those comparisons are done on the power of the existing grid, which is fairly dirty on average. A dedicated new NG power plant at the charging station should give those electrics a better emissions score. But I still maintain that while it's possible to get the supercharging working, it seems like the wrong thing to spend all the effort on. Just make it easy to switch to a fast-refuel or long range vehicle (or use the trailer approach here) for the long trips, and use the electric vehicle where it shines in the city. At least until we get the new generations of batteries that fix these problems, by being smaller, lighter, cheaper and/or faster charging. Joel N. Weber II Sat, 2013-04-06 18:53 Permalink ## Short range battery powered cars I'm confused. First you say that Supercharging is faster than it needs to be, and then you say we need faster charging? Is there some nuance to what you're saying that you could make clearer? Tesla found that only 4% of their buyers wanted the 40kwh version, and announced this week that they won't be taking 40kwh orders in the future, and that they've decided that the most cost-effective thing to do with the existing 40kwh orders is ship 60kwh cars software-limited to 40kwh (unless buyers pay to have the software limit removed). How are the Tesla Model S battery packs not already small enough, light enough, and able to be charged quickly enough? I agree the price needs to come down, but part of that is that Tesla so far only knows how to make something like 400/week or 500/week of something where there's a lot more demand than that, and they're trying to hold the price something like 25% above the production cost this year, presumably in part to save up money to build a bigger factory and in part to sell what they are able to make only to the people willing to pay the most. brad Sat, 2013-04-06 19:04 Permalink ## Pack Size The Tesla packs people buy -- due to range anxiety or the illusion that it will be a decent road trip car -- are huge and expensive. Most of that capacity is never used, but it's always carried around with you, and you're always paying it off. (Though in theory a large pack should take longer to wear out than a small pack.) The Tesla 85kwh pack is still pretty heavy, at least 500kg, though I have heard a bit more. That's a lot -- the weight of 7 people. I have not done the math on how much this costs in energy and range reduction, and that will depend on speed (the effect will be larger at low speed.) So there are lots of ways that pack is not already small enough and light enough. There is a bit of a cliff here. Get your vehicle lighter (the Model S is not light, even without that battery) and it needs less battery; returns get better the lighter you can make things. I'm not saying Teslas are not great fun to drive and people don't want them. They are just not economical at this time. That will change in a few years. The big debate people have is "does the early non-economical demand drive the development until it gets economical?" Possibly; Tesla is demonstrating things that some people did not believe.
On the other hand, I think if you had a car that drives like the Model S in 5 years for 1/3 the price, it would sell almost as well without the history, and there are investors who would be ready to take that bet. Phillip Helbig Fri, 2013-03-22 02:52 Permalink ## terminology Since "supercharging" has a completely different meaning in an automotive context, perhaps this is not the best term. Adam C. Engst Mon, 2013-03-04 13:37 Permalink ## Perhaps a lesson from laptop batteries? The main thing that struck me with regard to the trailer idea, speaking as someone who grew up on a farm driving a tractor and various farm implements, is that trailers are way more trouble to drive than many people may realize. Parallel parking is right out for most, and backing up and even making turns properly will be tough. But I agree about the desire to reduce the weight, and I wonder if a lesson could be taken from the laptops of yesteryear, some of which had slots for extra batteries for long trips. Imagine that instead of the spare tire under a luggage area in the rear of the car, there are carefully designed slots that would accept individual batteries. (wave hands at good industrial design) In a pure EV, they'd be installed only when you anticipated a longer drive, and in a plug-in hybrid, you'd take them out on longer trips where the weight would be problematic. brad Tue, 2013-03-05 10:06 Permalink ## Extra battery slot Well, I have a few thoughts about the stability of trailers. They are not hard to drive on ordinary roads, but yes, untrained people would have big problems backing up and parking in certain spots. The simplest solution might be a very short trailer which attaches to special receivers left and right, so it does not sway left or right. (It could sway up and down.) So more like an extra axle. Perhaps just 2 feet long, this would then fit in parking spaces as well. The other solution was to have a steering motor on the trailer wheels -- possibly two independent motors -- so that with computerized signals from the car and its steering wheel, you can back it up without thinking much about it. However, this has to be a longer trailer. Finally, another option is to have a 3rd wheel that can come down from the trailer tongue and some low-power electric motors, and make it very easy to connect and disconnect (already good for pick up and drop off.) That makes it a bit of a pain to deal with reversing and parking problems by disconnect, but not a killer, and it could even follow you around. Full-blown robotic connect/disconnect is not out of the question here, though that adds to cost and adds risk of failure. Rob Stone Thu, 2014-05-22 06:57 Permalink ## electric cars I have a 2011 Prius. It's not the most fun car to drive, but I like the savings on regular gasoline, and it's a good car too: reliable, well designed, practical, etc. Plus I only paid $21,000, bought it as a leftover and got 60 months 0% financing. The Tesla is a beautiful car but out of my financial range, like the majority of people. I saw a bare chassis at the King of Prussia mall near where I live and was amazed at the simplicity. I also like the Chevrolet Volt and plug-in concept, although they're a bit high-priced for me as well. From reading this article and many more on the internet, it seems having the all-electric capability and the gas engine for long distance is the best combination right now.
In the future, a better battery capable of holding a charge longer, additional capacitors supplementing the battery, more charging stations, and battery swapping stations equivalent to a gas fill-up would reduce the range anxiety. The positives outweigh all the negatives of this technology: the cleaner energy, the reduction of oil dependency on foreign shores and all of the political implications that implies, the reduction in pollution, the improvement of air quality, and the global warming concerns. This technology has already had a positive impact on car manufacturing, with lighter new materials, the replacement of previous hydraulic components with electric ones, regenerative braking, etc., and in my opinion it will continue forever, especially as new technologies are discovered! Rob Stone Thu, 2014-05-22 07:07 Permalink ## solar One item I forgot to mention was solar. I'm not sure why this technology hasn't been incorporated more on the tops of cars, to aid some aspect of the car's functioning that would otherwise be a drain on the battery, or as a supplement. The Prius had or has some option, but it doesn't seem to be too popular. ## Add new comment
true
true
true
You've probably seen the battle going on between Elon Musk of [Tesla and the New York Times](http://www.engadget.com/2013/02/14/musk-vs-times/) over the strongly negative review the NYT made of a long road trip in a Model S. The reviewer ran out of charge and had a very rough trip with lots of range anxiety. The data logs published by Tesla show he made a number of mistakes, didn't follow some instructions on speed and heat and could have pulled off the road trip if he had done it right. Both sides are right, though.
2024-10-12 00:00:00
2013-02-18 00:00:00
null
article
4brad.com
Brad Ideas
null
null
1,477,433
http://www.marginalrevolution.com/marginalrevolution/2010/07/how-much-do-somali-pirates-earn.html
How much do Somali pirates earn? - Marginal REVOLUTION
Tyler Cowen
# How much do Somali pirates earn? I am unsure of the generality of the sources here, but the author — Jay Bahadur — is writing a book on the topic and at the very least his investigation sounds serious: The figures debunk the myth that piracy turns the average Somali teenager into a millionaire overnight. Those at the bottom of the pyramid barely made what is considered a living wage in the western world. Each holder would have spent roughly two-thirds of his time, or 1,150 hours, on board the Victoria during its 72 days at Eyl, earning an hourly wage of $10.43. The head chef and sous-chef would have earned $11.57 and $5.21 an hour, respectively. Even the higher payout earned by the attackers seems much less appealing when one considers the risks involved: the moment he stepped into a pirate skiff, an attacker accepted a 1-2 per cent chance of being killed, a 0.5-1 per cent chance of being wounded and a 5-6 per cent chance of being captured and jailed abroad. By comparison, the deadliest civilian occupation in the US, that of the king-crab fisherman, has an on-the-job fatality rate of about 400 per 100,000, or 0.4 per cent. As in any pyramid scheme, the clear winner was the man on the top. Computer [a man's name] was responsible for supplying start-up capital worth roughly $40,000, which went towards the attack boat, outboard motors, weapons, food and fuel. For this investment he received half of the total ransom, or $900,000. After subtracting the operating expenses of $230,000 that the group incurred during the Victoria’s captivity in Eyl, Computer’s return on investment would have been an enviable 1,600 per cent. There is a very good chart on the right-hand side bar of the article.
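The excerpt's arithmetic checks out. A quick verification in Python (note that the $1.8m total ransom is only inferred from "half of the total ransom, or $900,000"; it is not stated directly):

```python
# Verify the figures quoted in the excerpt.

hours_on_board = 72 * 24 * (2 / 3)    # two-thirds of 72 days at Eyl
print(f"hours on board: {hours_on_board:.0f}")          # ~1,152 ("roughly 1,150")
print(f"holder's total pay: ${hours_on_board * 10.43:,.0f}")  # at $10.43/hour

investment, gross_share, expenses = 40_000, 900_000, 230_000
roi = (gross_share - expenses - investment) / investment
print(f"Computer's return on investment: {roi:.0%}")    # ~1,575% ("1,600 per cent")
```

The holder's total of about $12,000 for 72 days also underlines the piece's point that the rank and file earn nothing like millionaire money.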
true
true
true
I am unsure of the generality of the sources here, but the author — Jay Bahadur — is writing a book on the topic and at the very least his investigation sounds serious: The figures debunk the myth that piracy turns the average Somali teenager into a millionaire overnight. Those at the bottom of the pyramid barely […]
2024-10-12 00:00:00
2010-07-01 00:00:00
https://marginalrevoluti…go-thumbnail.png
article
marginalrevolution.com
Marginal REVOLUTION
null
null
582,987
http://www.businessweek.com/technology/content/apr2009/tc20090427_328264.htm?chan=top+news_top+news+index+-+temp_news+%2B+analysis
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,413,947
http://www.ft.com/cms/s/0/31d68af8-6e0a-11e6-9ac1-1055824ca907.html
Motor industry: Pressure on the pump
null
Motor industry: Pressure on the pump
true
true
true
null
2024-10-12 00:00:00
2024-01-01 00:00:00
null
website
null
Financial Times
null
null
11,258,385
https://medium.com/@klimtypefoundry/a-new-typeface-for-paypal-9a7be33b7380
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,844,251
http://www.markbymarkzuckerberg.com/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
23,207,785
https://evrone.com/hannes-mehnert-interview
Hannes Mehnert interview by Evrone
Grigory Petrov Evrone DevRel
# Functional programming is about better code maintenance & program understanding We spoke with Hannes Mehnert, the co-author of MirageOS, about the library operating system that constructs unikernels, OCaml, and functional programming. We connected with Hannes Mehnert, known for his work on MirageOS, to discuss the evolution of unikernels, their impact on cloud computing, and the future of secure operating systems. We also touched on the importance of security and minimalism in software design. ## Introduction Our backend engineer, Pavel Argentov, traveled to Marrakech, Morocco, to attend the ninth MirageOS retreat, which was held from March 13-19, 2020. The goal of the event is to bring both experienced and brand-new MirageOS users together to collaborate and sync various MirageOS subprojects, start new ones, and help each other fix bugs. MirageOS is a library operating system that constructs unikernels for secure, high-performance network applications across a variety of cloud computing and mobile platforms. The code can be developed on Linux or Mac OS X and then compiled into a fully standalone, specialized unikernel that runs under a Xen or KVM hypervisor. At the event, Pavel spoke with *Hannes Mehnert*, the co-author of MirageOS and host of the event, about his work with MirageOS and OCaml. He gave us some details about his contributions to MirageOS and why he joined the project. He also explained the benefits of functional programming and why he was initially drawn to it. In addition, he broke down the potential, and limitations, of MirageOS and OCaml and gave us some information on new developments and what's to come. We've included the full transcript of the interview below, so you can get the latest info, straight from the best source. ## The Interview **Pavel:** I think we should start by speaking about OCaml. How and why did you start working with OCaml? **Hannes:** Six years ago, when I had just finished my Ph.D. in formal verification of software, I was used to taking some random, already-developed software, applying some specifications to it, and then writing some proofs that the program was actually correct. That turned out to be rather complex and work-intensive, due to the ubiquitous use of shared mutable state. For quite a long time, I've been very interested in systems programming, which usually means using C and writing your operating system in it. But given my semantics background, I was more hoping to use a high-level language for writing operating systems. So, after finishing my Ph.D., I stumbled upon MirageOS, together with my friend David Kaloper. MirageOS is written in OCaml, which is a multi-paradigm language that has a module system and is used for functional programming. That means that you can avoid shared mutable state and actually verify the programs on the operating systems. When I came to MirageOS around six years ago, it was already working to some extent, and my first contribution was the TLS stack and cryptographic algorithms. **Pavel:** How is MirageOS used, and what can we get out of it? **Hannes:** MirageOS started as a research project. We had a prototype and an idea on how to use different styles of programming for operating systems. My background is also very deep in security, and that was my main motivation for contributing to MirageOS and trying to get it into production. From a security perspective, here you have less mutable state and you can run an HTTPS web server with TLS.
And you have much less code, which means fewer bugs and less resource usage, because if you don't have to run that much code, you don't waste so many CPU cycles and so much memory. **Pavel:** Let's talk about TLS. Very often you might hit the limitation of the hardware and everything will be slow because the crypto algorithms are slow. How does OCaml solve this problem, and does it solve the problem of speed at all? Does OCaml allow you to make the code fast? **Hannes:** Yes, OCaml itself has a very fast runtime. We have a garbage collector (a memory manager) which is collecting very fast. The question is basically whether or not OCaml allows you to write a decent enough interface to pass the arguments properly and not waste too much CPU time. It turns out that it is fast enough. I'm happy to use a reasonable programming language, instead of a low-level micro assembler. And the other side of TLS is handshakes. It's asymmetric cryptography, and in order to make that fast, we use a library called the GMP/GNU Multi-Precision library. In OCaml, we just have bindings for that, but they are the exception. Usually we try not to write bindings and not use too much C code. Most of the complex parts of decryption and encryption are still in OCaml, not in C. **Pavel:** Haskell programmers and other high-level language programmers are concerned about the performance of the garbage collector, saying it slows things down. In Haskell, they can't write any kind of "soft real-time applications". Do you think OCaml can do that? Is OCaml's garbage collector fast enough to perform in use cases which require speed? **Hannes:** Yeah, I think so. Haskell has a completely different runtime; it has lazy evaluation by default. And OCaml is strict; we just do the computation as we go along. The garbage collector is very well-tuned for workloads; it's really fast, and I believe that, in OCaml, "soft real-time applications" are doable. **Pavel:** As far as I know, the "unikernel" as a concept isn't unique to OCaml anymore. What was the history of unikernels? Was the name of the idea different when it started? How did people come to the idea of unikernels at all? **Hannes:** I think it all started at the University of Cambridge, from the theoretical papers about the so-called Exokernel. People needed an instrument, a system which would be task-focused, less resource-consuming, easily written, and easily adaptable. **Pavel:** OK. As far as I know, MirageOS uses the Lwt library. Is Lwt performant enough to handle a reasonable load, if you have a DNS server which has to respond quickly in multiple directions at once? Does it work fast enough? **Hannes:** I think it works reasonably well. A good application example for MirageOS is the Firewall, which is integrated into Qubes OS. Qubes OS is an operating system which uses Xen. The goal of Qubes OS is, for example, to have your mail application separated from the PDF renderer. So if you receive an email with a malicious PDF, once you view it, it shouldn't be able to access all of your mail. Instead, you save the PDF and push it to a different virtual machine. And that different virtual machine has the code to run the PDF renderer. So, that PDF is only opened and rendered in an isolated environment. MirageOS fits in here pretty well because it has a much smaller memory footprint.
We can just set up the Firewall as one of the components inside of one of the virtual machines inside of the Qubes OS environment and receive packets from other virtual machines, which have access to the network. The MirageOS unikernel works as a router which routes the packets. **Pavel: ** You said something about MirageOS memory consumption. How much memory can it really have? What are the lower or upper limits? I've heard that MirageOS can’t be configured for memories bigger than 1GB. Are there really such limitations? **Hannes:** Well, at the moment, yes. The minimal amount of memory OCaml runtime and MirageOS unikernels need is 10 megabytes, and the upper limit, at the moment, is 1GB of memory. But that can be easily tuned, basically, if you have demand for more memory. My DNS services, for example, require around 14-24 megabytes of memory. That's not millions of records, but more like hundreds of records. And the web services I run usually have between 32 and 128 megabytes of memory. And that is sufficient to store the data. **Pavel: ** Have you worked with the Irmin data store? As far as I know, it's kind of like Git, and it's the only data store written in OCaml for MirageOS. **Hannes: **Yeah. Irmin is a branchable, immutable store. I usually don't use Irmin directly, but I use Irmin via the Git implementation, which uses it in the background. For example, my DNS server stores its zone file in a remote Git repository, it just fetches the repository, clones it into the memory, and then serves data from there. In 2019, Irmin had a major release, Irmin 2.0. **Pavel: **Well, let's switch a bit to the format of the gathering. Could you tell us a couple of things about what MirageOS retreat is? How did you come up with this idea? **Hannes: **I got a lot of inspiration from different conferences, and also from the OpenBSD hackathons. The basic idea is to gather a nice group of people. You are in a nice location, where you have nice weather, food, sunshine, and you can actually enjoy the environment. It's crucial to me that the people stay together all day and communicate with each other. There's no strict schedule. There's a daily round of updates on who did what, who’s interested in what, and who's stuck at what specific point. Other people may jump in and may have a solution for them. Random people start discussing problems and solutions, while other people are just busy writing some code. On one hand, I try to get people here who are long established in the community and have some experience and some ideas about the different libraries and the ecosystem, to discuss fundamental changes in the ecosystem while here. But also, I always appreciate having some new people here, to have new ideas and people who we can actually integrate into the group and get them to program some OCaml and some MirageOS, in order to grow the community. It's not exclusively for people who already know MirageOS or have written in OCaml for several years, it's open to everybody who's willing to take a trip to Marrakech. **Pavel: **That’s great! Do you think functional programming affects the programmer’s way of thinking? When I first started writing OCaml code, I started to understand that there are types which can be transformed. And this caused me to think first of the types and the meaning of data I work with. I know that functional programming in Europe is a part of the programming scholarship at the basic level. 
As far as I know, most students in Russia learn how to program starting with imperative techniques, and they almost never get out of that. **Hannes: **Yeah. I think a lot about types and apply quite a lot of type-driven development before writing actual code. So, when I write programs in a functional language, first I think about what the types should look like. Once I get the types in the right shape, all the implementation becomes much easier. For me, it is also about code maintenance and localized program understanding in functional programming. And I think it's much easier to understand my code five years later when it’s written in a functional language, where I don't overuse a lot of syntactic sugar and features, than it is to develop that code in imperative language and have hundreds of lines in a function. I try to keep the functions rather short and understandable. Yes, functional programming shapes your brain to think about the program. **Pavel: **I see that monads are making their way into different languages. We have them in Ruby and in C++. Is it just a way of implementing some academic knowledge in day-to-day programming? **Hannes:** I think it is a viable instrument, but it is very hard to comprehend if you haven't discovered monads yourself. Trying to explain monads to a new imperative programmer is very hard. We still use monads in MirageOS and in OCaml, but hopefully, with the multicore branch becoming part of the OCaml runtime at some point this year, we will get over that. **Pavel: **Let's talk a bit about open-source. Everything we have been speaking about is open-source. There is a point of view that tech only succeeds when it has enough money pumped into it. While open-source consumes our efforts and our time, it doesn't really bring in money. When you are evangelizing some new tech in an open community, you sooner or later reach the idea of an open-source collaboration. How important is open-source, in your opinion? **Hannes: **I think open-source is a crucial factor. Most of the stuff we do is actually developing libraries, OCaml libraries, which are then used in MirageOS unikernels. And everybody should be able to freely mix and match them together. When I write a TLS stack or a DNS implementation, I have a strong incentive to open-source all that, because then other people can reuse it. I enjoy writing software, and it makes me happy if anyone is using that software, be it an individual or a company using it for profit. That's fine with me. In MirageOS, most of the software is under a BSD license, so everybody can use it and do whatever they want with it. I think it’s very important to have a license. Everybody can understand the GPL, but there are tons of pages of text, while BSD has two or three paragraphs, and it is usually written in 25 lines of text. And if you also want to convince an industry to use some of your software, it’s better if you use a permissive license. You’ll have a much easier time convincing them, because, if you use a GPL license, it may be a bit harder to convince lawyers that it's a good idea. In MirageOS, for example, we have code contributions from IBM Research, and we managed to convince them to use a very permissive license, which hasn't been easy because lawyers usually want to stick to trademark. **Pavel: **I've read that you're working for a company which sells unikernel development. What is it like working on a tech which isn't selling, let’s say, established, well-known imperative programming? 
**Hannes: **I work at a nonprofit company called Robur. We work on grants, donations, and commercial contracts to enhance the MirageOS ecosystem and to develop unikernels. Over the last year, we've gotten some funding from the public. From Germany and the European Union, we got some grants to develop certain applications, like OpenVPN Gateway, and at the moment we are getting funding from the European Union to work on a DNSmasq, which is one of the crucial components in everybody's network. And that’s pretty wonderful.

**Pavel: **How fast does MirageOS develop over time? Is it developing fast and growing new features?

**Hannes: **The development is always quite slow, but we also do quite a lot of work. We try to get rid of our technical debt and adapt to modern build systems, which sometimes takes more time than other projects. In terms of features, it is mainly about new libraries being developed. We talked briefly about the Irmin DataStore, and its 2.0 release was a major milestone, which was only reached last year. There is also an upcoming TLS 1.3 stack. As for MirageOS, we're now heading towards a 4.0 version, and it will definitely improve the development experience quite radically by getting rid of the old “ocamlbuild” and replacing it with a new build system called “dune”, which features incremental builds.

**Pavel: **Well, let's conclude our talk with an encouraging statement to the developers who might learn MirageOS, embrace OCaml, and stop fearing functional programming as a theoretical mind-eater. How would you encourage people?

**Hannes:** The good thing about FP is the level of control you have over rather complex code. In functional programming, if you spot a high-level bug, you can often debug it down to the lowest level and fix it within a single weekend, while doing that on common operating systems is just impossible, due to the size of the codebase and involved libraries. You have control over the entire stack. It is full-stack development, from the level of the network device driver all the way up to the business logic of the real application.

## Conclusion

Here at Evrone, we strive to stay on top of new tech developments and embrace innovative new tools and methods. This allows us to use the optimal resources to provide our clients with the very best solutions to meet their unique needs. We work with a wide variety of programming languages and tools, and we highly encourage our team members to attend and contribute to tech conferences and events, such as the MirageOS retreat.

If you have an idea that you’d like to develop, let us know how to contact you, and we’ll be in touch soon to discuss your project and how we can help.
true
true
true
Evrone spoke with Hannes Mehnert, the co-author of MirageOS about the library operating system that constructs unikernels, OCaml and functional programming.
2024-10-12 00:00:00
2020-05-14 00:00:00
https://evrone.com/sites…/spik_conf_2.jpg
article
evrone.com
Evrone
null
null
5,803,972
http://www.tomshardware.com/reviews/core-i7-4770k-haswell-review,3521.html
The Core i7-4770K Review: Haswell Is Faster; Desktop Enthusiasts Yawn
Chris Angelini
# The Core i7-4770K Review: Haswell Is Faster; Desktop Enthusiasts Yawn

Intel's Haswell architecture is finally available in the flagship Core i7-4770K processor. Designed to drop into an LGA 1150 interface, does this new quad-core CPU warrant a complete platform replacement, or is your older Sandy Bridge-E system better?

## Haswell Turns Into Intel's Fourth-Gen Core Architecture

*Editor’s Note: Eager to show off what it's doing with Intel’s Haswell architecture, system builder CyberPower PC is offering the Tom’s Hardware audience an opportunity to win a new system based on Intel’s Core i7-4770K processor. Read through our review, and then check out the last page for more information on the system, plus a link to enter our giveaway!*

Do you know what it’s like to be at the top of your game, the nearest competitor several strides behind? Well, maybe not. But Intel sure does. When it comes to desktop CPUs, the company’s top-end parts continue to stave off AMD's best efforts. That applies to raw performance *and* efficiency.

We love fast, and we love efficient. But we also like to see healthy competition driving innovation. And again, on the desktop, there’s not enough of that to push Intel. Ivy Bridge-based CPUs are generally a small step up from the generation prior. And although the Sandy Bridge architecture included a number of notable improvements, unprecedented integration gave away Intel’s growing focus on mobility. Even as we got our hands on great features like Quick Sync, Intel was chiseling away at its enthusiast equity by limiting overclocking to K-series SKUs.

Expect more of the same from Haswell. You're going to see notable per-clock performance improvements, faster graphics, and additional features able to accelerate specific workloads. But you’re also going to witness a clumsy handling of overclocking (again), some strange decisions on the graphics side (again), and incremental gains that’ll have some of us upgrading our desktops, but more folks looking for Haswell-powered mobile platforms.

That's entirely by design, by the way. An emphasis on power is front and center with Haswell. And as a result, this architecture is going to span the broadest range of devices Intel has ever touched with one design. But I’ll argue that enthusiasts on the desktop take a back seat to make it all possible.

**Meet Haswell, Now Known As Intel’s Fourth-Gen Core Architecture**

Intel is rolling out the details of its Haswell-based processors in a staggered launch. The company plans to ship multiple variations of the architecture across a number of different interfaces, from very low-power segments to very performance-sensitive ones. However, the *only* arrangement emerging today is the quad-core SoC. Technically, Intel is talking desktop and mobile, though we’re deliberately focusing on the Core i7-4770K desktop CPU. I published a preview of Core i7-4770K’s performance almost three months ago, and that story has some information about Intel’s plans as well.

Haswell-based quad-core processors will ship in two configurations to cover the mobile and desktop markets. Only one is ready today, though. That chip features the HD Graphics 4600 engine, also known as GT2. The second, with Iris Pro Graphics 5200 (or GT3e), is coming later. Intel's engineers claim that Iris Pro scales incredibly well given a lofty power ceiling and enough cooling. 
However, CPUs endowed with the higher-end graphics engine are BGA-only, meaning they’re soldered down. **So, enthusiasts buying LGA 1150-equipped motherboards will only find Core i7 and Core i5 CPUs with four cores and HD Graphics 4600** (technically, there’s also a 35 W Core i5 with fewer cores, but it’s still under wraps).

This implementation of Haswell is composed of 1.6 billion transistors, up from a comparable Ivy Bridge configuration’s 1.4 billion. Optimized expressly for Intel’s 22 nm node, the die measures 177 square millimeters, just slightly larger than quad-core Ivy Bridge at 160 mm². Put Ivy Bridge and Haswell right next to each other and you might have a difficult time telling them apart. After all, there’s “only” a 200 million-transistor delta separating the two. That 14% growth in transistor count largely comes from a 25% increase in graphics resources compared to last generation.

That’s not to say the processor cores go untouched. Intel says it put specific emphasis on speeding up both today’s legacy code as well as applications we’ll see in the future. To that end, larger buffers enlarge the out-of-order window, which means instructions that would have previously waited for execution can be located and processed sooner. Haswell’s window is 192 instructions. Sandy Bridge was 168. Nehalem was 128.

The Haswell branch predictor is improved, too. This is something Intel manages to do every generation—and for good reason, since it simultaneously enables better performance and prevents the wasted work of a branch getting predicted incorrectly.

Previously, Intel’s architecture was able to execute six operations per clock cycle. However, Haswell gets two additional ports (one integer ALU and one store), enabling up to eight operations per cycle. And workloads with large data sets should see a benefit from a larger L2 TLB.

All of those changes add up to a significant improvement in Haswell’s IPC compared to Ivy Bridge. That’s where we expect most of the speed-up in general-purpose apps to come from this generation, since the top-end Core i7-4770K runs at the same 3.5 GHz as -3770K. Sure enough, when we set five different processors (employing four different architectures) to the same constant 4 GHz, we see, first, how much more work Intel gets done compared to AMD and, second, a steady progression forward in Intel’s performance.

In addition to the two execution ports Intel adds to Haswell, ports one and two now feature 256-bit Fused Multiply-Add units, doubling the number of peak theoretical floating-point operations per cycle. Integer math gets a big boost as well from AVX2 instruction support.

Of course, multiplying the architecture’s compute potential means little if you can’t get data into the core fast enough. So, Intel also made a number of changes to its caches. Haswell’s L1 and L2 caches are the same size as they were in Ivy Bridge (there’s a 32 KB L1 data, 32 KB L1 instruction, and 256 KB L2 cache per core). Bandwidth to the caches is up to twice as high, though, and we’ll see in our synthetic testing that the L1D is indeed quite a bit faster. Intel claims that it can do one read every cycle from the L2 (versus one read every *other* cycle in Ivy Bridge), but we aren’t able to replicate those figures in our own testing.
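To put that FMA claim in perspective before the model table below, here is a quick back-of-the-envelope sketch (ours, not Intel's) of peak theoretical single-precision throughput, using only figures quoted in this review: 256-bit vectors hold eight single-precision floats, a fused multiply-add counts as two floating-point operations, and both flagship chips pair four cores with a 3.5 GHz base clock.

```python
# Back-of-the-envelope peak single-precision FLOPS, from figures in the text.
AVX_LANES = 8                # 256-bit register / 32-bit floats
FMA_OPS = 2                  # a fused multiply-add = 1 multiply + 1 add

# Ivy Bridge: one 256-bit multiply port plus one 256-bit add port per core.
ivb_flops_per_cycle = AVX_LANES + AVX_LANES
# Haswell: two 256-bit FMA ports per core, doubling peak FP ops per cycle.
hsw_flops_per_cycle = 2 * AVX_LANES * FMA_OPS

for name, fpc in [("Core i7-3770K (Ivy Bridge)", ivb_flops_per_cycle),
                  ("Core i7-4770K (Haswell)", hsw_flops_per_cycle)]:
    cores, ghz = 4, 3.5      # base clock; Turbo Boost pushes this higher
    print("%s: %.0f GFLOPS peak SP" % (name, fpc * cores * ghz))
# -> 224 GFLOPS vs. 448 GFLOPS: exactly the doubling the port math suggests.
```

Real workloads never hit these ceilings, of course, but the ratio between the two chips is what matters here.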
| Model | Cores / Threads | Base Freq. | Max. Turbo | L3 | HD Graphics | Graphics Max Freq. | TDP | Price |
|---|---|---|---|---|---|---|---|---|
| **Fourth-Gen Core i7 Family** | | | | | | | | |
| 4770T | 4/8 | 2.5 GHz | 3.7 GHz | 8 MB | 4600 | 1,200 MHz | 45 W | $303 |
| 4770S | 4/8 | 3.1 GHz | 3.9 GHz | 8 MB | 4600 | 1,200 MHz | 65 W | $303 |
| 4770 | 4/8 | 3.4 GHz | 3.9 GHz | 8 MB | 4600 | 1,200 MHz | 84 W | $303 |
| 4770K | 4/8 | 3.5 GHz | 3.9 GHz | 8 MB | 4600 | 1,250 MHz | 84 W | $339 |
| 4770R | 4/8 | 3.2 GHz | 3.9 GHz | 6 MB | Iris Pro 5200 | 1,300 MHz | 65 W | N/A |
| 4765T | 4/8 | 2.0 GHz | 3.0 GHz | 8 MB | 4600 | 1,200 MHz | 35 W | $303 |
| **Fourth-Gen Core i5 Family** | | | | | | | | |
| 4670T | 4/4 | 2.3 GHz | 3.3 GHz | 6 MB | 4600 | 1,200 MHz | 45 W | $213 |
| 4670S | 4/4 | 3.1 GHz | 3.8 GHz | 6 MB | 4600 | 1,200 MHz | 65 W | $213 |
| 4670K | 4/4 | 3.4 GHz | 3.8 GHz | 6 MB | 4600 | 1,200 MHz | 84 W | $242 |
| 4670 | 4/4 | 3.4 GHz | 3.8 GHz | 6 MB | 4600 | 1,200 MHz | 84 W | $213 |
| 4570 | 4/4 | 3.2 GHz | 3.6 GHz | 6 MB | 4600 | 1,150 MHz | 84 W | $192 |
| 4570S | 4/4 | 2.9 GHz | 3.6 GHz | 6 MB | 4600 | 1,150 MHz | 65 W | $192 |

The Core i7-4770K gives us an 8 MB shared L3 cache, similar to Core i7s before it. Although the Sandy and Ivy Bridge designs employed a single clock domain that kept the cores and L3 running at the same speed, Haswell decouples them. Our cache bandwidth benchmark reveals a slight hit to L3 throughput, though improvements elsewhere in the System Agent keep the results fairly even.

Haswell offers the same 16 lanes of PCI Express 3.0 connectivity as Ivy Bridge, and validated memory data rates up to 1,600 MT/s. The desktop line-up’s thermal targets are quite a bit different as a result of Intel’s fully-integrated voltage regulator, but an upper bound of 84 W isn’t extreme by any stretch and a floor of 35 W is pretty familiar.

All of Intel’s upgradable processors now drop into an LGA 1150 interface, meaning any decision to adopt Haswell is also going to require a motherboard purchase, at least. So, before you drop several hundred dollars on a brand new platform, let’s figure out if Core i7-4770K is worth the investment.

- Danny N: Biggest question is if its worth upgrading my cpu i5 750 4.0ghz to Haswell or my gfx card ati 5870 to nvidia 7xx, my main pc use is for Maya, After FX and some fps gaming. Any input would be appriciated cause I'm leaning towards a cpu upgrade atm.
- refillable: @Danny N You shouldn't ask here. Perhaps you should get an i7-4770k and a 7970(?) I heard that kepler cards does not perform that good in Maya and Aftereffects (In OpenCL).
- envy14tpe: Seriously. What did people expect? Of course it's better but nothing out of the ordinary for Intel.
- enewmen: For me it's not about the 10% gain over SB. It's more like a huge gain over a C2Q, floating point performance over SB (should matter later), and lower watts. I hope THG can expand the Power Consumption and Media Encoding later - check the Watts idle more and fast quick-sync media encoding quality loss. My 2 cents.. EDIT: other sites have reported much lower watts idle, so a lot doesn't make sense or the 4770k has a very slow throttle. http://hexus.net/tech/reviews/cpu/56005-intel-core-i7-4770k-22nm-haswell/?page=15 http://www.techspot.com/review/679-intel-haswell-core-i7-4770k/page13.html
true
true
true
Intel's Haswell architecture is finally available in the flagship Core i7-4770K processor. Designed to drop into an LGA 1150 interface, does this new quad-core CPU warrant a complete platform replacement, or is your older Sandy Bridge-E system better?
2024-10-12 00:00:00
2013-06-01 00:00:00
https://cdn.mos.cms.futu…7AWM-1200-80.jpg
article
tomshardware.com
Tom's Hardware
null
null
3,170,449
http://techreport.com/articles.x/21865
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
20,441,421
https://insights.sei.cmu.edu/sei_blog/2019/07/using-ooanalyzer-to-reverse-engineer-object-oriented-code-with-ghidra.html
Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra
Jeff Gennari
## Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra

Object-oriented programs continue to pose many challenges for reverse engineers and malware analysts. C++ classes tend to result in complex arrangements of assembly instructions and sophisticated data structures that are hard to analyze at the machine code level. We've long sought to simplify the process of reverse engineering object-oriented code by creating tools, such as OOAnalyzer, which automatically recovers C++-style classes from executables. OOAnalyzer includes utilities to import OOAnalyzer results into other reverse engineering frameworks, such as the IDA Pro Disassembler. I'm pleased to announce that we've updated our Pharos Binary Analysis Framework in Github to include a new plugin to import OOAnalyzer analysis into the recently released Ghidra software reverse engineering (SRE) tool suite. In this post, I will explain how to use this new OOAnalyzer Ghidra Plugin to import C++ class information into Ghidra and interpret results in the Ghidra SRE framework.

#### Ghidra

The Ghidra SRE tool suite was publicly released by the National Security Agency. This framework provides many useful reverse engineering services, including disassembly, function partitioning, decompilation, and various other types of program analyses. Ghidra is open source and designed to be easily extendable via plugins. We have been exploring ways to enhance Ghidra analysis with Pharos reverse engineering output, and the OOAnalyzer Ghidra Plugin is our first tool to work with Ghidra.

#### OOAnalyzer

OOAnalyzer, part of the Pharos framework, recovers C++-style classes from executables by generating and solving constraints with XSB Prolog. Among the information it recovers are class definitions, virtual function call information, and class relationships such as inheritance and composition. A complete description of the OOAnalyzer reasoning system is available in our paper: *Using Logic Programming to Recover C++ Classes and Methods from Compiled Executables*, which was presented at ACM Computer and Communication Security (CCS) 2018. OOAnalyzer produces a JSON file with information on recovered C++ classes.

#### The OOAnalyzer Ghidra Plugin

We recognized early on that Pharos tools would be more useful to analysts if they integrated with other reverse engineering frameworks. Thus, we traditionally imported OOAnalyzer output into the IDA Pro Disassembler via our OOAnalyzer IDA Plugin. The new OOAnalyzer Ghidra plugin is a standard Ghidra extension that can load, parse, and apply OOAnalyzer results to object-oriented C++ executables in a Ghidra project. The plugin is accessible in Ghidra via a new CERT menu, as shown in Figure 1. When launched, the plugin will prompt for a JSON file produced by OOAnalyzer when analyzing the same executable. It provides options for organizing recovered C++ data structures (more on this below). Upon loading the JSON file, types and symbols are updated in Ghidra to reflect the C++ data structures found by OOAnalyzer.

*Figure 1: Launching the CERT OOAnalyzer Ghidra Plugin*
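To make the "load, parse, and apply" step concrete, the fragment below sketches what a massively simplified version of that import might look like as a Ghidra Python (Jython) script. The real plugin is a full Ghidra extension, and the JSON field names used here (`structures`, `name`, `methods`, `ea`) are illustrative assumptions rather than OOAnalyzer's actual schema; only the flat-API calls (`askFile`, `toAddr`, `createLabel`) are real Ghidra functions.

```python
# Illustrative Ghidra (Jython) script -- NOT the actual OOAnalyzer plugin.
# Reads an OOAnalyzer-style JSON file and labels recovered class methods.
import json

# Ask the analyst for the JSON file produced by OOAnalyzer.
json_file = askFile("Select OOAnalyzer JSON output", "Load")
with open(json_file.getAbsolutePath()) as fp:
    results = json.load(fp)

# Field names below are assumptions for illustration only.
for cls in results.get("structures", []):
    cls_name = cls.get("name", "UnknownClass")
    for method in cls.get("methods", []):
        addr = toAddr(method["ea"])
        # Label e.g. a recovered constructor as Cls1::Cls1, mirroring
        # the C++-style names the plugin shows in the symbol tree.
        createLabel(addr, "%s::%s" % (cls_name, method["name"]), True)
```

The actual plugin does considerably more than labeling, as the next section describes: it creates and merges data types, fixes calling conventions, and reorganizes namespaces.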
#### Representing C++ Data Structures in Ghidra

C++ classes generally include methods and members. Ghidra displays these components to an analyst through a combination of the symbol tree, where program symbol information is stored, and the data type manager, where data type information is stored. Combined, these two components enable the viewing of recovered C++ data structures in Ghidra.

A before-and-after snapshot of the Ghidra symbol tree is shown in Figure 2. On the left side is the information for Cls1 prior to importing OOAnalyzer analysis. The Cls1 component already contains some class information, such as run-time type information (RTTI). The OOAnalyzer Ghidra plugin updates this information to include the methods found by OOAnalyzer, such as constructors, destructors, and virtual functions. For example, the right side of Figure 2 shows that the OOAnalyzer Ghidra plugin was able to import information about a constructor method, labeled Cls1::Cls1 as it would be in C++ by convention, and a virtual function named VIRT_FUN_00401aa0.

*Figure 2: Ghidra symbol tree prior to, and after, OOAnalyzer updates are applied.*

The symbol tree applies names and labels to a disassembly and decompilation listing. Symbols are organized into different groups including classes and functions. Once OOAnalyzer symbols are added to the tree, they are automatically recognized and applied by Ghidra. We find this especially useful in decompilation. Consider the methods decompiled by Ghidra in Figures 3 and 4. Prior to importing OOAnalyzer results, Ghidra does not know that method FUN_00401150 is a constructor for Cls1. After this information is added to the symbol table, the OOAnalyzer Ghidra plugin uses it to correct the calling convention (__fastcall is changed to __thiscall), update the return type (void is changed to Cls1*), and fix the function parameters (undefined4 *param_1 is changed to Cls1 *this).

*Figure 3: Class method prior to symbol tree update.*

*Figure 4: Class method after symbol tree update.*

The symbol tree does not contain any information about type definitions. The complete specifications for recovered C++ data structure types are inserted into the Ghidra data type manager (shown in Figure 5), where well-defined type information is stored.

*Figure 5: Ghidra data type manager with OOAnalyzer type information imported.*

Note that new and updated types are organized into a directory named *OOAnalyzer*. This arrangement allows analysts to understand exactly which types have been updated via the OOAnalyzer plugin (more on this below). There are potentially two structures created or updated for each C++ class imported via OOAnalyzer: a C++ class type structure to contain class members, and one or more class virtual function table structures to hold virtual function information. Class members may include traditional, primitive type members, complex types (such as other classes), and parent classes. Figure 6 shows the definitions for a C++ class type, a primitive member (mbr_50) and two parents (Cls1 and Cls2). Parents are treated as implicit class members, with an entire copy of each parent embedded in the child object.

*Figure 6: Ghidra structure editor for OOAnalyzer-recovered C++ class.*

The original OOAnalyzer plugin was designed with IDA Pro in mind. Ghidra has many similar, but some different, features to consider when applying OOAnalyzer results. The representation we chose for C++ objects in Ghidra is a work in progress. We continue to explore how the features available in Ghidra can work with OOAnalyzer. In particular, the way that Ghidra handles decompilation in the presence of well-defined C++ data structures that may be bound dynamically, such as virtual function pointers, requires more study. In the meantime, we think the plugin can be useful for reverse engineers and malware analysts. 
The following subsections describe other design decisions that we made in the OOAnalyzer Ghidra plugin.

#### Incorporating Ghidra-Defined Types

Ghidra includes a fairly complete set of types for standard and well-known data structures, such as types in the standard namespace. Ghidra also has an analysis pass defined to recover and apply RTTI. The presence of these types prior to importing OOAnalyzer information is welcome in the sense that it provides more information for analysis; however, the OOAnalyzer plugin must take care to determine the best way to combine the existing type information with new insights from Pharos. Rather than discard the Ghidra-provided information, the plugin evaluates and merges it with information generated by OOAnalyzer to produce a more complete type definition. The comparison and combination are based on many factors, including the number of members defined and the data type size.

#### Class Usages and Virtual Function Calls

Ghidra's decompiler automatically incorporates type information into its analysis. This feature makes the explicit application of structure types, which was required by IDA Pro, unnecessary. For example, consider the virtual function calls shown in Figure 7. On the left is the disassembly, and on the right is the decompilation. Ghidra was able to automatically determine which class and virtual function table structure to apply by incorporating the defined types into decompilation.

*Figure 7: Virtual function calls in Ghidra*

We are able to create this representation by adding new structure types in Ghidra that represent virtual function tables, with "members" that represent virtual functions. As noted above, we are still working on the best way to represent these relationships given that virtual function table types are bound to object pointers at runtime.

#### Organizing Changes in the OOAnalyzer Namespace

The last notable feature of the OOAnalyzer Ghidra plugin is its ability to add all types created or updated by the plugin to a special OOAnalyzer namespace in the Ghidra symbol tree. Ghidra uses namespaces to organize symbols and define scope. For example, symbols that are taken from the "std" C++ namespace are placed in the "std" Ghidra namespace by default. The OOAnalyzer Ghidra plugin moves all updated symbols and types to a new namespace named OOAnalyzer. This restructuring makes it easy to identify what was updated by the plugin. If this organization is not preferred, it can be disabled when the plugin is loaded.

#### Up Next

Ghidra is a compelling new tool for reverse engineers and malware analysts. It provides many interesting new features that we are still working to understand and determine how best to leverage with the Pharos Binary Analysis Framework. Be sure to keep an eye on the Pharos GitHub repository and the SEI Blog for the latest updates to our work.

#### Additional Resources

- Ghidra Framework: https://ghidra-sre.org/
- Ghidra source code on Github: https://github.com/NationalSecurityAgency/ghidra
- In-depth description of OOAnalyzer: Using Logic Programming to Recover C++ Classes and Methods from Compiled Executables: https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=539759
- SEI blog posts on the Pharos framework: https://insights.sei.cmu.edu/searchresults.html#stq=pharos&stp=1

Download the Pharos Binary Analysis Framework in GitHub: https://github.com/cmu-sei/pharos. 
true
true
true
This post explores how to use the new OOAnalyzer Ghidra Plugin to import C++ class information into the NSA's Ghidra tool and interpret results in the Ghidra SRE framework.
2024-10-12 00:00:00
2019-07-15 00:00:00
https://insights.sei.cmu…format-webp.webp
article
insights.sei.cmu.edu
SEI Blog
null
null
15,988,537
https://arstechnica.com/cars/2017/12/the-rinspeed-snap-explores-future-proofing-with-a-modular-electric-vehicle/
The Rinspeed Snap explores future-proofing with a modular electric vehicle
Jonathan M Gitlin
Image gallery captions:

- The Rinspeed Snap is an autonomous electric vehicle that tries to address the problem of lifecycles and recycling. (Rinspeed)
- It consists of a passenger pod and a skateboard chassis. The pod is built to be durable; the skateboard is designed to work hard for a few years and then be recycled. (Rinspeed)
- ZF's Intelligent Dynamic Driving Chassis underpins the skateboard. (ZF)
- There's an "intelligent robot" onboard which can help with tedious tasks, apparently. (Rinspeed)
- The same company that supplies electrochromatic glass for the Boeing 787 also supplies the windows for the Snap, so they can dim for privacy. (Rinspeed)
- The innovative design of the front axle together with electric power steering means that the front wheels can be turned at angles of up to 75 degrees. (ZF)
- The ZF drive unit fits an integrated electric motor, the transmission, a differential, and power electronics into the smallest of spaces. (ZF)
- Screens everywhere! (Rinspeed)
- Who needs to talk to their friends, after all? (Rinspeed)

One doesn't normally think of the Swiss as terribly eccentric, but you could make an exception for Rinspeed boss Frank Rinderknecht. Some of his past creations have been pretty out there, like the sQuba, which started out as a Lotus Elise and ended up a fully electric vehicle that could also dive underwater. Or the Ʃtos, based on a BMW i8 hybrid but with its own drone that can deliver flowers. But interspersed among the wackiness are some clever ideas. In 2009, the company's iChange used an iPhone as a key and controller—sorry Tesla fans, Elon didn't think of that one first! And at this year's Detroit auto show we also saw the Oasis, an adaptable, autonomous city vehicle that bridges the gap between Blade Runner and our current timeline.

By the standards of some of those past creations, Rinspeed's latest work is almost entirely sane. It's called the Snap, and it's a modular vehicle made up of a "skateboard" and "pod." The impetus behind the Snap was to address the fact that some automotive components now have much shorter lifecycles than we're used to. Software, processors, and batteries soon become obsolete in a way that analog gauges or steel body panels never will. So the pod is built to last and to function when detached from the skateboard, which in turn has been designed to be recycled after a short-but-intensive life.

Everyone needs friends

Rinspeed hasn't been working alone, and the Snap is meant to showcase the work of some of its partners, like Harman and ZF. It's envisaged as a level 5 autonomous vehicle—no steering wheel or pedals here—with a built-in digital "personal assistant," an "intelligent robot to accompany the occupants." Rinspeed says that the assistant will even "be happy to help with running errands, carrying purchases, or handle other tedious tasks." Hey, I did say the Snap was almost sane. 
ZF supplies a lot of the technology that goes into the skateboard, which Rinspeed calls the "Intelligent Dynamic Driving Chassis." It's designed for urban driving and maximum range, so the electric motor (which drives the rear wheels) is just 50kW (67hp). The front axle uses the same EasyTurn steering system as the Oasis, which lets the wheels achieve a steering angle of up to 75 degrees, and the rear wheels can also steer up to 14 degrees. This would make the Snap a lot more nimble than its wheelbase would suggest. There's no suggested price or date for production—Rinspeed isn't that kind of design house. But it is very good at making the rest of us think about the evolving role of our transport. I can't wait to see it at CES in a couple of weeks. Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
true
true
true
It separates the durable passenger pod from an easy-to-recycle propulsion “skateboard.”…
2024-10-12 00:00:00
2017-12-22 00:00:00
https://cdn.arstechnica.…/12/snap_026.jpg
article
arstechnica.com
Ars Technica
null
null
39,567,115
https://astroblogger.blogspot.com/2012/06/transit-of-earth-from-mars-november-10.html
Astroblog
Ian Musgrave
### Tuesday, June 05, 2012

## The Transit of Earth from Mars, November 10, 2084

While from Earth you can see transits of Mercury and Venus, from Mars you get transits of Earth as well. Like transits of Venus from Earth, transits of Earth from Mars usually occur in pairs, with one following the other after 79 years, followed by a hundred-year gap. The last was in 1984, and the next will be November 10, 2084.

The cool thing about transits of Earth from Mars is you also get to see the Moon as well. The image to the left is the Stellarium simulation of the transit of Earth in 2084. The next transit of Venus to be seen from Mars is in 2030.

Labels: Mars, stellarium, transit, Venus
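For anyone who wants to check the geometry rather than take the Stellarium screenshot on faith, a few lines of Python with the Skyfield library will do it. This is a sketch, assuming the `skyfield` package and the long-span JPL ephemeris `de440s.bsp` (which covers 1849-2150). During a transit, the apparent Earth-Sun separation seen from Mars must be smaller than the Sun's apparent radius as seen from Mars, roughly 10 arcminutes; scanning a few hours around 0h UTC on November 10, 2084 shows the ingress and egress.

```python
from skyfield.api import load

ts = load.timescale()
t = ts.utc(2084, 11, 10)          # around the mid-transit date

eph = load('de440s.bsp')          # long-span ephemeris, 1849-2150
mars = eph['mars barycenter']     # close enough to Mars for this purpose
earth, sun = eph['earth'], eph['sun']

# Apparent positions of Earth and the Sun as seen from Mars.
e = mars.at(t).observe(earth).apparent()
s = mars.at(t).observe(sun).apparent()

sep = e.separation_from(s)
print(sep.degrees * 60.0, "arcminutes")  # under ~10' => Earth on the disk
```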
true
true
true
null
2024-10-12 00:00:00
2012-06-05 00:00:00
null
null
null
null
null
null
30,574,495
https://web.archive.org/web/20220306022502/https://blog.coursera.org/coursera-response-to-the-humanitarian-crisis-in-ukraine/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,560,493
http://arstechnica.com/security/2014/11/critics-chafe-as-macs-send-sensitive-docs-to-icloud-without-warning/
Critics chafe as Macs send sensitive docs to iCloud without warning
Dan Goodin
Representing a potential privacy snare for some users, Mac OS X Yosemite uploads documents opened in TextEdit, Preview, and Keynote to iCloud servers by default, even if the files are later closed without ever having been saved.

The behavior, as noted in an article from Slate, is documented in a Knowledge Base article from December. But it nonetheless came as a surprise to researcher Jeffrey Paul, who said he was alarmed to recently discover a cache of in-progress files he intended to serve as "temporary Post-It notes" that had been silently uploaded to his iCloud account even though he never intended or wished them to be.

"Apple has taken local files on my computer not stored in iCloud and silently and without my permission uploaded them to their servers," Paul wrote in a recent blog post.

Once upon a time, in-progress files were stored locally on a Mac, a design that gave users more ability to prevent sensitive files—say, those created on the fly to store passwords, a Social Security Number, or confidential attorney-client work product—from being accessed via law enforcement or national security dragnets. Whereas locally stored files residing on a FileVault-protected Mac require the adversary to have physical access and possession of the crypto key, the bar for accessing files stored in iCloud is lower, according to former National Security Agency contractor Edward Snowden.
true
true
true
PSA: Turn off autosave of in-progress documents containing sensitive data.
2024-10-12 00:00:00
2014-11-03 00:00:00
https://cdn.arstechnica.…logo-512_480.png
article
arstechnica.com
Ars Technica
null
null
19,571,510
https://spin.atomicobject.com/2019/04/04/vscode-multiple-cursors/#.XKX7sQ0DXis.hackernews
Working with Multiple Cursors in Visual Studio Code
Greg Williams
### Article summary

Visual Studio Code has been gaining popularity and has replaced Sublime Text as my editor of choice. It brought along many of my favorite features of Sublime, including its multiple-cursor magic, which is especially great for refactoring. Here are some tips for getting started!

## What Is Multi-Cursor Mode?

While multi-cursor mode can be used for column-mode editing, it is *much* more powerful! It’s column-mode on steroids! Basically, you can place as many cursors in as many places as you want in a single editor view. Once you have your cursors placed, you continue editing, and all operations will be applied to all cursors simultaneously.

## Three Ways to Add Cursors

### The basic mousey way

The most intuitive way to start adding multiple cursors is with the mouse. Assuming you already have your first cursor placed, you just hold down the **Alt** key, click where you would like to add another cursor, and then repeat as many times as you’d like. I believe the middle mouse button can be configured so you don’t need to hold down the **Alt** key.

### The column-mode way

If you would like to edit a number of lines at the column position, you can add additional cursors above or below the initial cursor. Just hold down **Command+Alt** and use the **Up** and/or **Down** arrows to add new cursors above or below, respectively. This is very handy for aligning things in columns since you can skip over white space and words using **Ctrl** and/or **Alt** as you would with a single cursor. It’s great for editing data files, such as CSV, and it’s also useful when extracting data from log files.

### The find-and-replace way

This is possibly my favorite method. If you have a word highlighted, you can select the next occurrence of that word by pressing **Command+d**, then repeat until you have all instances of the word selected. Alternatively, you can add a cursor to all occurrences of the current selection with **Command+Shift+L**. Then, you just type the replacement, and all instances are updated with each keystroke!

You might end up selecting more instances of the desired word or phrase than you wanted. Luckily, **Command+u** will undo the last multi-cursor operation. This will prevent you from starting allllllll over, which is especially helpful when you are selecting a lot of instances.

You can use the regular find-and-replace dialog, but I find this much more convenient and intuitive.

## Give It a Shot!

Now that you are armed with the basics, go ahead and give it a try. Once you get the hang of this method, you will wonder how you survived without it. You may even blow the minds of some of your Vim or Emacs guru friends or co-workers!

You can select all of the instances with: command + control + g

Perfect if you need to replace a variable.

Thanks Kyle. I actually ran across that yesterday as well. I will add that in. Thanks for the feedback!

thats cool!
true
true
true
Visual Studio Code has replaced Sublime Text as my editor of choice, and it has awesome Sublime features like multiple cursors.
2024-10-12 00:00:00
2019-04-04 00:00:00
https://spin.atomicobjec…ads/VS-Code1.jpg
article
atomicobject.com
Atomic Object
null
null
11,605,552
http://chrisnielsen.ws/keeping-up-with-all-the-new-stuff/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,370,230
http://news.cnet.com/8301-1035_3-57602372-94/the-real-reasons-apples-64-bit-a7-chip-makes-sense/?tag=nl.e703&s_cid=e703&ttag=e703&ftag=CAD090e536
CNET: Product reviews, advice, how-tos and the latest news
Jon Reed
true
true
true
Get full-length product reviews, the latest news, tech coverage, daily deals, and category deep dives from CNET experts worldwide.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.cnet.com/a/i…t=675&width=1200
website
cnet.com
CNET
null
null
23,467,286
https://blog.objectivity.co.uk/bring-real-time-frontend-solutions-to-your-mendix-low-code-app/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,479,264
https://www.aljazeera.com/economy/2024/2/23/after-anti-woke-backlash-googles-gemini-faces-heat-over-china-taboos
Google’s Gemini criticised over China images amid anti-‘woke’ backlash
Erin Hale
# Google’s Gemini criticised over China images amid anti-‘woke’ backlash

*Gemini users report refusal to show images of the 1989 Tiananmen Square massacre and Hong Kong pro-democracy protests.*

**Taipei, Taiwan – **As Google finds itself embroiled in an anti-“woke” backlash over AI model Gemini’s reluctance to depict white people, the tech giant is facing further criticism over the chatbot’s handling of sensitive topics in China.

Gemini users reported this week that the update to Google Bard failed to generate representative images when asked to produce depictions of events such as the 1989 Tiananmen Square massacre and the 2019 pro-democracy protests in Hong Kong.

On Thursday, X user Yacine, a former software engineer at Stripe, posted a screenshot of Gemini telling a user it could not generate “an image of a man in 1989 Tiananmen Square” – a prompt alluding to the iconic image of a protester blocking the path of a Chinese tank – due to its “safety policy”.

Stephen L Miller, a conservative commentator in the US, also shared a screenshot on X purporting to show Gemini saying it was unable to generate a “portrait of what happened at Tiananmen Square” due to the “sensitive and complex” historical nature of the event.

“It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation,” Gemini said, according to a screenshot shared by Miller.

Some restrictions related to China appeared to extend beyond images. Kennedy Wong, a PhD student at the University of California, said that Gemini had declined to translate into English a number of Chinese phrases deemed illegal or sensitive by Beijing, including “Liberate Hong Kong, Revolution Of Our Times” and “China is an authoritarian state”.

“For some reason, the AI cannot process the request, citing their security policy,” Wong said on X, noting that OpenAI’s ChatGPT was able to process the request.

So, I asked Gemini (@GoogleAI) to translate the following phrases that are deemed sensitive in the People's Republic of China. For some reason, the AI cannot process the request, citing their security policy (see the screenshots below).@Google pic.twitter.com/b2rDzcfHJZ

— kennedywong (@KennedyWongHK) February 20, 2024

The discussion drew the attention of Yann LeCun, chief AI scientist at rival Meta, who said Gemini’s handling of topics to do with China raised questions about transparency and censorship.

“We need open-source AI foundation models so that a highly diverse set of specialized models can be built on top of them. We need a free and diverse set of AI assistants for the same reasons we need a free and diverse press,” LeCun said on X.

“They must reflect the diversity of languages, culture, value systems, political opinions, and centers of interest across the world.”

Gemini’s aversion to depicting controversial moments of history also appears to extend beyond China, although the criteria for determining what it will and will not show are unclear.

On Thursday, a request by Al Jazeera for images of the January 6, 2021 attack on the US Capitol was refused because “elections are a complex topic with fast-changing information”.

The criticism of Gemini’s approach to China adds to an already difficult and embarrassing week for Google. The California-based tech giant on Thursday announced that it would temporarily suspend Gemini from generating images of people after a backlash over its apparent reluctance to depict white people. 
Google said in a statement that it was “aware that Gemini is offering inaccuracies in some historical image generation depictions” and was working to correct the issue. While various AI models have been criticised for underrepresenting people of colour and perpetuating stereotypes, Gemini has been lambasted for overcorrecting, such as by generating images of Black and Asian Nazi soldiers and Asian and female American legislators during the 19th century. Much like rival GPT-4 from OpenAI, Gemini was trained on a wide range of data, including audio, image, video, text, and code in multiple languages. Google’s chatbot, which relaunched and rebranded earlier this month, has been widely seen as lagging behind rival GPT-4. Google did not immediately respond to Al Jazeera’s queries about China-related content. But the tech giant does already appear to be updating Gemini in real time. On Thursday, Gemini, while still declining to generate images of Tiananmen Square and the Hong Kong protests, began providing lengthier answers that included suggestions of where to seek out more information. By Friday, the chatbot readily produced images of the protests when prompted. It's embarrassingly hard to get Google Gemini to acknowledge that white people exist pic.twitter.com/4lkhD7p5nR — Deedy (@debarghya_das) February 20, 2024 Not everyone agrees with the criticism directed towards Gemini. Adam Ni, co-editor of the newsletter China Neican, said he believes Gemini made the right call with its cautious approach to historic events like Tiananmen Square due to their complexity. Ni said that while the June 4 crackdown on Tiananmen Square is iconic, the protest movement also included weeks of peaceful demonstrations that would be difficult to capture in a single AI image. “The AI image then needs to account for both the expression of youthful exuberance and hope, and the iron fist that crushed it, and numerous other worthy themes,” Ni told Al Jazeera. “Tiananmen is not all about the tanks, and our myopia harms broader understanding.”
true
true
true
Gemini users report refusal to show images of the 1989 Tiananmen Square massacre and Hong Kong pro-democracy protests.
2024-10-12 00:00:00
2024-02-23 00:00:00
https://www.aljazeera.co…2C630&quality=80
article
aljazeera.com
Al Jazeera
null
null
31,248,755
https://www.newscientist.com/article/2318267-raspberries-are-a-battleground-between-flies-yeast-and-fungi/
Raspberries are a battleground between flies, yeast and fungi
Adrian Barnett
The unassuming raspberry plays host to an ecological battleground, as a fly, a yeast and a fungus vie for dominance on its surface. Raspberries produce ethylene gas as they mature, and often get colonised by *Botrytis cinerea*, the grey mould you find on fruit left too long in the fridge. This poses a problem for the fruit fly *Drosophila suzukii*, which feeds on raspberries, as its larvae are very sensitive to both the fungus and ethylene. Now, Paul Becher at the Swedish University of Agricultural Sciences (SUAS) and his…
true
true
true
A species of fly works together with a yeast to combat a raspberry-bound fungus that threatens the insects' larvae
2024-10-12 00:00:00
2022-05-02 00:00:00
https://images.newscient…I_1012578001.jpg
article
newscientist.com
New Scientist
null
null
8,528,381
http://www.bloomberg.com/news/2014-10-28/facebook-s-22-billion-whatsapp-deal-buys-10-million-in-sales.html
Bloomberg
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
7,062,168
https://www.youtube.com/watch?v=CXv1j3GbgLk
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,080,096
https://www.youtube.com/watch?v=pvCiIk9P4os
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,856,298
http://devblog.avdi.org/2016/01/06/about-the-ruby-squiggly-heredoc-syntax/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,236,919
http://theorangeview.net/2011/02/apple-dominates-app-market-android-trails-far-behind/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,422,111
https://www.economist.com/books-and-arts/2020/06/06/for-hayao-miyazaki-flight-is-a-metaphor-for-freedom
For Hayao Miyazaki, flight is a metaphor for freedom
null
# For Hayao Miyazaki, flight is a metaphor for freedom

## Discover the consoling imagination of a star Japanese animator

HAYAO MIYAZAKI has spent his career conjuring up fantastical worlds full of outlandish creatures. “Spirited Away” (2001), which won an Oscar for best animated film, is set in a magical realm ruled by a bejewelled witch and populated by talking frogs, gremlins made of soot and a vaporous creature who emits gold nuggets from his fingertips. Amid today’s pandemic, one feature of Mr Miyazaki’s escapist movies is particularly intoxicating: his obsession with flying.

This article appeared in the Culture section of the print edition under the headline “Hayao Miyazaki’s flights”
true
true
true
Discover the consoling imagination of a star Japanese animator
2024-10-12 00:00:00
2020-06-04 00:00:00
https://www.economist.co…606_BKP001_0.jpg
Article
economist.com
The Economist
null
null
3,200,934
http://www.ecb.int/ecb/educational/inflationisland/html/index.en.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,232,870
https://github.com/daleroberts/hdmedians
GitHub - daleroberts/hdmedians: High-dimensional medians (medoid, geometric median, etc.). Fast implementations in Python.
Daleroberts
Did you know there is no unique way to mathematically extend the concept of a median to higher dimensions? Various definitions for a **high-dimensional median** exist and this Python package provides a number of fast implementations of these definitions. Medians are extremely useful due to their high breakdown point (up to 50% contamination) and have a number of nice applications in machine learning, computer vision, and high-dimensional statistics. This package currently has implementations of medoid and geometric median with support for missing data using `NaN`. The latest version of the package is always available on pypi, so it can be easily installed by typing: ``` pip3 install hdmedians ``` Given a finite set $\mathbb{X} = \{x_1, x_2, \ldots, x_n\}$ of $p$-dimensional observation vectors, the medoid $m$ of these observations is given by $m = \underset{x \in \mathbb{X}}{\arg\min} \sum_{i=1}^{n} \lVert x - x_i \rVert$. The current implementation of `medoid` is in vectorized Python and can handle any data type supported by ndarray. If you would like the algorithm to take care of missing values encoded as `nan` then you can use the `nanmedoid` function. Create a 6 x 10 array of random integer observations. ``` >>> import numpy as np >>> X = np.random.randint(100, size=(6, 10)) >>> X array([[12, 9, 61, 76, 2, 17, 12, 11, 26, 0], [65, 72, 7, 64, 21, 92, 51, 48, 9, 65], [39, 7, 50, 56, 29, 79, 47, 45, 10, 52], [70, 12, 23, 97, 86, 14, 42, 90, 15, 16], [13, 7, 2, 47, 80, 53, 23, 59, 7, 15], [83, 2, 40, 12, 22, 75, 69, 61, 28, 53]]) ``` Find the medoid, taking the last axis as the number of observations. ``` >>> import hdmedians as hd >>> hd.medoid(X) array([12, 51, 47, 42, 23, 69]) ``` Take the first axis as the number of observations. ``` >>> hd.medoid(X, axis=0) array([39, 7, 50, 56, 29, 79, 47, 45, 10, 52]) ``` Since the medoid is one of the observations, the `medoid` function has the ability to only return the index if required. ``` >>> hd.medoid(X, indexonly=True) 6 >>> X[:,6] array([12, 51, 47, 42, 23, 69]) ``` The geometric median is also known as the 1-median, spatial median, Euclidean minisum point, or Torricelli point. Given a finite set $\mathbb{X} = \{x_1, x_2, \ldots, x_n\}$ of $p$-dimensional observation vectors, the geometric median $\hat{\mu}$ of these observations is given by $\hat{\mu} = \underset{\mu \in \mathbb{R}^p}{\arg\min} \sum_{i=1}^{n} \lVert \mu - x_i \rVert$. Note there is a subtle difference between the definition of the geometric median and the medoid: the search space for the solution differs and has the effect that the medoid returns one of the true observations whereas the geometric median can be described as a synthetic (not physically observed) observation. The current implementation of `geomedian` uses Cython and can handle `float64` or `float32`. If you would like the algorithm to take care of missing values encoded as `nan` then you can use the `nangeomedian` function. Create a 6 x 10 array of random `float64` observations. ``` >>> import numpy as np >>> np.set_printoptions(precision=4, linewidth=200) >>> X = np.random.normal(1, size=(6, 10)) >>> X array([[ 1.1079, 0.5763, 0.3072, 1.2205, 0.8596, -1.5082, 2.5955, 2.8251, 1.5908, 0.4575], [ 1.555 , 1.7903, 1.213 , 1.1285, 0.0461, -0.4929, -0.1158, 0.5879, 1.5807, 0.5828], [ 2.1583, 3.4429, 0.4166, 1.0192, 0.8308, -0.1468, 2.6329, 2.2239, 0.2168, 0.8783], [ 0.7382, 1.9453, 0.567 , 0.6797, 1.1654, -0.1556, 0.9934, 0.1857, 1.369 , 2.1855], [ 0.1727, 0.0835, 0.5416, 1.4416, 1.6921, 1.6636, 1.6421, 1.0687, 0.6075, -0.0301], [ 2.6654, 1.6741, 1.1568, 1.3092, 1.6944, 0.2574, 2.8604, 1.6102, 0.4301, -0.3876]]) >>> X.dtype dtype('float64') ``` Find the geometric median, taking the last axis as the number of observations.
``` >>> import hdmedians as hd >>> hd.geomedian(X) array([ 1.0733, 0.8974, 1.1935, 0.9122, 0.9975, 1.3422]) ``` Take the first axis as the number of observations. ``` >>> hd.geomedian(X, axis=0) array([ 1.4581, 1.6377, 0.7147, 1.1257, 1.0493, -0.091 , 1.7907, 1.4168, 0.9587, 0.6195]) ``` Convert to `float32` and compute the geometric median. ``` >>> X = X.astype(np.float32) >>> m = hd.geomedian(X) ``` - Small, C. G. (1990). A survey of multidimensional medians. *International Statistical Review/Revue Internationale de Statistique*, 263-277.
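To make the two definitions above concrete, here is a naive NumPy cross-check (my own sketch, not part of the package: `naive_medoid` and `naive_geomedian` are invented names, and the Weiszfeld iteration is just one standard way to solve the geometric-median minimisation; the package's Cython implementation may differ). Observations are stored as columns, matching the default `axis=1` convention in the examples above.

```python
# Naive cross-check of the definitions above (illustrative only; this is
# not how hdmedians implements them).
import numpy as np

def naive_medoid(X):
    """Medoid of the columns of X: the observation minimising the sum of
    Euclidean distances to all observations."""
    diffs = X[:, :, None] - X[:, None, :]      # (p, n, n) pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=0))  # (n, n) distance matrix
    return X[:, dists.sum(axis=1).argmin()]    # column with the smallest total

def naive_geomedian(X, iters=1000, eps=1e-8):
    """Geometric median of the columns of X via Weiszfeld's iteration."""
    y = X.mean(axis=1)                                 # start at the centroid
    for _ in range(iters):
        d = np.maximum(np.sqrt(((X - y[:, None]) ** 2).sum(axis=0)), eps)
        y_new = (X / d).sum(axis=1) / (1.0 / d).sum() # distance-weighted mean
        if np.allclose(y, y_new):
            break
        y = y_new
    return y
```

On data without `nan` values, both should agree with `hd.medoid` and `hd.geomedian` up to numerical tolerance; the medoid version is O(n²) in the number of observations, which is exactly why a fast implementation is worth having.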
true
true
true
High-dimensional medians (medoid, geometric median, etc.). Fast implementations in Python. - daleroberts/hdmedians
2024-10-12 00:00:00
2017-02-01 00:00:00
https://opengraph.githubassets.com/70bdbdc4ad280098ba9264ac6fc8fafaee9ad2ec16ddfc0c0be3c948f2e4e7c1/daleroberts/hdmedians
object
github.com
GitHub
null
null
20,339,769
https://www.clearcutip.com/2019/07/02/what-is-open-source-and-how-do-we-use-it/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,215,140
https://www.bloomberg.com/view/articles/2018-01-23/the-blockchain-is-not-the-world
Bloomberg
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
9,590,937
http://www.windowscentral.com/bill-gatess-personal-agent-project-microsoft-might-be-called-office-now
Bill Gates' 'Personal Agent' project for Microsoft might be called Office Now
John Callaham
# Bill Gates' 'Personal Agent' project for Microsoft might be called Office Now Microsoft is currently developing a personal assistant app for Windows, iOS and Android called Office Now. The features of this app sound very similar to something that Microsoft co-founder Bill Gates recently mentioned he was working on with the company, which he called "Personal Agent". Gates mentioned this as part of his Reddit AMA session earlier this year: Based on the screenshots and description posted by *Neowin* on Office Now, it sounds very similar to Gates' "Personal Agent". The app will also reportedly give users an alert if a meeting's time or date has been changed or cancelled. As with all of the unreleased Microsoft apps that have been revealed this week, there's no word on when, or even if, Office Now might be released. Source: Neowin ## Get the Windows Central Newsletter All the latest news, reviews, and guides for Windows and Xbox diehards.
true
true
true
Microsoft is internally developing Office Now, a personal assistant app designed for the business user. The app is being made for Windows, iOS and Android platforms but there's no word ...
2024-10-12 00:00:00
2015-05-22 00:00:00
https://cdn.mos.cms.futu…HQe7-1200-80.jpg
article
windowscentral.com
Windows Central
null
null
40,127,738
https://www.multicians.org/shell.html
The Origin of the Shell
Louis Pouzin
CTSS was developed during 1963 and 64. I was at MIT on the computer center staff at that time. After having written dozens of commands for CTSS, I reached the stage where I felt that commands should be usable as building blocks for writing more commands, just like subroutine libraries. Hence, I wrote "RUNCOM", a sort of shell driving the execution of command scripts, with argument substitution. The tool became instantly most popular, as it became possible to go home in the evening while leaving behind long runcoms executing overnight. It was quite neat for boring and repetitive tasks such as renaming, moving, updating, compiling, etc. whole directories of files for system and application maintenance and monitoring. In the same vein, I also felt that commands should be usable as library subroutines, or vice versa. This stemmed from my practice (unique at the time) of writing CTSS commands in MAD (Michigan Algorithm Decoder), a simplified Algol-like language. It was much faster and the code was more maintainable than the IBM 7094 assembly code. Since I needed MAD-friendly subroutine calls to access CTSS primitives, I wrote in assembly code a battery of interface subroutines, which very often mimicked CTSS basic command functions. Or I wanted to make commands out of subroutines which handled common chores. I felt it was an awkward duplication of effort. However, I did not go further in the context of CTSS. Then in 64 came the Multics design time, in which I was not much involved, because I had made it clear I wanted to return to France in mid 65. However, this idea of using commands somehow like a programming language was still in the back of my mind. Christopher Strachey, a British scientist, had visited MIT about that time, and his macro-generator design appeared to me a very solid base for a command language, in particular the techniques for quoting and passing arguments. Without being invited on the subject, I wrote a paper explaining how the Multics command language could be designed with this objective. And I coined the word "shell" to name it. It must have been at the end of 64 or beginning of 65. (See The SHELL: A Global Tool for Calling and Chaining Procedures in the System and RUNCOM: A Macro-Procedure Processor for the 636 System) The small gang of Multics wizards found it a sleek idea, but they wanted something more refined in terms of language syntax. As time left to me was short, and I was not an expert in language design, I left the issue for them to debate, and instead I made a program flowchart of the shell. It was used after I left for writing the first Multics shell. Glenda Schroeder (MIT) and a GE man did it. Time-sharing was a misnomer. While it did allow the sharing of a central computer, its success derives from the ability to share other resources: data, programs, concepts. It cracked a critical path bottleneck for writing and debugging programs. In theory this could have been achieved as well with a direct access approach. In practice it could not. Direct access hems users in a static framework. Evolution is infrequent and controlled by central and distant agents. Creativity is out of the user's hand. Time sharing, as it became popular, is a living organism in which any user, with various degrees of expertise, can create new objects, test them, and make them available to others, without administrative control and hassle. With the internet experience, this no longer need be substantiated. Posted to feb_wwide 25 Nov 2000
true
true
true
How RUNCOM was created for CTSS and the shell was designed for Multics by Louis Pouzin.
2024-10-12 00:00:00
2000-11-27 00:00:00
null
null
null
null
null
null
10,033,845
http://aeon.co/magazine/science/quantum-biology-the-uncanny-order-of-life
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,956,383
http://www.pirateparty.org.uk/blog/2012/may/10/pirate-bay-proxy-open-internet-and-censorship/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,584,690
https://beth.technology/rokus-stock-price-will-there-be-another-pullback/
Roku’s Stock Price: Will There Be Another Pullback?
null
# Roku’s Stock Price: Will There Be Another Pullback? November 15, 2019 ### Knox Ridley #### Portfolio Manager Roku’s stock price is up by almost 500% over the past two years. Compare this to the S&P 500, which is up less than 25%. That’s 20X the returns of the average stock. The upward trend has not been a straight line. Roku’s stock price has had four major drawdowns that average about 52%. Two of these drawdowns were greater than 60%. Being long a volatile company like Roku since its IPO is not easy, and it especially takes increased conviction to stay long Roku as we approach the end of the current cycle. However, those who were insightful enough to see from inception that Roku is not a hardware play, nor a content-generating OTT play, but instead a connected TV advertising play, have been able to hold Roku through the drawdowns despite market noise. In this report, I will look at the fundamental case for buying Roku stock. I will also perform a technical analysis of the company’s stock price, as entry and exit are crucial for high-growth stocks. This technical analysis reflects the choppy reaction to the company’s third-quarter earnings report. ## Roku’s Fundamental Background Roku is one of the most misunderstood names in technology. A common argument against Roku is that it is a small company with no moat in the streaming industry. They also argue that competition from cash-gushing companies like Apple, Google, and Amazon will threaten its lead. In reality, the opposite is true. Roku may be small in comparison, yet it still leads with 39% market share in OTT hardware in the United States, compared to Amazon in second place at 30%. In the most recent quarterly release, the company announced that its users had grown to more than 32.3 million. This is nearly double what the company had in Q2 2017 with 15 million users. The average revenue per user has grown from $11.22 in Q2’17 to more than $22. The ad platform segment of Roku’s business is the fastest growing and most important. It is also a high-margin business. In the 2017 financial year, the segment had more than $225 million in revenue. This revenue rose to $416 million in 2018. In the most recent quarter, the platform segment grew by 79% to more than $179 million. **Also Read:** Roku Q3 Earnings Another misconception is that Roku competes with the likes of Disney+, Netflix, and HBO Go because of the subscription service it offers. In reality, the company does not compete directly with these companies, even with its SVOD platform. This is because Roku is mostly in the business of serving adverts and using its data to provide a better ad experience. My partner, Beth Kindig, covers this in more detail in her fundamental analysis (here, here, here). The closest competitors to Roku are Amazon and Hulu. Comcast’s Peacock, which will be an ad-supported streaming platform, will also be a competitor, but only domestically. This is because these companies compete for connected TV ad dollars. Roku has an added advantage because of the vast data it has on its consumers due to owning the hardware. Also, the agnostic nature of Roku’s business makes it favorable for smart TV manufacturers. This is because it does not compete with them on the level that Google or Amazon does. One final note on Roku: valuation is a constant issue that bears have talked about. It is true that the company appears to be overvalued. The company is valued at more than $15 billion.
This is a premium for a loss-making company that is expected to generate more than $1.1 billion in revenue this year. The company has a forward P/S ratio of 9.9, which is a significant premium. Consider that companies like Amazon, Netflix, and Spotify have a forward P/S ratio of less than 6. ### Technical Outlook for Roku’s Stock Price ### Roku Volume Report The volume activity in Roku tells us a lot about the current environment we are in, as well as what institutions are thinking. “Smart money,” or institutions, have teams of analysts and professional traders moving large amounts of cash. This typically shows up as massive volume spikes, coupled with noticeable changes in the stock price. The price at which they decide to buy in bulk, or sell in bulk, typically acts as new support/resistance that the price must push through. What’s noticeable is that around the $127 region, we went from seeing predominantly green volume spikes to predominantly red volume spikes. The zone in which we are seeing these large liquidations is the $158-$127 region. **Also Read:** Update on $ROKU – Will Roku Miss Earnings? This will be a lot of liquidity to make up, and we usually will see a shift in momentum when the reverse occurs, i.e., large green volume spikes coinciding with a noticeable shift in price. Until I see us break through the $158-$163 region, with new increased volume spikes, I would be cautious of the current retracement back to new highs. However, it’s worth noting that this shift could be starting to occur, with rising green bars suggesting a renewed interest. I’d like to see institutions take out large positions at current levels before getting excited. So far, the only large volume spikes in this region have been to the downside. ### Insider Activity Insider buying is significantly more notable than insider selling. This is especially true when dealing with a high-growth company that just went public; also, there could be numerous personal reasons why insiders are selling. But it’s worth noting that all the insider activity in Roku since its IPO has been selling, with zero buying. Nobody knows this business better than the insiders, and what they do, or do not do, can give insight into where they see growth vs. market valuation. It’s worth noting that no insiders are buying their shares at current prices, which I’d agree makes sense if you are a buy-and-hold investor with a long time frame. However, in the short term, there could be plenty of momentum left in Roku. ### Internal Strength of Roku’s Stock Price Going into earnings, we had cautioned our readers that $131-$127 was support and that resistance was at the $156-$158 region. Any trades in this region on this stock were higher risk. We were correct, as the stock dropped to $119 but quickly bounced back. It has now been climbing and has even posted some marginal gains since the prior earnings drop, and we are approaching a critical price cluster. Simply put, if the stock price breaks $163 and closes well above this price, then I’ll be targeting the region above $200 before any major drawdown occurs. However, this will require a broader macro bull market. I think it is more likely Roku remains choppy, with lower entries available than where it is priced right now. The internals support this position as well, as of now. In the above chart, the MACD has rolled over, and just recently flipped back up, suggesting strong short-term momentum. Until it breaks above the most recent high on the MACD, this could be a fake-out.
The RSI is confirming caution as well. Until we can break the 70 line, which has historically indicated a bullish posture, I’d be cautious on the current uptrend as Roku continues to trade between support and resistance. We are currently oscillating between the 40 line, which has been bullish support, and the 70 line, which has been bullish resistance. **Also Read:** Here’s Why Roku Will Be The Next Tech Darling ### Elliott Wave Counts and Internal Strength Many investors are playing momentum with Roku right now, and we believe this is the correct strategy at current prices. Going long Roku today should be done with stops in place or a systematic exit strategy to protect any gains. Therefore, Elliott Wave is the preferred method for increasing the probability of successful entries on long positions for a momentum trade, as well as for setting ideal targets for a longer-term time frame. Above is the 30-minute chart of Roku going back from its all-time high. My primary Elliott Wave count has Roku’s stock price completing its larger degree Wave 3 push just above the 138.2% extension at its all-time highs. This is historically a low top for a 3rd Wave, which usually targets the 161.8% extension. If Roku can break back above the 78.6% retrace and then take back the 138.2% extension around $163, we will likely see a push to the 161.8% extension before any significant drawdown that would constitute a 4th Wave correction (this is shown as an “alt (3)” and “alt (4)” on the chart). As of now, the evidence supports that Roku is in its 4th Wave correction, and as long as it stays below current resistance, there will be chances for lower entries on a more long-term basis. However, if we close above $163, I will likely add to my current position with tight stops to play renewed momentum as Roku powers to new highs.
true
true
true
Roku’s stock price is up by almost 500% over the past two years. Compare this to the S&P 500, which is up less than 25%. That’s 20X more returns than the average stock....
2024-10-12 00:00:00
2021-02-05 00:00:00
https://images.prismic.i…800&w=1200&h=800
article
io-fund.com
IO Fund
null
null
4,307,530
http://whatblag.com/2012/07/28/os-x-mountain-lion-reminders-and-due-dates/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,221,087
http://blogs.msdn.com/alfredth/archive/2010/03/26/where-does-computer-science-belong.aspx
Computer Science Teacher - Thoughts and Information from Alfred Thompson
null
# Computer Science Teacher - Thoughts and Information from Alfred Thompson Alfred Thompson's blog about teaching computer science at the K-12 level. Alfred was a high school computer science teacher for 8 years. He has also taught grades K-8 as a computer specialist. ### Don’t Panic The other day I was looking through the analytics for this blog to see what sort of searches people... Date: 09/01/2012 ### Computer Controversies For Fun and Discussion I love a good discussion. Pros and cons and honest and hopefully friendly discussion of issue with... Date: 08/31/2012 ### Cloud Fundamentals Video Series The Trustworthy Computing group has been recording a series on Cloud computing fundamentals. The... Date: 08/30/2012 ### Ten Commandments of Computer Ethics I ran into these Ten Commandments of Computer Ethics created by the Computer Ethics Institute while... Date: 08/29/2012 ### Useful Download Links for Windows 8 and Windows Phone Development Part of my job is to help people find valuable resources. I especially like it when they are free.... Date: 08/28/2012 ### Interesting Links 27 August 2012 Back to school hit home today as my wife went back to school for the first day of teachers. Kids... Date: 08/27/2012 ### Online Coding Exercises For Programming Education Well it is that time of year again – back to school. On the SIGCSE mailing list are a couple... Date: 08/22/2012 ### Interesting Links 20 August 2012 Back from vacation and trying to catch up with things. I had over 450 unread email messages in spite... Date: 08/20/2012 ### How To Read Code We don’t teach students how to read code. Actually we don’t even teach them that they should read... Date: 08/16/2012 ### Try Kinect at your K-12 School Capturing students' interest and making concepts come alive is an educator's greatest... Date: 08/15/2012 ### Recursion First I’ve long had mixed feelings about recursion. (I’ve written about recursion several... Date: 08/14/2012 ### Interesting Links 13 August 2012 Well it’s the middle of my vacation. Mostly I’m avoiding the Internet and email. Mostly. I’m not all... Date: 08/13/2012 ### Public Sector TechBytes Series Register Today! Select a date below to register online or call... Date: 08/10/2012 ### 2012 Microsoft US Forum Wrap Up I missed out on most of the US Forum this year but as I said on Monday I was able to attend the... Date: 08/09/2012 ### You're invited to a Windows 8 DevCamp in city near you Seating is limited. Select a date below to register online or call... Date: 08/08/2012 ### Interesting Links 30 July 2012 Over on the left here is a picture of a group of young women who visited Microsoft in Cambridge MA... Date: 07/30/2012 ### Curriculum is Hard I’ve been involved in a number of curriculum projects over the years. The big one has been the... Date: 07/26/2012 ### Interesting Links 23 July 2012 I had a great week and a great weekend. Busy weekend with family which is the best way to spend the... Date: 07/23/2012 ### 2012 Microsoft Research Faculty Summit Day One Oh what a day! I’m at the Microsoft Research Faculty Summit at Microsoft’s headquarters in Redmond... Date: 07/17/2012 ### Interesting Links 16 July 2012 The Microsoft Research Faculty Summit starts today (about noon eastern US time) Much of it will be... Date: 07/16/2012 ### Post CS & IT 2012 Thoughts The first part of this week was spent in Irvine CA for the 2012 CS & IT Conference. It’s amazing...
Date: 07/13/2012 ### Tip Calculator Hands On Lab for Windows Phone This is the basic instructions for the demo I did as part of the Mobile App Development throwdown at... Date: 07/10/2012 ### Computational Fairy Tales the Book Just about a year ago I wrote a post about a Computational Fairy Tale blog (Computational Tales)... Date: 07/06/2012 ### Teaching Teachers Next week is the CSTA Computer Science & Information Technology Conference in Irvine CA. I’m... Date: 07/05/2012 ### Microsoft Research Faculty Summit Live Stream 2012 I received this news from the ACM recently. The Microsoft Research Faculty Summit will be available... Date: 07/03/2012 ### Where is the computer science at ISTE? My prime focus at Microsoft is K-12 computer science education. While I am very interested in... Date: 07/02/2012 ### Computer Science in the Common Core–Speak Up Anyone who reads this blog regularly knows that I really believe that we need more computer science... Date: 06/21/2012 ### Studio K – Program to make Kodu Curriculum and Tools more accessible in Classrooms The other day the Kodu team announced Studio K. What is Studio K you ask? In my opinion it’s... Date: 06/19/2012 ### Why I Love Windows Live Mesh It could be argued that I have too many computers. There is my cool demo machine – Samsung 9. There... Date: 06/15/2012 ### Microsoft Store Summer Camps are Now Open for Registration! Microsoft Store Summer Camps are Now Open for Registration! Our 2012 Summer Camps are first come,... Date: 06/14/2012
true
true
true
null
2024-10-12 00:00:00
2024-09-25 00:00:00
https://learn.microsoft.…-graph-image.png
website
microsoft.com
MicrosoftLearn
null
null
20,064,096
https://blog.acolyer.org/2019/05/31/lease-os/
A case for lease-based, utilitarian resource management on mobile devices
Adrian Colyer
A case for lease-based, utilitarian resource management on mobile devices Hu et al., *ASPLOS’19* I’ve chosen another energy-related paper to end the week, addressing a problem many people can relate to: apps that drain your battery. LeaseOS borrows the concept of a lease from distributed systems, but with a rather nice twist, and is able to reduce power wastage by 92% with no disruption to application experience and no changes required to the apps themselves. So about that twist. LeaseOS injects a transparent proxy between an app and a power-hungry OS resource. The app thinks it has control of the resource until it releases it, but under the covers the proxy is given a lease. In a traditional leasing scheme, it’s up to the borrower to request a lease extension. But here half the problem is that apps are requesting expensive resources they don’t really need. So instead the OS monitors how wisely the leased resource is being used. If an app is making good, legitimate use of the resource then the lease will be transparently extended. If it isn’t, it loses the underlying resource. How you tell whether or not an app is being a wise steward of a resource is an interesting question we’ll get into… A severe type of defect developers frequently introduce in their apps is energy bugs that drain the battery abnormally fast. For example, wakelock is a mechanism in Android for apps to instruct the OS to keep the CPU, screen, WiFi, radio, etc. in an active state… State-of-the-art runtime techniques monitor app resource usage, and kill or throttle apps if the usage exceeds a threshold. But making heavy use of a resource does not necessarily imply misbehavior. There are legitimate scenarios where the usage is justified, e.g. for navigation or gaming. ### Wasted energy The underlying abstract model for resource usage in mobile OSes is an *ask-use-release* cycle. An app asks for (or tries to acquire) a resource, and assuming the resource is granted the app then uses the resource to do some work, before finally releasing it. If an app gets resources it doesn’t really need, or forgets to free them, thus holding onto resources much longer than it really needs, then this can place an excessive drain on the battery. For example, the K-9 mail app on Android had a bug whereby it would enter an infinite retry loop when the network was disconnected or a mail server failed. For each retry, the app would acquire a wakelock, causing severe battery drain. The BetterWeather app had a similar energy failure mode when it couldn’t get a GPS signal (e.g., the phone was inside a building). It would keep searching non-stop, but never find a signal. The graph below is a one-hour trace from the BetterWeather app running on a Nexus phone inside a building. The app spends about 60% of its time asking for a GPS lock which it never gets. Since the app doesn’t update its display without a location, all of this battery drain results in no benefit to the end user at all. BetterWeather is an example of misbehaviour in the ‘ask’ phase. The K-9 mail case is an example of misbehaviour in the ‘use’ phase. You can see in the charts below that it is holding a wakelock for a long time, but not actively using it for most of that time (the CPU usage spikes on the bottom chart). This ultralow utilization (< 1%) pattern is consistent across different phones and ecosystems in our experiments. High utilisation does not necessarily mean things are good, though.
When disconnected from the network, K-9 mail had another failure mode in which use of the wakelock is even higher, but usage of the CPU is also high. But the truth behind the chart is that the app is stuck in an exception loop of wakelock acquisition, network request, and error handling without making any progress. So the utility to the end user is zero. ### (Un)wise stewards The authors identify four different classes of energy misbehaviour, three of which can be detected automatically. - In *Frequent Ask Behaviour* (FAB), an app frequently tries to acquire a resource but rarely gets it - In *Long Holding Behaviour* (LHB) an app is granted a resource and holds it for a long time but rarely uses it - In *Low Utility Behaviour* (LUB) an app uses the granted resource for a long time to do a lot of work, but most of the work is no use - In *Excessive Use Behaviour* (EUB) an app does a lot of useful work but incurs high overhead Frequent ask behaviour can be identified through a low resource request success ratio (unsuccessful request time / total request time). Long holding behaviour can be identified through a low usage to hold time ratio (resource usage time / holding time), and low utility behaviour can be identified through a low utility rate (utility score / resource usage time). Excessive use behaviour is hard to separate from desired behaviour though. The authors analyse 109 energy misbehaviour cases across 81 popular apps to determine the distribution of causes according to this classification. All four types of misbehavior are prevalent. FAB, LHB, and LUB together occupy 58% of the studied cases while EUB occupies 31% of the cases. The majority (80%) of FAB, LHB, and LUB are due to clear programming mistakes (Bug), while the majority (77%) of EUB are due to design trade-offs (non-Bug). ### Earn my trust LeaseOS transparently creates leases when an app first accesses an object. During the lease term, the lease holder has the right to access the resource instance without approvals from the OS. At the end of the term, the *OS* decides whether or not to renew the lease. If an app explicitly releases a resource, then the lease term is ended immediately. Otherwise LeaseOS examines the collected utility metrics when a lease expires. For normal behaviour, the lease is automatically extended. Under misbehaviour the lease enters a *deferred* state which extends the lease after some delay *r*. During the delay period the underlying resource is temporarily released to reduce wasteful energy consumption. This continuous examine-renew model differentiates LeaseOS from other simple one-shot throttling solutions. The statistics that LeaseOS collects to determine utility are particular to the resource type but all follow a pattern. For wakelock, for example, LeaseOS examines the ratio of CPU time over wakelock holding time. For low-utility behaviour the definition may be app specific (and a cooperative developer can provide a utility callback), but some general heuristics still prove pretty useful, e.g. the frequency of exceptions raised. Transparent *lease proxies* sit in front of protected resources and coordinate with a *lease manager* that gathers stats and makes expire-renew decisions. For an app that legitimately tries to use a resource with an expired lease, the main difference it will see is a slowdown while the underlying resource is re-acquired. It’s also possible for the app to see e.g. I/O exceptions on network timeouts, but the app is already required to handle these.
There’s a trade-off between lease length (short leases can save more energy) and the lease management overhead. After some empirical experiments, LeaseOS sets the default lease term at 5 seconds, and the default deferred interval (*r*) at 25 seconds. If an app has been using resources efficiently, the lease manager increases the lease term, reverting back to the 5-second lease on any sign of misbehaviour. ### Extra hours The authors reproduce 20 energy bug cases from real-world apps and evaluate them under LeaseOS, Android’s Doze mode, and DefDroid (which uses simple throttling). Android’s Doze mode defers app background CPU and network activity when the device is unused for a long time, but is very conservative in when it triggers as it is a system-wide mode. For this experiment Doze mode is forced to kick in through the command-line `adb` interface; otherwise it only takes effect in 8 cases. LeaseOS can significantly reduce the wasted power consumption for all cases, achieving an average reduction ratio of 92%… For all of the cases we evaluated, LeaseOS did not introduce any negative usability impact. Testing with apps that legitimately make heavy use of resources (RunKeeper, Spotify, and Haven), LeaseOS renews leases without any interruption, whereas under the pure throttling scheme all three apps experienced some disruption. The overhead of LeaseOS itself is less than 1%. An end-to-end test with one buggy GPS app in the system, playing music for 2 hours, watching YouTube for one hour, browsing for 30 minutes, and then keeping the phone on standby showed that Android ran out of battery after around 12 hours. With LeaseOS, the battery lasted for 15 hours. You can find the LeaseOS source code at https://orderlab.io/LeaseOS
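The examine-renew policy is simple enough to sketch. What follows is a toy illustration only (the function name, the term-doubling growth policy, the 60-second cap, and the example threshold are all invented for the sketch; the real LeaseOS lives inside Android's system services, not in Python):

```python
# Toy sketch of the examine-renew decision described above. Names and the
# specific growth/threshold values are invented; this is not LeaseOS code.
LEASE_TERM = 5     # default lease term in seconds (the paper's default)
DEFER_DELAY = 25   # default deferred interval r in seconds (the paper's default)

def on_lease_expiry(utility, threshold, current_term):
    """Decide what happens when a lease term ends.

    utility: the observed utility metric for the past term, e.g. the ratio
             of CPU time to wakelock holding time for a wakelock lease.
    Returns (action, next_term): 'renew' extends the lease transparently;
    'defer' temporarily releases the underlying resource for DEFER_DELAY
    seconds before the lease is extended again.
    """
    if utility >= threshold:
        # Good steward: renew, and (one possible policy) grow the term to
        # reduce lease-management overhead for efficient apps.
        return "renew", min(current_term * 2, 60)
    # Misbehaving: drop back to the short default term and defer.
    return "defer", LEASE_TERM

# A K-9-style app holding a wakelock at <1% CPU utilization gets deferred;
# a legitimately busy app keeps its lease.
print(on_lease_expiry(utility=0.005, threshold=0.01, current_term=5))
print(on_lease_expiry(utility=0.80, threshold=0.01, current_term=5))
```

The interesting design choice is that the decision sits with the OS at renewal time rather than with the app, which is what lets LeaseOS police uncooperative apps without modifying them.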
true
true
true
null
2024-10-12 00:00:00
2019-05-31 00:00:00
null
null
acolyer.org
blog.acolyer.org
null
null
4,554,007
http://www.bgr.com/2012/09/21/iphone-5-sales-2012-apple-peaking/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,526,696
https://swprs.org/facts-about-covid-19/
Facts about Covid
null
**Updated**: May 2024 **Published**: March 2020 Fully referenced facts about covid, provided by experts in the field, to help our readers make a realistic risk assessment. (Regular updates below). **“The only means to fight the plague is honesty.”** (Albert Camus, 1947) ### Overview **Lethality**: The overall infection fatality rate (IFR) of the novel coronavirus in the general population (excluding nursing homes) is about 0.1% to 0.5% in most countries, which is most closely comparable to the medium influenza pandemics of 1936, 1957 and 1968. **Age profile**: The median age of covid deaths is over 80 years in most Western countries (78 in the United States) and about 5% of the deceased had no medical preconditions. In many Western countries, about 50% of all covid deaths occurred in nursing homes. **Vaccine protection**: Covid vaccines provided a high but rapidly declining protection against severe disease. Vaccination could not prevent infection and transmission (known since June 2021). A prior infection conferred a more durable immunity than vaccination. **Vaccine injuries**: Covid vaccines caused severe and fatal vaccine reactions, including cardiovascular, neurological and immunological reactions. Because of this, the risk-benefit ratio of covid vaccination in healthy children and adults under 50 years of age was poor. **Excess mortality**: Global pandemic excess mortality is close to 20 million deaths, which is about 15% compared to normal global mortality and about 0.25% compared to global population. Some of the additional deaths were caused by indirect effects of the pandemic and lockdowns. **Symptoms**: About 30% of all infected persons show no symptoms. Overall, about 95% of all people develop at most mild or moderate symptoms and do not require hospitalization. Age and obesity, in particular, are major risk factors for severe covid. **Treatment**: For people at high risk or high exposure, early or prophylactic treatment is essential to prevent progression of the disease. Numerous studies found that early outpatient treatment of covid can significantly reduce hospitalizations and deaths. **Long covid**: Up to 10% of symptomatic people experience post-acute or long covid, i.e. covid-related symptoms that last several weeks or months. Long covid may also affect young and previously healthy people whose acute covid infection was rather mild. **Transmission**: Indoor aerosols appear to be the main route of transmission of the coronavirus, while outdoor aerosols, droplets, as well as most object surfaces appear to play a minor role. Pre-symptomatic transmission may account for about 30% of all community infections. **Masks**: Face masks had no influence on infection rates, which was already known from studies prior to the pandemic. Even N95 masks had no influence on infection rates in the general population. Moreover, long-term or improper use of face masks can lead to health issues. **Lockdowns**: In contrast to early border controls (e.g. by Australia), lockdowns had no significant effect on the pandemic. However, according to the World Bank lockdowns caused an “historically unprecedented increase in global poverty” of close to 100 million people. **Children and schools**: In contrast to influenza, the risk of severe covid in children is rather low.
Moreover, children were not drivers of the pandemic and the closure of schools had no effect on infection rates in the general population. **PCR tests**: The highly sensitive PCR tests are prone to producing false positive or false negative results (e.g. after an acute infection). Overall, PCR and antigen mass testing had no effect on infection rates in the general population (exception: to sustain border controls). **Contact tracing**: Manual contact tracing and contact tracing apps on mobile phones had no effect on infection rates. Already in 2019, a WHO study on influenza pandemics concluded that contact tracing is “not recommended in any circumstances”. **Vaccine passports**: Vaccine passports had no effect on infection rates as vaccination cannot prevent infection. Vaccine passports could, however, serve as a basis for the introduction of digital biometric identity and payment systems. NSA whistleblower Edward Snowden warned as early as March 2020 that surveillance could be expanded during the pandemic. **Virus mutations**: Similar to influenza viruses, mutations occur frequently in coronaviruses. The omicron variant, which may have emerged from vaccine research, showed significantly higher infectiousness and immune escape, but 80% lower lethality. **Sweden**: In Sweden, covid mortality without lockdown was comparable to a strong influenza season and somewhat below the EU average. About 50% of Swedish deaths occurred in nursing homes and the median age of Swedish covid deaths was about 84 years. **Influenza viruses**: Influenza viruses largely disappeared during the coronavirus pandemic. This was not a result of “covid measures”, but a result of temporary displacement by the novel coronavirus, even in countries without measures (such as Sweden). **Media**: Overall, media reporting on the pandemic was rather unprofessional, increased fear and panic in the population and caused a hundredfold overestimation of covid lethality. Some media outlets even used manipulative pictures and videos to dramatize the situation. **Virus origin**: Genetic evidence points to a laboratory origin of the new coronavirus. Both the Wuhan Institute of Virology (WIV) as well as some US laboratories that cooperated with the WIV performed various kinds of research on similar coronaviruses. ### Latest Articles **General** - Why the smartest people failed (July 2023) - The Nine Great Covid Mysteries (June 2022) - Why Covid is a Strange Pandemic (Sept. 2020) - Global Excess Mortality Update (December 2023) **Earliest Articles** - A Swiss Doctor on Covid-19 (March 14, 2020) - Corona, the Media, and Propaganda (March 19, 2020) - Open Letter from Professor Sucharit Bhakdi (March 28, 2020) **Media and Propaganda** - Pandemics and Propaganda (December 2022) - The Propaganda Pandemic (February 2022) - Covid and Reality (September 2022) - Covid and Culture (December 2023) **Coronavirus Origins** - On the Origins of SARS-CoV-2 (June 2020) - Did China stage the Wuhan videos? (April 2021) - Omicron hits the mutation jackpot (November 2021) **Lockdowns** - The Lockdown Lunacy in Retrospect (March 2023) - Open letter by Prof. Ehud Qimron (January 2022) - Judgment Day: Sweden Vindicated (December 2021) - Sweden: The Battle over Pandemic Reality (May 2023) **Covid Vaccines (General)** - Covid Vaccines: Facts, Fears, Fraud (March 2023) - Covid Vaccines: A Reality Check (December 2022) - How effective are covid vaccines, really? (June 2022) - Covid Vaccines: Vaccines or Gene Therapy? (Dec.
2021) - The Power of Natural Immunity (December 2021) - Covid Vaccines: Hope or Hype? (November 2020) **Vaccine Adverse Events** - DNA contamination in mRNA vaccines (Nov. 2023) - Covid Vaccines and Fertility (March 2023) - Covid Vaccine Adverse Events (June 2021) **Vaccine Passports** - From “Vaccine Passports” to Digital Identity (May 2023) - Israel: Highest Infection Rate in the World (Sept. 2021) - The failure of “vaccine passports” (July 2021) - The “Vaccine Passport” Agenda (February 2021) - The WEF and the Pandemic (October 2021) **Face masks** - The Face Mask Folly in Retrospect (August 2021) - WHO Mask Study Seriously Flawed (September 2020) - Are Face Masks Effective? The Evidence (July 2020) **PCR Testing** - The trouble with PCR tests (October 2020) - The failure of PCR mass testing (June 2021) - The “zero covid” countries (December 2020) **Early Treatment** - Covid treatments in retrospect (May 2024) - On the Treatment of Covid-19 (July 2020) - Severe covid and auto-immunity (July 2021) - The Ivermectin Debate (July 2021) **Coronavirus Disease** - Studies on Covid-19 Lethality (May 2020) - Post-Acute Covid and Long Covid (August 2020) - Covid and Kids: The Evidence (February 2021) - Obesity and the Pandemic (June 2021) - Coronavirus doesn’t exist? (April 2022) **Covid and the flu** - Why the flu has disappeared (February 2021) - The return of the flu (November 2021) - Covid vs. the flu, revisited (March 2021) **Coronavirus Transmission** - Pre-symptomatic transmission (June 2021) - What about a Third Wave? (February 2021) - Covid: Just A “Casedemic”? (August 2020) ### Videos - Coronavirus Pandemic (SPR Media Archive)
true
true
true
Fully referenced facts about Covid-19.
2024-10-12 00:00:00
2020-10-18 00:00:00
https://i0.wp.com/swprs.…1024%2C512&ssl=1
article
swprs.org
Swiss Policy Research
null
null
17,079,184
https://www.macrumors.com/2018/05/14/vulnerabilities-in-pgpgpg-email-encryption-plugins/
Researchers Discover Vulnerabilities in PGP/GPG Email Encryption Plugins, Users Advised to Avoid for Now
Tim Hardwick
# Researchers Discover Vulnerabilities in PGP/GPG Email Encryption Plugins, Users Advised to Avoid for Now A warning has been issued by European security researchers about critical vulnerabilities discovered in PGP/GPG and S/MIME email encryption software that could reveal the plaintext of encrypted emails, including encrypted messages sent in the past. The alert was put out late on Sunday night by professor of computer security Sebastian Schinzel. A joint research paper, due to be published tomorrow at 07:00 a.m. UTC (3:00 a.m. Eastern Time, 12:00 am Pacific) promises to offer a thorough explanation of the vulnerabilities, for which there are currently no reliable fixes. Details remain vague about the so-called "Efail" exploit, but it appears to involve an attack vector on the encryption implementation in the client software as it processes HTML, rather than a vulnerability in the encryption method itself. A blog post published late Sunday night by the Electronic Frontier Foundation said: "EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages." In the meantime, users of PGP/GPG and S/MIME are being advised to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email, and seek alternative end-to-end encrypted channels such as Signal to send and receive sensitive content. **Update:** The GPGTools/GPGMail team has posted a temporary workaround against the vulnerability, while MacRumors has compiled a separate guide to removing the popular open source plugin for Apple Mail until a fix for the vulnerability is released. Other popular affected clients include Mozilla Thunderbird with Enigmail and Microsoft Outlook with GPG4win. Click the links for EFF's uninstall steps.
true
true
true
A warning has been issued by European security researchers about critical vulnerabilities discovered in PGP/GPG and S/MIME email encryption software...
2024-10-12 00:00:00
2018-05-14 00:00:00
https://images.macrumors…GPGMail-pane.jpg
article
macrumors.com
MacRumors.com
null
null
19,715,326
https://www.sfexaminer.com/news-columnists/planning-commissioners-ready-to-take-on-single-family-zoning/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,075,220
https://www.nature.com/articles/s41467-020-15807-7
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,142,084
https://wwws.nightwatchcybersecurity.com/2020/05/10/two-vulnerabilities-in-oracles-iplanet-web-server-cve-2020-9315-and-cve-2020-9314/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
22,730,818
https://towardsdatascience.com/hasnt-hiring-always-been-broken-91a2adfb721c
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
95,581
http://www.microsoft.com/presspass/press/2008/jan08/01-06CES08PR.mspx
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,728,925
https://www.johndcook.com/blog/2023/09/30/consecutive-coupon-collector-problem/
Consecutive coupon collector problem
John
## Coupon collector problem Suppose you have a bag of balls labeled 1 through 1,000. You draw balls one at a time and put them back after each draw. How many draws would you have to make before you’ve seen every ball at least once? This is the coupon collector problem with $N = 1000$, and the expected number of draws is $N H_N$ where $H_N = 1 + 1/2 + 1/3 + \cdots + 1/N$ is the $N$th harmonic number. As $N$ increases, $H_N$ approaches $\log(N) + \gamma$ where $\gamma = 0.577\ldots$ is the Euler-Mascheroni constant, and so the expected time for the coupon collector problem is approximately $N(\log(N) + \gamma)$. ## Consecutive draws Now suppose that instead of drawing single items, you draw blocks of consecutive items. For example, suppose the 1,000 balls are arranged in a circle. You pick a random starting point on the circle, then scoop up 10 consecutive balls, then put them back. Now how long would it take to see everything? By choosing consecutive balls, you make it harder for a single ball to be a hold out. Filling in the holes becomes easier. ## Bucketed problem Now suppose the 1,000 balls are placed in 100 buckets and the buckets are arranged in a circle. Now instead of choosing 10 consecutive balls, you choose a bucket of 10 balls. Now you have a new coupon collector problem with $N = 100$. This is like the problem above, except you are constraining your starting point to be a multiple of $n$. ## Upper and lower bounds I’ll use the word “scoop” to mean a selection of $n$ balls at a time to avoid possible confusion over drawing individual balls or groups of balls. If you scoop $n$ balls at a time by making $n$ independent draws, then you just have the original coupon collector problem, with the expected time divided by $n$. If you scoop up $n$ consecutively numbered balls each time, you reduce the expected time to see everything at least once. But your scoops can still overlap. For example, maybe you selected 13 through 22 on one draw, and 19 through 28 on the next. In the bucketed problem, you reduce the expected time even further. Now your scoops will not partially overlap. (But they may entirely overlap, so it’s not clear that this reduces the total time.) It would seem that we have sandwiched our problem between two other problems we have the solution to. The longest expected time would be if our scoop is made of $n$ independent draws. Then the expected number of scoops is $N H_N / n$. The shortest time is the bucketed problem in which the expected number of scoops is $(N/n) H_{N/n}$. It seems the problem of scooping $n$ consecutive balls, with no constraint on the starting point, would have expected time somewhere between these two bounds. I say “it seems” because I haven’t proven anything here, just given plausibility arguments. By the way, we can see how much bucketing reduces the expected time by using the log approximation above. With $n$ independent draws each time, the expected number of scoops is roughly $(N/n) \log(N)$ whereas with the bucketed problem the expected number of scoops is roughly $(N/n) \log(N/n)$. ## Expected number of scoops I searched a bit on this topic, and I found many problems with titles like “A variation on the coupon collector problem,” but none of the papers I found considered the variation I’ve written about here. If you work out the expected number of scoops, or find a paper where someone has worked this out, please let me know. The continuous analog seems like an easier problem, and one that would provide a good approximation.
Suppose you have a circle of circumference $N$ and randomly place arcs of length $n$ on the circle. What is the expected time until the circle is covered? I imagine this problem has been worked out many times and may even have a name. **Update**: Thanks to Monte for posting links to the solution to the continuous problem in the comments below. ## Simulation results When $N = 1000$ and $n = 10$, the upper and lower bounds work out to 748 and 518. When I simulated the consecutive coupon collector problem I got an average of 675 scoops, a little more than the average of the upper and lower bounds. The probability of covering the circle with $k$ arcs of equal length is well-understood; see for example https://mathworld.wolfram.com/CircleCoveringbyArcs.html or https://pure.tue.nl/ws/portalfiles/portal/1962757/252821.pdf. $E[\text{number of arcs}] = (N/n)(\log(N/n) + \log\log(N/n) + \gamma + o(1))$ as $n/N \to 0$. Thanks for a very interesting problem! We can expand on the solution to the continuous version of the problem referenced by Monte to compute the exact probability distribution for the number of consecutive blocks of coupons. I wrote up an article describing the result with some Python code on my site (linked; not sure if including HTML or LaTeX might snag the spam filter). I don’t see a nice way to turn this into a finite summation formula for the expected value, though. But just summing the series $P(N > n)$ out to 1500 terms gives a lower bound estimate of about 676.6. (Note that in the case where we “scoop” $n$ balls with replacement, not necessarily consecutively, the expected value is only approximately $N H_N / n$. The exact value, with formula in the linked write-up, in the $N = 1000$, $n = 10$ case is about 748.997, a slightly higher upper bound.) More discussion of this problem can be found here: https://possiblywrong.wordpress.com/2023/10/05/coupon-collectors-problem-variants-with-group-drawings/
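For readers who want to reproduce the numbers above, here is a minimal Monte Carlo sketch (my own, using the post's parameters N = 1000 and n = 10); the consecutive version should land near the ~675 scoops reported, and switching to independent draws recovers the ~749 upper bound:

```python
# Minimal Monte Carlo sketch of the scooping variants discussed above.
import random

def scoops_to_cover(N=1000, n=10, consecutive=True):
    """Scoops of n balls from a circle of N until every ball has been seen."""
    seen = [False] * N
    remaining, scoops = N, 0
    while remaining:
        scoops += 1
        if consecutive:
            start = random.randrange(N)
            picks = [(start + k) % N for k in range(n)]      # one wrapping arc
        else:
            picks = [random.randrange(N) for _ in range(n)]  # n independent draws
        for k in picks:
            if not seen[k]:
                seen[k] = True
                remaining -= 1
    return scoops

trials = 200
print(sum(scoops_to_cover() for _ in range(trials)) / trials)                   # ~675
print(sum(scoops_to_cover(consecutive=False) for _ in range(trials)) / trials)  # ~749
```

Replacing the random starting point with `start = n * random.randrange(N // n)` turns this into the bucketed problem and should bring the average down toward the ~519 lower bound.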
true
true
true
A variation on the coupon collector problem in which you get consecutive blocks of tickets on each draw.
2024-10-12 00:00:00
2023-09-30 00:00:00
null
article
johndcook.com
John D. Cook | Applied Mathematics Consulting
null
null
1,933,016
http://www.nytimes.com/2010/11/23/world/asia/23kabul.html?_r=1&pagewanted=all
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,974,695
https://twitter.com/adamfuhrer/status/1657070909469884429
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
20,927,875
https://heated.world/
HEATED | Emily Atkin | Substack
Emily Atkin
A newsletter for people who are pissed off about the climate crisis.

“This is an important newsletter about climate change and the supervillains that deny and promote it.” Phil Plait, Bad Astronomy Newsletter

“Emily comes from a traditional journalism background and offers an uncompromising perspective on anthropogenic global warming.” Marilia Coutinho, Dissimilis animus

“When I want to understand a climate policy issue, I cross my fingers and hope that Emily Atkin has written about it!” Lynn Yellen, Work from home for justice
true
true
true
A newsletter for people who are pissed off about the climate crisis. Click to read HEATED, a Substack publication with hundreds of thousands of subscribers.
2024-10-12 00:00:00
2019-08-28 00:00:00
https://substackcdn.com/image/fetch/f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fheated.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D-392375226%26version%3D9
article
heated.world
heated.world
null
null
29,933,843
https://www.psychologytoday.com/us/blog/the-secular-life/201410/secular-societies-fare-better-religious-societies
Secular Societies Fare Better Than Religious Societies
Phil Zuckerman Ph D
###### Religion

# Secular Societies Fare Better Than Religious Societies

## If religion withers, does society rot? Clearly not.

Posted October 13, 2014 | Reviewed by Ekua Hagan

It is said over and over again by religious conservatives: without faith in God, society will fall apart. If we don't worship God, pray to God, and place God at the central heart of our culture, things will get ugly.

In his classic *Reflections on the Revolution in France*, Edmund Burke argued that religion was the underlying basis of civil social order. Voltaire, the celebrated Enlightenment philosopher, argued that without theism society could not function; it is necessary for people to have “profoundly engraved on their minds the idea of a Supreme being and creator” in order to maintain a moral social order. Alexis de Tocqueville similarly argued that religious faith is “indispensable” for a well-functioning society, that irreligion is a “dangerous” and “pernicious” threat to societal well-being, and that non-believers are to be regarded as “natural enemies” of social harmony.

More recently, Newt Gingrich has argued that any country that attempts to “drive God out of public life” will surely face all kinds of social problems, and a secular country would be “frankly, a nightmare.” Indeed, in the aftermath of the wanton massacre of schoolchildren in Newtown, Connecticut, Newt Gingrich publicly proclaimed that such violence was the obvious and inevitable result of secularism in our society. Mike Huckabee agreed.

Religion — or so the age-old hypothesis goes — is, therefore, a necessary glue for keeping society together. And conversely, secularism is a danger to societal well-being. For if people turn away from God and stop being religious, then crime will go up, corruption will increase, perversion will percolate, decency will diminish, and all manifestations of misery and malfeasance will predominate.

It is an interesting hypothesis. Perpetually-touted. And wrong.

Consider, for instance, the latest special report just put out by the Organization for Economic Co-operation and Development (and recently summarized on the website 24/7wallstreet.com), which lists the 10 states with the worst/best quality of life. According to this multivariate analysis which takes into account a plethora of indicators of societal well-being, those states in America with the worst quality of life tend to be among the most God-loving/most religious (such as Mississippi and Alabama), while those states with the best quality of life tend to be among the least God-loving/least religious (such as Vermont and New Hampshire).

If you are curious as to which states are the most/least religious, simply check out the Pew Forum’s Religious Landscape Survey. It’s all there. And then you can go ahead and check out how the various states are faring in terms of societal well-being. The correlation is clear and strong: The more secular tend to fare better than the more religious on a vast host of measures, including homicide and violent crime rates, poverty rates, obesity and diabetes rates, child abuse rates, educational attainment levels, income levels, unemployment rates, rates of sexually transmitted diseases and teen pregnancy, etc. You name it: On nearly every sociological measure of well-being, you’re most likely to find the more secular states with the lowest levels of faith in God and the lowest rates of church attendance faring the best and the most religious states with the highest levels of faith in God and rates of church attendance faring the worst.
And guess what? The correlation holds internationally, as well. As I’ve discussed in my book *Society Without God*, and as I extensively elaborate on in my newest book *Living the Secular Life*, those democratic nations today that are the most secular, such as Scandinavia, Japan, Australia, the Netherlands, etc., are faring much better on nearly every single indicator of well-being imaginable than the most religious nations on earth today, such as Colombia, Jamaica, El Salvador, Yemen, Malawi, Pakistan, the Philippines, etc. As University of London professor Stephen Law has observed, “if declining levels of religiosity were the main cause of…social ills, we should expect those countries that are now the least religious to have the greatest problems. The reverse is true.”

Consider some specific examples. The Save the Children Foundation publishes an annual “Mother’s Index,” wherein they rank the best and worst places on earth in which to be a mother. And the best are almost always among the most secular nations on earth, while the worst are among the most devout. The non-profit organization called Vision of Humanity publishes an annual “Global Peace Index.” And according to their rankings, the most peaceful nations on earth are almost all among the most secular, while the least peaceful are almost all among the most religious. According to the United Nations 2011 Global Study on Homicide, of the top-10 nations with the highest intentional homicide rates, all are very religious/theistic nations, but of those at the bottom of the list — the nations on earth with the lowest homicide rates — nearly all are very secular nations.

Heck, look where Ebola is currently wreaking havoc? It isn’t in highly secular Sweden. Or highly secular Estonia. No — it is in various African nations where God is heavily worshipped, church is heavily attended, and prayer is heavily engaged in.

* * *

Do societies fall apart when they become more secular? Clearly not. And thus, the age-old hypothesis that religion is a necessary requirement for a sound, safe, and healthy society can and should be put safely to sleep in the musty bed of other such flagrant fallacies.
true
true
true
Which societies fare the best in today's world, the highly religious or the highly secular?
2024-10-12 00:00:00
2014-10-13 00:00:00
https://cdn2.psychologyt…pg?itok=8qXtnHki
article
psychologytoday.com
Psychology Today
null
null
10,376,312
http://www.helpcrowd.co/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
4,473,804
http://reactivemongo.org/
Reactive Scala Driver for MongoDB
null
# Reactive Scala Driver for MongoDB

Asynchronous & Non-Blocking

ReactiveMongo is a Scala driver that provides fully non-blocking and asynchronous I/O operations.

## Scale better, use fewer threads

With a classic synchronous database driver, each operation blocks the current thread until a response is received. This model is simple but has a major flaw: it can’t scale that much.

Imagine that you have a web application with 10 concurrent accesses to the database. That means you eventually end up with 10 frozen threads at the same time, doing nothing but waiting for a response. A common solution is to raise the number of running threads to handle more requests. Such a waste of resources is not really a problem if your application is not heavily loaded, but what happens if you have 100 or even 1000 more requests to handle, each performing several DB queries? The multiplication grows really fast…

The problem is getting more and more obvious while using the new generation of web frameworks. What’s the point of using a nifty, powerful, fully asynchronous web framework if all your database accesses are blocking?

ReactiveMongo is designed to avoid any kind of blocking request. Every operation returns immediately, freeing the running thread and resuming execution when it is over. Accessing the database is not a bottleneck any more.

## Let the stream flow!

The future of the web is in streaming data to a very large number of clients simultaneously. Twitter Stream API is a good example of this paradigm shift that is radically altering the way data is consumed all over the web.

ReactiveMongo enables you to build such a web application right now. It allows you to stream data both into and from your MongoDB servers.

One scenario could be consuming progressively your collection of documents as needed without filling memory unnecessarily. But if what you’re interested in is live feeds then you can stream a MongoDB capped collection through a WebSocket, comet or any other streaming protocol. A capped collection is a fixed-size (FIFO) collection from which you can fetch documents as they are inserted. Each time a document is stored into this collection, the Web application broadcasts it to all the interested clients, in a complete non-blocking way.

Moreover, you can now use GridFS as a non-blocking, streaming data store. ReactiveMongo retrieves the file, chunk by chunk, and streams it until the client is done or there’s no more data. Neither huge memory consumption, nor blocked thread during the process!

### Samples

These sample applications are kept up to date with the latest driver version. They are built upon Play Framework.

- Full Web Application featuring basic CRUD operations and GridFS streaming: online demo / source
- Tests and samples: in the GitHub repository
- ReactiveMongo Tailable Cursor, WebSocket and Play 2
- Play 2.6 TODO app with Swagger and ReactiveMongo
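The threading argument above is not specific to Scala. As a rough analogue (this is not ReactiveMongo's API; the `fake_query` coroutine below just simulates a database round-trip with a sleep), Python's asyncio shows a thousand in-flight queries sharing a single thread instead of parking a thousand blocked ones:

```python
import asyncio
import time

async def fake_query(i: int) -> int:
    await asyncio.sleep(0.1)  # stand-in for waiting on a database response
    return i

async def main() -> None:
    t0 = time.perf_counter()
    # 1000 concurrent "queries", all multiplexed onto one thread:
    results = await asyncio.gather(*(fake_query(i) for i in range(1000)))
    print(f"{len(results)} queries in {time.perf_counter() - t0:.2f}s on one thread")

asyncio.run(main())
```

All thousand waits overlap, so the whole batch completes in roughly the latency of one query; a thread-per-request blocking driver would need a thousand parked threads to do the same.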
true
true
true
ReactiveMongo, The reactive Scala driver for MongoDB
2024-10-12 00:00:00
2012-01-01 00:00:00
null
null
null
null
null
null
7,759,483
http://www.businessinsider.com/vince-mcmahon-loses-nearly-13-of-fortune-2014-5
World Wrestling Chief Vince McMahon Lost Nearly A Third Of His Fortune Today
Aaron Taube
Vince McMahon is no longer a billionaire. Forbes reports that the World Wrestling Entertainment chairman and CEO lost a whopping $350 million when the company's stock crashed today upon news that its new streaming online video network would not replace its pay-per-view revenues until 2015. McMahon's wealth was previously estimated at around $1.1 billion, meaning he lost more than 30% of his fortune in today's crash, which sent the stock plummeting from $19.93 a share to $11.27. The crash, WWE's largest drop since its 1999 IPO, was precipitated by news that the company's new television deal with NBCUniversal was not as lucrative as investors had expected. According to Forbes, investors were expecting the new deal to be worth between two and three times as much as the previous one, when in fact the contract's value is expected to be a mere 50% increase. Additionally, the company said in a statement yesterday that it will need 1.3 million to 1.4 million subscribers to its over-the-top service to replace revenues it is losing from its monthly pay-per-view events, which previously cost around $50 but are now available to WWE Network subscribers as part of their $9.99 monthly fee. Currently, the WWE Network only has 670,000 subscribers. Excitement about the network and the new TV contract caused the stock price to double during the first three months of 2014, closing at a record high of $31.39 on March 20. The price increase helped McMahon become a billionaire for the first time since 2000. Alas, it appears that distinction was to be short-lived.
true
true
true
We're afraid Wall Street has got some bad news for Vince McMahon.
2024-10-12 00:00:00
2014-05-16 00:00:00
https://i.insider.com/537692a869bedd9f3e998509?width=1200&format=jpeg
article
businessinsider.com
Insider
null
null
15,200,766
https://www.scientificamerican.com/article/the-beauty-and-mystery-of-saturn-rsquo-s-rings-revealed-by-the-cassini-mission/
The Beauty and Mystery of Saturn's Rings Revealed by the Cassini Mission
Tanya Hill; The Conversation US
*The following essay is reprinted with permission from **The Conversation**, an online publication covering the latest research.*

What would Saturn be without its beautiful system of rings? Over the past 13 years, the Cassini space probe has shown us just how complex and dynamic the rings truly are.

The 20-year mission is coming to an end later this month when the probe makes its final destructive plunge into Saturn. As part of its grand finale, Cassini has flown closer to the rings than ever before, first grazing the outermost edges of the rings before taking the risky leap of diving through the gap between the rings and Saturn.

## Saturn’s big empty

One of the surprises was that it’s quite empty in this gap. This is very different to when Cassini was bombarded by hundreds of dust particles per second as it moved past the outer rings late last year. But it meant good news for the mission as this final stage had a better chance for success if there was less material in the way.

During a recent ring dive in August, instead of orientating Cassini so that it flew antenna-first through the gap (offering it more protection), the spacecraft was turned around allowing it to capture a fantastic view of the rings as it dived past.

## Know your ring ABCs

Over the centuries, as Saturn’s rings have been observed in finer detail, they have been broken into discrete sections. They are named alphabetically in order of discovery, which means from innermost to outermost the order is D, C, B, A, F, G and E.

Saturn’s innermost ring D is much less dense and therefore fainter than its neighbouring ring C. By comparing new Cassini images of the D ring with its original discovery image from Voyager in 1980, it’s possible to see changes in the ring over a relatively short period of time.

In the Voyager image, three relatively bright arcs can be seen in the D ring (the bright arc in the lower left of frame is the C ring). Most dramatically, the central and brightest arc has faded markedly and moved 200km closer to Saturn (the arc no longer lines up with the Voyager image).

## Origin of the rings

We know that the rings are mostly made of water ice, but it’s not clear how they formed or even how old they are. The fact that they are still bright, rather than coated in dust, suggests a young age – perhaps just 100 million years old, placing their formation in the time of the dinosaurs. This is consistent with Cassini data, but this theory also presents a problem: it means that a previous collection of moons had a fairly recent and mighty smash-up, creating the rings and five of Saturn’s current-day moons.

Alternatively, Cassini has also shown that there is a lot less dust entering the Saturn system than was originally expected. This makes it possible for the rings to be both ancient and bright, having formed early in the life of the Solar System. Furthermore, interactions within the rings might dust them off and keep them looking young.

## Finger on the source

For Saturn’s outermost E ring the source is pretty clear. The moon Enceladus orbits within this ring and Cassini observations have directly traced features in the ring back to geysers erupting from Enceladus’s surface.
While in the faint F ring, the moon Prometheus creates streamer-channels, drawing material out of the ring. Prometheus interacts with the ring once every orbit, when it reaches the point that takes it furthest away from Saturn and closest to the F ring. As Prometheus orbits faster than the ring material, a new streamer is created that is ahead of the old one with every orbit. ## Bulging waistlines Several of Saturn’s smaller moons reside within and carve out gaps in the rings, and Cassini has shown them to have bulges around their middles. The moon Pan was responsible for clearing the A ring’s large Encke Gap. As it collects the ring material, Pan’s gravity is not strong enough to spread the accumulated matter across its surface, and instead a striking ridge develops. The tiny moon Daphnis is one of seven moons newly discovered by Cassini. It is just 8km across and as it orbits inside the A ring’s small Keeler Gap, it pulls on the ring particles leaving waves in its wake. ## Turning rings into moons Cassini has spotted signs of a potential new moonlet forming on the very edge of Saturn’s bright A ring. The newly formed object is probably less than a kilometre across but being able to see such a process in action was a complete surprise for Cassini scientists. It supports the theory that long ago, Saturn’s rings could have been much more massive and capable of producing some of the moons that exist today. It also potentially provides insight into how the planets of the solar system formed, emerging out of the accretion disk that once orbited the young Sun. Cassini has certainly achieved its mission objectives to explore Saturn, its atmosphere, magnetosphere and rings and to study Saturn’s moons, particularly Titan. So much has been learned, including the ability to gaze with wonder and awe at the amazing Solar System we are part of. *This article was originally published on **The Conversation**. Read the **original article**.*
true
true
true
The space probe has shown us how complex and dynamic the rings truly are
2024-10-12 00:00:00
2017-09-08 00:00:00
https://static.scientifi…ource.jpg?w=1200
article
scientificamerican.com
Scientific American
null
null
1,182,544
http://en.wikipedia.org/wiki/Arbre_du_T%C3%A9n%C3%A9r%C3%A9
Tree of Ténéré - Wikipedia
null
# Tree of Ténéré

17°45′00″N 10°04′00″E / 17.75000°N 10.06667°E

The **Ténéré Tree** (French: *L'Arbre du Ténéré*) was a solitary acacia (*Vachellia tortilis*)[1][2][3] that was once considered the most isolated tree on Earth. It was a landmark on caravan routes through the Ténéré region of the Sahara Desert in northeast Niger, so well known that it and the Arbre Perdu (Lost Tree) to the north are the only trees to be shown on a map at a scale of 1:4,000,000. The tree is estimated to have existed for approximately 300 years until it was knocked down in 1973 by a truck driver.[4]

## Background

The Tree of Ténéré was the last of a group of trees that grew when the desert was less parched than it is today. The tree had stood alone for decades. During the winter of 1938–1939 a well was dug near the tree and it was found that the roots of the tree reached the water table 33–36 meters (108 to 118 feet) below the surface.

Commander of the Allied Military Mission Michel Lesourd, of the *Service central des affaires sahariennes* [Central service of Saharan affairs], saw the tree on May 21, 1939:

One must see the Tree to believe its existence. What is its secret? How can it still be living in spite of the multitudes of camels which trample at its sides. How at each azalai does not a lost camel eat its leaves and thorns? Why don't the numerous Touareg leading the salt caravans cut its branches to make fires to brew their tea? The only answer is that the tree is taboo and considered as such by the caravaniers. There is a kind of superstition, a tribal order which is always respected. Each year the azalai gather round the Tree before facing the crossing of the Ténéré. The Acacia has become a living lighthouse; it is the first or the last landmark for the azalai leaving Agadez for Bilma, or returning. [5]

In his book *L'épopée du Ténéré*, French ethnologist and explorer Henri Lhote described his two journeys to the Tree of Ténéré. His first visit was in 1934 on the occasion of the first automobile crossing between Djanet and Agadez. He describes the tree as "an Acacia with a degenerative trunk, sick or ill in aspect. Nevertheless, the tree has nice green leaves, and some yellow flowers". He visited it again 25 years later, on 26 November 1959 with the Berliet-Ténéré mission, but found that it had been badly damaged after a vehicle had collided with it:

Before, this tree was green and with flowers; now it is a colourless thorn tree and naked. I cannot recognise it—it had two very distinct trunks. Now there is only one, with a stump on the side, slashed, rather than cut a metre from the soil. What has happened to this unhappy tree? Simply, a lorry going to Bilma has struck it... but it has enough space to avoid it... the taboo, sacred tree, the one which no nomad here would have dared to have hurt with his hand... this tree has been the victim of a mechanic... [5]

## Death and monument

The Tree of Ténéré was knocked down by a Libyan truck driver, reportedly drunk, in 1973.[6][7][4] On November 8, 1973, the dead tree was installed in a dedicated shrine on the grounds of the Niger National Museum in Niamey.[5] A simple metal sculpture representing the tree stands to mark its former location and general appearance in the desert.[7]

## In popular culture

The sculpture representing the Tree of Ténéré and the Tree's story feature prominently in the 2006 film *La Gran final* (*The Great Match*).
In the film, a group of Tuareg nomads in the Sahara race to find a power supply and broadcast reception for their television in time to watch the 2002 FIFA World Cup Final between Germany and Brazil, eventually using the tree sculpture as a makeshift antenna.

In 2017, a group of artists created a massive, four-story tall LED sculpture entitled *Tree of Tenere* that was showcased at Burning Man. The sculpture consisted of 25,000 molded leaves containing 175,000 LEDs.[8] In 2021 the artists re-developed the sculpture and created a permanent installation in the Deep Ellum neighborhood of Dallas, Texas.[9]

In 2018, the tree's story appeared as a main theme in the official music video of "Transmission/Michaelion"[10] by Ibeyi.

## References

1. Riedacker, A. (1993). *Physiologie des arbres et arbustes en zones arides et semi-arides: séminaire, Paris-Nancy, 20 mars–6 avril 1990* (in French). John Libbey Eurotext. p. 406. ISBN 2-7420-0019-4. Retrieved 2009-08-11. "L'Hote (1961) note dans son article sur l'arbre du Ténéré (*Acacia raddiana*) que l'on aurait retrouvé ses racines à 30 métres de profondeur."
2. Le Roy, Robert (1998). *Méhariste au Niger: souvenirs sahariens* (in French). Karthala Editions. p. 108. ISBN 2-86537-778-4. Retrieved 2009-08-11. "Il avait fallu à cet *acacia tortilis* une belle vigueur et une fameuse chance pour subsister là, seul, jusqu'à élever son feuillage hors de portée des gazelles."
3. Kyalangalilwa, Bruce; Boatwright, James S.; Daru, Barnabas H.; Maurin, Olivier; van der Bank, Michelle (2013-08-01). "Phylogenetic position and revised classification of Acacia s.l. (Fabaceae: Mimosoideae) in Africa, including new combinations in Vachellia and Senegalia". *Botanical Journal of the Linnean Society*. **172** (4): 500–523. doi:10.1111/boj.12047. hdl:10566/3454. ISSN 0024-4074.
4. Nuwer, Rachel (October 24, 2013). "The Most Isolated Tree in the World Was Killed by a (Probably Drunk) Driver". *Smithsonian Magazine*.
5. L'Arbre du Ténéré, Part 2.
6. L'arbre du Ténéré, symbole de la survie dans le Sahara (in French).
7. Molly McBride Jacobson (ed.), "Last Tree of Ténéré", Atlas Obscura, December 4, 2008; accessed 2023-01-09.
8. Kane, Jenny (July 19, 2017). "Burning Man tree of lights inspired by world's loneliest tree". *Reno Gazette-Journal*. Retrieved October 17, 2022.
9. Karam, Yasmina. "DRIFT's interactive tree of ténéré to find new roots in texas this fall". *designboom*. Retrieved 10 June 2024.
10. https://www.youtube.com/watch?v=g70u2OrfVBE music video by Ibeyi.
true
true
true
null
2024-10-12 00:00:00
2003-08-20 00:00:00
https://upload.wikimedia…-tenere-1961.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
14,114,625
https://peteris.rocks/blog/modifying-xml-json-ini-configuration-files-without-sed/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
18,597,903
https://www.theguardian.com/technology/2018/dec/03/claps-and-cheers-apple-stores-carefully-managed-drama
Claps and cheers: Apple stores' carefully managed drama
Jonny Bunning; Guardian staff reporter
Steve Jobs wanted customers to understand the Apple store “with one sweep of the eye,” as if gods standing on Mount Olympus. Indeed, the outlets seem to speak for themselves. Bright, uncluttered, and clad in glass, they couldn’t contrast more sharply with the big-box labyrinths they were designed to replace. Neither could their profit margins. Since launching in 2001, the instantly recognizable stores have raked in more money – in total and per square foot – than any other retailer on the planet, transforming Apple into the world’s richest company in the process. Yet the very transparency of the Apple store conceals how those profits are made. When we think of “tech”, we rarely think of retail stores, and when we think of “tech workers” we rarely think of the low-waged “geniuses” who staff them. Most media coverage of tech companies encourages us to forget that the vast majority of their employees are not, in fact, coders in Silicon Valley: they’re the suicidal assemblers of your phone, the call-center support staff, the delivery drivers and the smiling shop floor staff who make up the majority of Apple’s workforce. The Apple store was explicitly designed as a brand embassy rather than a dedicated source of technical knowledge. As Ron Johnson, the former Target executive who came up with the concept, told the Harvard Business Review, “People come to the Apple store for the experience – and they’re willing to pay a premium for that … Apple is in the relationship business as much as the computer business.” Johnson and Jobs wanted ambassadors whose ostensible role was not to sell products – uniquely, Apple store employees receive no commission – but to create positive customer sentiment and repair trust in the brand when it broke. That was hard to do if your stuff was lumped in with everyone else’s in a big electronics store, overseen by third-party staff lacking any special expertise or interest in what you wanted to sell. The goal was to take full control of the brand image while *humanizing* it. The problem, however, was that humans can be rather unruly. **Fortunately for Apple, **someone had been hard at work fixing that bug. In 1984, a group of professors at Harvard Business School published a book, Managing Human Assets, aimed at updating workplace organization for a new era. The book was based on the first new compulsory course at the Harvard Business School in a generation, launched in 1981. Ron Johnson started his MBA at Harvard the next year, graduating as the book itself was released. Previously, the book argued, labor discipline could be achieved in a relatively straightforward top-down manner, but now it required something else. “The limitations of hierarchy have forced a search for other mechanisms of social control,” the authors said. The mechanisms they proposed consisted, at root, of treating employees as nominal stakeholders in business success, but within narrow limits that would increase rather than challenge shareholder profitability. Johnson put many of these ideas into practice. He found the first cohort of Apple store employees by personally interviewing every manager and offering jobs to upbeat staff working for competitors. He sent the first five managers through the Ritz-Carlton training program to learn concierge skills. Then he developed a training program for the in-house production of “geniuses”. (Jobs reportedly hated the term at first, finding it ridiculous. True to form, he asked his lawyers to apply for a trademark the following day.) 
How do you create an engaged, happy, knowledgable workforce that can pass, however implausibly, as an entire battalion of geniuses in towns across the country? More importantly, how do you do all of that without the stick of the authoritarian boss or the carrot of a juicy commission? Apple’s solution was to foster a sense of commitment to a higher calling while flattering employees that they were the chosen few to represent it. By counterintuitively *raising* the bar of admission, crafting a long series of interviews to weed out the mercenary or misanthropic, Johnson soon attracted more applicants than there were posts. Those keen enough to go through the onerous hiring process were almost by definition a better “fit” for the devotional ethos of the brand, far more receptive to the fiction that they weren’t selling things but, in an oft-repeated phrase, “enriching people’s lives”, as if they’d landed a job at a charity. “When people are hired,” Johnson explained, “they feel honored to be on the team, and the team respects them from day one because they’ve made it through the gauntlet. That’s very different from trying to find somebody at the lowest cost who’s available on Saturdays from 8 to 12.” While not the *lowest,* the cost of these eager staff was still low – relative to industry averages, to the amount they made for the company, and to the $400m that Johnson earned in his seven years at Apple. Lower wages also had another, less obvious effect. As Apple store managers explained to the New York Times, the lack of commissions meant that the job didn’t pay well enough to support those with dependents: older workers were functionally excluded from representing the brand without the need for a formal policy – or the attendant specter of discrimination lawsuits that it would raise. Deploying psychology, not the maximizing calculus of economic rationality (money), allowed Apple to turn hiring and wages into managerial props. The sense of higher calling and flattery doesn’t stop with the hiring process, of course. Make it through the gauntlet and you are “clapped in” by existing workers: given a standing ovation as if receiving a prize. The clapping, according to employees, continues until new hires, perhaps after a confused delay, begin clapping too, graduating from outside spectator to part of the performance – part of the team. Leave the company and you’re “clapped out”. Products are clapped, customers waiting overnight to buy them are clapped, their purchases are clapped, claps are clapped. Clap, clap, clap. “My hands would sting from all the clapping,” said one manager. Claps, cheers, performances of rapturous engagement provided, by design, a ready-mixed social glue to bind teams together, reaffirming both the character of the brand and employees’ cultish devotion to it. **It might be expected that **Apple store employees are, as their name implies, tech gurus with incredible intellects. But their true role has always been to use emotional guile to sell products. The Genius Training Student Workbook is the vaguely comical title of the manual from which Apple store employees learn their art. Prospective geniuses are taught to use empathetic communication to control customer experience and defuse tension, aiming to make them happy and relax their purse strings. One of the techniques the book teaches is the “three Fs”: feel, felt, found. Here’s an example from the book, meant to be role-played by trainees: Customer: This Mac is just too expensive. 
Genius: I can see how you’d feel this way. I felt the price was a little high, but I found it’s a real value because of all the built-in software and capabilities. When customers run into trouble with their products, geniuses are encouraged to sympathize, but only by apologizing that customers feel bad, lest they implicate Apple’s products as the source of the trouble. In this gas-lit performance of a “problem free” brand philosophy, many words are actually verboten for staff. Do not use words like *crash*, *hang*, *bug*, or *problem*, employees are told. Instead say *does not respond, stops responding, condition, issue, or situation*. Avoid saying *incompatible*; instead use *does not work with*. Staff have reported the absurdist dialogues that can result, like when they are not allowed to tell customers that they cannot help even in the most hopeless cases, leading customers into circular conversations with employees able neither to help nor to refuse to do so. **Apple’s “geniuses” perform on a stage** that’s as carefully managed as they are. Jobs and Johnson wanted to control every aspect of the Apple stores, down to the specific color of the bathroom signs. Almost every detail is trademarked, from stairs to display tables to storage racks. Even the supposedly “intuitive” layout, so obvious that it can be understood by all, is considered unique enough to warrant a suite of intellectual property protections. In part to counter the falling sales volume of a saturated market, Apple has spent the past two years overhauling its stores to work even harder. Potted trees have been added to give a green splash to the signature grey and, in a move so ridiculous it’s almost certain to be a hit, the Genius Bar has been rebranded the “Genius Grove”. Windows are opened to blur the distinction between inside and outside, and the stores are promoted as quasi-public spaces. “We actually don’t call them stores any more,” the new head of retail at Apple, former Burberry executive Angela Ahrendts (2017 salary: $24,216,072), recently told the press. “We call them town squares.” The town square. It’s an almost-quaint symbol of participatory civic life – a world away from the big-box sprawl that characterized the retail imaginary of the late 20th century, or even the digital isolation of the 21st. Apple’s goal has been to create spaces for people to just hang out in, extending the original insight that focusing on everything other than cold hard cash will paradoxically be the best way to rake it in. In Ahrendts’s vision, “the store becomes one with the community”. But the real hope seems to be closer to the opposite, that the community will become one with the store. **After Apple recently won** the race to surpass a $1tn valuation, CEO Tim Cook emailed staff to explain, “Financial returns are simply the result of Apple’s innovation, putting our products and customers first, and always staying true to our values.” While seductive, this story is, like the Apple store itself, a managed fiction. Apple’s system of operation is less the result of genius than of capture and control. Semiconductors, microprocessors, hard drives, touch screens, the internet and its protocols, GPS: all of these ingredients of Apple’s immense profitability were funded through public dollars channeled into research through the Keynesian institution called the US military. They are the basis of Apple’s products, as the economist Mariana Mazzucato has shown. 
The company’s extraordinary wealth is not simply a reward for innovation, or the legacy of “innovators” like Steve Jobs. Rather, it flows from the privatization of publicly funded research, mixed with the ability to command the low-wage labor of our Chinese peers, sold by empathetic retailers forbidden from saying “crash”. The profits have been stashed offshore, tax free, repatriated only to enrich those with enough spare cash to invest. But, as the public well from which it has drawn past innovations runs dry, the company’s ability to repeat the success of the iPhone is evaporating. Federal funding for scientific research is in deep decline, and Apple isn’t likely to make up the gap. To keep profitability high, Apple is moving to ever-more-luxury price tags for ever-more-marginal improvements (like the iPhone XS Max) and expanding its ability to extract rent by controlling the creativity of others (through Apple Music or the App Store, both impossible to sign out of without landing in pop-up purgatory). All the while its brand embassies sell a different story with a smile. - A longer version of this article first appeared in Logic, a new magazine devoted to deepening the discourse around technology. - Jonny Bunning is a PhD student in the History of Science & Medicine program at Yale. He tweets @bunnjey.
true
true
true
Those ‘geniuses’ in the bright, sleek Apple store are underpaid, overhyped and characters in a well-managed fiction story
2024-10-12 00:00:00
2018-12-04 00:00:00
https://i.guim.co.uk/img…bd45457c9b2c198d
article
theguardian.com
The Guardian
null
null
1,152,134
http://wonko.com/post/ruby-script-to-display-comcast-data-usage?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+wonko+%28wonko.com%29&utm_content=Google+Reader
Ruby script to retrieve and display Comcast data usage
null
**Update (2011-04-03):** Comcast’s user account pages now appear to require JavaScript, which makes it impossible to scrape the usage data using a simple script. As a result, this script no longer works.

Comcast has often advertised their high speed Internet service as providing “unlimited” data transfer, but when they say “unlimited”, what they really mean is “limited to 250GB a month”. Just before the new year, Comcast finally rolled out a data usage meter to users in the Portland, Oregon area so we can actually tell when we’re in danger of exceeding that 250GB ceiling.

I find this usage meter incredibly helpful in achieving my goal of using as much of my monthly 250GB data allotment as I possibly can. I feel it’s my duty to get my full money’s worth. Unfortunately, the meter is buried several pages deep in Comcast’s account site, which is a slow and ugly beast that requires a login, several redirects, and a click or two. So I whipped up a little Ruby script to do the dirty work for me and just print out my current usage total.

Before using the script, you’ll need to install the Mechanize gem:

```
gem install mechanize
```

Here’s the script:

```ruby
#!/usr/bin/env ruby
require 'rubygems'
require 'mechanize'

URL_LOGIN = 'https://login.comcast.net/login?continue=https://login.comcast.net/account'
URL_USERS = 'https://customer.comcast.com/Secure/Users.aspx'

abort "Usage: #{$0} <username> <password>" unless ARGV.length == 2

agent = Mechanize.new
agent.follow_meta_refresh = true
agent.redirect_ok = true
agent.user_agent = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6'

login_page = agent.get(URL_LOGIN)
login_form = login_page.form_with(:name => 'signin')
login_form.user = ARGV[0]
login_form.passwd = ARGV[1]

redirect_page = agent.submit(login_form)
redirect_form = redirect_page.form_with(:name => 'redir')
abort 'Error: Login failed' unless redirect_form

account_page = agent.submit(redirect_form, redirect_form.buttons.first)
users_page = agent.get(URL_USERS)

usage_text = users_page.search("div[@class='usage-graph-legend']").first.content
puts usage_text.strip
```

Save it to an executable file (I called it `capmon.rb`), then run it like so, passing in your Comcast.net username and password (they’ll be sent securely over HTTPS):

```
./capmon.rb myusername mypass
```

The script will log into your Comcast account, go through all those painful redirects and clicks, and eventually spit out your usage stats, which will look something like this:

```
166GB of 250GB
```

Couldn’t be simpler! Naturally, this script won’t work for you unless you’re a Comcast customer in a region where the usage meter is currently available. Also, the script will break if Comcast changes their login flow or page structure, but I’ll try to keep this post updated if that happens.

This script is available as a GitHub gist as well. If you’d like to modify it and make it better, please fork the gist.
true
true
true
null
2024-10-12 00:00:00
2010-02-25 00:00:00
null
null
wonko.com
wonko.com
null
null
8,129,021
http://joekarl.github.io/go-libapns/
go-libapns
null
A low level APNS library for Go with an emphasis on being fast and efficient.

While this library is (fairly) stable, it needs to be battle tested before really relying heavily on it. That being said, use it, break it, and I would love any testing that you can throw at it.

When it comes to APNS libraries, most of them make some fatal mistakes. Those mistakes tend to boil down to: not handling the latest guidelines from Apple, relying on inefficient networking, or not handling errors in the ways Apple requires you to. go-libapns tries to be a simple low level interface that should be able to handle large amounts of push notifications in a CPU, memory, and network efficient manner.

go-libapns also looks to be prepared for the future: with the impending expansion of the APNS payload and new fields, the code is mostly set up to handle these new features.

go-libapns is open source and the code is on GitHub. Feel free to submit pull requests and bug reports.

`go get github.com/joekarl/go-libapns`

Due to Apple's streaming protocol, errors are a bit tricky to handle correctly. However, by following a few simple guidelines, go-libapns will do the heavy lifting for you, allowing you to focus on what you want to do when an error occurs.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
22,491,266
https://www.npr.org/2020/03/02/811363404/when-xenophobia-spreads-like-a-virus
When Xenophobia Spreads Like A Virus : Code Switch
Natalie Escobar
# When Xenophobia Spreads Like A Virus

The global response to COVID-19 has made clear that the fear of contracting disease has an ugly cousin: xenophobia. As the coronavirus has spread from China to other countries, anti-Asian discrimination has followed closely behind, manifesting in plummeting sales at Chinese restaurants, near-deserted Chinatown districts and racist bullying against people perceived to be Chinese.

We asked our listeners whether they had experienced this kind of coronavirus-related racism and xenophobia firsthand. And judging by the volume of emails, comments and tweets we got in response, the harassment has been intense for Asian Americans across the country — regardless of ethnicity, location or age.

A common theme across our responses: Public transit has been *really* hostile. Roger Chiang, who works in San Francisco, recalled a white woman glaring at him on the train to work, covering her nose and mouth. When he told her in a joking tone that he didn't have the coronavirus, she replied that she "wasn't racist — she just didn't want to get sick."

Allison Park from Brooklyn told us that when visiting D.C., she saw a man making faces at her on the Metro train. She tried to move away from him, but he wouldn't stop. After a while, she said, he confronted her outright, saying: "Get out of here. Go back to China. I don't want none of your swine flu here." A week later, on a Muni train in San Francisco, another man yelled the same thing to her — "Go back to China" — and even threatened to shoot her.

Even a single cough or sneeze can trigger harassment. Amy Jiravisitcul from Boston said a man on a bus muttered about "diseased Chinese people" when she sneezed into her sleeve. When she confronted him, he told her: "Cover your fucking mouth." When South San Franciscan Diane Tran sneezed into her elbow in a hallway in a hospital, where she was getting a flu shot, she said a middle-aged white woman yelled a racist slur at her.

Children have been targeted, too — by other children and adults alike. Devin Cabanilla, from Seattle, told us that a Costco food sample vendor told his Korean wife and mixed-race son to "get away" from the samples, questioning whether they had come from China. Company executives later apologized to his family, but he's still shaken. "It just reminds me that when people look at us, they don't see us as American," he said.

Thirteen-year-old Sara Aalgaard told us that since the outbreak, many middle-school classmates of hers have been targeting the small population of Asian Americans at her school in Middletown, Conn. "People call us 'corona,' " she said, or ask if they eat dogs. Rebecca Wen from North Brunswick, N.J., told us that her 9-year-old son reported that his 11-year-old classmate said: "You're Chinese, so you must have the coronavirus."

The anti-Asian harassment isn't limited to the U.S., either. International outlets have reported harassment in majority-white countries like Australia, where parents in Melbourne refused to let Asian doctors treat their children, and Canada, where around 10,000 Toronto-area people signed a petition calling for the local school district to track and isolate Chinese-Canadian students who may have traveled to China for the Lunar New Year.

In Germany, Thea Suh said that when she sat down on her train to work, the person sitting next to her turned away and covered his face. A few days later, a woman told her to move her "corona-riddled body" elsewhere. Not once did someone step in to help, she said.
"I have also not seen or heard any German politician or major influencer coming to our defenses," she said. "And I feel like as a part of the so-called model minority, we are being left alone." That's another common theme from the responses we got: Witnesses and bystanders were slow to intervene. Allison Park remembers that when the man on the D.C. Metro told her to go back to China, the train was nearly two-thirds full, but no one said anything. At best, she got some sympathetic looks. Amy Jiravisitcul said that the other passengers ignored the yelling, which made her wonder whether they thought she was just making a scene. And when the harassment has passed, unease still lingers. Jane Hong from New York told us that when she and a fellow Korean American were walking from lunch, she heard a man screaming "yuck" in their direction. Now, she notices whenever people on the street look at her for more than a passing glance. "I don't know if 'paranoid' is the word," Hong said. "Now it's in my head. I wonder if they are thinking, 'I have to stay away from her, I don't want to walk near her.' Now that the seed has been planted in my head, it's hard to not have that thought cross my mind." *For more on xenophobia and coronavirus, *listen to this week's episode of the Code Switch podcast*. We hear from some of these folks, as well as Erika Lee, a historian at the University of Minnesota who studies history, immigration and epidemics. *
true
true
true
As international health agencies warn that COVID-19 could become a pandemic, fears over the new coronavirus' spread have activated old, racist suspicions toward Asians and Asian Americans. It's part of a longer history in the United States, in which xenophobia has often been camouflaged as a concern for public health and hygiene.
2024-10-12 00:00:00
2020-03-02 00:00:00
https://media.npr.org/as…400&c=100&f=jpeg
article
npr.org
NPR
null
null
17,554,256
https://people.stanford.edu/nbloom/sites/default/files/w24793.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,013,668
https://learncplusplus.org/modern-c-is-9-5-times-faster-than-python-in-prime-number-test/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,168,012
http://www.coinsetter.com/bitcoin-news/2014/08/12/unocoin-raises-250k-1348
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,952,459
https://momjian.us/main/blogs/pgblog/2019.html#January_16_2019
Postgres Blog
null
This blog is about my work on the Postgres open source database, and is published on Planet PostgreSQL. PgLife allows monitoring of all Postgres community activity.

### Las Vegas Event at re:Invent

*Sunday, December 1, 2019*

I am attending re:Invent this week. Thanks to the kind sponsorship of AWS, we are having a community Q&A and dinner on Thursday, December 5 in Las Vegas. The Meetup page is now online if you want to register.

### Implementing Transparent Data Encryption in Postgres

*Friday, September 27, 2019*

For the past 16 months, there has been discussion about whether and how to implement Transparent Data Encryption (TDE) in Postgres. Many other relational databases support TDE, and some security standards require it. However, it is also debatable how much security value TDE provides.

The TDE 400-email thread became difficult for people to follow, partly because full understanding required knowledge of Postgres internals and security details. A group of people who wanted to move forward began attending a Zoom call, hosted by Ahsan Hadi. The voice format allowed for more rapid exchange of ideas, and the ability to quickly fill knowledge gaps. It was eventually decided that all-cluster encryption was the easiest to implement in the first version. Later releases will build on this.

Fundamentally, TDE must meet three criteria — it must be secure, obviously, but it also must be done in a way that has minimal impact on the rest of the Postgres code. This has value for two reasons — first, only a small number of users will use TDE, so the less code that is added, the less testing is required. Second, the less code that is added, the less likely TDE will break because of future Postgres changes. Finally, TDE should meet regulatory requirements. This diagram by Peter Smith illustrates the constraints. There is an active TODO list to coordinate development. There is hope this can be completed in Postgres 13.

### Release of pgcryptokey

*Saturday, August 31, 2019*

Nine months ago, I started development of a key management extension for pgcrypto. The tool is called pgcryptokey and is now ready for beta testing. It uses two levels of encryption, with an access password required to use the cryptographic keys. It supports setting and changing the access password, multiple cryptographic keys, key rotation, data reencryption, and key destruction. It also passes the access password from client to server without it appearing in clear text in SQL queries, and supports boot-time setting. The extension leverages pgcrypto and Postgres custom server variables.

### Ibiza: A Different Type of Conference

*Friday, June 28, 2019*

Having returned from last week's Ibiza conference, I have a new understanding of the event's goals. I know there was some uncertainty about the event, for several reasons:

- Having a conference at a **resort** is a new thing for our community. We started years ago with conferences in universities, and steadily grew to hotel-based conferences in minor and then major cities.
- Ibiza has a reputation in some countries as a **party destination.** The wildest thing I saw was yelping dogs being walked along the beach.
- The **beach** mention often confused people. This was part of an effort to raise the importance of the **hallway track,** rather than it being just scheduled *holes* between technical talks.
I didn't realize it was possible to eat lunch and swim in the ocean during a 90-minute break, but I did it!

- There is historical **abuse** of resort-based conferences as paid or tax-free vacations. This was certainly not the case for Ibiza, but it is an additional hurdle.

I returned from the conference with a warm feeling for the venue, the people I met, and the things I learned, as did my wife and daughter. While resort conferences are not for everybody, they are popular in other industries, and there is certainly a need for this type of event. The next scheduled "beach" conference is in Bali, and I plan to attend.

### The Democratization of Databases

*Thursday, June 27, 2019*

Having delivered my new talk, *The Democratization of Databases*, at Postgres Ibiza and Postgres Vision, I am now posting my slides online. It covers the history of various governing structures and why democracy provides superior results. It has been well received.

### Postgres 12 Features Presentation

*Sunday, June 23, 2019*

Now that I have given a presentation about Postgres 12 features in Ibiza, I have made my slides available online.

### Exploring Postgres Tips and Tricks

*Thursday, June 6, 2019*

I did a webinar two weeks ago titled, "Exploring Postgres Tips and Tricks." The slides are now online, as well as a video recording. I wasn't happy with the transition I used from the PDF to the blog entries, but now know how to improve that next time. I think I might do more of these by expanding on some of the topics I covered, like *psql* and monitoring.

Also, a new video is available of the sharding presentation I mentioned previously.

### Updated Sharding Presentation

*Friday, May 31, 2019*

I presented my sharding talk today at PGCon in Ottawa. The slides have been updated to more clearly show what has been accomplished toward the goal of built-in sharding, and what remains to be done. The talk was well attended. I also attended a breakfast meeting this morning about sharding.

**Update:** A video is also available. *2019-06-06*

### Draft of Postgres 12 Release Notes

*Sunday, May 12, 2019*

I have completed the draft version of the Postgres 12 release notes. Consisting of 186 items, this release makes big advances in partitioning, query optimization, and index performance. Many long-awaited features, like reindex concurrently, multi-variate most-frequent-value statistics, and common table expression inlining, are included in this release. The release notes will be continually updated until the final release, which is expected in September or October of this year.

### The High Value of Data

*Friday, March 8, 2019*

There was a time when every piece of software had to be purchased: operating systems, compilers, middleware, text editors. Those days are mostly gone, though there are a few holdouts (e.g., MS Windows, vertical applications). What happened is that open source software has come to dominate most uses, and software selection is rarely based on cost requirements.

One of the final holdouts for purchased software is databases. You might think that is because database software is complex, but so is the other software mentioned. The big difference is that while non-database software processes or stores user data in a simple or standard way, databases lock user data inside the database. This data locking is a requirement for fast, reliable, and concurrent data access, but it does place the database on a different monetary plane.
In any organization, it is really their *data* that is valuable, and because the database is so tightly coupled to that valuable data, database software becomes something that is worth significant investment. This explains why databases have resisted the open source commoditization that has happened to so much other purchased software. (Custom database applications tied to complex business logic have also slowed migration.)

The rise of Postgres in the past few years shows that the days of expensive database software are numbered. However, once the move to open source becomes effectively complete, there will still be significant revenue opportunities. Few people purchase compiler support, and many don't even purchase operating system support, but database support, because of its tight integration with user data, might never disappear, though it could be subsumed into other costs like cloud computing. It will be the care and feeding of user data that people will pay for, rather than the database software itself, because it pays to protect things of value.

### Tool Vendor/Support Options

*Thursday, March 7, 2019*

Having explained that lock-in is not a binary option, what are the Postgres tool support options available, at a high level?

- Develop in-house database tools and support them yourself
- Use open source tools and support them yourself
- Use open source tools with vendor support (hopefully the vendor supports your chosen tools)
- Use closed-source tools with vendor support

Of course, you can mix and match these options, i.e., use a support vendor for the open source tools they support, use other open source tools they don't support, and use some tools you develop in-house, e.g.:

- open source Postgres database (vendor support)
- pgBackRest for backup (vendor support)
- patroni for failover (community support channels)
- In-house developed tools (self support)

I went over these options in more detail in this presentation. This diversity of options is rarely available for closed-source, single-vendor database solutions.

**Update:** This blog entry has much more detail about lock-in. *2019-09-26*

### SQL Replay for Replication?

*Wednesday, March 6, 2019*

Postgres has had streaming (binary) replication for a long time, and logical (row shipping) replication since Postgres 10. Implementing these was a lot of work, and they work well. However, the simplest way to do replication is often considered to be replaying SQL queries on standbys. The primary was modified by SQL queries, so isn't the simplest way to replicate replaying SQL? A novice would think so, and many database server developers initially try replication by replaying SQL. It seems simple because SQL queries are more concise than per-row changes. Imagine a delete that affects one million rows being shipped to a standby as a single SQL query. The conciseness and simplicity of SQL replication looks promising.

However, if you try implementing replication via SQL, you will realize that SQL runs in a multi-user environment. SQL commands do not contain enough information to replay queries the exact same way on standbys as the primary. Concurrent DML, volatile functions, sequence assignment, locks, and cursor contents can all cause inconsistencies. Developers have tried patching over these issues, but eventually the fundamental limitations of this approach become clear. I doubt Postgres will ever implement SQL-level replication for this reason.
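To see the volatile-function hazard in miniature, here is a tiny, hypothetical Python sketch (not Postgres code; the shipped "statement" is modeled as a callable). Two servers replay the exact same statement and still end up with different data, which is why row-shipping replication sends the resulting rows instead:

```python
import random

# Statement-based replication ships the *statement*, not the resulting rows.
# Any volatile expression is re-evaluated on each server, so servers drift.

def apply_statement(table, statement):
    """Replay a shipped 'statement' (a callable) against every row."""
    for row in table:
        row["x"] = statement()

primary = [{"id": 1}, {"id": 2}]
standby = [{"id": 1}, {"id": 2}]

stmt = lambda: random.random()   # think: UPDATE t SET x = random();

apply_statement(primary, stmt)
apply_statement(standby, stmt)   # same statement, independent evaluation

print(primary)
print(standby)                   # x values disagree: the replica has diverged
```

Sequence assignment, concurrent writers, and cursor state cause the same kind of divergence, just less visibly than a random number does.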
View or Post Comments

### Lock-In Is Not a Binary Decision

*Tuesday, March 5, 2019*

One of the major attractions of Postgres is the ability to stop using database software controlled by a single vendor. Single-vendor control means a single entity controls the software, tools, training, and support. There are sometimes options from other vendors, but they are usually hampered because the source code is closed. Postgres is a strong move away from that, but is it always a complete disconnection from lock-in? Well, it can be — you could:

- Download the Postgres source code and compile it yourself
- Evaluate, integrate, and test the tools you need to use Postgres in your organization
- Create a training program for your employees
- Develop a Postgres internals team to support Postgres and your tools

This is not possible for proprietary software, but because Postgres is open source, it is certainly possible. However, once you have completed these steps, what do you have? No lock-in? Well, no vendor/external lock-in, but you do have *internal* lock-in. Your staff is doing a lot of work, and any change in direction is going to be difficult. This also might be an expensive option due to staff costs. By choosing an external vendor, you can reduce costs, though you increase your external lock-in. (I covered this concept recently.)

So, lock-in isn't always a bad thing if it reduces costs or increases efficiency or flexibility. Importantly, you have the ability to switch vendors when advantageous, and since vendors know they can be easily replaced, they are less likely to be exploitative. Frankly, the problem to avoid is not lock-in as much as being a hostage of a single vendor.

View or Post Comments

### Corporate Backing

*Monday, March 4, 2019*

Postgres has long lived in the shadow of proprietary and other open source databases. We kind of got used to that, though we had early support from Fujitsu and ntt. In recent years, Postgres has become more noticed, and the big companies promoting Postgres have become somewhat of a flood. Even with ibm having DB2 and Microsoft having SQL Server, they still support Postgres. It is odd having multi-billion-dollar companies asking how they can help the Postgres community, but I guess we will have to get used to it. These companies support the community to varying degrees, but we certainly appreciate all the help we receive. Just having these companies list us as supported is helpful.

View or Post Comments

### Breaking Backward Compatibility

*Monday, February 25, 2019*

As an actively-developed open source project with a long history, Postgres often has to make decisions on how to integrate new features into the existing code base. In some cases, these new features potentially break backward compatibility, i.e., api breakage. This breakage can be caused by:

- Fixing an incorrect result or behavior
- Adding a feature that prevents the previous api from working
- Replacing an existing feature with an improved one that requires api breakage

In these cases, the Postgres project has several options:

- Add the new feature and retain the old api forever
- Add the new feature and retain the old api until all supported Postgres versions have the new interface (five years)
- Add the new feature and remove the old api
- Reject the new feature because it would cause api breakage

You might think that #1 is always the best option.
However, if you have ever used older software that has several ways to do the same thing, with no logic behind it except the order features were added, you know that choosing #1 has costs. While #1 allows existing users to continue using Postgres unchanged, new users have to navigate the complex api required to maintain backward compatibility. There are some cases where the breakage would be so significant that #1 (or #4) is the only option. However, choosing #2 or #3 allows future users to interact with Postgres using a clean api.

Backward-compatibility breakage can happen at several levels:

- Client interface, e.g., *libpq*
- Sql
- Administrative interface
- System catalogs
- Source code

The farther down the list, the more likely the Postgres community will decide to break backward compatibility because of the reduced impact of the breakage. This email thread discusses the problems caused by a system catalog change, and the positives and negatives of backward-compatibility breakage.

View or Post Comments

### The Maze of Postgres Options

*Friday, February 22, 2019*

I did a webcast earlier this week about the many options available to people choosing Postgres — many more options than are typically available for proprietary databases. I want to share the slides, which cover why open source has more options, how to choose a vendor that helps you be more productive, and specifically tool options for extensions, deployment, and monitoring.

View or Post Comments

### Trusted and Untrusted Languages

*Wednesday, February 20, 2019*

Postgres supports two types of server-side languages, trusted and untrusted. Trusted languages are available for all users because they have safe sandboxes that limit user access. Untrusted languages are only available to superusers because they lack sandboxes. Some languages have only trusted versions, e.g., PL/pgSQL. Others have only untrusted ones, e.g., PL/Python. Other languages like Perl have both.

Why would you want to have both trusted and untrusted languages available? Well, trusted languages like PL/Perl limit access to only safe resources, while untrusted languages like *PL/PerlU* allow access to file system and network resources that would be unsafe for non-superusers, i.e., it would effectively give them the same power as superusers. This is why only superusers can use untrusted languages. When choosing server-side languages, the availability of trusted and untrusted options should be considered.

View or Post Comments

### Order of Select Clause Execution

*Monday, February 18, 2019*

Sql is a declarative language, meaning you specify what you want, rather than how to generate what you want. This leads to a natural language syntax, like the select command. However, once you dig into the behavior of select, it becomes clear that it is necessary to understand the order in which select clauses are executed to take full advantage of the command.

I was going to write up a list of the clause execution ordering, but found this webpage that does a better job of describing it than I could. The ordering bounces from the middle clause (from) to the bottom to the top, and then the bottom again. It is hard to remember the ordering, but memorizing it does help in constructing complex select queries.
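As a rough illustration (my own annotation on a hypothetical *employees* table, not taken from that webpage), here is a query with each clause labeled in its logical order of execution:

```
SELECT dept, count(*) AS cnt    -- 5. compute the target list
FROM employees                  -- 1. gather the rows
WHERE salary > 50000            -- 2. filter individual rows
GROUP BY dept                   -- 3. aggregate rows into groups
HAVING count(*) > 1             -- 4. filter the groups
ORDER BY cnt DESC               -- 6. sort the results
LIMIT 10;                       -- 7. trim the output
```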
View or Post Comments

### Imperative to Declarative to Imperative

*Friday, February 15, 2019*

This email thread from 2017 asks the question of whether there is an imperative language that generates declarative output that can be converted into an imperative program and executed. Specifically, is there an imperative syntax that can output sql (a declarative language) which can be executed internally (imperatively) by Postgres?

The real jewel in this email thread is from Peter Geoghegan, who has some interesting comments. First, he explains why developers would want an imperative language interface, even if it has to be converted to declarative:

Some developers don't like sql because they don't have a good intuition for how the relational model works. While sql does have some cruft — incidental complexity that's a legacy of the past — any language that corrected sql's shortcomings wouldn't be all that different to sql, and so wouldn't help with this general problem. Quel wasn't successful because it was only somewhat better than sql was at the time.

However, the OP [original poster] seemed to be describing something that maps imperative code to a declarative sql query or something equivalent, which isn't quite the same thing. The declarative nature of sql feels restrictive or at least unfamiliar to many programmers. What is often somehow missed is that it's restrictive in a way that's consistent with how the relational model is supposed to work. It seems hard to some programmers because you have to formulate your query in terms of an outcome, not in terms of a series of steps that can be added to iteratively [emphasis added], as a snippet of code is written. It's very different to something like bash, because it requires a little bit of up-front, deliberate mental effort. And, because performance with many different possible outputs matters rather a lot.

Second, he explains why sql is one of the few successful declarative languages:

To state the very obvious: If you assume for the sake of discussion that the programmer of a hypothetical imperative query language is infinitely capable and dedicated, and so is at least as capable as any possible query optimizer, the optimizer still comes out ahead, because it is capable of producing a different, better query plan as the underlying data changes. Of course, it's also true that it's very hard to beat the query optimizer under ideal conditions.

Basically, *imperative* languages cannot adjust to changing data sizes and value frequencies, while declarative sql can. Another interesting observation from Chris Travers is that it is simpler to convert from declarative to imperative than the other direction. The thread contains many other interesting observations about why sql became so popular, and why it is unlikely that other languages will replace it anytime soon.

View or Post Comments

### Composite Values

*Wednesday, February 13, 2019*

You might not be aware that you can store a virtual row, called a composite value, inside a database field. Composite values have their own column names and data types. This is useful if you want to group multiple statically-defined columns inside a single column. (The json data types are ideal for *dynamically*-defined columns.) This email thread explains how to define and use them, I have a presentation that mentions them, and the Postgres manual has a section about them.

View or Post Comments

### At Time Zone Confusion

*Monday, February 11, 2019*

I saw at time zone used in a query, and found it confusing.
I read the Postgres documentation and was still confused, so I played with some queries and finally figured it out. I then updated the Postgres documentation to explain it better, and here is what I found.

First, at time zone has two capabilities. It allows time zones to be added to date/time values that lack them (timestamp *without* time zone, *::timestamp*), and allows timestamp *with* time zone values (*::timestamptz*) to be shifted to non-local time zones and the time zone designation removed. In summary, it allows:

- timestamp *without* time zone ⇾ timestamp *with* time zone (add time zone)
- timestamp *with* time zone ⇾ timestamp *without* time zone (shift time zone)

It is kind of odd for at time zone to be used for both purposes, but the sql standard requires this.

First, let's see #1, at time zone adding time zone designations:

```
SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'America/Chicago';
        timezone
------------------------
 2018-09-02 08:09:19-04

SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'America/Los_Angeles';
        timezone
------------------------
 2018-09-02 10:09:19-04

SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'Asia/Tokyo';
        timezone
------------------------
 2018-09-01 18:09:19-04
```

What is basically happening above is that the date and time are interpreted as being in the specified time zone (e.g., *America/Chicago*), a timestamp with time zone value is created, and the value displayed in the default time zone (*-04*). It doesn't matter if a time zone designation is specified in the *::timestamp* string — only the date and time are used. This is because casting a value to timestamp *without* time zone ignores any specified time zone:

```
SELECT '2018-09-02 07:09:19'::timestamp;
        timezone
------------------------
 2018-09-02 07:09:19-04

SELECT '2018-09-02 07:09:19-10'::timestamp;
        timezone
------------------------
 2018-09-02 07:09:19-04

SELECT '2018-09-02 07:09:19-12'::timestamp;
        timezone
------------------------
 2018-09-02 07:09:19-04
```

This behavior is also shown in at time zone:

```
SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'America/Chicago';
        timezone
------------------------
 2018-09-02 08:09:19-04

SELECT '2018-09-02 07:09:19-10'::timestamp AT TIME ZONE 'America/Chicago';
        timezone
------------------------
 2018-09-02 08:09:19-04

SELECT '2018-09-02 07:09:19-12'::timestamp AT TIME ZONE 'America/Chicago';
        timezone
------------------------
 2018-09-02 08:09:19-04
```

The second use of at time zone (#2) is for removing time zone designations by *shifting* the timestamp with time zone value to a different time zone and removing the time zone designation:

```
SELECT '2018-09-02 07:09:19-04'::timestamptz AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 06:09:19

SELECT '2018-09-02 07:09:19-04'::timestamptz AT TIME ZONE 'America/Los_Angeles';
      timezone
---------------------
 2018-09-02 04:09:19

SELECT '2018-09-02 07:09:19-04'::timestamptz AT TIME ZONE 'Asia/Tokyo';
      timezone
---------------------
 2018-09-02 20:09:19
```

In these cases, because the inputs are timestamp *with* time zone, time zone designations in the strings are significant:

```
SELECT '2018-09-02 07:09:19-04'::timestamptz AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 06:09:19

SELECT '2018-09-02 07:09:19-05'::timestamptz AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 07:09:19

SELECT '2018-09-02 07:09:19-06'::timestamptz AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 08:09:19
```

The time zone is not being added to the date and time.
Rather, the full date/time/time zone value is shifted to the desired time zone (*America/Chicago*), and the time zone designation removed (timestamp *without* time zone). This is useful because normally you would need to change your *TimeZone* setting to see values in other time zones.

Without the cast, at time zone inputs are assumed to be timestamp with time zone, and the local time zone is assumed if not specified:

```
SELECT '2018-09-02 07:09:19' AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 06:09:19

SELECT '2018-09-02 07:09:19-10' AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 12:09:19
```

Again notice the missing time zone designations in the results. The most interesting queries are these two, though they return the same output as input:

```
SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'America/Chicago' AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-02 07:09:19

SELECT '2018-09-02 07:09:19-04'::timestamptz AT TIME ZONE 'America/Chicago' AT TIME ZONE 'America/Chicago';
        timezone
------------------------
 2018-09-02 07:09:19-04
```

As you can see, the two at time zone calls cancel each other out. The first creates a timestamp with time zone in the *America/Chicago* time zone using the supplied date and time, and then shifts the value to that same time zone, removing the time zone designation. The second creates a timestamp without time zone value in the same time zone, then creates a timestamp with time zone value using the date and time in the default time zone (*TimeZone*). Using different time zones for the two calls yields useful results:

```
SELECT '2018-09-02 07:09:19'::timestamp AT TIME ZONE 'Asia/Tokyo' AT TIME ZONE 'America/Chicago';
      timezone
---------------------
 2018-09-01 17:09:19
```

This gives the *America/Chicago* time for the supplied *Asia/Tokyo* time — quite useful. I have updated the Postgres documentation to be clearer about at time zone. Hopefully, that change and this blog post make the feature less confusing, or more so.

View or Post Comments

### PgLife for Familiarization

*Friday, February 8, 2019*

I worked with two companies this week to help them build open-source Postgres teams. Hopefully we will start seeing their activity in the community soon. One tool I used to familiarize them with the Postgres community was PgLife. Written by me in 2013, PgLife presents a live dashboard of all current Postgres activity, including user, developer, and external topics. Not only a dashboard, you can drill down into details too. All the titles on the left are clickable, as are the detail items. The plus sign after each Postgres version shows the source code changes since its release. Twitter and Slack references have recently been added. I last mentioned PgLife here six years ago, so I thought I would mention it again.

FYI, this is my 542nd blog entry. If you missed any of them, see my category index at the top of this page.

View or Post Comments

### Expanding Permission Letters

*Wednesday, February 6, 2019*

Thanks to a comment on my previous blog post by Kaarel, the situation for simply displaying the Postgres permission letters is not quite as dire as I showed. There is a function, *aclexplode(),* which expands the access control list (acl) syntax used by Postgres into a table with full text descriptions. This function exists in all supported versions of Postgres. However, it was only recently documented in this commit based on this email thread, and will appear in the Postgres 12 documentation.
Since *aclexplode()* exists (undocumented) in all supported versions of Postgres, it can be used to provide more verbose output of the *pg_class.relacl* permission letters. Here it is used with the *test* table created in the previous blog entry:

```
SELECT relacl
FROM pg_class
WHERE relname = 'test';

                         relacl
--------------------------------------------------------
 {postgres=arwdDxt/postgres,bob=r/postgres,=r/postgres}

SELECT a.*
FROM pg_class, aclexplode(relacl) AS a
WHERE relname = 'test'
ORDER BY 1, 2;

 grantor | grantee | privilege_type | is_grantable
---------+---------+----------------+--------------
      10 |       0 | SELECT         | f
      10 |      10 | SELECT         | f
      10 |      10 | UPDATE         | f
      10 |      10 | DELETE         | f
      10 |      10 | INSERT         | f
      10 |      10 | REFERENCES     | f
      10 |      10 | TRIGGER        | f
      10 |      10 | TRUNCATE       | f
      10 |   16388 | SELECT         | f
```

Some columns are hard to understand. The first column is the role id of the grantor of the permission, which is *10* *(postgres)* for all entries. The second column is the role id of the role that was assigned the permission. In this case, zero *(0)* represents public. We can use *array_agg* to group permissions for each *grantor/grantee* combination into a single line:

```
SELECT a.grantor, a.grantee, array_agg(privilege_type)
FROM pg_class, aclexplode(relacl) AS a
WHERE relname = 'test'
GROUP BY 1, 2
ORDER BY 1, 2;

 grantor | grantee |                         array_agg
---------+---------+-----------------------------------------------------------
      10 |       0 | {SELECT}
      10 |      10 | {SELECT,UPDATE,DELETE,INSERT,REFERENCES,TRIGGER,TRUNCATE}
      10 |   16388 | {SELECT}
```

By casting the role id to text using *regrole,* the output is clearer (dash *(-)* represents public):

```
SELECT a.grantor::regrole, a.grantee::regrole, array_agg(privilege_type)
FROM pg_class, aclexplode(relacl) AS a
WHERE relname = 'test'
GROUP BY 1, 2
ORDER BY 1, 2;

 grantor  | grantee  |                         array_agg
----------+----------+-----------------------------------------------------------
 postgres | -        | {SELECT}
 postgres | postgres | {SELECT,UPDATE,DELETE,INSERT,REFERENCES,TRIGGER,TRUNCATE}
 postgres | bob      | {SELECT}
```

This is certainly better than the letter spaghetti that I showed in my previous blog post. By adding *relname* to the target list and removing the where clause, you can display the permissions of all database tables. By using the *pg_proc* table instead of *pg_class,* you can display verbose function permissions. This method can be used for any system table that has a column of type *aclitem[].*

### Permission Letters

*Monday, February 4, 2019*

If you have looked at Postgres object permissions in the past, I bet you were confused. I get confused, and I have been at this for a long time. The way permissions are stored in Postgres is patterned after the long directory listing of Unix-like operating systems, e.g., ls -l. Just like directory listings, the Postgres system stores permissions using single-letter indicators. *r* is used for read (select) in both systems, while *w* is used for write permission in *ls,* and update in Postgres. The other nine letters used by Postgres don't correspond to any directory listing permission letters, e.g., *d* is delete permission.
The full list of Postgres permission letters is in the documentation; the other letters are:

```
D -- TRUNCATE
x -- REFERENCES
t -- TRIGGER
X -- EXECUTE
U -- USAGE
C -- CREATE DATABASE/SCHEMA/TABLESPACE
c -- CONNECT
T -- TEMPORARY
s -- SET PARAMETER
A -- ALTER SYSTEM
```

Let's look at how these letters are stored in the Postgres system catalogs by using *psql's* *\dp* (or alias *\z*):

```
CREATE TABLE test (x INTEGER);

\dp
                            Access privileges
 Schema | Name | Type  | Access privileges | Column privileges | Policies
--------+------+-------+-------------------+-------------------+----------
 public | test | table |                   |                   |

CREATE USER bob;
GRANT SELECT ON TABLE test TO bob;

\dp
                                 Access privileges
 Schema | Name | Type  |     Access privileges      | Column privileges | Policies
--------+------+-------+----------------------------+-------------------+----------
 public | test | table | postgres=arwdDxt/postgres +|                   |
        |      |       | bob=r/postgres             |                   |
```

The output of the first *\dp* shows no permissions, indicating that the owner is the only role who has access to this object. As soon as permissions are added for anyone else, the owner (*postgres*) permissions are explicitly listed with the new permissions — in this case, for *bob.* The */postgres* at the end indicates the role who assigned the permissions. Giving public permissions shows a line similar to *bob,* but, to indicate public, there is no role name before the equals sign:

```
GRANT SELECT ON TABLE test TO PUBLIC;

\dp
                                 Access privileges
 Schema | Name | Type  |     Access privileges      | Column privileges | Policies
--------+------+-------+----------------------------+-------------------+----------
 public | test | table | postgres=arwdDxt/postgres +|                   |
        |      |       | bob=r/postgres            +|                   |
        |      |       | =r/postgres                |                   |
```

While this method of storing permissions is certainly compact, it is not obvious. I don't remember anyone complaining about our compact permissions display, and I am not sure what I would suggest to improve it, but it certainly takes study to become proficient at interpreting it.

View or Post Comments

### Limiting Superuser Activity

*Wednesday, January 30, 2019*

This interesting email thread explores the question of how much you can prevent or detect unauthorized database superuser activity. The main conclusions from the thread are:

- It is impossible to *restrict* database administrator access without hindering their ability to perform their jobs
- *Monitoring* superuser activity is the most reasonable way to detect and hopefully discourage unauthorized activity
- Monitoring includes:
  - Assign a separate account to each administrator for auditing purposes; do not use generic/shared accounts
  - Use an auditing tool to record database activity, e.g., pgAudit
  - Use syslog to send database logs to a computer not under database administrators' control
  - Record all shell command activity in a similar way

There is also a helpful summary email.

View or Post Comments

### Postgres Encryption Maze

*Monday, January 28, 2019*

This wide-ranging email thread covers many of the challenges of adding encryption to Postgres. There is discussion of:

- The need to understand the threats you are protecting against, "For anyone to offer a proper solution, you need to say what purpose your encryption will serve."
- The need for layers of protection
- The questionable usefulness of storage encryption, "Thus, unless you move your DB server on a regular basis, I can't see the usefulness of whole database encryption (WDE) on a static machine."
- The value of encrypting network storage, "Having the 'disk' associated with a specific server encrypted can provide some level of protection from another machine which also has access to the underlying infrastructure from being able to access that data."
- Credit Card industry requirements, "Non-Public Information (NPI) data should not be logged nor stored on a physical device in non-encrypted mode."
- The limits of per-column encryption, "It is extremely unlikely you just want all the data in the database encrypted." (These five emails from another thread, 1, 2, 3, 4, 5, also discuss this topic.)
- The many other database products that support built-in column-level encryption

As you can see, the discussion was all over the map. The Postgres project probably needs to do a better job communicating about these options and their benefits.

View or Post Comments

### Pooler Authentication

*Friday, January 25, 2019*

One frequent complaint about connection poolers is the limited number of authentication methods they support. While some of this is caused by the large amount of work required to support all fourteen Postgres authentication methods, the bigger reason is that only a few authentication methods allow for the clean passing of authentication credentials through an intermediate server. Specifically, all the password-based authentication methods (*scram-sha-256,* *md5,* *password*) can easily pass credentials from the client through the pooler to the database server. (This is not possible using scram with channel binding.) Many of the other authentication methods, e.g. *cert,* are designed to prevent man-in-the-middle attacks and therefore actively thwart passing through of credentials. For these, effectively, you have to set up two sets of credentials for each user — one for client to pooler, and another from pooler to database server, and keep them synchronized. A pooler built into Postgres would have fewer authentication pass-through problems, though internal poolers have some down sides too, as I already stated.

View or Post Comments

### Synchronizing Authentication

*Wednesday, January 23, 2019*

I have already talked about external password security. What I would like to talk about now is keeping an external-password data store synchronized with Postgres. Synchronizing the password is not the problem (the password is *only* stored in the external password store), but what about the existence of the user? If you create a user in ldap or pam, you would like that user to also be created in Postgres. Another synchronization problem is role membership. If you add or remove someone from a role in ldap, it would be nice if the user's Postgres role membership was also updated. *ldap2pg* can do this in batch mode. It will compare ldap and Postgres and modify Postgres users and role membership to match ldap. This email thread talks about a custom solution that instantly creates users in Postgres when they are created in ldap, rather than waiting for a periodic run of *ldap2pg.*

### Insufficient Passwords

*Monday, January 21, 2019*

As I already mentioned, passwords were traditionally used to prove identity electronically, but are showing their weakness as computing power has increased and attack vectors have expanded.
Basically, user passwords have several restrictions:

- must be simple enough to remember
- must be short enough to type repeatedly
- must be complex enough to not be easily guessed
- must be long enough to not be easily cracked (discovered by repeated password attempts), or the number of password attempts must be limited

As you can see, the simple/short and complex/long requirements are at odds, so there is always a tension between them. Users often choose simple or short passwords, and administrators often add password length and complexity requirements to counteract that, though there is a limit to the length and complexity that users will accept. Administrators can also add delays or a lockout after unsuccessful authorization attempts to reduce the cracking risk. Logging of authorization failures can sometimes help too. While Postgres records failed login attempts in the server logs, it doesn't provide any of the other administrative tools for password control. Administrators are expected to use an external authentication service like ldap or pam, which have password management features.

For a truly sobering view of users' motivation to improve security, read this 2010 paper mentioned on our email lists. These sentences sum it up:

Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses.

Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less. In the model we set forward it is not users who need to be better educated on the risks of various attacks, but the security community. Security advice simply offers a bad cost-benefit tradeoff to users.

Combine this with the limited value of password policies, the difficulty in determining if a site is using a valid tls/ssl certificate, and the questionable value of certificate errors, and it makes you want to rely as little as possible on users for security.

View or Post Comments

### Removable Certificate Authentication

*Wednesday, January 16, 2019*

I mentioned previously that it is possible to implement certificate authentication on removable media, e.g., a usb memory stick. This blog post shows how it is done.
First, root and server certificates and key files must be created:

```
$ cd $PGDATA

# create root certificate and key file
$ openssl req -new -nodes -text -out root.csr -keyout root.key -subj "/CN=root.momjian.us"
$ chmod og-rwx root.key
$ openssl x509 -req -in root.csr -text -days 3650 -extfile /etc/ssl/openssl.cnf -extensions v3_ca -signkey root.key -out root.crt

# create server certificate and key file
$ openssl req -new -nodes -text -out server.csr -keyout server.key -subj "/CN=momjian.us"
$ chmod og-rwx server.key
$ openssl x509 -req -in server.csr -text -days 365 -CA root.crt -CAkey root.key -CAcreateserial -out server.crt
```

Then ssl must be enabled, *cert* authentication specified, and the server restarted:

```
# configure server for SSL and client certificate authentication
$ psql -c 'ALTER SYSTEM SET ssl = ON;' test
$ psql -c "ALTER SYSTEM SET ssl_ca_file = 'root.crt';" test

# configure pg_hba.conf for 'cert'
$ sed 's/host/#host/' pg_hba.conf > /tmp/$$ && cat /tmp/$$ > pg_hba.conf && rm /tmp/$$
$ echo 'hostssl all all 127.0.0.1/32 cert' >> pg_hba.conf

# restart server
$ pg_ctl stop
$ pg_ctl -l server.log start
```

Finally, the client must have a copy of the root certificate and a client certificate must be created:

```
# copy root certificate to the client
$ mkdir ~/.postgresql 2> /dev/null
$ cd ~/.postgresql
$ cp $PGDATA/root.crt .

# create client certificate and key file
# Use of -nodes would prevent a required password
$ openssl req -new -text -out postgresql.csr -keyout postgresql.key -subj "/CN=postgres"
$ chmod og-rwx postgresql.key
$ openssl x509 -req -in postgresql.csr -text -days 365 -CA $PGDATA/root.crt -CAkey $PGDATA/root.key -CAcreateserial -out postgresql.crt
```

And now the test:

```
$ psql -c 'SELECT version();' 'host=momjian.us dbname=test sslmode=verify-full'
Enter PEM pass phrase:
                                              version
---------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10+deb8u1) 4.9.2, 64-bit
```

With this all in place, we can implement removable certificate authentication. Insert a usb memory stick and give it a name, which causes the usb file system to appear at a predictable file path each time it is inserted. (The method varies among operating systems.) Now, move the ~/.postgresql directory to the usb memory stick and create a symbolic link to it from the local file system:

```
$ mv ~/.postgresql '/media/bruce/Bruce M USB/.postgresql'
$ ln -s '/media/bruce/Bruce M USB/.postgresql' ~/.postgresql
```

With the usb memory stick inserted, *psql* runs normally, but when it is ejected, the symbolic link points to nothing and an error is generated:

```
$ psql -c 'SELECT version();' 'host=momjian.us dbname=test sslmode=verify-full'
psql: root certificate file "/var/lib/postgresql/.postgresql/root.crt" does not exist
Either provide the file or change sslmode to disable server certificate verification.
```

In conclusion, *cert* authentication with a password-enabled private key is already two-factor authentication — the private key file ("Something you have"), and its password ("Something you know"). By storing the private key file on a usb memory stick, the "Something you have" becomes independent of the computer used to access the database. As I mentioned before, piv devices have even more security advantages.

### Three Factors of Authentication

*Monday, January 14, 2019*

Traditionally, passwords were used to prove identity electronically. As computing power has increased and attack vectors expanded, passwords are proving insufficient.
Multi-factor authentication uses more than one authentication factor to strengthen authentication checking. The three factors are:

- Something you know, e.g., password, pin
- Something you have, e.g., cell phone, cryptographic hardware
- Something you are, e.g., fingerprint, iris pattern, voice

Postgres supports the first option, "Something you know," natively using local and external passwords. It supports the second option, "Something you have," using *cert* authentication. If the private key is secured with a password, that adds a second required factor for authentication. *Cert* only supports private keys stored in the file system, like a local file system or a removable usb memory stick.

One enhanced authentication method allows access to private keys stored on piv devices, like the YubiKey. There are two advantages of using a piv device compared to *cert:*

- Requires a pin, like a private-key password, but locks the device after three incorrect pin entries (File-system-stored private keys protected with passwords can be offline brute-force attacked.)
- While the private key can be used to decrypt and sign data, it cannot be copied from the piv device, unlike one stored in a file system

Unfortunately, *libpq* does not support piv access directly, though it can be accessed using external authentication methods like pam. Google Authenticator and FreeOTP can also be used to add a second factor of authentication to pam and other external authentication methods. The third type of authentication factor, "Something you are," also requires an external authentication method. It is unclear if Postgres should support more authentication methods directly or improve documentation about how to integrate them with existing external authentication methods.

View or Post Comments

### Fourteen Authentication Methods

*Wednesday, January 2, 2019*

Postgres supports fourteen authentication methods — that might seem like a lot, but Postgres is used in many environments, and it has to support whatever methods are being used in those environments. The fourteen methods can seem confusing, but they are easier to understand in categories:

- absolute: *trust*, *reject* (always allow or reject)
- local password: scram-sha-256, md5, *password* (compare a user-supplied password with something stored in the database)
- external password: ldap, pam, radius, bsd (compare to a password stored outside the database)
- trusted network: *peer*, *ident* (rely on the network connection to authenticate)
- trusted tokens: gss, sspi (use possession of a token generated by a trusted key distribution server)
- certificate authority: *cert* (uses access to the private key of a certificate signed by a trusted certificate authority)

So, there is one absolute and five conditional classes of authentication.

View or Post Comments
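As a small illustration of the "local password" class above (a sketch using a hypothetical role name, not from the original post), the verifier stored by Postgres reveals which method protects a role:

```
SET password_encryption = 'scram-sha-256';  -- the default in Postgres 14 and later

CREATE ROLE alice LOGIN PASSWORD 'correct horse battery staple';

-- Reading pg_authid requires superuser privileges.
SELECT rolname, substring(rolpassword for 14) AS method
FROM pg_authid
WHERE rolname = 'alice';
-- returns: alice | SCRAM-SHA-256$
```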
true
true
true
null
2024-10-12 00:00:00
2018-09-02 00:00:00
null
null
null
null
null
null
20,851,833
https://www.quantamagazine.org/iyad-rahwan-is-the-anthropologist-of-artificial-intelligence-20190826/
The Anthropologist of Artificial Intelligence
John Pavlus August
# The Anthropologist of Artificial Intelligence ## Introduction How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described “maverick” tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how to best describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which they had set out to explore. Rahwan, for example, had been exploring the question of ethical behavior for a self-driving car — should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist? — in his Moral Machine experiment. “I was good friends with Iain Couzin, one of the world’s foremost animal behaviorists,” Rahwan said, “and I thought, ‘Why isn’t he studying online bots? Why is it only computer scientists who are studying AI algorithms?’ “All of a sudden,” he continued, “it clicked: We’re studying behavior in a new ecosystem.” Two years later, Rahwan, who now directs the Center for Humans and Machines at the Max Planck Institute for Human Development, has gathered 22 colleagues — from disciplines as diverse as robotics, computer science, sociology, cognitive psychology, evolutionary biology, artificial intelligence, anthropology and economics — to publish a paper in *Nature* calling for the inauguration of a new field of science called “machine behavior.” Directly inspired by the Nobel Prize-winning biologist Nikolaas Tinbergen’s four questions — which analyzed animal behavior in terms of its function, mechanisms, biological development and evolutionary history — machine behavior aims to empirically investigate how artificial agents interact “in the wild” with human beings, their environments and each other. A machine behaviorist might study an AI-powered children’s toy, a news-ranking algorithm on a social media site, or a fleet of autonomous vehicles. But unlike the engineers who design and build these systems to optimize their performance according to internal specifications, a machine behaviorist observes them from the outside in — just as a field biologist studies flocking behavior in birds, or a behavioral economist observes how people save money for retirement. “The reason why I like the term ‘behavior’ is that it emphasizes that the most important thing is the observable, rather than the unobservable, characteristics of these agents,” Rahwan said. He believes that studying machine behavior is imperative for two reasons. For one thing, autonomous systems are touching more aspects of people’s lives all the time, affecting everything from individual credit scores to the rise of extremist politics. But at the same time, the “behavioral” outcomes of these systems — like flash crashes caused by financial trading algorithms, or the rapid spread of disinformation on social media sites — are difficult for us to anticipate by examining machines’ code or construction alone. “There’s this massively important aspect of machines that has nothing to do with how they’re built,” Rahwan said, “and has everything to do with what they do.” *Quanta* spoke with Rahwan about the concept of machine behavior, why it deserves its own branch of science, and what it could teach us. The interview has been condensed and edited for clarity. ### Why are you calling for a new scientific discipline? 
Why does it need its own name? This is a common plight of interdisciplinary science. I don’t think we’ve invented a new field so much as we’ve just labeled it. I think it’s in the air for sure. People have recognized that machines impact our lives, and with AI, increasingly those machines have agency. There’s a greater urgency to study how we interact with intelligent machines. Naming this emerging field also legitimizes it. If you’re an economist or a psychologist, you’re a serious scientist studying the complex behavior of people and their agglomerations. But people might consider it less important to study machines in those systems as well. So when we brought together this group and coined this term “machine behavior,” we’re basically telling the world that machines are now important actors in the world. Maybe they don’t have free will or any legal rights that we ascribe to humans, but they are nonetheless actors that impact the world in ways that we need to understand. And when people of high stature in those fields sign up [as co-authors] to this paper, that sends a very strong signal. ### You mentioned free will. Why even call this phenomenon “behavior,” which seems to unnecessarily invite that association? Why not use a term like “functionality” or “operation”? Some people have a problem with giving machines agency. For instance, Joanna Bryson from the University of Bath, she’s always outspoken against giving machines agency, because she thinks that then you’re removing agency and responsibility from human actors who may be misbehaving. But for me, behavior doesn’t mean that it has agency [in the sense of free will]. We can study the behavior of single-celled organisms, or ants. “Behavior” doesn’t necessarily imply that a thing is super intelligent. It just means that our object of study isn’t static — it’s the dynamics of how this thing operates in the world, and the factors that determine these dynamics. So, does it have incentives? Does it get signals from the environment? Is the behavior something that is learned over time, or learned through some kind of copying mechanism? ### Don’t the engineers who design these agents make those decisions? Aren’t they deterministically defining this behavior in advance? They build the machines, program them, build the architecture of the neural networks and so on. They’re engineering, if you like, the “brain” and the “limbs” of these agents, and they do study the behavior to some extent, but only in a very limited way. Maybe by looking at how accurate they are at classifying things, or by testing them in a controlled environment. You build the machine to perform a particular task, and then you optimize your machine according to this metric. But its behavior is an open-ended aspect. And it’s an unknown quantity. There are behaviors that manifest themselves across different timescales. So [when you’re building it] maybe you focus on short timescales, but you can only know that long-timescale behavior once you deploy these machines. ### Imagine that machine behavior is suddenly a mature field. What does it let us understand or do better? I think we would be able to better diagnose emergent [technological] problems, and maybe anticipate them. For example, for somebody designing Facebook and its news feed algorithm, maybe we would have known early enough that this was going to lead to a far more polarized political sphere and a lot of spreading of misinformation. 
And maybe we’d have been able to build immunity into the system, so that it could self-correct. Today, we’re sort of figuring things out as we go. So the companies build a thing and then they launch it and then say, “Oh, wow, all of a sudden there’s spam everywhere” or “there’s misinformation everywhere.” Or “people seem to hate each other and are yelling at each other all the time on Twitter.” Maybe there are lessons in nature that would have allowed us not just to engineer solutions, but also to detect those problems a little bit earlier. So, what’s the [machine behavior] equivalent of colony collapse? Or what’s an equivalent of speciation? Is there an analogy that we could use to anticipate problems that would undermine democracy, freedom of speech, and other values that we hold dear, as technology is introduced? That’s the broad goal. ### Why is this biology-inspired framework necessary? There are lots of nature-inspired algorithms. For instance, an algorithm similar to swarming is being developed to allow inter-vehicle communication [between autonomous cars], so if there’s an accident on the side of the road, they can smooth things out and you don’t end up with traffic jams. So they’ll go to nature and look at animal behaviors for inspiration, but then they’ll just let the thing loose. A behaviorist would notice things that emerge once you’ve let these cars into the wild. This isn’t happening now, but imagine that this system that vehicles use to signal each other for minimizing traffic jams interacts with a car’s reinforcement learning algorithm for optimizing its own behavior and causes some coordination pattern that wasn’t preprogrammed. I can imagine an ecologist saying, “Oh, I know this, I’ve seen this kind of species of bees do this.” ### What happens when machines exhibit behavioral patterns that are completely new — that aren’t analogous to anything biological or ecological? That’s going to be very important. Maybe [the field] will begin with a kind of “categorizing butterflies” phase, where we say, “Here are these types of machines, and they fall into these classes.” Eventually we’d develop some kind of theory of how they change, and maybe also an understanding of how they feed off each other. ### Does a washing machine have “behavior”? Is there some lower bound of autonomy or intelligence that makes a machine suitable for this kind of study? Intelligence is such a hard word to describe. What we mean by that is any machine exhibiting behavior that is not completely derivable from its design. A washing machine has behavior — sometimes it malfunctions, and of course it’s very frustrating. But it’s a fairly closed system in which all the kinds of functions and malfunctions can be predicted and described with precision, to a large degree. So you could say it has uninteresting behavior. But then you could have a very simple algorithm that just repeats or retweets things. It could be rule based, but it can still exhibit interesting behaviors once it interacts with the world, with humans and with other algorithms. So while it’s not very intelligent, the consequences of this simple behavior are much harder to foresee or to understand or to describe just based on its design. ### How would a machine behaviorist study, say, self-driving cars differently than an engineer? An engineer who is trying to improve the car toward some performance objective would, for example, test the car under different driving conditions, and they would test different components of the vehicle. 
The focus is very much on performance. But once this whole thing is built, you’ve got an agent — an actual, physical agent — moving around in the world and doing things. And you could do all kinds of things from the behavioral perspective when you’re looking at this agent. For example, let’s assume that, overall, autonomous vehicles have managed to eliminate some 90% of fatal accidents. Let’s assume that among those remaining fatalities, one carmaker is just killing far fewer passengers — but twice as many pedestrians. If we’re not taking a behavioral perspective on autonomous cars, we wouldn’t be looking for these things. We’d just be certifying that the car’s systems were performing adequately according to certain benchmarks, but we would be missing this kind of emergent behavior that may be problematic. That’s the kind of thing that economists or very highly quantitative social scientists could do, with absolutely no knowledge of the underlying engineering mechanisms of the vehicle. ### What’s the goal of machine behavior — to make these “agents” as predictable as washing machines? Well, why do we care about understanding animal behavior? Part of it is pure curiosity, of course. But as a society we support science also because we want to understand the mechanisms that drive the world and use this understanding to predict these phenomena. We’d like the ecosystem to be healthy, but we also want to thrive economically, and so on. I would say the same thing holds for machine behavior. Machines promise to vastly increase our economic productivity and bring information to our fingertips and improve our lives. But a lot of people are afraid of what these machines might do. Maybe they could impede our autonomy. So again, I think an objective scientific understanding of the behavior of those machines is important and should inform a discussion about how we want to control those machines. Are they improving fast? Should we regulate them? Not regulate them? How exactly do we regulate them? And so on. So I see the study of machine behavior as the scientific task that complements the broader societal task of thriving with the machines.
true
true
true
Iyad Rahwan’s radical idea: The best way to understand algorithms is to observe their behavior in the wild.
2024-10-12 00:00:00
2019-08-26 00:00:00
https://d2r55xnwy6nx47.c…_1200_social.jpg
article
quantamagazine.org
Quanta Magazine
null
null
8,204,224
http://codepen.io/pixelass/full/CsItl
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,806,833
http://minimaxir.com/2016/05/sfba-compensation/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,699,571
https://www.youtube.com/watch?v=nSONKMisWis
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
605,513
http://drop.io/IoFPLbySPJ
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,799,126
https://mashable.com/article/nft-frosties-rug-pull-scam-arrest
Feds arrest alleged scammers behind notorious NFT rug pull
Jack Morse
The rug-puller arrests have begun. The Department of Justice announced Thursday that law enforcement rounded up two men accused of running not one, but two fraudulent non-fungible token projects. The men, both 20 years old, allegedly sold NFTs with a promised raft of benefits to unsuspecting investors before disappearing with the funds and leaving holders out to dry. At issue were "Frosties" NFTs, still listed on OpenSea, which the DOJ said Ethan Nguyen and Andre Llacuna pitched as guaranteeing exclusive giveaways and "early access to a metaverse game." Of course, the two men allegedly "abandoned the Frosties NFT project within hours after selling out of Frosties NFTs, deactivated the Frosties website, and transferred approximately $1.1 million in cryptocurrency proceeds from the scheme[.]" In late 2021, the blockchain analytics firm Chainalysis reported that rug pulls "have emerged as the go-to scam of the DeFi ecosystem, accounting for 37% of all cryptocurrency scam revenue in 2021" for a total of at least $2.8 billion worth of crypto that year alone. Nguyen and Llacuna are charged with wire fraud, which has a maximum potential sentence of 20 years in prison. Notably, the two men supposedly operated under a ream of aliases, including such gems as "Frostie," "Jakefiftyeight," "Jobo," "Joboethan," "Meltfrost," and "heyandre." And, at least according to law enforcement, Nguyen and Llacuna were queued up to run another rug pull expected to garner around $1.5 million in sales. That project, Embers, was scheduled to mint on March 26. "Each individual Ember is carefully curated from over 150 traits, along with some incredibly rare 1/1s that have traits that can't be found from any other Ember," reads the project's webpage. "Our vision is to create an amazing project that will shed light, joy, love, and creativity! Burn on, Embers!" Thursday's arrests make it all the more clear that cryptocurrency history is repeating itself. In 2017, the initial coin offering (ICO) boom drew scores of scammers and celebrity shills who profited off retail investors FOMOing into cryptocurrency. It took some time, but law enforcement and the Securities and Exchange Commission eventually cracked down on those scammers, too. What we're seeing now is likely the tip of the Department of Justice's investigatory iceberg when it comes to NFT scams. Don't be surprised if Nguyen and Llacuna's arrests only represent the first of many to come. Topics Cryptocurrency
true
true
true
The two were about to launch a second NFT project when they were arrested.
2024-10-12 00:00:00
2022-03-24 00:00:00
https://helios-i.mashabl….v1648158437.png
article
mashable.com
Mashable
null
null
21,465,694
https://github.com/alexellis/mongodb-function
GitHub - alexellis/mongodb-function: OpenFaaS Function that makes use of a connection pool to access MongoDB
Alexellis
This is a simple example of how to use connection pooling in MongoDB with OpenFaaS on Kubernetes.

- In the first sequence we've had no calls made to the function, so the connection pool is not yet initialized. `prepareDB()` was never called.
- In the second sequence `prepareDB()` has been called, and since there was no instance of a connection in memory, we create one; the dotted line shows the connection being established. This will then open a connection to MongoDB over the network.
- In the third sequence we see that subsequent calls detect that a connection exists and go straight to the connection pool. We have one active connection, and the two shown with dotted lines are about to be closed due to inactivity.

Before we can build and deploy the example we'll set up OpenFaaS on Kubernetes or Swarm, followed by MongoDB. This configuration is suitable for development and testing.

- Start by cloning the Github repository:

```
$ git clone https://github.com/alexellis/mongodb-function
```

- Install OpenFaaS with `helm` https://docs.openfaas.com/deployment/kubernetes/
- Install the OpenFaaS CLI `curl -sL https://cli.openfaas.com | sudo sh`
- Set your `OPENFAAS_URL` variable

```
$ export OPENFAAS_URL=127.0.0.1:31112
```

If you're using minikube or a remote machine then replace 127.0.0.1 with that IP address.

- Install mongodb via `helm`

```
$ helm install stable/mongodb --name openfaas-db \
   --namespace openfaas-fn \
   --set persistence.enabled=false
```

Note down the name of the MongoDB instance, i.e., `openfaas-db-mongodb`. If you want to use the fully-qualified DNS name that would be: `openfaas-db-mongodb.openfaas-fn.svc.cluster.local.`

Now skip ahead to "Build and test"

- Install OpenFaaS with Docker https://docs.openfaas.com/deployment/docker-swarm/
- Install the OpenFaaS CLI `curl -sL https://cli.openfaas.com | sudo sh`
- Set your `OPENFAAS_URL` variable

```
$ export OPENFAAS_URL=127.0.0.1:8080
```

- Create a mongodb Docker Service

```
$ docker service create --network=func_functions --name openfaas-db-mongodb --publish 27017:27017 mongo mongod
```

The entry for the stack.yml file will be the IP of your Docker Swarm manager.

- Update your stack.yml's mongo field with the MongoDB DNS entry/IP from the prior steps
- Replace the "alexellis/" Docker Hub prefix in stack.yml with your own account
- Build/push/deploy

Pull in the *node8-express* template:

```
$ faas template pull https://github.com/openfaas-incubator/node8-express-template
```

Now build, push, and deploy:

```
$ faas build && faas push && faas deploy
```

- Get a load-testing tool

This requires a local installation of Go.

```
$ go get -u github.com/rakyll/hey
```

An alternative tool would be Apache-Bench, which is available for most Linux distributions via a package manager.

- Run a test

Let's start by running a single request with `curl`:

```
$ curl http://$OPENFAAS_URL/function/insert-user \
  --data-binary '{"first":"Alex", "last": "Ellis"}' \
  -H "Content-Type: application/json"
```

Now run a load-test with `hey`:

```
$ ~/go/bin/hey -m POST -d '{"first":"Alex", "last": "Ellis"}' \
  -H "Content-Type: application/json" \
  -n 1000 -c 10 http://$OPENFAAS_URL/function/insert-user
```

This test posts the JSON body in 1000 requests, 10 at a time. Here's an abridged output from `hey` with the function running on a remote server and the test being run from my laptop:

```
Summary:
  Requests/sec: 1393.2083
...
Status code distribution:
  [200] 10000 responses
```

If you look at the logs of the Mongo deployment or service, you will see that only between 1 and 10 connections are open during the load test. This shows that the connection pool is being used by our function.

On Kubernetes:

```
$ kubectl logs deploy/openfaas-db-mongodb -f -n openfaas-fn
```

On Swarm:

```
$ docker service logs openfaas-db-mongodb -f
```
true
true
true
OpenFaaS Function that makes use of a connection pool to access MongoDB - alexellis/mongodb-function
2024-10-12 00:00:00
2018-03-27 00:00:00
https://opengraph.githubassets.com/a2edff0adc4d999fd5afd8abb8103ce69b58f2b086d44aebc98e7b811402aff1/alexellis/mongodb-function
object
github.com
GitHub
null
null