Lives: Spokane, Washington; Occupation: Retail cashier; Age: 19; Born: September 22; Ht: 5'2"; Wt: 115 pounds; Bras: 32A; Panties: Booty shorts or thongs; Anal: I like being fingered; BJs: Swallow; Masturbate: I do it all the time. We flew Eden down to Miami so she could have a vacation...oh, and so that she could shoot her first-ever masturbation scenes. "This is all like a dream come true for me," she told us. "I don't really travel that much. In fact, this is my first time on the East Coast. I love it down here! I get to wear my bikinis everywhere I go. "Back home, I worked as a lifeguard for a summer. I wasn't very good because most of the kids I was watching were bigger than me! I spent most of the time at the pool flirting with the boys who lounged around getting a tan. To this day, the smell of sunscreen lotion turns me on like crazy. When I went to the beach down here in Miami, my bikini bottoms were soaked through with my sticky pussy juices." Related Tags
{ "pile_set_name": "OpenWebText2" }
[Image caption: Mel Giedroyc can currently be seen on a different Saturday night show - Let It Shine]

Former Great British Bake Off host Mel Giedroyc has revealed she was once offered the chance to appear as a contestant on Strictly Come Dancing - but turned it down. "I love watching it so much I almost didn't want to spoil the pleasure by being on it," she told Radio Times. The 48-year-old said it was tricky for a woman her age to be on the show. "You're not the comedy old bag yet, which would be the joy of going on Strictly," she said. "If I did it, I'd want to be Ann Widdecombe. I'd want to be out there getting the laughs, being dragged around." The presenter may not have strutted her stuff in a ball gown, but she can still be seen on a prime-time Saturday night show - fronting BBC One's talent search Let It Shine.

[Image caption: Mel and Sue announced their departure from The Great British Bake Off in September]

The gig comes after Giedroyc stepped down as co-host of The Great British Bake Off, along with Sue Perkins, when it was announced the hit show was moving to Channel 4. Giedroyc said the furore surrounding the move was "a pretty weird time". "The press were camped out on my doorstep. My eldest daughter actually saw a few of them off, which I was very, very proud of," she said. "I'm not the kind of person who would court that sort of attention. I have a very private existence and I had to slightly clench my buttocks during that."
{ "pile_set_name": "OpenWebText2" }
Steve Bruce has launched a bid for Swansea's Neil Taylor and Modou Barrow as he moves to further strengthen his Aston Villa squad. Bruce is targeting a raid on the struggling Premier League club for Taylor, the left-back, and winger Barrow, with Ghana international Jordan Ayew emerging as a possible makeweight in the deal. Villa have spent around £10 million in the January transfer window but Bruce wants to make at least two more signings before Tuesday's deadline.
{ "pile_set_name": "OpenWebText2" }
Citation: Morin A, Urban J, Sliz P (2012) A Quick Guide to Software Licensing for the Scientist-Programmer. PLoS Comput Biol 8(7): e1002598. https://doi.org/10.1371/journal.pcbi.1002598

Editor: Fran Lewitter, Whitehead Institute, United States of America

Published: July 26, 2012

Copyright: © Morin et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The work was supported by the National Science Foundation grant 0639193 (PS). The funders had no role in the preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Computing is ubiquitous in every domain of scientific research. Software is the means by which scientists harness the power of computers, and much scientific computing relies on software conceived and developed by other practicing researchers. The task of creating scientific software, however, does not end with the publication of computed results. Making the developed software available for inspection and use by other scientists is essential to reproducibility, peer review, and the ability to build upon others' work [1], [2]. In fulfilling expectations to distribute and disseminate their software, scientist-programmers are required to be not only proficient scientists and coders, but also knowledgeable in legal strategies for licensing their software. Navigating the often complex legal landscape of software licensing can be overwhelming, even for sophisticated programmers. Institutional technology transfer offices (TTOs) exist to help address this need, but due to mismatches in expectations or specific domain knowledge, interactions between scientists and TTO staff can result in suboptimal outcomes. As practitioners in the scientific computing and technology law fields, we have witnessed firsthand the confusion and difficulties associated with licensing scientifically generated software. SBGrid.org is a consortium of scientific software developers and users in hundreds of biomedical research laboratories worldwide. As facilitator and middleman between developers and end-users, we commonly assist in the dissemination and use of scientifically generated software. Through research and advocacy, the Samuelson Law, Technology and Public Policy Clinic works with software developers and other creators on licensing issues, particularly issues related to facilitating "open access" to scientific, technical, or creative materials. Together, we offer a primer on software licensing with a focus on the particular needs of the scientist software developer. The aim of this guide is to help scientists better engage with their institutional TTO when choosing software licenses.

Why Software Licenses Are Important

Licenses are important tools for setting specific terms on which software may be used, modified, or distributed. Based on the copyright protection automatically granted to all original works, a software license—essentially, a set of formal permissions from the copyright holder—may include specific "conditions" of use, and is an important part of the legally binding contract between program author (or rights owner) and end-user.
Without a license agreement, software may be left in a state of legal uncertainty in which potential users may not know which limitations owners may want to enforce, and owners may leave themselves vulnerable to legal claims or have difficulty controlling how their work is used. This is equally true for software that is commercialized and offered for a fee and for software that is made available without cost to others. While end-users often balk at overly restrictive software licenses, the uncertainty caused when no license is given can also discourage those wishing to make use of a piece of code. It is important to note that licenses can be used to facilitate access to software as well as restrict it.

Software Licensing in Academic and Research Environments

For a license to be valid it must be granted by the owner of the work's intellectual property (IP) rights. Under the policies of most academic and research institutions, researchers who have created a piece of software are unlikely to own full rights to their works. Instead, the institution generally holds or shares legal right to developed software. Institutions' policies on IP ownership vary, but in most cases your institution will be the legal "rights owner," and will be the entity that actually grants the license you choose for your software. Although many types of licenses, especially of the "free and open source" variety, are simple enough for the non-legal expert to understand and apply (Figure 1), it is generally necessary to consult your institution's TTO before imposing a license. See below for more information about working with your institution in applying a license.

Figure 1. Example of FOSS license with "academic" style copyright statement. The example shown is the entirety of a 2-Clause BSD [8] license with copyright statement (at top, within quotes). The text of the license is in black. Red highlighted text is where the copyright holder applying the license inserts their specific information. Application of this and many FOSS licenses simply requires that the text of the license be included (usually as "License.txt") in the directory containing the distributed program binary and/or source code. https://doi.org/10.1371/journal.pcbi.1002598.g001
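The figure image itself is not reproduced here; for reference, the standard text of the 2-Clause BSD license it depicts follows, with bracketed placeholders standing in for the highlighted copyright statement the caption describes:

    Copyright (c) [YEAR], [COPYRIGHT HOLDER]
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright notice,
       this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
    TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
    PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
    CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
    EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
    PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
    PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
    LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
    NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
    SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.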
Types of Software Licenses

Colloquially speaking, the spectrum of software licensing strategies can be divided into three categories: "proprietary," "free and open source," or a hybrid of the two.

Proprietary Licensing

This strategy is familiar from the "click-thru" agreements that govern commercial software packages. The primary purpose of a proprietary software license is to limit the use of software according to the rights owner's business strategy. As a result, proprietary licenses are often very restrictive for end-users. They typically allow use of the software only for its stated purpose, often only on a single computer, forbid users from copying, redistributing, or altering the work, and specifically prohibit the creation of derivatives using parts of the work. Importantly, programs under proprietary licenses are typically distributed only in binary form and forbid examination of the program code or reverse engineering of any part of the program. In academic settings, proprietary software may occasionally release source code "for inspection purposes only" due to scientific publishing and peer-review requirements (Table 1).

Table 1. Summary of select attributes of cited license types. https://doi.org/10.1371/journal.pcbi.1002598.t001

Free and Open Source Software (FOSS) Licensing

Free and open source software (FOSS) represents a fundamentally different approach to software licensing. The primary intent of FOSS is to maximize openness and minimize barriers to software use, dissemination, and follow-on innovation. There are a wide variety of popular FOSS licenses [3], each of which varies in some important ways, but all grant free (as in freedom), open, and non-discriminatory access and rights to modify licensed software and associated source code. A common misconception is that FOSS is synonymous with "noncommercial." In fact, as described by the two most influential definitions of FOSS [3], [4], "non-discriminatory" means that no category of user or distributor can be prohibited, including for-profit commercial entities. As such, FOSS-licensed software can be, and regularly is, commercially exploited. Some cited benefits of a FOSS strategy include widespread adoption, user contributions, and ease of collaboration [5]. Additionally, because of their open and non-discriminatory nature, FOSS licenses can simplify continued development and collaboration when researchers switch institutions, and when they collaborate across institutions. FOSS can also help to extend the useful lifetime of a piece of software beyond the direct involvement of the creators. We discuss some important differences in FOSS licenses below.

Hybrid Software Licensing

Some software developers find that their needs are not well met by using either proprietary or FOSS licensing models exclusively. In these cases, "hybrid" (also called dual- or multi-licensing) approaches—combining a FOSS license with a proprietary "closed" license—are sometimes used. Under this strategy, the rights owner chooses which license to apply on a case-by-case basis. When ownership and licensing rights are clear, these licensing schemes can maintain some of the benefits of FOSS while also permitting creators to employ multiple business models [6]. The downside can be a significant added burden for the rights owner in applying, administering, and enforcing multiple licenses. This has generally limited the adoption of hybrid license models to large software development initiatives.

Terms, Concepts, and Examples Useful in Understanding Software Licenses

Open Source versus Closed Source

Source code is the human-readable form of a computer programming language. "Open source" refers to licenses that require the source code be available to users, and that users be able to reuse, modify, and distribute the code [3]. Without access to source code, researchers cannot effectively inspect, understand, or manipulate the inner workings of a program. Source code availability is of increased importance in the context of scientific research, where peer review, reproducibility, and building upon prior work are integral to the advancement of science. Source code access helps researchers quickly identify and remedy bugs that might lead to spurious results and adapt programs or pieces of code to suit individual needs, and allows expert users to contribute to code development on an informal basis.
An active open source user community participating in maintaining and improving the code base can free the original developer to concentrate on major enhancements or move on to other projects without sacrificing the continued utility of the software.

Permissive versus Copyleft

"Permissive" and "copyleft" are terms used to compare legal philosophies and attributes of FOSS licenses to traditional proprietary licenses. Permissive licenses are those that place the fewest restrictions on users and adopters, often only requiring that the original creators be attributed in any distribution or derivative of the software or source code. For example, permissively licensed software may be incorporated into "closed" proprietary programs with no requirement that the source code be disclosed if the combined software is distributed. Permissive open source licenses are also sometimes called "research" or "academic" style licenses because of their origins in, and frequent use by, academic institutions [7]. Examples of popular permissive FOSS licenses include the Berkeley Software Distribution (BSD) [8], MIT [9], Apache [10], and Educational Community License (ECL) [11] licenses. The BSD and MIT licenses are often mentioned interchangeably due to very similar language and terms that accomplish largely identical goals. The primary intent of these licenses is to allow the use, distribution, and modification of your code for any purpose, while making sure that you as the creator receive credit for your work (see Figure 1 for an example of a FOSS license with an academic-style attribution/citation copyright statement). The Apache and ECL licenses are similar in effect to the BSD/MIT, but include a license for patents related to the software (this can be desirable or not, depending on the situation—see below). The ECL differs from Apache in a slightly weakened patent grant to accommodate the often complex IP environments of academic institutions.

For developers who want to guarantee perpetual open source access to their work, some licenses employ the concept of copyleft, a punning reference to "copyright." Copyleft uses copyright's legal framework to guarantee continued open access to software and its source code. This is done by requiring, as a condition of the license, that any derivative works also be distributed under the same licensing terms as the original. These copyleft licensing terms are also sometimes referred to as reciprocity or "share-alike" provisions. Because of these reciprocity requirements, copyleft licenses are considered "restrictive" licenses, though these restrictions guarantee perpetual open access. Examples of popular copyleft FOSS licenses include the GNU General Public License (GPL) [12], GNU Lesser General Public License (LGPL) [13], and the Mozilla Public License (MPL) [14]. The GNU licenses are the most well known of all the FOSS licenses and have a strong community of supporters and advocates. Of these, the GPL has the strongest reciprocity requirements and is considered a "strong" copyleft license. The LGPL (the "Lesser GPL," denoting its weaker copyleft requirements) is very similar to the GPL from which it is derived, but allows for linking to proprietary code under certain circumstances. Similarly, the MPL allows copyleft to be applied to some parts of the code and not others. The LGPL and MPL are considered a compromise between the strong copyleft of the GPL and permissive licenses such as the BSD/MIT.
Compatibility, Proliferation, Fragmentation, and Directionality

A fundamental goal of FOSS is to promote the free exchange of ideas and technology without fear of infringing the rights of others. Ideally, code licensed under like-minded FOSS terms should be freely combinable to create new products. Compatibility is the attribute of software licenses that allows such combining of program code. To be compatible, license terms must be free of contradictory or mutually exclusive requirements. Alas, some FOSS licenses contain terms "incompatible" with other FOSS licenses, thereby diluting the ability to easily combine code. This unfortunate situation has been exacerbated by the proliferation of incompatible FOSS licenses, many of which differ in only trivial ways. The Open Source Initiative (OSI) [15] was created in part to reduce the fragmentation of the FOSS license space caused by incompatible and redundant licenses. OSI thus strongly encourages using an existing FOSS license instead of creating a new, "bespoke" license, and offers a categorization of licenses to help developers avoid redundancy [16]. In general, the more restrictive the license, the less compatible it is with other licenses. Proprietary licensed software, by design, cannot be incorporated into other codebases absent a separately negotiated licensing agreement.

License compatibility is further complicated, however, in that it is directional. License directionality refers to how a license behaves differently with code feeding into it (upstream, or backward-compatible) or out of it (downstream, or forward-compatible) (Figure 2). For example, a permissive license like the BSD is forward-compatible with nearly any other kind of license, but backward-compatible only with other permissive licenses. Likewise, a copyleft license like the GPL can incorporate (upstream) both permissive and other GPL'd code, but the resulting software may only be licensed (downstream) under the GPL.

Figure 2. Schematic representation of license directionality. In general, permissively licensed code is forward compatible with any other license type. However, only permissive licenses, such as the BSD and MIT, can feed into other permissive licenses. Restrictive licenses like the GPL are backward compatible with themselves and permissive licenses, but must adopt the restrictive license from then on. Proprietary licenses can incorporate upstream permissively licensed code, but by definition are incompatible with any other downstream license. Grey represents actions that are not permitted without negotiating a separate license agreement with the rights owner. https://doi.org/10.1371/journal.pcbi.1002598.g002

Directionality is an important reason why, if you're trying to integrate code written by others with your own, you'll want to be aware of what license the code you are incorporating carries. When attempting to combine code from multiple projects, each under a different license type, issues of compatibility can become very complex.

"Form" versus "Bespoke" Licenses

FOSS licenses are generally form licenses, meaning that their terms are standardized and a developer need only apply them (Figure 1). This standardization is critical to the success of FOSS strategies because it maximizes license compatibility and minimizes the cost of administering and understanding the terms of a given license. Conversely, bespoke licenses are custom-tailored for each individual project.
Tailored licenses allow for greater control, but require more resources to develop and administer and are highly likely to be incompatible with other licensing schemes. Nearly all proprietary licenses are bespoke.

Hybrid and Multi-Licensed Software

These license schemes differ from single licensing in allowing rights owners to choose which licenses best serve their needs on a case-by-case basis. One form of multi-licensing permits users and contributors to select among multiple licenses offered by the rights owner. Another example is when owners enter into separate "side" agreements not to enforce certain provisions of FOSS licenses, often for a fee. Limiting the reach of FOSS licenses in this manner is controversial within the open source community due to the partial circumvention of share-alike principles. MySQL [17] and Oracle Berkeley DB [18] (BDB) are two well-known examples of multi-licensed software; both are made freely available for use, distribution, and modification under open source licenses. However, each of these programs is additionally offered for a fee under alternative licenses more amenable to proprietary business strategies.

FOSS Licenses and Commercialization

It is a common misconception that FOSS licensing strategies preclude commercialization. In fact, OSI-approved [3] FOSS licenses cannot discriminate against commercial use. (This is one reason why institutional TTOs have sometimes preferred a bespoke "non-profit-use-only" license.) Though FOSS licenses preclude charging for the license rights themselves, developers are free to charge a fee for additional services such as technical support, priority feature development, and consultation. Hybrid licensing schemes (see above) offer further avenues for FOSS commercialization.

Choosing a Software License

Determining which license will work best for you can require some thought, and depends not only on specific attributes of your software, but also on your particular goals. While both FOSS and proprietary licenses generally require attribution and include standard protections such as disclaimers of warranty, they differ in key aspects both philosophical and practical.

If you want…

…the widest possible distribution and adoption, fewest restrictions on users, open and transparent source code, peer review, community contributions to the codebase, and easy incorporation of your code by others… then a permissive FOSS license such as the BSD/MIT, Apache, or ECL may work well. Because of the few requirements on users, these licenses are amongst the easiest to apply and administer, and promote unfettered incorporation of your code into other software—including copyleft or commercial software. Despite their general permissiveness, they do assure continued author attribution in any and all redistributions or derivative works.

…to assure the benefits and openness of FOSS in all future derivatives of your work, open and transparent source code, peer review, community contributions to the codebase, and the potential incorporation of your code into other copyleft-licensed works… then you should consider a copyleft FOSS license like the GPL, LGPL, or MPL. These licenses, by requiring anyone who distributes the unmodified or modified code to do so under the same license, guarantee perpetual open source access to your work. Some copyleft licenses, such as the GPL, have particularly strong developer communities, encouraging community contributions to your software.
The copyleft requirements of these licenses can sometimes, however, dissuade others from adopting or incorporating your code.

…the ability to separately pursue proprietary models while leveraging the wide distribution, adoption, community contributions, and other benefits of open source software… then a hybrid or multi-license scheme may be appropriate. Hybrid or multi-licensing can achieve the benefits of both open source and proprietary software licenses. However, as in everything, there is no free lunch. The legal, administrative, and organizational complexity of managing multiple licenses, as well as other administrative costs, often limits multi-license schemes to large software projects whose anticipated revenue streams justify the cost of dedicated licensing personnel. As noted above, this strategy is sometimes also controversial within FOSS developer communities.

…to protect the confidentiality of your source code, reserve maximum control over the distribution and use of your software, and derive licensing revenue… then you should consider a proprietary license. Institutional TTOs sometimes default towards applying proprietary licenses due to staff's greater familiarity with them and a desire to preserve what is perceived (sometimes inaccurately) as the maximum potential for commercial exploitation. Institutions receiving public funds will typically license proprietary software to other academic or non-profit users at no charge but require a fee for licensing to for-profit and industry users.

Applying a License to Your Software

Once you have chosen a license strategy for your software, the usual first step in applying it is to contact your institutional TTO. Although many FOSS licenses are easy to apply even by the non-legal expert, as a researcher or academic it is unlikely that you personally own all of the rights to your work. Instead, these rights typically belong to, or are at least shared with, your institution. It is therefore usually necessary to work with your institution when applying a license. TTOs exist to help you make and execute these types of decisions. Nonetheless, coming with a clear idea of what kinds of licenses are available, which one you want, and why, will likely be both appreciated by your TTO staff and result in a more favorable outcome for you. Once you've contacted your TTO, the process generally begins by helping the staff understand the "who, what, why, where, and how" of your work: how it works, who would be interested in it, what the innovation is, why you made it, where the funding came from, and other similar facts. Once TTO staff have this general understanding, they will discuss with you possible IP schemes—everything from placing the work in the public domain to creating a company to commercialize it. Most of the time, some form of license arrangement will be preferred. Be prepared, however. Some institutions' philosophies on protecting and exploiting IP are more aggressive than others. You may need to explain, for example, why using a FOSS license does not preclude commercialization (see above), why you think commercialization is not the most appropriate goal for your work, or why broad dissemination is an important goal for you. If you wish to propose a license that limits or forgoes the potential for generating revenue, you may first have to convince your TTO staff that your work lacks commercial value.
While the process can sometimes be a bit of a negotiation, most institutions care a great deal about the scientific and societal impact of their IP, and we find that it is rare for an institution to act contrary to the express wishes of the creator of a work. Knowing what you want and why you want it should go far in making the licensing process as painless as possible.

The Complication of Software Patents

An additional reason to contact your TTO before applying a license is software patents. Modern TTOs arose following the Bayh-Dole Act of 1980, which allows US research institutions to patent inventions developed using public funds and to license those patents [19], [20]. Because the vast majority of academic and research inventions are unlikely to have significant commercial value, most are never patented, but institutions typically require the disclosure of any patentable invention to the TTO. Many FOSS licenses (like the BSD or MIT licenses) are agnostic regarding patents, while some explicitly include patent grants in the license text (like the Apache or GPL licenses) (Table 1). Software patents are highly complex and generally outside the scope of this guide, but be aware that your TTO will want to discuss patent strategy as well as copyright.

Software Licensing and the Open Culture of Science

The needs and obligations of academic and publicly funded research create unique considerations for scientist-programmers choosing a software license. Unlike in the software industry, where licensing is primarily a matter of business strategy, it can be highly beneficial for scientists to publish, disseminate, and share the fruits of their work as widely as possible, independent of commercial potential. In addition, academic ethics encourage the wide sharing of research materials and information, including code. For programmers, this generally means sharing not just the binary executable, but also the source code, so that others may use, validate, reproduce, and extend the work. FOSS licenses such as those listed above are consistent with the open culture and obligations of scientific research, as well as the attribution and citation benefits academics have come to rely on. Permissive licenses may be preferred due to their ease of application and universal downstream compatibility. Copyleft licenses may be useful in accommodating upstream encumbered code or preferred by researchers seeking to assure perpetual open access, but their reciprocity requirements can limit downstream options. Hybrid licensing schemes, due to their added complexity, are more limited in their utility, but, where appropriate, can offer many of the benefits of both proprietary and open source models. Due to their closed and restrictive nature, proprietary software licensing schemes should probably be avoided whenever possible. As with other restrictive license models, the administrative burden of managing compliance and collecting revenues can be significant. For this reason, if anticipated total revenues are not high, it can often be more beneficial for scientists to take advantage of the reputational benefits and increased influence that come with the wide adoption and dissemination that open licensing models encourage. More broadly, especially in the context of scientific openness, collaboration, and peer review, the lack of available source code is a substantial drawback.
In contrast to open source code, closed-source programs are essentially "black boxes" in the research workflow [21], opaque to both reviewers and users. The failure to release source code can be detrimental to the validation and acceptance of scientific results derived using the software. Although some traditional "bespoke" academic licenses attempt to mitigate the negative effects of proprietary licensing by offering software "free for non-profit use" or by publishing source code "for inspection only," this nullifies the many significant benefits of community contribution, collaboration, and increased adoption that come with open source licensing.

Editorial Comment

Andreas Prlić, Hilmar Lapp, Software Editors, PLoS Computational Biology

Scientists are "dwarfs, standing on the shoulders of giants" (Bernard of Chartres). That is, in their pursuit of new knowledge, they build on the work of others. For this to be possible, already established scientific information must be widely accessible and reusable. This need for access to information conflicts with the desire to protect the value of intellectual innovation. Copyright laws were created with the goal of protecting the rights of copyright holders for a certain amount of time. In fact, in our software-dependent information age, few laws influence our professional (and personal) pursuits more than these. For example, at the time of writing this article, the two software giants Oracle and Google are facing each other in court over the question of whether Google's use of the Java programming language's application programming interface (API) infringed on Oracle's copyright. The outcome of the trial could have an impact on the freedom of software developers to use APIs and thus potentially hinder software interoperability. Clearly, when developing software, choosing the terms under which the software can be reused, distributed, and built upon is an important consideration. Yet many scientists and scientific developers have little training in, or knowledge of, the consequences of the choices they can make. Depending on how they are used, licenses can either protect individuals' ability to capitalize on their creative works or ensure the public's ability to reuse them; licenses differ in where on this spectrum they are positioned. This article, the "Quick Guide to Software Licensing for the Scientist-Programmer," provides a summary of a variety of licenses and discusses their benefits and disadvantages. We hope that this guide helps illuminate the seemingly complex jungle of licensing choices and their consequences, and that it serves as counsel to scientists and developers on which license is best suited to a particular situation. PLoS Computational Biology supports open and unrestricted access to scientific publications and software. To foster a culture of open exchange and reuse of software, we have recently created a new category of Software Articles. For a manuscript to be published under this category in PLoS Computational Biology, we require that all software uses a license that is approved as open source by the Open Source Initiative (OSI). The approval criteria (http://www.opensource.org/docs/osd) set forth by OSI emphasize that the distribution terms must allow the software to be freely re-used, re-distributed, or modified. These requirements ensure transparency and reproducibility and, if applied to scientific software, push science forward by allowing researchers to build on existing work.
{ "pile_set_name": "OpenWebText2" }
Unlike most of the other projects in this book, NoSQL is not a tool, but an ecosystem composed of several complementary and competing tools. The tools branded with the NoSQL moniker provide an alternative to SQL-based relational database systems for storing data. To understand NoSQL, we have to understand the space of available tools, and see how the design of each one explores the space of data storage possibilities. If you are considering using a NoSQL storage system, you should first understand the wide space of options that NoSQL systems span. NoSQL systems do away with many of the traditional comforts of relational database systems, and operations which were typically encapsulated behind the system boundary of a database are now left to application designers. This requires you to take on the hat of a systems architect, which requires a more in-depth understanding of how such systems are built.

13.1. What's in a Name?

In defining the space of NoSQL, let's first take a stab at defining the name. Taken literally, a NoSQL system presents a query interface to the user that is not SQL. The NoSQL community generally takes a more inclusive view, suggesting that NoSQL systems provide alternatives to traditional relational databases, and allow developers to design projects which use Not Only a SQL interface. In some cases, you might replace a relational database with a NoSQL alternative, and in others you will employ a mix-and-match approach to different problems you encounter in application development. Before diving into the world of NoSQL, let's explore the cases where SQL and the relational model suit your needs, and others where a NoSQL system might be a better fit.

13.1.1. SQL and the Relational Model

SQL is a declarative language for querying data. A declarative language is one in which a programmer specifies what they want the system to do, rather than procedurally defining how the system should do it. A few examples include: find the record for employee 39, project out only the employee name and phone number from their entire record, filter employee records to those that work in accounting, count the employees in each department, or join the data from the employees table with the managers table (a sketch of several of these queries appears below). To a first approximation, SQL allows you to ask these questions without thinking about how the data is laid out on disk, which indices to use to access the data, or what algorithms to use to process the data. A significant architectural component of most relational databases is a query optimizer, which decides which of the many logically equivalent query plans to execute to most quickly answer a query. These optimizers are often better than the average database user, but sometimes they do not have enough information, or have too simple a model of the system, to generate the most efficient execution. Relational databases, which are the most common databases used in practice, follow the relational data model. In this model, different real-world entities are stored in different tables. For example, all employees might be stored in an Employees table, and all departments might be stored in a Departments table. Each row of a table has various properties stored in columns. For example, employees might have an employee id, salary, birth date, and first/last names. Each of these properties will be stored in a column of the Employees table. The relational model goes hand-in-hand with SQL.
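To make the declarative style concrete, here is a minimal sketch using Python's built-in sqlite3 module; the two-table schema and sample rows are invented for illustration. Each query states what is wanted and leaves the how to the database:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory database
    conn.executescript("""
        CREATE TABLE Departments (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE Employees (id INTEGER PRIMARY KEY, name TEXT, phone TEXT,
                                salary REAL,
                                dept_id INTEGER REFERENCES Departments(id));
        INSERT INTO Departments VALUES (1, 'Accounting'), (2, 'Engineering');
        INSERT INTO Employees VALUES (39, 'Alice', '555-0139', 45000, 1),
                                     (40, 'Bob',   '555-0140', 52000, 2);
    """)

    # Find the record for employee 39.
    print(conn.execute("SELECT * FROM Employees WHERE id = 39").fetchone())

    # Project out only name and phone, filtered to accounting
    # (a join between the two tables).
    print(conn.execute("""SELECT e.name, e.phone FROM Employees e
                          JOIN Departments d ON e.dept_id = d.id
                          WHERE d.name = 'Accounting'""").fetchall())

    # Count the employees in each department (an aggregate).
    print(conn.execute("""SELECT dept_id, COUNT(*) FROM Employees
                          GROUP BY dept_id""").fetchall())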
Simple SQL queries, such as filters, retrieve all records whose field matches some test (e.g., employeeid = 3, or salary > $20000). More complex constructs cause the database to do some extra work, such as joining data from multiple tables (e.g., what is the name of the department in which employee 3 works?). Other complex constructs such as aggregates (e.g., what is the average salary of my employees?) can lead to full-table scans.

The relational data model defines highly structured entities with strict relationships between them. Querying this model with SQL allows complex data traversals without too much custom development. The complexity of such modeling and querying has its limits, though:

Complexity leads to unpredictability. SQL's expressiveness makes it challenging to reason about the cost of each query, and thus the cost of a workload. While simpler query languages might complicate application logic, they make it easier to provision data storage systems, which only respond to simple requests.

There are many ways to model a problem. The relational data model is strict: the schema assigned to each table specifies the data in each row. If we are storing less structured data, or rows with more variance in the columns they store, the relational model may be needlessly restrictive. Similarly, application developers might not find the relational model perfect for modeling every kind of data. For example, a lot of application logic is written in object-oriented languages and includes high-level concepts such as lists, queues, and sets, and some programmers would like their persistence layer to model this.

If the data grows past the capacity of one server, then the tables in the database will have to be partitioned across computers. To avoid JOINs having to cross the network in order to get data in different tables, we will have to denormalize it. Denormalization stores all of the data from different tables that one might want to look up at once in a single place. This makes our database look like a key-lookup storage system, leaving us wondering what other data models might better suit the data.

It's generally not wise to discard many years of design considerations arbitrarily. When you consider storing your data in a database, consider SQL and the relational model, which are backed by decades of research and development, offer rich modeling capabilities, and provide easy-to-understand guarantees about complex operations. NoSQL is a good option when you have a specific problem, such as large amounts of data, a massive workload, or a difficult data modeling decision for which SQL and relational databases might not have been optimized.

13.1.2. NoSQL Inspirations

The NoSQL movement finds much of its inspiration in papers from the research community. While many papers are at the core of design decisions in NoSQL systems, two stand out in particular. Google's BigTable [CDG+06] presents an interesting data model, which facilitates sorted storage of multi-column historical data. Data is distributed to multiple servers using a hierarchical range-based partitioning scheme, and data is updated with strict consistency (a concept that we will eventually define in Section 13.5). Amazon's Dynamo [DHJ+07] uses a different key-oriented distributed datastore. Dynamo's data model is simpler, mapping keys to application-specific blobs of data. The partitioning model is more resilient to failure, but accomplishes that goal through a looser data consistency approach called eventual consistency.
We will dig into each of these concepts in more detail, but it is important to understand that many of them can be mixed and matched. Some NoSQL systems, such as HBase, stick closely to the BigTable design. Another NoSQL system, Voldemort, replicates many of Dynamo's features. Still other NoSQL projects, such as Cassandra, have taken some features from BigTable (its data model) and others from Dynamo (its partitioning and consistency schemes).

13.1.3. Characteristics and Considerations

NoSQL systems part ways with the hefty SQL standard and offer simpler but piecemeal solutions for architecting storage solutions. These systems were built with the belief that, in simplifying how a database operates over data, an architect can better predict the performance of a query. In many NoSQL systems, complex query logic is left to the application, resulting in a data store with more predictable query performance because of the lack of variability in queries.

NoSQL systems part with more than just declarative queries over the relational data. Transactional semantics, consistency, and durability are guarantees that organizations such as banks demand of databases. Transactions provide an all-or-nothing guarantee when combining several potentially complex operations into one, such as deducting money from one account and adding the money to another. Consistency ensures that when a value is updated, subsequent queries will see the updated value. Durability guarantees that once a value is updated, it will be written to stable storage (such as a hard drive) and recoverable if the database crashes. NoSQL systems relax some of these guarantees, a decision which, for many non-banking applications, can provide acceptable and predictable behavior in exchange for improved performance. These relaxations, combined with data model and query language changes, often make it easier to safely partition a database across multiple machines when the data grows beyond a single machine's capability.

NoSQL systems are still very much in their infancy. The architectural decisions that go into the systems described in this chapter are a testament to the requirements of various users. The biggest challenge in summarizing the architectural features of several open source projects is that each one is a moving target. Keep in mind that the details of individual systems will change. When you pick between NoSQL systems, you can use this chapter to guide your thought process, but not your feature-by-feature product selection.

As you think about NoSQL systems, here is a roadmap of considerations:

Data and query model: Is your data represented as rows, objects, data structures, or documents? Can you ask the database to calculate aggregates over multiple records?

Durability: When you change a value, does it immediately go to stable storage? Does it get stored on multiple machines in case one crashes?

Scalability: Does your data fit on a single server? Does the volume of reads and writes require multiple disks to handle the workload?

Partitioning: For scalability, availability, or durability reasons, does the data need to live on multiple servers? How do you know which record is on which server?

Consistency: If you've partitioned and replicated your records across multiple servers, how do the servers coordinate when a record changes?
Transactional semantics: When you run a series of operations, some databases allow you to wrap them in a transaction, which provides some subset of ACID (Atomicity, Consistency, Isolation, and Durability) guarantees on the transaction and all others currently running. Does your business logic require these guarantees, which often come with performance tradeoffs?

Single-server performance: If you want to safely store data on disk, what on-disk data structures are best geared toward read-heavy or write-heavy workloads? Is writing to disk your bottleneck?

Analytical workloads: We're going to pay a lot of attention to lookup-heavy workloads of the kind you need to run a responsive user-focused web application. In many cases, you will want to build dataset-sized reports, for example aggregating statistics across multiple users. Do your use case and toolchain require such functionality?

While we will touch on all of these considerations, the last three, while equally important, see the least attention in this chapter.

13.2. NoSQL Data and Query Models

The data model of a database specifies how data is logically organized. Its query model dictates how the data can be retrieved and updated. Common data models are the relational model, key-oriented storage model, or various graph models. Query languages you might have heard of include SQL, key lookups, and MapReduce. NoSQL systems combine different data and query models, resulting in different architectural considerations.

13.2.1. Key-based NoSQL Data Models

NoSQL systems often part with the relational model and the full expressivity of SQL by restricting lookups on a dataset to a single field. For example, even if an employee has many properties, you might only be able to retrieve an employee by her ID. As a result, most queries in NoSQL systems are key lookup-based. The programmer selects a key to identify each data item, and can, for the most part, only retrieve items by performing a lookup for their key in the database. In key lookup-based systems, complex join operations or multiple-key retrieval of the same data might require creative uses of key names. A programmer wishing to look up an employee by his employee ID and to look up all employees in a department might create two key types. For example, the key employee:30 would point to an employee record for employee ID 30, and employee_departments:20 might contain a list of all employees in department 20. A join operation gets pushed into application logic: to retrieve employees in department 20, an application first retrieves a list of employee IDs from key employee_departments:20, and then loops over key lookups for each employee:ID in the employee list (a short sketch of this pattern appears below). The key lookup model is beneficial because it means that the database has a consistent query pattern—the entire workload consists of key lookups whose performance is relatively uniform and predictable. Profiling to find the slow parts of an application is simpler, since all complex operations reside in the application code. On the flip side, the data model logic and business logic are now more closely intertwined, which muddles abstraction. Let's quickly touch on the data associated with each key. Various NoSQL systems offer different solutions in this space.

Key-Value Stores

The simplest form of NoSQL store is a key-value store. Each key is mapped to a value containing arbitrary data. The NoSQL store has no knowledge of the contents of its payload, and simply delivers the data to the application.
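Here is the application-side join described above as a minimal runnable sketch. A plain Python dict stands in for the store (a real key-value client's get/set calls would slot in the same way), and the key names follow the hypothetical employee:ID scheme from the text:

    import json

    store = {}  # stand-in for a networked key-value store

    def kv_set(key, value):
        store[key] = json.dumps(value)   # values are opaque blobs to the store

    def kv_get(key):
        return json.loads(store[key])

    kv_set("employee:30", {"name": "Alice", "dept": 20})
    kv_set("employee:31", {"name": "Bob", "dept": 20})
    kv_set("employee_departments:20", [30, 31])  # second key type: the "index"

    # The "join" lives in application code: fetch the ID list, then loop.
    dept_20 = [kv_get("employee:%d" % eid)
               for eid in kv_get("employee_departments:20")]
    print(dept_20)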
In our Employee database example, one might map the key employee:30 to a blob containing JSON or a binary format such as Protocol Buffers, Thrift, or Avro in order to encapsulate the information about employee 30. If a developer uses structured formats to store complex data for a key, she must operate against the data in application space: a key-value data store generally offers no mechanisms for querying for keys based on some property of their values. Key-value stores shine in the simplicity of their query model, usually consisting of set, get, and delete primitives, but discard the ability to add simple in-database filtering capabilities due to the opacity of their values. Voldemort, which is based on Amazon's Dynamo, provides a distributed key-value store. BDB offers a persistence library that has a key-value interface.

Key-Data Structure Stores

Key-data structure stores, made popular by Redis, assign each value a type. In Redis, the available types a value can take on are integer, string, list, set, and sorted set. In addition to set/get/delete, type-specific commands, such as increment/decrement for integers, or push/pop for lists, add functionality to the query model without drastically affecting the performance characteristics of requests. By providing simple type-specific functionality while avoiding multi-key operations such as aggregation or joins, Redis balances functionality and performance.

Key-Document Stores

Key-document stores, such as CouchDB, MongoDB, and Riak, map a key to some document that contains structured information. These systems store documents in a JSON or JSON-like format. They store lists and dictionaries, which can be embedded recursively inside one another. MongoDB separates the keyspace into collections, so that keys for Employees and Departments, for example, do not collide. CouchDB and Riak leave type-tracking to the developer. The freedom and complexity of document stores is a double-edged sword: application developers have a lot of freedom in modeling their documents, but application-based query logic can become exceedingly complex.

BigTable Column Family Stores

HBase and Cassandra base their data model on the one used by Google's BigTable. In this model, a key identifies a row, which contains data stored in one or more Column Families (CFs). Within a CF, each row can contain multiple columns. The values within each column are timestamped, so that several versions of a row-column mapping can live within a CF. Conceptually, one can think of Column Families as storing complex keys of the form (row ID, CF, column, timestamp), mapping to values which are sorted by their keys. This design results in data modeling decisions which push a lot of functionality into the keyspace. It is particularly good at modeling historical data with timestamps. The model naturally supports sparse column placement, since row IDs that do not have certain columns do not need an explicit NULL value for those columns. On the flip side, columns which have few or no NULL values must still store the column identifier with each row, which leads to greater space consumption. Each project's data model differs from the original BigTable model in various ways, but Cassandra's changes are most notable. Cassandra introduces the notion of a supercolumn within each CF to allow for another level of mapping, modeling, and indexing. It also does away with the notion of locality groups, which can physically store multiple column families together for performance reasons.
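To make the (row ID, CF, column, timestamp) framing concrete, here is a toy sketch modeling such a store as a Python dict keyed by that compound tuple; real systems lay this out on disk far more efficiently, but the lookup semantics are similar:

    cells = {}  # (row_id, cf, column, timestamp) -> value

    def put(row_id, cf, column, timestamp, value):
        cells[(row_id, cf, column, timestamp)] = value

    def versions(row_id, cf, column):
        """All timestamped versions of one row-column mapping, newest first."""
        hits = [(ts, v) for (r, f, c, ts), v in cells.items()
                if (r, f, c) == (row_id, cf, column)]
        return sorted(hits, reverse=True)

    put("employee:30", "info", "salary", 1, 45000)
    put("employee:30", "info", "salary", 2, 47000)   # a newer version
    # Sparse columns cost nothing: no cell means no NULL placeholder.
    print(versions("employee:30", "info", "salary"))  # [(2, 47000), (1, 45000)]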
13.2.2. Graph Storage

One class of NoSQL store is the graph store. Not all data is created equal, and the relational and key-oriented data models of storing and querying data are not the best for all data. Graphs are a fundamental data structure in computer science, and HyperGraphDB and Neo4J are two popular NoSQL storage systems for storing graph-structured data. Graph stores differ from the other stores we have discussed thus far in almost every way: data models, data traversal and querying patterns, physical layout of data on disk, distribution to multiple machines, and the transactional semantics of queries. We cannot do these stark differences justice given space limitations, but you should be aware that certain classes of data may be better stored and queried as a graph.

13.2.3. Complex Queries

There are notable exceptions to key-only lookups in NoSQL systems. MongoDB allows you to index your data based on any number of properties and has a relatively high-level language for specifying which data you want to retrieve. BigTable-based systems support scanners to iterate over a column family and select particular items by a filter on a column. CouchDB allows you to create different views of the data, and to run MapReduce tasks across your table to facilitate more complex lookups and updates. Most of the systems have bindings to Hadoop or another MapReduce framework to perform dataset-scale analytical queries.

13.2.4. Transactions

NoSQL systems generally prioritize performance over transactional semantics. SQL-based systems, by contrast, allow any set of statements—from a simple primary key row retrieval, to a complicated join between several tables which is then subsequently averaged across several fields—to be placed in a transaction. These SQL databases will offer ACID guarantees between transactions. Running multiple operations in a transaction is Atomic (the A in ACID), meaning all or none of the operations happen. Consistency (the C) ensures that the transaction leaves the database in a consistent, uncorrupted state. Isolation (the I) makes sure that if two transactions touch the same record, they will do so without stepping on each other's toes. Durability (the D, covered extensively in the next section) ensures that once a transaction is committed, it's stored in a safe place. ACID-compliant transactions keep developers sane by making it easy to reason about the state of their data. Imagine multiple transactions, each of which has multiple steps (e.g., first check the value of a bank account, then subtract $60, then update the value). ACID-compliant databases often are limited in how they can interleave these steps while still providing a correct result across all transactions. This push for correctness results in often-unexpected performance characteristics, where a slow transaction might cause an otherwise quick one to wait in line. Most NoSQL systems pick performance over full ACID guarantees, but do provide guarantees at the key level: two operations on the same key will be serialized, avoiding serious corruption to key-value pairs. For many applications, this decision will not pose noticeable correctness issues, and will allow quick operations to execute with more regularity. It does, however, leave more considerations for application design and correctness in the hands of the developer. Redis is the notable exception to the no-transaction trend.
On a single server, it provides a MULTI command to combine multiple operations atomically and consistently, and a WATCH command to allow isolation. Other systems provide lower-level test-and-set functionality which provides some isolation guarantees.

13.2.5. Schema-free Storage

A cross-cutting property of many NoSQL systems is the lack of schema enforcement in the database. Even in document stores and column family-oriented stores, properties across similar entities are not required to be the same. This has the benefit of supporting less structured data requirements and requiring less performance expense when modifying schemas on-the-fly. The decision leaves more responsibility to the application developer, who now has to program more defensively. For example, is the lack of a lastname property on an employee record an error to be rectified, or a schema update which is currently propagating through the system? Data and schema versioning is common in application-level code after a few iterations of a project which relies on sloppy-schema NoSQL systems.

13.3. Data Durability

Ideally, all data modifications on a storage system would immediately be safely persisted and replicated to multiple locations to avoid data loss. However, ensuring data safety is in tension with performance, and different NoSQL systems make different data durability guarantees in order to improve performance. Failure scenarios are varied and numerous, and not all NoSQL systems protect you against these issues. A simple and common failure scenario is a server restart or power loss. Data durability in this case involves having moved the data from memory to a hard disk, which does not require power to store data. Hard disk failure is handled by copying the data to secondary devices, be they other hard drives in the same machine (RAID mirroring) or other machines on the network. However, a data center might not survive an event which causes correlated failure (a tornado, for example), and some organizations go so far as to copy data to backups in data centers several hurricane widths apart. Writing to hard drives and copying data to multiple servers or data centers is expensive, so different NoSQL systems trade off durability guarantees for performance.

13.3.1. Single-server Durability

The simplest form of durability is single-server durability, which ensures that any data modification will survive a server restart or power loss. This usually means writing the changed data to disk, which often bottlenecks your workload. Even if you order your operating system to write data to an on-disk file, the operating system may buffer the write, avoiding an immediate modification on disk so that it can group several writes together into a single operation. Only when the fsync system call is issued does the operating system make a best-effort attempt to ensure that buffered updates are persisted to disk. Typical hard drives can perform 100-200 random accesses (seeks) per second, and are limited to 30-100 MB/sec of sequential writes. Memory can be orders of magnitude faster in both scenarios. Ensuring efficient single-server durability means limiting the number of random writes your system incurs, and increasing the number of sequential writes per hard drive. Ideally, you want a system to minimize the number of writes between fsync calls, maximizing the number of those writes that are sequential, all the while never telling the user their data has been successfully written to disk until that write has been fsynced.
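The core of that contract is small enough to sketch. Assuming a simple append-only log file (a simplified version of the logging technique covered next), a write is acknowledged only after fsync has forced the operating system's buffers onto the disk:

    import os

    log = open("updates.log", "ab")  # append-only: every write is sequential

    def durable_append(record: bytes) -> None:
        log.write(record + b"\n")
        log.flush()                  # push Python's buffer to the OS...
        os.fsync(log.fileno())       # ...and the OS's buffers to the platter
        # Only now is it safe to tell the user the write succeeded.

    durable_append(b"set employee:30 ...")

Calling fsync on every record is the slow-and-safe extreme; the techniques that follow all amortize or defer that call.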
Let's cover a few techniques for improving performance of single-server durability guarantees.

Control fsync Frequency

Memcached is an example of a system which offers no on-disk durability in exchange for extremely fast in-memory operations. When a server restarts, the data on that server is gone: this makes for a good cache and a poor durable data store.

Redis offers developers several options for when to call fsync. Developers can force an fsync call after every update, which is the slow and safe choice. For better performance, Redis can fsync its writes every N seconds. In the worst-case scenario, you will lose the last N seconds' worth of operations, which may be acceptable for certain uses. Finally, for use cases where durability is not important (maintaining coarse-grained statistics, or using Redis as a cache), the developer can turn off fsync calls entirely: the operating system will eventually flush the data to disk, but without guarantees of when this will happen.

Increase Sequential Writes by Logging

Several data structures, such as B+Trees, help NoSQL systems quickly retrieve data from disk. Updates to those structures result in updates in random locations in the data structures' files, resulting in several random writes per update if you fsync after each update. To reduce random writes, systems such as Cassandra, HBase, Redis, and Riak append update operations to a sequentially-written file called a log. While other data structures used by the system are only periodically fsynced, the log is frequently fsynced. By treating the log as the ground-truth state of the database after a crash, these storage engines are able to turn random updates into sequential ones.

While NoSQL systems such as MongoDB perform writes in-place in their data structures, others take logging even further. Cassandra and HBase use a technique borrowed from BigTable of combining their logs and lookup data structures into one log-structured merge tree. Riak provides similar functionality with a log-structured hash table. CouchDB has modified the traditional B+Tree so that all changes to the data structure are appended to the structure on physical storage. These techniques result in improved write throughput, but require a periodic log compaction to keep the log from growing unbounded.

Increase Throughput by Grouping Writes

Cassandra groups multiple concurrent updates within a short window into a single fsync call. This design, called group commit, results in higher latency per update, as users have to wait on several concurrent updates to have their own update be acknowledged. The latency bump comes with an increase in throughput, as multiple log appends can happen with a single fsync. As of this writing, every HBase update is persisted to the underlying storage provided by the Hadoop Distributed File System (HDFS), which has recently seen patches to allow support of appends that respect fsync and group commit.
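A minimal sketch of the group-commit idea, assuming a single log file and Python threads; a real engine would add batching windows, checksums, and error handling.

```python
import os
import queue
import threading

class GroupCommitLog:
    """Batch concurrent appends into one fsync call.

    Writers block until the batch containing their record is durable."""
    def __init__(self, path):
        self.f = open(path, "ab")
        self.q = queue.Queue()
        threading.Thread(target=self._writer, daemon=True).start()

    def append(self, record):
        done = threading.Event()
        self.q.put((record, done))
        done.wait()                       # higher latency per write...

    def _writer(self):
        while True:
            batch = [self.q.get()]        # block for the first record
            while not self.q.empty():     # drain whatever arrived meanwhile
                batch.append(self.q.get_nowait())
            for record, _ in batch:
                self.f.write(record)
            self.f.flush()
            os.fsync(self.f.fileno())     # ...but one fsync covers the batch
            for _, done in batch:
                done.set()                # now acknowledge every writer
```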
13.3.2. Multi-server Durability

Because hard drives and machines often irreparably fail, copying important data across machines is necessary. Many NoSQL systems offer multi-server durability for data.

Redis takes a traditional master-slave approach to replicating data. All operations executed against a master are communicated in a log-like fashion to slave machines, which replicate the operations on their own hardware. If a master fails, a slave can step in and serve the data from the state of the operation log that it received from the master. This configuration might result in some data loss, as the master does not confirm that the slave has persisted an operation in its log before acknowledging the operation to the user. CouchDB facilitates a similar form of directional replication, where servers can be configured to replicate changes to documents on other stores.

MongoDB provides the notion of replica sets, where some number of servers are responsible for storing each document. MongoDB gives developers the option of ensuring that all replicas have received updates, or to proceed without ensuring that replicas have the most recent data. Many of the other distributed NoSQL storage systems support multi-server replication of data. HBase, which is built on top of HDFS, receives multi-server durability through HDFS. All writes are replicated to two or more HDFS nodes before returning control to the user, ensuring multi-server durability.

Riak, Cassandra, and Voldemort support more configurable forms of replication. With subtle differences, all three systems allow the user to specify N, the number of machines which should ultimately have a copy of the data, and W < N, the number of machines that should confirm the data has been written before returning control to the user.

To handle cases where an entire data center goes out of service, multi-server replication across data centers is required. Cassandra, HBase, and Voldemort have rack-aware configurations, which specify the rack or data center in which various machines are located. In general, blocking the user's request until a remote server has acknowledged an update incurs too much latency. Updates are streamed without confirmation when performed across wide area networks to backup data centers.
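As a rough illustration of the N/W trade-off just described, here is a hypothetical coordinator-side write path; the Replica stub is invented for the example, and a real coordinator would issue the requests in parallel, time out stragglers, and keep pushing to the remaining replicas in the background.

```python
class Replica:
    """Stand-in for a remote server's client stub."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value
        return True   # a real stub could raise ConnectionError instead

def replicated_write(replicas, key, value, w):
    """Send a write to all N replicas; acknowledge after W confirmations."""
    acks = 0
    for replica in replicas:
        try:
            if replica.put(key, value):
                acks += 1
        except ConnectionError:
            continue              # this replica is down; try the others
        if acks >= w:
            return True           # durable on W machines: tell the user OK
    return False                  # fewer than W confirmations: fail or retry

replicated_write([Replica() for _ in range(3)], "employee30:salary", 30000, w=2)
```

The read side of this numbers game reappears in the consistency section later in the chapter.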
13.4. Scaling for Performance

Having just spoken about handling failure, let's imagine a rosier situation: success! If the system you build reaches success, your data store will be one of the components to feel stress under load. A cheap and dirty solution to such problems is to scale up your existing machinery: invest in more RAM and disks to handle the workload on one machine. With more success, pouring money into more expensive hardware will become infeasible. At this point, you will have to replicate data and spread requests across multiple machines to distribute load. This approach is called scale out, and is measured by the horizontal scalability of your system.

The ideal horizontal scalability goal is linear scalability, in which doubling the number of machines in your storage system doubles the query capacity of the system. The key to such scalability is in how the data is spread across machines. Sharding is the act of splitting your read and write workload across multiple machines to scale out your storage system. Sharding is fundamental to the design of many systems, namely Cassandra, HBase, Voldemort, and Riak, and more recently MongoDB and Redis. Some projects such as CouchDB focus on single-server performance and do not provide an in-system solution to sharding, but secondary projects provide coordinators to partition the workload across independent installations on multiple machines.

Let's cover a few interchangeable terms you might encounter. We will use the terms sharding and partitioning interchangeably. The terms machine, server, or node refer to some physical computer which stores part of the partitioned data. Finally, a cluster or ring refers to the set of machines which participate in your storage system.

Sharding means that no one machine has to handle the write workload on the entire dataset, but no one machine can answer queries about the entire dataset. Most NoSQL systems are key-oriented in both their data and query models, and few queries touch the entire dataset anyway. Because the primary access method for data in these systems is key-based, sharding is typically key-based as well: some function of the key determines the machine on which a key-value pair is stored. We'll cover two methods of defining the key-machine mapping: hash partitioning and range partitioning.

13.4.1. Do Not Shard Until You Have To

Sharding adds system complexity, and where possible, you should avoid it. Let's cover two ways to scale without sharding: read replicas and caching.

Read Replicas

Many storage systems see more read requests than write requests. A simple solution in these cases is to make copies of the data on multiple machines. All write requests still go to a master node. Read requests go to machines which replicate the data, and are often slightly stale with respect to the data on the write master. If you are already replicating your data for multi-server durability in a master-slave configuration, as is common in Redis, CouchDB, or MongoDB, the read slaves can shed some load from the write master. Some queries, such as aggregate summaries of your dataset, which might be expensive and often do not require up-to-the-second freshness, can be executed against the slave replicas. Generally, the less stringent your demands for freshness of content, the more you can lean on read slaves to improve read-only query performance.

Caching

Caching the most popular content in your system often works surprisingly well. Memcached dedicates blocks of memory on multiple servers to cache data from your data store. Memcached clients take advantage of several horizontal scalability tricks to distribute load across Memcached installations on different servers. To add memory to the cache pool, just add another Memcached host. Because Memcached is designed for caching, it does not have as much architectural complexity as the persistent solutions for scaling workloads. Before considering more complicated solutions, think about whether caching can solve your scalability woes. Caching is not solely a temporary band-aid: Facebook has Memcached installations in the range of tens of terabytes of memory!

Read replicas and caching allow you to scale up your read-heavy workloads. When you start to increase the frequency of writes and updates to your data, however, you will also increase the load on the master server that contains all of your up-to-date data. For the rest of this section, we will cover techniques for sharding your write workload across multiple servers.

13.4.2. Sharding Through Coordinators

The CouchDB project focuses on the single-server experience. Two projects, Lounge and BigCouch, facilitate sharding CouchDB workloads through an external proxy, which acts as a front end to standalone CouchDB instances. In this design, the standalone installations are not aware of each other. The coordinator distributes requests to individual CouchDB instances based on the key of the document being requested.
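A sketch of what such a coordinator might look like, with hypothetical back-end client objects and deliberately naive hash-mod-N routing:

```python
import hashlib

class Coordinator:
    """Front end that partitions keys across standalone, mutually
    unaware back ends (in the spirit of Lounge/BigCouch)."""
    def __init__(self, backends):
        self.backends = backends          # hypothetical client objects

    def _pick(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.backends[h % len(self.backends)]

    def get(self, key):
        return self._pick(key).get(key)

    def put(self, key, value):
        return self._pick(key).put(key, value)

# The catch: adding or removing one back end changes len(self.backends),
# which remaps almost every key in the system. Consistent hashing,
# covered next, is designed to avoid exactly that.
```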
Twitter has built the notions of sharding and replication into a coordinating framework called Gizzard. Gizzard takes standalone data stores of any type—you can build wrappers for SQL or NoSQL storage systems—and arranges them in trees of any depth to partition keys by key range. For fault tolerance, Gizzard can be configured to replicate data to multiple physical machines for the same key range.

13.4.3. Consistent Hash Rings

Good hash functions distribute a set of keys in a uniform manner. This makes them a powerful tool for distributing key-value pairs among multiple servers. The academic literature on a technique called consistent hashing is extensive, and the first applications of the technique to data stores were in systems called distributed hash tables (DHTs). NoSQL systems built around the principles of Amazon's Dynamo adopted this distribution technique, and it appears in Cassandra, Voldemort, and Riak.

Hash Rings by Example

[Figure 13.1: A Distributed Hash Table Ring]

Consistent hash rings work as follows. Say we have a hash function H that maps keys to uniformly distributed large integer values. We can form a ring of numbers in the range [1, L] that wraps around itself with these values by taking H(key) mod L for some relatively large integer L. This will map each key into the range [1, L]. A consistent hash ring of servers is formed by taking each server's unique identifier (say its IP address), and applying H to it. You can get an intuition for how this works by looking at the hash ring formed by five servers (A-E) in Figure 13.1. There, we picked L = 1000. Let's say that H(A) mod L = 7, H(B) mod L = 234, H(C) mod L = 447, H(D) mod L = 660, and H(E) mod L = 875. We can now tell which server a key should live on. To do this, we map all keys to a server by seeing if it falls in the range between that server and the next one in the ring. For example, A is responsible for keys whose hash value falls in the range [7, 233], and E is responsible for keys in the range [875, 6] (this range wraps around on itself at 1000). So if H('employee30') mod L = 899, it will be stored by server E, and if H('employee31') mod L = 234, it will be stored on server B.

Replicating Data

Replication for multi-server durability is achieved by passing the keys and values in one server's assigned range to the servers following it in the ring. For example, with a replication factor of 3, keys mapped to the range [7, 233] will be stored on servers A, B, and C. If A were to fail, its neighbors B and C would take over its workload. In some designs, E would replicate and take over A's workload temporarily, since its range would expand to include A's.

Achieving Better Distribution

While hashing is statistically effective at uniformly distributing a keyspace, it usually requires many servers before it distributes evenly. Unfortunately, we often start with a small number of servers that are not perfectly spaced apart from one another by the hash function. In our example, A's key range is of length 227, whereas E's range is 132. This leads to uneven load on different servers. It also makes it difficult for servers to take over for one another when they fail, since a neighbor suddenly has to take control of the entire range of the failed server.

To solve the problem of uneven large key ranges, many DHTs including Riak create several "virtual" nodes per physical machine. For example, with 4 virtual nodes, server A will act as server A_1, A_2, A_3, and A_4. Each virtual node hashes to a different value, giving it more opportunity to manage keys distributed to different parts of the keyspace.
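Here is a compact sketch of the scheme, following the chapter's example (ring size L = 1000, five servers). Because we use a real hash function, the actual positions will differ from the 7/234/447/660/875 of the worked example, but the lookup logic is the same.

```python
import bisect
import hashlib

L = 1000  # ring size from the chapter's example

def H(s):
    """Uniform-ish hash of a string onto the ring [0, L)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % L

class HashRing:
    def __init__(self, servers):
        # Place each server on the ring by hashing its identifier.
        self.ring = sorted((H(s), s) for s in servers)
        self.positions = [pos for pos, _ in self.ring]

    def server_for(self, key):
        # A key belongs to the nearest server at or counter-clockwise of
        # its hash. Index -1 wraps around to the last server, which owns
        # the range that crosses position 0.
        i = bisect.bisect_right(self.positions, H(key)) - 1
        return self.ring[i][1]

ring = HashRing(["A", "B", "C", "D", "E"])
ring.server_for("employee30")

# Virtual nodes are a one-line change: hash each server several times,
#   sorted((H(f"{s}#{k}"), s) for s in servers for k in range(4))
```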
Voldemort takes a similar approach, in which the number of partitions is manually configured and usually larger than the number of servers, resulting in each server receiving a number of smaller partitions. Cassandra does not assign multiple small partitions to each server, resulting in sometimes uneven key range distributions. For load-balancing, Cassandra has an asynchronous process which adjusts the location of servers on the ring depending on their historic load.

13.4.4. Range Partitioning

In the range partitioning approach to sharding, some machines in your system keep metadata about which servers contain which key ranges. This metadata is consulted to route key and range lookups to the appropriate servers. Like the consistent hash ring approach, this range partitioning splits the keyspace into ranges, with each key range being managed by one machine and potentially replicated to others. Unlike the consistent hashing approach, two keys that are next to each other in the key's sort order are likely to appear in the same partition. This reduces the size of the routing metadata, as large ranges are compressed to [start, end] markers.

In adding active record-keeping of the range-to-server mapping, the range partitioning approach allows for more fine-grained control of load-shedding from heavily loaded servers. If a specific key range sees higher traffic than other ranges, a load manager can reduce the size of the range on that server, or reduce the number of shards that this server serves. The added freedom to actively manage load comes at the expense of extra architectural components which monitor and route shards.

The BigTable Way

Google's BigTable paper describes a range-partitioning hierarchical technique for sharding data into tablets. A tablet stores a range of row keys and values within a column family. It maintains all of the necessary logs and data structures to answer queries about the keys in its assigned range. Tablet servers serve multiple tablets depending on the load each tablet is experiencing. Each tablet is kept at a size of 100-200 MB. As tablets change in size, two small tablets with adjoining key ranges might be combined, or a large tablet might be split in two. A master server analyzes tablet size, load, and tablet server availability. The master adjusts which tablet server serves which tablets at any time.

[Figure 13.2: BigTable-based Range Partitioning]

The master server maintains the tablet assignment in a metadata table. Because this metadata can get large, the metadata table is also sharded into tablets that map key ranges to tablets and tablet servers responsible for those ranges. This results in a three-layer hierarchy traversal for clients to find a key on its hosting tablet server, as depicted in Figure 13.2.

Let's look at an example. A client searching for key 900 will query server A, which stores the tablet for metadata level 0. This tablet identifies the metadata level 1 tablet on server B containing key ranges 500-1500. The client sends a request to server B with this key, which responds that the tablet containing keys 850-950 is found on a tablet on server C. Finally, the client sends the key request to server C, and gets the row data back for its query. Metadata tablets at level 0 and 1 may be cached by the client, which avoids putting undue load on their tablet servers from repeat queries. The BigTable paper explains that this 3-level hierarchy can accommodate 2^61 bytes worth of storage using 128 MB tablets.
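The lookup at each level is simple; here is a one-level sketch with an invented range assignment (BigTable chains three of these lookups into its hierarchy):

```python
import bisect

class RangeRouter:
    """One level of range-partitioning metadata: a sorted list of range
    start keys mapped to servers. Each server owns keys from its start
    key up to (but not including) the next entry's start key."""
    def __init__(self, assignments):
        self.starts = [start for start, _ in assignments]
        self.servers = [server for _, server in assignments]

    def server_for(self, key):
        # Greatest start key <= the lookup key identifies the owner.
        i = bisect.bisect_right(self.starts, key) - 1
        return self.servers[i]

# Hypothetical assignment. Note that neighboring keys land on the same
# server, which is what makes range scans cheap under this scheme.
router = RangeRouter([(0, "A"), (500, "B"), (850, "C"), (950, "B")])
router.server_for(900)   # -> "C"
router.server_for(901)   # -> "C", right next door
```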
Handling Failures

The master is a single point of failure in the BigTable design, but can go down temporarily without affecting requests to tablet servers. If a tablet server fails while serving tablet requests, it is up to the master to recognize this and re-assign its tablets while requests temporarily fail. In order to recognize and handle machine failures, the BigTable paper describes the use of Chubby, a distributed locking system for managing server membership and liveness. ZooKeeper is the open source implementation of Chubby, and several Hadoop-based projects utilize it to manage secondary master servers and tablet server reassignment.

Range Partitioning-based NoSQL Projects

HBase employs BigTable's hierarchical approach to range-partitioning. Underlying tablet data is stored in Hadoop's distributed filesystem (HDFS). HDFS handles data replication and consistency among replicas, leaving tablet servers to handle requests, update storage structures, and initiate tablet splits and compactions.

MongoDB handles range partitioning in a manner similar to that of BigTable. Several configuration nodes store and manage the routing tables that specify which storage node is responsible for which key ranges. These configuration nodes stay in sync through a protocol called two-phase commit, and serve as a hybrid of BigTable's master for specifying ranges and Chubby for highly available configuration management. Separate routing processes, which are stateless, keep track of the most recent routing configuration and route key requests to the appropriate storage nodes. Storage nodes are arranged in replica sets to handle replication.

Cassandra provides an order-preserving partitioner if you wish to allow fast range scans over your data. Cassandra nodes are still arranged in a ring using consistent hashing, but rather than hashing a key-value pair onto the ring to determine the server to which it should be assigned, the key is simply mapped onto the server which controls the range in which the key naturally fits. For example, keys 20 and 21 would both be mapped to server A in our consistent hash ring in Figure 13.1, rather than being hashed and randomly distributed in the ring.

Twitter's Gizzard framework for managing partitioned and replicated data across many back ends uses range partitioning to shard data. Routing servers form hierarchies of any depth, assigning ranges of keys to servers below them in the hierarchy. These servers either store data for keys in their assigned range, or route to yet another layer of routing servers. Replication in this model is achieved by sending updates to multiple machines for a key range. Gizzard routing nodes manage failed writes in a different manner than other NoSQL systems. Gizzard requires that system designers make all updates idempotent (running an update twice has the same effect as running it once). When a storage node fails, routing nodes cache and repeatedly send updates to the node until the update is confirmed.
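The idempotence requirement is easy to see in miniature. In the sketch below (our own illustration, not Gizzard code), a retried increment corrupts the data, while a retried versioned set does not:

```python
def apply_increment(store, key, amount):
    # NOT idempotent: if the acknowledgment is lost and the routing
    # node retries, the counter ends up incremented twice.
    store[key] = store.get(key, 0) + amount

def apply_versioned_set(store, key, value, version):
    # Idempotent: replaying the same (value, version) update any
    # number of times leaves the store in the same final state.
    current_version, _ = store.get(key, (0, None))
    if version > current_version:
        store[key] = (version, value)

store = {}
for _ in range(3):                      # a retried update...
    apply_versioned_set(store, "k", "v", version=1)
assert store["k"] == (1, "v")           # ...is applied exactly once
```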
13.4.5. Which Partitioning Scheme to Use

Given the hash- and range-based approaches to sharding, which is preferable? It depends. Range partitioning is the obvious choice to use when you will frequently be performing range scans over the keys of your data. As you read values in order by key, you will not jump to random nodes in the network, which would incur heavy network overhead. But if you do not require range scans, which sharding scheme should you use?

Hash partitioning gives reasonable distribution of data across nodes, and random skew can be reduced with virtual nodes. Routing is simple in the hash partitioning scheme: for the most part, the hash function can be executed by clients to find the appropriate server. With more complicated rebalancing schemes, finding the right node for a key becomes more difficult.

Range partitioning requires the upfront cost of maintaining routing and configuration nodes, which can see heavy load and become central points of failure in the absence of relatively complex fault tolerance schemes. Done well, however, range-partitioned data can be load-balanced in small chunks which can be reassigned in high-load situations. If a server goes down, its assigned ranges can be distributed to many servers, rather than loading the server's immediate neighbors during downtime.

13.5. Consistency

Having spoken about the virtues of replicating data to multiple machines for durability and spreading load, it's time to let you in on a secret: keeping replicas of your data on multiple machines consistent with one another is hard. In practice, replicas will crash and get out of sync, replicas will crash and never come back, networks will partition two sets of replicas, and messages between machines will get delayed or lost. There are two major approaches to data consistency in the NoSQL ecosystem. The first is strong consistency, where all replicas remain in sync. The second is eventual consistency, where replicas are allowed to get out of sync, but eventually catch up with one another. Let's first get into why the second option is an appropriate consideration by understanding a fundamental property of distributed computing. After that, we'll jump into the details of each approach.

13.5.1. A Little Bit About CAP

Why are we considering anything short of strong consistency guarantees over our data? It all comes down to a property of distributed systems architected for modern networking equipment. The idea was first proposed by Eric Brewer as the CAP Theorem, and later proved by Gilbert and Lynch [GL02]. The theorem first presents three properties of distributed systems which make up the acronym CAP:

- Consistency: do all replicas of a piece of data always logically agree on the same version of that data by the time you read it? (This concept of consistency is different than the C in ACID.)
- Availability: do replicas respond to read and write requests regardless of how many replicas are inaccessible?
- Partition tolerance: can the system continue to operate even if some replicas temporarily lose the ability to communicate with each other over the network?

The theorem then goes on to say that a storage system which operates on multiple computers can only achieve two of these properties at the expense of a third. Also, we are forced to implement partition-tolerant systems. On current networking hardware using current messaging protocols, packets can be lost, switches can fail, and there is no way to know whether the network is down or the server you are trying to send a message to is unavailable. All NoSQL systems should be partition-tolerant. The remaining choice is between consistency and availability. No NoSQL system can provide both at the same time.

Opting for consistency means that your replicated data will not be out of sync across replicas. An easy way to achieve consistency is to require that all replicas acknowledge updates. If a replica goes down and you cannot confirm data updates on it, then you degrade availability on its keys.
This means that until all replicas recover and respond, the user cannot receive successful acknowledgment of their update operation. Thus, opting for consistency is opting for a lack of round-the-clock availability for each data item.

Opting for availability means that when a user issues an operation, replicas should act on the data they have, regardless of the state of other replicas. This may lead to diverging consistency of data across replicas, since they weren't required to acknowledge all updates, and some replicas may have not noted all updates.

The implications of the CAP theorem lead to the strong consistency and eventual consistency approaches to building NoSQL data stores. Other approaches exist, such as the relaxed consistency and relaxed availability approach presented in Yahoo!'s PNUTS [CRS+08] system. None of the open source NoSQL systems we discuss has adopted this technique yet, so we will not discuss it further.

13.5.2. Strong Consistency

Systems which promote strong consistency ensure that the replicas of a data item will always be able to come to consensus on the value of a key. Some replicas may be out of sync with one another, but when the user asks for the value of employee30:salary, the machines have a way to consistently agree on the value the user sees. How this works is best explained with numbers.

Say we replicate a key on N machines. Some machine, perhaps one of the N, serves as a coordinator for each user request. The coordinator ensures that a certain number of the N machines has received and acknowledged each request. When a write or update occurs to a key, the coordinator does not confirm with the user that the write occurred until W replicas confirm that they have received the update. When a user wants to read the value for some key, the coordinator responds when at least R have responded with the same value. We say that the system exemplifies strong consistency if R+W > N.

Putting some numbers to this idea, let's say that we're replicating each key across N=3 machines (call them A, B, and C). Say that the key employee30:salary is initially set to the value $20,000, but we want to give employee30 a raise to $30,000. Let's require that at least W=2 of A, B, or C acknowledge each write request for a key. When A and B confirm the write request for (employee30:salary, $30,000), the coordinator lets the user know that employee30:salary is safely updated. Let's assume that machine C never received the write request for employee30:salary, so it still has the value $20,000. When a coordinator gets a read request for key employee30:salary, it will send that request to all 3 machines:

- If we set R=1, and machine C responds first with $20,000, our employee will not be very happy.
- However, if we set R=2, the coordinator will see the value from C, wait for a second response from A or B, which will conflict with C's outdated value, and finally receive a response from the third machine, which will confirm that $30,000 is the majority opinion.

So in order to achieve strong consistency in this case, we need to set R=2 so that R+W > N.

What happens when W replicas do not respond to a write request, or R replicas do not respond to a read request with a consistent response? The coordinator can time out eventually and send the user an error, or wait until the situation corrects itself. Either way, the system is considered unavailable for that request for at least some time.
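The counting logic of this example can be sketched in a few lines. This toy coordinator is our own illustration (real systems issue requests in parallel, handle timeouts, and track versions), but it reproduces the R+W > N behavior, including how replica C can miss a write when W < N:

```python
from collections import Counter

class QuorumCoordinator:
    """Toy R/W quorum coordinator over N replicas; plain dicts stand in
    for the remote servers."""
    def __init__(self, replicas, w, r):
        assert r + w > len(replicas)     # the strong-consistency condition
        self.replicas, self.w, self.r = replicas, w, r

    def write(self, key, value):
        acks = 0
        for replica in self.replicas:
            replica[key] = value
            acks += 1
            if acks == self.w:
                return True   # W confirmations: tell the user it's safe.
                              # Later replicas (like C in the example)
                              # may never see this write.
        return False

    def read(self, key):
        votes = Counter()
        for replica in self.replicas:
            votes[replica.get(key)] += 1
            value, count = votes.most_common(1)[0]
            if count == self.r:
                return value  # R matching responses settle the answer
        raise RuntimeError("fewer than R consistent replies: unavailable")

a, b, c = {}, {}, {}
coord = QuorumCoordinator([a, b, c], w=2, r=2)
coord.write("employee30:salary", 30000)   # lands on A and B, never on C
c["employee30:salary"] = 20000            # C still holds the stale salary
coord.read("employee30:salary")           # -> 30000: two of three agree
```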
Your choice of R and W affects how many machines can act strangely before your system becomes unavailable for different actions on a key. If you force all of your replicas to acknowledge writes, for example, then W=N, and write operations will hang or fail on any replica failure. A common choice is R + W = N + 1, the minimum required for strong consistency while still allowing for temporary disagreement between replicas. Many strong consistency systems opt for W=N and R=1, since they then do not have to design for nodes going out of sync.

HBase bases its replicated storage on HDFS, a distributed storage layer. HDFS provides strong consistency guarantees. In HDFS, a write cannot succeed until it has been replicated to all N (usually 2 or 3) replicas, so W = N. A read will be satisfied by a single replica, so R = 1. To avoid bogging down write-intensive workloads, data is transferred from the user to the replicas asynchronously in parallel. Once all replicas acknowledge that they have received copies of the data, the final step of swapping the new data in to the system is performed atomically and consistently across all replicas.

13.5.3. Eventual Consistency

Dynamo-based systems, which include Voldemort, Cassandra, and Riak, allow the user to specify N, R, and W to their needs, even if R + W <= N. This means that the user can achieve either strong or eventual consistency. When a user picks eventual consistency, and even when the programmer opts for strong consistency but W is less than N, there are periods in which replicas might not see eye-to-eye. To provide eventual consistency among replicas, these systems employ various tools to catch stale replicas up to speed. Let's first cover how various systems determine that data has gotten out of sync, then discuss how they synchronize replicas, and finally bring in a few Dynamo-inspired methods for speeding up the synchronization process.

Versioning and Conflicts

Because two replicas might see two different versions of a value for some key, data versioning and conflict detection is important. The Dynamo-based systems use a type of versioning called vector clocks. A vector clock is a vector assigned to each key which contains a counter for each replica. For example, if servers A, B, and C are the three replicas of some key, the vector clock will have three entries, (N_A, N_B, N_C), initialized to (0,0,0).

Each time a replica modifies a key, it increments its counter in the vector. If B modifies a key that previously had version (39, 1, 5), it will change the vector clock to (39, 2, 5). When another replica, say C, receives an update from B about the key's data, it will compare the vector clock from B to its own. As long as its own vector clock counters are all less than the ones delivered from B, then it has a stale version and can overwrite its own copy with B's. If B and C have clocks in which some counters are greater than others in both clocks, say (39, 2, 5) and (39, 1, 6), then the servers recognize that they received different, potentially unreconcilable updates over time, and identify a conflict.
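The comparison rule is short enough to sketch here, using dictionaries keyed by replica name instead of fixed-position vectors (an illustrative choice, not Dynamo's wire format):

```python
def descends(a, b):
    """True if clock `a` has seen everything `b` has (a >= b entrywise)."""
    return all(a.get(r, 0) >= b.get(r, 0) for r in set(a) | set(b))

def compare(a, b):
    if descends(a, b) and descends(b, a):
        return "identical"
    if descends(a, b):
        return "a is newer: safe to overwrite b"
    if descends(b, a):
        return "b is newer: safe to overwrite a"
    return "conflict: concurrent, potentially unreconcilable updates"

# B's update dominates the older version, so it overwrites cleanly:
compare({"A": 39, "B": 2, "C": 5}, {"A": 39, "B": 1, "C": 5})
# Counters are mixed in both directions, so a conflict is flagged:
compare({"A": 39, "B": 2, "C": 5}, {"A": 39, "B": 1, "C": 6})
```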
Conflict Resolution

Conflict resolution varies across the different systems. The Dynamo paper leaves conflict resolution to the application using the storage system. Two versions of a shopping cart can be merged into one without significant loss of data, but two versions of a collaboratively edited document might require a human reviewer to resolve the conflict. Voldemort follows this model, returning multiple copies of a key to the requesting client application upon conflict.

Cassandra, which stores a timestamp on each key, uses the most recently timestamped version of a key when two versions are in conflict. This removes the need for a round-trip to the client and simplifies the API. This design makes it difficult to handle situations where conflicted data can be intelligently merged, as in our shopping cart example, or when implementing distributed counters. Riak allows both of the approaches offered by Voldemort and Cassandra. CouchDB provides a hybrid: it identifies a conflict and allows users to query for conflicted keys for manual repair, but deterministically picks a version to return to users until conflicts are repaired.

Read Repair

If R replicas return non-conflicting data to a coordinator, the coordinator can safely return the non-conflicting data to the application. The coordinator may still notice that some of the replicas are out of sync. The Dynamo paper suggests, and Cassandra, Riak, and Voldemort implement, a technique called read repair for handling such situations. When a coordinator identifies a conflict on read, even if a consistent value has been returned to the user, the coordinator starts conflict-resolution protocols between conflicted replicas. This proactively fixes conflicts with little additional work. Replicas have already sent their version of the data to the coordinator, and faster conflict resolution will result in less divergence in the system.

Hinted Handoff

Cassandra, Riak, and Voldemort all employ a technique called hinted handoff to improve write performance for situations where a node temporarily becomes unavailable. If one of the replicas for a key does not respond to a write request, another node is selected to temporarily take over its write workload. Writes for the unavailable node are kept separately, and when the backup node notices the previously unavailable node become available, it forwards all of the writes to the newly available replica. The Dynamo paper utilizes a 'sloppy quorum' approach and allows the writes accomplished through hinted handoff to count toward the W required write acknowledgments. Cassandra and Voldemort will not count a hinted handoff against W, and will fail a write which does not have W confirmations from the originally assigned replicas. Hinted handoff is still useful in these systems, as it speeds up recovery when an unavailable node returns.

Anti-Entropy

When a replica is down for an extended period of time, or the machine storing hinted handoffs for an unavailable replica goes down as well, replicas must synchronize from one another. In this case, Cassandra and Riak implement a Dynamo-inspired process called anti-entropy. In anti-entropy, replicas exchange Merkle trees to identify parts of their replicated key ranges which are out of sync. A Merkle tree is a hierarchical hash verification: if the hash over the entire keyspace is not the same between two replicas, they will exchange hashes of smaller and smaller portions of the replicated keyspace until the out-of-sync keys are identified. This approach reduces unnecessary data transfer between replicas which contain mostly similar data.
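A toy version of the pruning logic, flattened to a binary split over a sorted key list; real anti-entropy exchanges hashes over the network one tree level at a time rather than comparing two local stores:

```python
import hashlib

def range_hash(store, keys):
    """Hash of one key range; equal hashes mean the replicas agree on it."""
    payload = "|".join(f"{k}={store.get(k)}" for k in keys)
    return hashlib.sha1(payload.encode()).hexdigest()

def find_out_of_sync(store_a, store_b, keys):
    """Recursively halve the keyspace, descending only into ranges whose
    hashes differ, until the individual stale keys are found."""
    if range_hash(store_a, keys) == range_hash(store_b, keys):
        return []                       # whole subtree in sync: prune it
    if len(keys) == 1:
        return list(keys)               # an out-of-sync key
    mid = len(keys) // 2
    return (find_out_of_sync(store_a, store_b, keys[:mid]) +
            find_out_of_sync(store_a, store_b, keys[mid:]))

a = {"k1": "x", "k2": "y", "k3": "z"}
b = {"k1": "x", "k2": "y", "k3": "STALE"}
find_out_of_sync(a, b, sorted(a))       # -> ['k3']
```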
Gossip

Finally, as distributed systems grow, it is hard to keep track of how each node in the system is doing. The three Dynamo-based systems employ an age-old high school technique known as gossip to keep track of other nodes. Periodically (every second or so), a node will pick a random node it once communicated with to exchange knowledge of the health of the other nodes in the system. Through these exchanges, nodes learn which other nodes are down, and know where to route clients in search of a key.

13.6. A Final Word

The NoSQL ecosystem is still in its infancy, and many of the systems we've discussed will change architectures, designs, and interfaces. The important takeaways in this chapter are not what each NoSQL system currently does, but rather the design decisions that led to a combination of features that make up these systems. NoSQL leaves a lot of design work in the hands of the application designer. Understanding the architectural components of these systems will not only help you build the next great NoSQL amalgamation, but also allow you to use current versions responsibly.
SOUTH LOS ANGELES (KABC) -- Two men were found shot to death inside a vehicle in South Los Angeles Sunday morning.

The bodies of two men in their 30s were found inside a green Mitsubishi SUV with a handicapped license plate in the area of 89th Street and Grand Avenue around 8 a.m.

A friend of the victims told Eyewitness News she believes they were shot around 2 a.m. She said the two men were her childhood friends and they were not in gangs. One was 34 years old and the other was 35, she said.

She said they did not live in that neighborhood, but would come by at times to visit friends. One man worked as a barber and the other had just gotten a new job as he was taking care of his sister's baby, after his sister died of cancer recently.

No description of a suspect was immediately available.
Paul - It was another generally embarrassing week across the Big Ten. But it is what it is, and it's ours, so...yeah...there you go. Anyway, Sparty had a bye, and despite the loss at Oregon last week, they appear to be the best in the Big Ten by a fair margin. So they remain #1. Wiscy had a bye as well, but they haven't been as convincing to me. DONU looked pretty good in Fresno, so I gave Nebraska the nod for #2 with Bucky #3. Because #homer. And that's where it gets complicated. From #4 to #10 I think you could throw darts and end up with something as correct as trying to slot teams against each other. This is the realm of "teams almost good enough to win" and "lost badly last week but rebounded against shitty teams this week". Notable in this group are THE Ohio State University, who got trucked at home last week by a team that got beat at home by East Carolina, and Michigan, who got shut out by a Domer squad that needed 3 quarters to pull away from Purdue. And then there are the bottom feeders. Props to Purdue for hanging with Notre Dame for more than half a game. And that's all I can say about this group.

Andy: Sparty and the Badger hold the top spots since, by virtue of not playing or having anyone coldcock a woman, they failed to further sully the B1G's honor. (Yes, that last is a contradiction in terms these days). Nebraska re-raises a McNeesed fanbase's false hopes & keeps #3 by popping 50 on Fresno St. - an accomplishment as deserving of applause as buying a soda from a machine without dropping the coins all over the floor. Maryland, Penn St & Rutgers did nothing to embarrass the conference (Remember 13-10 is OLD SCHOOL BIG 10 football!), so they hold their positions. Pimp slapping a 2012 4-8 Golden Flash team won't move the Buckeyes up, but it won't drop them. Same for Michigan & 0-3 Miami. After that come this week's leg pissers. As hilarious as many found karma's revenge for those who hate the timeout tactic, the Iowa St. loss was not totally traumatic given the Cyclones' performance last week against Kansas St, so the Hawkeyes may lose the CyHawk trophy, but they do win this week's B1G Best of the Worst. Indiana follows for losing close, and I nudged Purdue out of the cellar in favor of Illinois for showing a pulse against Irrelevant Since '88 Jesus. I put Northwestern 12th because I have to put them somewhere and I want to give them full opportunity to earn the bottom spot after next week's loss to Western Illinois in front of 11,000 disinterested fans.

Ranchbabe: I found it was easier to start at the bottom and work my way up trying to figure out who was less awful. The Huskers and Buckeyes looked, by far, the best of the B1G (not on bye) but the level of competition needs to be considered. So, I have an idea of which are probably the best four teams. Penn State has something to play for now, and they have a QB with a pulse--which is something most B1G teams don't have, so I put them at 5. I didn't give Michigan much credit for their win over Miami-OH and teams 6-10 could be switched around at will. I think the bottom 4 (Purdue, Indiana, NW, and Illinois) are a special kind of awful. (Andy: "special kind of awful" I like that. A lot.)

Aaron: Every team in this league has faults and every team will lose some games. It was nice to see the Huskers pile on some points, but it would be nice to see some sustained drives. It's feast or famine for the Huskers right now. Big plays or nothing. If they beat Miami, they will move up and if they beat MSU, they will take the top spot.
I was a lot higher on Minnesota and Iowa because I thought both of those teams were going to be better this year. I don’t have any faith in Michigan, Northwestern, Illinois, Purdue, Indiana. I think Rutgers/Maryland can beat any of those teams right now. Penn State is going to lose some games too, but right now they are finding ways to win. I’m giving OSU a mulligan on the VTech loss mainly because the rest of the league is so bad and they have a higher ceiling than most in this league. Looking at other Big Ten Power Polls, BTN has the Huskers #3 behind Michigan State and Ohio State, Off-Tackle Empire has Nebraska at #5, Yahoo Sports has the Huskers as #2 in the West behind Wisconsin, and Talking 10, where I am also a voter, has Nebraska and Wisconsin nearly tied for first place in the west at 72-71 total points. And for the record, I voted Nebraska #1 in the West.
Seven years later, John Oliver still can't quite make sense of his first 48 hours in America. Then a 29-year-old comedian working the London club and stand-up circuit, he flew to New York on a Sunday evening and reported to a studio overlooking the Hudson River in Midtown Manhattan the next morning. Several hours later, he was standing in front of a packed audience and riffing on President George W. Bush's latest social faux pas, trading quips with Jon Stewart as The Daily Show's new "Senior British Correspondent." "I just finished the thing and it all happened in a blur, and J.K. Rowling was sitting in the audience," he recalled in a phone call with BuzzFeed on Thursday. "And I could have presumed I just hallucinated it. And then she came around afterwards, just to say, 'Well done.'" Oliver babbled a response, tripping over his dropped jaw and sputtering out a few words of gratitude to the Harry Potter author. "She kind of gave me a hug and told me to calm down," Oliver said. "And that's kind of everything you want from a moment like that, having J.K. Rowling hug you and say everything's going to be OK... It was like a one-evening guardian angel, and my guardian angel was the creator of a boy wizard. I remember getting back to the place I was staying in, looking at my still-full suitcase, and thinking, What the fuck just happened?" Like her magical characters, Rowling was able to foresee the future. Despite warnings from his agent that he shouldn't rent an apartment in New York at first because he'd "probably get fired within four weeks," Oliver has become a Daily Show mainstay. That Oliver has lasted so long there is in and of itself a feat; he was stepping into huge shoes left by the recently departed Stephen Colbert and Ed Helms, who would soon be followed out the door by fan favorite Rob Corddry. Most of the correspondents introduced since Oliver was hired, from Olivia Munn to Wyatt Cenac, have also left. Now 36, Oliver will assume the unenviable task of filling in for Stewart on June 10, when the longtime Daily Show host heads off to the Middle East for 12 weeks to direct his first feature film, Rosewater, an adaptation of the memoir of a former Iranian political prisoner.
Along with the Liberty Bell and an extraordinary signature sandwich, Philadelphians now have a third thing worth bragging about, after UNESCO officially granted their city World Heritage status last Friday. Philadelphia is now the first and only city in the United States to have been given that honor, the Architect’s Newspaper reported. Cities vying for a spot on the World Heritage List have to meet at least one of ten rigorous selection criteria, which range from the architectural (“to be an outstanding example of a type of building, architectural or technological ensemble or landscape which illustrates a significant stage(s) in human history”) to the abstract (“to represent a masterpiece of human creative genius”). Philly qualified for its history (UNESCO having already given Independence Hall, one of many sites key to American history and independence in the City of Brotherly Love, a Heritage nod in 1979). Along with their impressive title, UNESCO World Heritage Cities have the added bonus of increased tourism, which can prove lucrative. The goal of the organization, of course, is not strictly monetary, hoping instead to provide these significant spaces with “programs and projects which aim to promote and support the maintenance, recognition and development of their world heritage.” There are currently 250 World Heritage Cities, which boast a combined population of over 130 million. Italy and Spain lead with over 20 cities represented each, with France, Mexico and Germany following behind with 11. New York City might not “contain superlative natural phenomena,” but perhaps our “developments in architecture or technology, monumental arts, town-planning or landscape design” could secure us the second-place spot some time in the future.
Outraged Hispanic police groups are boycotting the Puerto Rican Day Parade next month because the event will celebrate a pardoned terrorist linked to one of the deadliest groups ever to target New York.

The NYPD Hispanic Society and the Rafael Ramos Foundation have pulled out of the June 11 Fifth Avenue march after it was announced that former FALN kingpin Oscar López Rivera will be honored as a "National Freedom Hero" at the event.

Rivera, 74, had his 70-year sentence commuted by outgoing President Barack Obama in January. He had spent nearly 36 years in prison on conspiracy charges for his ties to the Puerto Rican nationalist group, which was responsible for more than 100 bombings in the 1970s and '80s — including a 1982 blast at NYPD headquarters that left an officer maimed and a 1975 attack that killed four at Fraunces Tavern in the Financial District.

Supporters of Rivera — who was released Wednesday from house arrest in Puerto Rico with New York City Council Speaker Melissa Mark-Viverito on hand — note he was never directly linked to any bombings and they considered him a political prisoner. But the NYPD groups can't forgive him for being a high-ranking member of the terror group.

"We support the NYPD members who were seriously injured and the families of the innocent people who lost their lives during these attacks throughout the United States and in our city," the NYPD Hispanic Society said in a statement. "We took an oath to protect and serve the people. Unfortunately, this year's views and values of the National Puerto Rican Day Parade committee do not conform with the society's mission of promoting peace and unity."

The Sergeants Benevolent Association joined the Hispanic groups in calling for a parade-sponsor boycott. "The FALN was one of the most dangerous terrorist organizations in American history," SBA president Ed Mullins wrote in a letter to sponsors. "There is no justification in lauding or celebrating its murderous leader."

Goya food dropped its sponsorship of the parade earlier this week — though the food giant didn't explicitly say it was reacting to Rivera's involvement. The Rafael Ramos Foundation, named after the Brooklyn cop murdered in 2014, pulled its parade sponsorship, the Patrolmen's Benevolent Association said Thursday. Ramos was of Puerto Rican descent.

Gov. Cuomo waffled Thursday on whether he'd go to the parade. "I just heard about it, so I'm going to look at the situation," Cuomo said. "My inclination would be to march, but I don't know the facts of the situation."

Additional reporting by Joe Parziale, Michael Gartland and David K. Li
The BC Liberal party has revoked the party membership of MLA Darryl Plecas, one day after he became Speaker of the provincial legislature.

The party announced in a statement on Saturday that Plecas was no longer a member. "Constituents must be able to trust their elected representatives," it said. "Party members must be able to trust those who hold positions of leadership in the party. And members of the legislature must be able to trust one another."

The statement said Plecas' decision was a betrayal, one he made "despite repeated promises and assurances that he would not." Revoking Plecas' membership was "the strongest action available," a spokesperson added.

[Photo: Liberal MLA Darryl Plecas sits alone in the Legislative Chamber after skipping a caucus meeting. He was later named Speaker. (Richard Zussman/CBC News)]

'I took him at his word': Coleman

Plecas, MLA for Abbotsford South, was the only member of the legislature to put his name forward for Speaker. The move strengthened the NDP government's position in the minority parliament, ensuring the party didn't lose a voting MLA to sit as Speaker. Plecas was also the only member of the BC Liberals to speak out against the leadership of its former leader, Christy Clark.

The party's interim leader, MLA Rich Coleman, didn't mince words about his former colleague's move to the Speaker's chair. "Everyone had committed, including Mr. Plecas, to not run for Speaker," Coleman said Friday. "I took him at his word. Obviously, that word didn't mean a lot."

There are now 41 Liberal MLAs, 41 NDP MLAs, and three Green MLAs as well as Plecas sitting in the legislature. The Speaker would break any tie votes.
OVER the past couple of years, a number of far-right leaders have cropped up across Europe. From the Alternative for Germany Party to the Danish People’s Party, the “Donald Trump effect” has been gaining prominence across the continent in response to acts of terrorism and the refugee crisis. But no western country has been hit harder by large-scale terrorist attacks than France in recent times, and National Front party leader Marine Le Pen believes she’s the answer. Ms Le Pen is staunchly anti-immigration, anti-Muslim and pro-“Frexit” — France exiting the European Union a la the UK. Just this month, Ms Le Pen sparked fresh controversy after proposing that the children of illegal immigrants should be refused public school places as part of tough proposals to restrict state services. “I’ve got nothing against foreigners but I say to them: If you come to our country, don’t expect that you will be taken care of, treated [by the health system] and that your children will be educated for free,” Le Pen said. “That’s finished now, it’s the end of playtime.” Her popularity is not to be understated, with polls in the lead-up to the French election consistently showing the far-right leader will make it to the final round against Francois Fillon. But if you thought Ms Le Pen was conservative, you haven’t met her niece, Marion. As it stands, the 27-year-old may prove to be a thorn in her own family’s campaign. EUROPE’S MOST RIGHT-WING MILLENNIAL Marion Marechal-Le Pen is a polarising figure, and certainly not your average millennial. Also a member of the National Front party, the 27-year-old’s views are even more conservative than her aunt’s — so much so that she’s been dubbed Europe’s “poster child for the far right”. At just 22 years old, Marion was elected MP for Vaucluse’s 3rd constituency in 2012, in the country’s south, making her the youngest MP in France’s modern political history. Like her aunt, the younger Le Pen is heavily opposed to Muslim immigration. During the Nice terror attacks in July this year, before there was any known link between the tragedy and extremist groups, she immediately gathered her followers and blamed “Islamism”. “You are with us and against Islamism, or you are against us and for Islamism,” she said. “Those who choose the status quo become complicit with our enemies.” In a separate instance, she organised a protest against plans to bring 30 teenage asylum seekers from Afghanistan to the nearby town of Grambois. “It is not the hate of others, it is love for Provence, love for France,” she shouted at a rally of hundreds of demonstrators and counter-protesters, according to the New York Times. She said asylum seekers come to France to receive generous welfare cheques at the expense of French natives struggling to find employment. In an interview after the rally, she said she is “against this completely crazy plan to redistribute migrants”. The European project “is a failure”, she said. “We need to build another Europe.” The Le Pens are no longer just a right-wing fringe group. In a country where 230 people have been killed in terror attacks over the past 18 months, the party’s popularity has swelled immensely. It doesn’t hurt that Marion is brutally outspoken. She once stood up in the French Parliament and accused then-Prime Minister Manuel Valls of behaving like a “moron”. He was so visibly furious by her words that his hand began shaking uncontrollably as he responded to her remarks, in a video that went viral. 
While her aunt refused to play a major role in the campaign against marriage equality in 2013, Marion has expressed her disdain for it, saying it will “open the door to polygamy”. “Once you break away from the natural framework of a man and a woman, you could have other minorities who want their form of love recognised by the state,” she told the Telegraph earlier this year. “If you endorse homosexuality [in marriage], why not polygamy?” But Marion is possibly best known for her controversial views on abortion, because it’s an issue on which she’s repeatedly clashed with her aunt. Earlier this month, she sparked outrage after saying “France should end the full and unlimited reimbursement of abortion”. In an interview with far-right Catholic magazine Present, Marion said: “Instead of putting in place targets, abortion quotas in health establishments, financial support should be given to centres that accompany isolated or hesitant women. “Full and unlimited abortion should be reversed, because women are responsible and should be treated as such.” Her aunt’s right-hand man Florian Philippot issued an icy response on behalf of the party, saying the girl was “alone” and “isolated” in holding such a view. “What counts is what the presidential candidate says, what the movement says, what our presidential project says, namely no questioning of abortion, full reimbursement of abortion,” he told BFM TV. But despite their family ties, Marion’s conservative views may drive a wedge in the party ahead of the election. PROBLEM MAKER Marine Le Pen has redefined what it means to be a member of the “far-right”. She’s deliberately distanced her party from skinheads and Neo-Nazis, instead embracing left-wing causes like gay rights and women’s equality to further her central party line on stopping immigration. As The Guardian noted in a feature last month, this is an effective strategy in that her party then depicts Muslim immigrants as the primary threat to such minority groups. “As fear of Islam has spread, with their encouragement, they have presented themselves as the only true defenders of western identity and western liberties — the last bulwark protecting a besieged Judaeo-Christian civilisation from the barbarians at the gates,” the article read. But this puts Marine at odds with her fiery niece. The younger, more socially conservative Marion routinely speaks at odds with the so-called “new right”, although she’s certainly won the far-right Catholic vote. Their conflicting stances have sparked media reports of a “family feud”, with speculation the party is at risk of a major split before next year’s election. But despite this, the National Party, often criticised as being fueled by fear and xenophobia, has the most support among French millennials according to polling. An Odoxa report released this month found roughly one in five French people aged 18-34 back the party. While it’s forecast to finish second in the polls, the Le Pens are hoping for new momentum after Mr Trump’s victory in the United States. This means huge changes could be coming up for France — if the party can actually get itself together.
COLUMBUS (WCMH) — A woman has been indicted by a grand jury on charges she used the social media app Periscope to live stream her own friend's rape.

According to Franklin County Prosecutor Ron O'Brien, 18-year-old Marina Alexeevna Lonina and a 17-year-old friend were socializing with 29-year-old Raymond Boyd Gates at a residence on Christina Lane, February 27, when at some point, Gates allegedly forced sexual intercourse with the victim.

O'Brien says that Lonina started live streaming the sexual assault over the social media app Periscope. Lonina also photographed the victim nude the night before the alleged sexual assault, according to O'Brien.

Authorities were notified of the sexual assault by another friend of Lonina's who watched the Periscope live stream in another state.

Lonina and Gates have both been indicted on one count of kidnapping, two counts of rape, one count of sexual battery, and three counts of pandering sexually-oriented material involving a minor. Lonina also faces two counts of illegal use of a minor in nudity-oriented material or performance for allegedly photographing the victim naked.

"If Gates and Lonina are convicted for these charges, they each face a potential sentence in excess of forty years in prison," O'Brien said.
{ "pile_set_name": "OpenWebText2" }
Code: Darkest Hour 1.04 (Changes since patch 1.04 RC1)

Engine changes:

Fixed 1.04 RC1 bugs
- Fixed CTD on pressing the "Goto" button on the message when someone joins the player's alliance. This will now properly center the map over the capital province of the other country. [The CTD was introduced by 1.04 RC1, but the button never worked properly.]
- Fixed issue with canceling of missions of player-controlled units in some situations. Only these missions will be canceled now:
  - Naval port strike, when the fleet has no CV/CVL or any other ships with CAGs left in the fleet
  - Naval airbase strike, when the fleet has no CV/CVL or any other ships with CAGs left in the fleet
  - Amphibious assault, when there are no transport ships left in the fleet
  - Naval transport, when there are no transport ships left in the fleet
  - Airborne assault, when there are no air units loaded with paratroopers
  - Air supply, when there are no air units with transport capacity
- Fixed memory corruption/crash issue when a long mod name is used.
- Fixed issue with human-controlled land units no longer retreating to transport ships at sea when defeated in land combat.
- Fixed game hang on annexation by event of an alliance leader by an ally, or of a puppet master by a puppet.
- Fixed bug with an empty missions menu (on clicking the unit plate's mission area) for scenarios with disabled provinces.

Fixed bugs
- Fixed "Goto" button on the "New leader" message to center the map over the player's capital province.
- Fixed strategic redeployment of rockets to be issued immediately after the command instead of on the next hour.
- Fixed issue with the AI always trying to retreat when attacked while not bordering at least one friendly province (e.g., after a successful amphibious invasion).
- Fixed CTD on the Intelligence page when custom unit types are in the redeployment pool of a country. (1.03 bug)
- Fixed issues with \gfx\map\defense.bmp usage (on counters) in the engine preventing it from displaying properly. It is used when the unit is involved in combat in its current province.
- Fixed issue with the attack arrow (\gfx\map\direction_counter_battle.bmp) used improperly on counters when the unit is attacked in its current province while moving to another province with no enemy units there.
- Fixed display issue with overlapping counters for retreating and defending units in the same province. These are properly stacked together now.
- Fixed arrow color from blue (move) to red (attack) for units that go to a friendly province through enemy-controlled provinces.
- Fixed invalid model name on the message for a brigade completed from an upgraded serial line.
- Fixed a bug with a dummy "(null)-1" brigade appearing in the redeployment pool in some cases after detaching brigades from units (e.g., from a unit with 2 brigades, detach the second and then the first, in that order).
- Fixed issue with multiple win-combat messages when own and military-controlled units participated in the combat. Improved the non-working 1.03 fix for the same problem. (1.02 bug)
- Fixed issue with incorrect MP value returned to the MP pool for sold units in some cases.
- Implemented MP cost for the receiving country for bought units.
- Fixed calculation of the MP value returned to the MP pool on disbanding of units with attached brigades when the unit is not at full strength. Returned MP now equals the MP that would be gained for the unit and for all brigades if detached and disbanded separately. Country-specific changes to MP requirements for disbanded units/brigades are properly accounted for too.
- Fixed bug with free_xxxx commands (free_transport, free_escort, free_supplies, free_oil, free_ic, free_manpower, free_energy, free_money, free_metal, free_rare_materials) causing data corruption when used with some values.
- Added support for month names in the "month" trigger, as suggested by the trigger documentation (only month numbers 0-11 were valid before). Added error checking for both month names and numbers. (A syntax sketch appears at the end of these notes.)
- Fixed typo in the error message about valid AI construction settings (it used at_war and not_at_war instead of the proper atwar and not_atwar). (1.03 bug)
- Fixed tooltip for the control trigger when no data setting is specified.

New features
- Removed fixed event picture height (116 pixels) on styles 1 and 2 to allow pictures of any height to be used on events.
- Removed fixed event header height (48 pixels) on style 2 to allow headers of any height to be used on events.
- The "Goto" button on messages about events in provinces is set to switch the game to map view automatically (the map is centered over the province in question).
- Improved logic for evaluation of countries valid for release. Better handling of new vs. old (when no min or min_extra is used) release modes.
- Added retreat arrow to counters (\gfx\map\direction_counter_retreat.bmp) to be used instead of the moving arrow for retreating units.
- Disabled AI land-unit retreat to ships, as this fails and the unit is instantly destroyed instead. Workaround for an AI issue/limitation.
- Removed redundant read of all leader pictures for the player's country on game start. The fix should speed up game start a bit.
- Suppressed repair of buildings and resources in provinces with active land combat.
- Disallowed building of coastal forts (used only to repel amphibious invasions) in provinces with no beach.
- Added 3 new country settings (columns) to db\country.csv. NOTE: Mods that use their own version of that file must update it! (A hypothetical row sketch appears at the end of these notes.)
  - Model Nationality - use model pictures for the tag specified in this column. When no new TAG is specified, use the current TAG.
  - Icon Nationality - use model icons for the tag specified in this column. When no new TAG is specified, use the current TAG.
  - Model Names - use model names from the tag specified in this column. When no new TAG is specified, use the current TAG.

Improved Error Logs
- Check and report for missing tech component name definition when extra debug logs are enabled (settings.cfg).
- Added check and report for image bit depths other than 1, 8, 16 and 24 when extra debug logs are enabled (settings.cfg).
- Improved error checking for province buildings defined in scenarios when extra debug logs are enabled (settings.cfg):
  - report when buildings are set on non-land provinces
  - report when a naval base is set on a province where no port is allowed
  - report when a coastal fort is set on a province with no beach
  - report when buildings are set on ignored provinces (battle scenarios)

Modding Documentation Changes
- Updated event commands.txt with some missing land unit modifiers and mission names.

Translation and Text Changes
- Corrected all languages for TECH_APP_INDUSTRY_326_NAME and SHORT_TECH_APP_INDUSTRY_326_NAME.
- Updated unitnames for HUN and U13.
- Fixed wrong German translation for 'Tripartite Pact'.
- Corrected tech names for current armored frigate and armored cruiser models.
- Fixed incorrect transliteration for Bulgarian leaders, ministers and tech teams.
- Fixed airnames.csv entries for BUL to historical (WWII at the start, WWI at the bottom) and grammatically correct ones (in the middle).
- Fixed airnames.csv entries for U29 (commie BUL) to grammatically correct (and semi-historical) ones.
- Fixed armynames.csv entries for BUL and U29 to grammatically correct ones.
- Fixed navynames.csv entries for BUL and U29 to grammatically correct ones.
- Fixed unitnames.csv entries for BUL and U29 to grammatically correct ones.
- Fixed unitnames.csv entries for BUL and U29 to historically correct INF/GAR/MIL names.
- Fixed Derfflinger class name (GER and U08).
- Massive corrections and improvements in English texts (made by Lucifer).
- Fixed invalid encoding (UTF-8 -> ANSI) of some leader, minister and tech team files (DH Full).
- Fixed typos in A-H and GER ship names.

Event Changes

Darkest Hour Full
- When Syria and Lebanon join Free France, Free France gets cores on their provinces (changed back to claims upon the Liberation of France).
- Fixed a few issues with the Chinese surrender to Japan and land distribution to Japanese puppets.
- Japan can now choose to Pressure Germany over Manchukuo even if it decided to expand northward; moreover, the Recognition of Manchukuo no longer recalls Von Falkenhausen and no longer ends the Sino-German cooperation in case Japan decided to expand northward.
- Corrected a few issues in events of the Northern Strike path.
- Modified event 2181008 (CHI Soviets Ignore our Demands) so that, if Japan decided to expand northwards, the CHI AI has a 50% chance of declaring war on SOV.
- Corrected effects of event 2003019 (GER Fall Attila).
- The surrender event for Japan (Fading Sun) now sleeps future CHI-JAP war events.
- Corrected WW2 peace events between Finland and the Soviet Union so that all Russian provinces are given back to SOV.
- Corrected state (active/slept) of some ministers/leaders/TTs upon the French surrender to Germany.
- Corrected effects of event 2007047 (SOV Great Patriotic War).
- Corrected effects of event 2002009 (FRA What colonies for Free France?).
- Fixed wrong HoS and HoG for U87 in event 3011059.
- Changed effects of decision 2003143 (GER Issue MEFO Bills).
- Fixed issue with Korea and events related to the Surrender of Japan.
- Corrected trigger of 2003102 (GER Generaloberst Kurt von Hammerstein-Equord dies!).
- Corrected trigger of one action of the "Global peace treaty in Europe" event.
- When England surrenders, troops now change allegiance.
- Improved the "Vlassov defects" event, which now involves Zverev and Budyho too.
- Modified event 2012005 (CHI China defeats Japan and Korea is freed) so that the capital is moved to Nanjing.
- Modified event 2003118 (GER Announce claims on Greece) to add claims on 365 (Thessaloniki), 366 (Kovani) and 370 (Alexandroupoli).
- Improved event 2052010 (SPR Spanish Civil War - Victory) to sleep foreign leaders.
- Corrected handling of German leaders in SPA.
- Added claims to England in the "England surrenders" events.
- Changed the "Churchill becomes PM" event picture (was a generic one before).
- The "Capital nuked - Aftermath" events now check whether the new capital has already been nuked.
- Corrected date and effect of POL event 2013005 (Wladyslaw Sikorski passes away) and added new event 2001096 (ENG Wladyslaw Sikorski and Tadeusz Klimecki die in Gibraltar B-24 crash).
- The Fading Sun event now takes into consideration a Korea already liberated by the Chinese.
- Reworked triggers and effects of OTT surrender events.
- Woke all slept U08 leaders and ministers in event #2191583 (The military mission to the Ottoman Empire returns home).

AI Changes

Darkest Hour Full
- Changed build priorities of BUL, HUN, ROM (air -7%, land +7%).
- The Japanese AI now properly garrisons Shanghai province before the war with China.
- Removed Suez as an invasion target in various AI files; added Port Said instead.
- AI JAP should acquire military control over SIA and French Indochina to ensure that the JAP AI will be the front leader in SE Asia (and won't release China-Nanjing before the Chinese surrender).
- Fixed invalid Alexandria province ID (790 instead of 789) in many ENG AI files.

Scenarios Changes

Darkest Hour Full
- Corrected SOV Chief of Air Force in the 1939-1945 scenarios.
- Corrected land doctrine blueprints of Germany in the 1933 scenario.
- Added missing tech to the list of known techs of RSI in the 1944.12 scenario.
- Added ship assembly line to USA's starting techs in the 1941 scenario.
- Corrected unit names for Hungary in the 1933-1945 scenarios.
- Corrected air unit level for Hungary in the 1936-1945 scenarios.
- Corrections to Bulgarian, Hungarian, Romanian and Japanese starting cabinets (thanks to eeeex).
- Corrected HoS of Nat. China in the 1944.06 and 1944.12 scenarios.
- Added Serbian client state in the 1944.06 scenario.
- In the 1914 scenario, removed a few airbases from the Ottomans and added a few to Russia.
- Fixed invalid attachments (AA and Art) to GER/ENG Motorized units in the 1944.12 scenario.
- Moved 2 VPs in NEI: Bengkulu -> Palembang and Hollandia -> Balikpapan.
- Improved air base, naval base, IC and AA distribution for the provinces Dobrich, Sofia and Constanza in all scenarios (port, air base and AA moved from Dobrich to Constanza; added air base to Sofia where it was missing; added IC to some provinces in Bulgaria to match progress between scenarios).
- In the 1914 scenario, created the UK fleet New Zealand Station, with the missing cruiser Psyche, plus HMS Pyramus from China Station.
- Various corrections to the 1914 and 1936 Japan OOBs.
- Corrected Canadian Chief of Staff/Army/Navy/Air Force in all scenarios.
- Corrected known Japanese techs in the 1933 scenario.
- Corrected dormant_leaders list for U08 in 1914.
- Removed MA of Iraq to ENG in the 1933 and 1936 scenarios to prevent IRQ troops from moving to ENG provinces before entering the Allies.
- Corrected sliders of Greece in the 1939 and 1940 scenarios.
- Added missing WW1 Land Doctrines for some countries (thanks to Treeplace).
- Corrected German claims on the Balkans in later scenarios.
- Corrected Chief of Navy of Iran in the 1936-1941 scenarios.
- Corrected Chief of Navy for France in the 1933 and 1936 scenarios.
- Added German Monsun Gruppe sub division to the 1944.06 and 1944.12 scenarios.
- Updated models for HMS Hawkins, Frobisher, and Effingham in scenario files.
- Updated UK ships deserving of the Improved Armoured Cruiser model: Edinburghs, Warriors, Minotaurs.
- Fixed "West Indies Squadron" starting location in the 1914 scenario (HOL).
- Updated all GER HQ models to level 1 (1914) in the 1914 scenario, as GER has the required tech researched.
- Added missing 1921 and 1926 INF and 1924 SUB techs to 1933-scenario Portugal to match the existing OOB.
- Changed "NRP Vasco da Gama" model from CA-0 to CA-1 to match the model in the 1914 scenario.
- Various fixes to Australian bases, ports and beaches in all scenarios.
- Fixed various invalid settings for buildings (mostly coastal forts on provinces with no beach) in all scenarios and the tutorial.
- Added missing BB USS Oregon to the Pacific Fleet in the 1914 scenario.
- Added 1907-level techs for INF, MTN and CAV to OTT in the 1914 scenario (to match models already in the OOB).
- Removed invalid national province from Senussi in the 1914 scenario.
Unit Changes
- Reduced cost of CVE-1 to CVE-3 models to 0.8 to make those cost fewer IC-days than a CVL + LCAG brigade.
- Changed MAR unit modifiers as follows:
  - Rain attack: -30 to -10
  - Storm attack: -85 to -50
  - Rain defense: -20 to -10
  - Storm defense: -65 to -50
  - Rain move: -18 to -15
  - Storm move: -35 to -25
  - Jungle move: -30 to -25
  - Swamp move: -20 to -15
- Changed amph_arm brigade modifiers as follows:
  - River attack: 0 to 10
  - Shore attack: 0 to 10
- Changed Eng. brigade modifiers as follows:
  - Shore attack: 0 to 10
- Changed shore attack modifiers on some land brigades as follows:
  - Art: 0 to -15
  - SP Art: 0 to -15
  - SH Art: 0 to -15
  - Gli. Art: 0 to -15
  - TD: 0 to -10
  - L. Arm.: 0 to -12
  - M. Arm.: 0 to -13
  - H. Arm.: 0 to -13
  - SH Arm.: 0 to -13
  - Gli. Arm.: 0 to -12
  - CAV: 0 to -13
  - AA: 0 to -3
  - SP AA: 0 to -5
  - AC: 0 to -5

Graphics Changes
- Fixed counter_SLO.bmp in Core and removed its copy from Full (removed coat of arms from the flag to match the rest).
- Moved the proper counter_bul.bmp from Full to Core.
- Moved the proper counter_BUR.bmp from Full to Core.

Darkest Hour Full Only
- Replaced the weird-looking OTT topbar.bmp with the default one.
- Fixed incorrect picture for Chen Gongbo as Minister of Armament.
- Removed a graphics file unused by the engine (direction_counter_defense.bmp).
- Removed unused brigade and division model pictures from the wrong folder in DH Core (the proper location for those is \gfx\interface\models, and all required files are already there).
- Changed 1926 Infantry tech picture (thanks to bestmajor).
- Added missing images for Vietnamese leaders (thanks to bestmajor).
- Removed VIC flags for France.
- Fixed icon_FRA_Vic.bmp blue color.
- Fixed icon_ETH_14.bmp color order and removed coat of arms.
- Fixed icon_U05.bmp flag to match the rest.
- Fixed icon_U58.bmp flag to match the rest.
- Removed the unused counter_u01.bmp from DH Core.
- Improved or fixed various issues with counters and icons (AFG, ANG, AUS, BEN, COL, DOM, FRA, GAB, GRE, HAI, HON, HUN, KOR, KUR, MOR, MON, NAM, NEP, PAR, PER, POL, RUS, UER, UAP, UCH, VIC, YUG, U06, U11, U12, U20, U51, U77).
- Fixed POL flag and shield (removed coat of arms).
- Fixed UKR flag and shield (removed coat of arms).
- Fixed AZB flag (crescent size).
- Fixed BHU flag (incorrect direction).
- Fixed ETH_14 flag (incorrect color order and animation).
- Fixed HON flag and shield (added blue stars).
- Fixed IDC flag (sharper stripes).
- Fixed MEN flag and shield (improved colors).
- Fixed QUE flag and shield (improved colors).
- Fixed PRI flag (fixed animation and improved color).
- Fixed TRA flag (fixed animation).
- Added missing pictures for 2 1914 BHU ministers.
- Added missing images for some Albanian, American, Australian, Belgian, Brazilian, British, British Raj, Canadian, Chinese, Croatian, Czechoslovakian, Danish, Dutch East Indies, French, French Indochina, German, Greek, Haitian, Italian, Liberian, New Zealander, Romanian, South African, Soviet, Spanish, Swedish, Vietnamese, Yugoslavian ministers and leaders (thanks to bestmajor).

Other DB Changes

Darkest Hour Full
- Corrected End Date and/or Retirement Date and/or dormant state of a few ministers and leaders (SOV, GER, USA).
- Corrected End Date of TT 235013 Zhang Zuolin, set as 1929.
- Fixed YUG and SER release in revolt.txt.
- Corrections and improvements to Bulgarian, Hungarian, Romanian and Japanese ministers (thanks to eeeex).
- Changed capital of U41 (Reichskommissariat Ukraine) from 617 Kiev to 244 Rowne.
- Fixed invalid leader tags in congo.csv (U00 instead of CON) and U03.csv (FRA instead of U03).
- Fixed picture name for 2 LBY ministers.
- Removed Amiens province (#50) from Reichskommissariat Belgien-Nordfrankreich (U47) in revolt.txt.
- Corrected name and years of CAN TT Vickers-Armstrong.
- Corrected various issues with Japanese ministers (thanks to SushyS).
- Removed research bonuses from the Reichskommissar minister trait.
- Renamed a few TTs of U20 (Finnish Democratic Republic).
- Replaced picture for Frank Knox.
- In revolt.txt, added claims on Dutch Caribbean provinces to HOL and U10.
- Changed CZE TT 120007 from Emil Janouska to Jaroslav Fajfr.
- Corrected name and years of HUN TT 105010 (András Littay) and added new HUN TT 105018 (Ferenc Feketehalmy-Czeydner).
- Corrected cores of MAN in revolt.txt.
- Corrected name of Spanish minister/leader/TT Alfredo Kindelán y Duany.
- Added ministers to Iraq for the 1914-1932 time period.
- Removed invalid Night Flyer trait from 6 USA admirals (1914 scenario).
- Fixed end and retirement dates of Yuan Shikai as L/M/TT.
- Various corrections, improvements and additions to Brazilian, British, British Raj, Canadian, Chinese, Czechoslovakian, Danish, Dutch East Indies, French, French Indochina, German, Greek, Haitian, Hungarian, Japanese, Irish, Liberian, Mongolian, New Zealander, Pakistani, Portuguese, Slovakian, South African, Soviet, Spanish, Yugoslavian ministers and leaders (thanks to bestmajor).
- Added 7 new Tech Teams to SOV.
- Added "New leaders for 1914 Denmark" Mod (made by Melkor89).
- Added "Siam 1914" Mod (made by Melkor89).

Tech Changes
- Tech 2810 Great War Anti-Air Artillery now requires 2800 Great War Static Anti-Air Artillery instead of 2450 Post Great War Static Anti-Air Artillery.
- Tech 7230 Nuclear Powered Submarine (year 1954) now requires 3690 Semi-Modern Submarine (year 1950) instead of 3700 Modern Submarine (year 1955).
- Added missing tech components for some techs (Deep Logistic Organization etc.).
- Changed year of tech 8050 "Convoy Sailing" from 1880 to 1890.
- Changed the Aerial Support naval tech to give position bonuses to SS/SSN. Moved CV/CVL/CVE position bonuses from Aerial Support to the Ring Defense naval tech/naval doctrines tech.
- Fixed wrong trigger (tech prerequisites) in the event (#3000025) enabling Air-to-Surface Missile (ASM) tech research.
- Moved HQ model 1914 from the 1917 tech (Centralized Control, 6660) to the 1914 one (Strong Point, 6600).

Map Changes
- Removed zoomed-up islands from lightmap4.tbl, as in DH 1.03.
- Raised infra from 10 to 50, allowed Port, and set 2325 as Seazone for province 2130 St. John.
- Corrected coordinates of Gyor (458) in both the HOI2 and DH maps.
- Added metal production to Canadian province 2129 (Sault Sainte Marie).
- Corrected seazone of provinces 2131 (Moncton) and 1950 (Pensacola).
- Fixed counter/sprite position of units in province 2405 (Bowers Island).
- Added beach to Petropavlovsk (1184), port and beach to Gavan (1185), and port and beach to Nikolayevsk-on-Amur (1189).
- Fixed a few bugs on higher zoom levels for the provinces Roberval, Val-d'Or, Montreal, Pembroke, Whitehorse.
- Removed connection between Saarbrucken and Strasbourg.
- Corrected name of province 1692 - Geraldton (Western Australia).
- Fixed issues with ports, beaches and sea zones for Suez, Gibraltar, Balboa and the provinces around these:
  - Gibraltar - changed sea province to Strait of Gibraltar
  - Cadiz (next to Gibraltar) - better location of port and beach icons
  - Port Said (north of Suez) - changed sea province from Suez Canal to Nile Delta, added beach, better location of port and beach icons
  - El Ghardaqah (south of Suez) - added beach and port, sea province set to Red Sea
  - Balboa (controlling the Panama Canal) - removed beach, sea province set properly to Panama Canal
  - Suez - removed beach, sea province set properly to Suez Canal
  - Sharm el Sheikh (east of Suez) - changed sea province from Suez Canal to Red Sea as it should be, properly placed port and beach icons
  - Bur Tawfiq - removed beach and sea province (Suez Canal) - redundant
  - San Jose de David (west of the Panama Canal) - moved port, beach and sea province from Mosquito Coast (Caribbean Sea) to Gulf of Panama (Pacific Ocean), where these belong
  - Ciudad de Panama (east of the Panama Canal) - moved port, beach and sea province from Panama Canal to Gulf of Darien to allow invasion and entering/leaving port if Balboa is enemy-controlled
  - Changed position of counter and sprite in the Gulf of Panama to be closer to the strait for better visualization of fleet paths when moving through the canal from the Caribbean Sea to the Pacific and vice versa.
  Main goals/solved issues of these changes:
  1. The three strait-controlling provinces (Balboa, Suez and Gibraltar) cannot be invaded by sea.
  2. The surrounding provinces (Spain, Egypt, Panama) should be invaded instead. Beaches were added where they were missing to allow that.
  3. Straits are used as sea provinces only for strait-controlling provinces.
- Added missing border between San Jose del Guaviare and Requena.
- Moved provinces 2148 (Prince Rupert) and 2149 (Queen Charlotte) from the Alaska area/region to the British Columbia area and Western Canada region.
- Fixed Belluno province containing pixels from Venice.
- Adjusted border between Sudbury and Pembroke.
- Removed invalid border on the lake north of Vorkuta.
- Removed invalid border on zoom level 1 in the sea zone adjacent to Shanghai province.
- Added missing borders between sea zones on zoom levels 2 & 3 adjacent to Johor Bahru and Pakan Baharoe provinces.
- Fixed connections for province 858 (Ulongue) as follows:
  - Restored connections for 4 impassable provinces
  - Set two of those as river connections
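Two of the modding-facing notes above lend themselves to a quick illustration.

First, the month-name support in the "month" trigger (see Engine changes). A minimal sketch in standard HOI2-style event syntax; the event ID is invented for illustration only, and the 0 = january mapping is assumed from the old 0-11 numbering:

    event = {
        id = 9999901           # hypothetical ID, for illustration only
        random = no
        country = GER
        trigger = {
            # month = 5        # old numeric form (0-11), still valid; 5 = june if 0 = january
            month = june       # named form, newly accepted in 1.04
        }
        # name, desc, date and action blocks as usual
    }

Second, the three new columns in db\country.csv (see New features). A hypothetical row sketch only: the semicolon separators and the column positions are assumptions, so check the shipped 1.04 file for the real layout. Here a minor tag (U13) borrows Hungarian model pictures, model icons and model names:

    TAG;...existing columns...;Model Nationality;Icon Nationality;Model Names
    U13;...                   ;HUN              ;HUN             ;HUN

A country that leaves these columns empty keeps its own TAG for all three, per the note above.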
{ "pile_set_name": "OpenWebText2" }
Texas Central taps Renfe as operating partner

[Photo caption: The planned high-speed rail line between Houston and Dallas would use overhead electrical lines and its own separated tracks to shuttle riders between the two metro areas, through mostly flat, rural land. The N700 train is shown in this photo illustration from Texas Central Railway, using images provided by Japan Railway Central. Photo: under permission of JR Central]

Texas Central, the developer behind the proposed high-speed rail line between Houston and Dallas, has tapped Renfe as the train's operating partner.

The Spanish rail operator, in partnership with Spanish railway infrastructure company Adif, will provide technical advice on the design and construction of the Texas Central railway, and assist with its operation and maintenance plans. Once the railway is built, the company will run the trains, maintain engines and signals and oversee ticketing and passenger programs.

Renfe, which has 14,000 employees and revenues of $4.1 billion, brings more than 25 years of experience to the project. The company operates 5,000 trains, including local commuter and high-speed rail lines, every day on 7,500 miles of track in Europe. It handled more than 487 million passengers, including 36 million high-speed rail passengers, and 19.6 million tons of freight in 2017.

"Renfe has established a reputation for excellence in railroad operation in Spain and across the world, and we welcome them aboard," Texas Central CEO Carlos Aguilar said in a statement. "With their decades of expertise, they were a natural fit to join our other partners."

Renfe is the latest international company to partner with Texas Central on the high-speed rail project. Earlier this year, Texas Central announced it is working with Salini Impregilo, operating in the U.S. as The Lane Construction Corp., to lead the civil construction on the rail line. The Texas company also secured $300 million in loans from Japanese sources to fund the permitting, design and engineering for the project. The Texas Central rail line will be built and operated without taxpayer-funded state or federal grants.

The 200 mph train, based on the Japanese Shinkansen bullet train technology, will allow passengers to go between Houston and Dallas in 90 minutes, with a midway stop in the Brazos Valley. The project is expected to generate some $36 billion in economic benefits across Texas over the next 25 years, including creating 10,000 construction jobs and 1,500 permanent jobs.

Texas Central is working with Federal Railroad Administration officials to finalize the rail line's environmental impact review, which will help determine the project's timeline and final route.
{ "pile_set_name": "OpenWebText2" }
Lincoln will add this new Aviator crossover to its lineup later this year, and we're getting our first glimpse at the New York Auto Show this week. There's lots to discuss with this new CUV, but the biggest news is definitely under the hood, where Lincoln is introducing a brand-new plug-in hybrid powertrain.

All Aviators will be powered by a twin-turbo V6 (that's all Lincoln will say -- we don't know if it's Ford's 2.7-liter or 3.5-liter biturbo V6), but an additional plug-in hybrid system will be available as an option. Lincoln says the gas-only Aviator will be offered with rear-wheel drive standard, though all-wheel drive will also be available. The PHEV will only be available with all-wheel drive.

Lincoln is silent on any specs, but if we look at the 3.5-liter EcoBoost engine in the MKT crossover -- which Lincoln will continue to build, by the way -- that engine produces 365 horsepower. Adding hybrid boost could up that to around 400 horsepower, which isn't crazy, since that's what the Volvo XC90 T8's engine puts out. Electric-only range will probably be in the 15- to 30-mile range. But that's just our best guess for now.

But here's what we can talk about: how freaking sweet it looks. Everything from the wraparound windshield to the slightly curved body panels seems to have a distinctly aeronautical appearance.

Inside, you'll find wireless phone charging, multiple power outlets and standard Wi-Fi to keep you connected, but it's the 30-way adjustable seats with massage functionality that we're looking forward to the most. Nothing says luxury more than cruising down the road with the heated seats ablaze, working out the kinks in our backs.

If you seem to always have too much in your hands, you'll be relieved that you no longer have to fumble with a key fob. Yes, you can lock, unlock and even start the Aviator with your phone. Should you forget your phone, break it or do anything else that would render it useless, a keypad in the B-pillar (a Lincoln staple, after all) can accept a code for entry, and owners can then start the car from the center screen.

Lincoln says the Aviator's new Suspension Preview Technology will be able to read the road ahead and adjust the chassis settings accordingly. Ford's new Co-Pilot360 suite of driving aids will be standard as well, with automatic emergency braking, lane-keeping assist and more.

Lincoln hasn't divulged any info on pricing or availability, just that the Aviator should hit dealerships sometime in 2019.
{ "pile_set_name": "OpenWebText2" }
SITTWE, Myanmar — He was a member of the Rohingya student union in college, taught at a public high school and even won a parliamentary seat in Myanmar's thwarted elections in 1990.

But according to the government of Myanmar, U Kyaw Min's fellow Rohingya do not exist.

A long-persecuted Muslim minority concentrated in Myanmar's western state of Rakhine, the Rohingya have been deemed dangerous interlopers from neighboring Bangladesh. Today, they are mostly stateless, their very identity denied by the Buddhist-majority Myanmar state.

"There is no such thing as Rohingya," said U Kyaw San Hla, an officer in Rakhine's state security ministry. "It is fake news."

Such denials bewilder Mr. Kyaw Min. He has lived in Myanmar all of his 72 years, and the history of the Rohingya as a distinct ethnic group in Myanmar stretches back for generations before.
{ "pile_set_name": "OpenWebText2" }
A former prime minister, Mohamed Abdullahi "Farmajo" Mohamed, was elected Somalia's president on Wednesday, declaring a new "era of unity" as he took on the daunting task of bringing the long-chaotic country its first fully functioning central government in a quarter-century.

Thousands of jubilant Somalis poured into the streets, chanting the new president's name as cheering soldiers fired into the air. "Somalia will be another Somalia soon," said Ahmed Ali, a police officer celebrating in the crowd.

Incumbent President Hassan Sheikh Mohamud held a slight lead over Farmajo after an initial round of voting Wednesday that included a field of 21 candidates. But Farmajo easily won the second round, contested among three candidates, with 184 votes to Mohamud's 97.

Fears of attacks by the Islamic extremist group al-Shabaab dogged the historic vote, which was limited to lawmakers instead of the population at large, with members of the upper and lower houses of parliament casting ballots at a heavily guarded former air force base in the capital, Mogadishu, while a security lockdown closed the international airport.

The new president represents a generation of Somalis scattered abroad by conflict who have cautiously begun to return to help their homeland recover.

(FRANCE 24 with AP)
{ "pile_set_name": "OpenWebText2" }
Pakistan's opening batsman Ahmed Shehzad has denied reports that he broke the glass of the dressing room in frustration after missing out on a century against Balochistan during the Pakistan Cup.

Shehzad, who was made captain of the Khyber Pakhtunkhwa side after Younis Khan's sudden decision to leave the tournament, was dismissed for 79 runs off 70 balls, and rumors claimed he broke the glass in frustration.

Shehzad, while talking to ARY News, said that such rumors are not true, adding that his bag was placed by the side of the glass and that, while setting his bat down there, he accidentally broke the glass.

Shehzad said that he wanted to clear the air and the controversy that was brewing because he is the captain of the side and he feels it is his responsibility to clarify such a situation.

Ahmed Shehzad also said that the news regarding the PCB's decision to fine the right-handed opener was not true, as he has received no such notice from the Pakistan Cricket Board.
{ "pile_set_name": "OpenWebText2" }
In a game against the Phoenix Suns, Laker Kareem Abdul-Jabbar scored the first -- and only -- three-pointer of his National Basketball Assn. career. The long-awaited achievement came four days after Abdul-Jabbar scored his 36,000th point. The Hall of Fame center, six-time NBA Most Valuable Player and master of the skyhook scored a league record 38,387 points in 20 seasons before retiring in 1989.
{ "pile_set_name": "OpenWebText2" }
ZURICH, Switzerland, Feb. 27 (UPI) -- Officials at a Swiss business school said a teacher was fired after pornographic videos were projected on his blackboard during class.

Students in the man's class at KV Zurich Business School said the teacher apparently forgot the projector was hooked up to his computer Monday when he searched the Internet for pornographic pictures of women with amputated limbs, Swiss newspaper 20 Minutes reported Thursday.

Rene Portenier, director at the school, confirmed the incident and said the teacher had apologized to the class and to officials. He said the man had no record of previous incidents at the school and was well-liked by his students.

Portenier said officials decided not to immediately dismiss the teacher, but the chairman of the school's council, Rolf Butz, said Wednesday the instructor will be laid off. Butz, who is also director of the Merchants Association Zurich, said "such behavior is unacceptable."
{ "pile_set_name": "OpenWebText2" }
EXCLUSIVE

Lala Kent's new guy is free to date her or whoever the hell he wants -- because he's officially single in the eyes of the law.

Hollywood producer Randall Emmett just finalized his divorce from his actress wife of eight years, Ambyr Childers. It looks to be nice and neat ... everything has been settled privately, and all the terms are confidential -- including spousal and child support.

Randall and Ambyr got hitched in 2009, got separated in 2016 and filed for divorce in January of this year. They have two minor children -- 7 and 4 years old.

Ambyr's had roles on "Ray Donovan" and "All My Children" ... Randall's got an EP credit on "Power" and helped produce Scorsese's flick "Silence."

As for Lala ... the "Vanderpump Rules" star was seen kissing Randall at the beginning of December at an event in Bev Hills ... before he was formally divorced.
{ "pile_set_name": "OpenWebText2" }
IN THE SUMMER of 2008, Fred Karger was keeping a close eye on the California ballot initiative known as Proposition 8, the measure that would eventually outlaw gay marriage in the state. He didn't have much background in the marriage-equality movement—hell, he'd only really been out for a few years. But after retiring from 30 years in politics he wasn't quite ready to give up the game, and Prop 8 struck a nerve with him. He checked out the campaign finance reports for the main organization backing the initiative, ProtectMarriage.com. Polls had shown that the initiative was likely to fail, and the fundraising records dovetailed with that—Prop 8's supporters weren't raising nearly as much money as their Hollywood-backed opponents.

But then, in midsummer, Karger noticed something new. Suddenly, money started pouring in to ProtectMarriage.com, and by August, the group was raising about $500,000 a day. Karger wondered where all the money was coming from. Most of the donors, he soon realized, had never made a political contribution before. Some had given to just one candidate: Mitt Romney. Quite a few were graduates of Brigham Young University. It wasn't hard to connect the dots: This was Mormon money.

Once he knew what to look for, Karger found Mormons everywhere in the Prop 8 campaign: as actors in the TV ads, as volunteers, organizers, and political consultants. Just as intriguing, he would discover eventually, the group that had done the lion's share of the work to get Prop 8 on the ballot to begin with, the National Organization for Marriage (NOM), also had deep ties to the Mormon Church—and the church itself had been engaged in a campaign to block gay marriage across the nation for more than a decade. What he was looking at, he realized, was a stealth campaign much like the ones he'd run during his long career as a Republican political operative.

As a political professional, Karger—who for decades worked for one of California's premier campaign consulting firms, a shop that had helped invent modern opposition research—was grudgingly impressed with what the Mormons were doing. "They completely altered the landscape," he says. "They took over every aspect of the campaign."

Karger estimates that Mormons ultimately contributed $30 million of the $42 million total raised in support of Prop 8, which passed easily in November 2008. (By contrast, anti-Prop 8 forces raised $64 million.) But if the opponents of gay marriage won the battle, they also ensured themselves a big headache. In Karger, they galvanized an adversary who has now dug in to fight for the long haul—and who brings a dramatically different skill set than the rest of the marriage-equality movement. As Karger notes, most of the prominent gay-marriage advocates are, well, married people: risk averse and unschooled in the political dark arts. "I'm a different kind of gay activist," he says. "I'm a little wilder."

HE'S ALSO a little more, well, Republican. At 14, growing up in Glencoe, Illinois, Karger took the train to Chicago to work phone banks for Nelson Rockefeller. He was deputy campaign director for former California governor George Deukmejian and spent 27 years with the Dolphin Group, one of the country's most sought-after Republican consulting firms. The firm did a lot of work with Lee Atwater, the late bad boy of Republican politics.
As part of Atwater's most infamous play, during the 1988 campaign against Michael Dukakis, Karger personally tracked down the victims of furloughed murderer Willie Horton and took them around the country for press events. "We made a huge splash," he notes. "This is kind of my niche."

Another of Karger's specialties at Dolphin was setting up Astroturf groups on behalf of corporations like Philip Morris—a phony restaurant trade group, for example, that lobbied against indoor smoking bans. Dr. Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California-San Francisco, helped expose Karger's front group, which he recalls slowed down smoking bans significantly. "Anybody who does that kind of work is a bad guy," he says. "It's deceitful." But he concedes that in the secretive gay-marriage foes, Karger has found the perfect foil. "He's very well positioned to out these guys because he knows how they work it."

If Glantz thinks of this kind of covert work as sleazy, Karger views it as a whole lot of good fun. Movie-star tan and buff enough to proudly go shirtless at 60, he has an expansive sense of humor about politics that masks just how focused he is on getting results. (He's been known to hand out three-dollar bills with pictures of Rick Warren, or photos of himself dressed as the Lone Ranger.) A fearless and inveterate gate-crasher, Karger isn't afraid to pull off nervy stunts, like masquerading as a restaurant lobbyist. In 2006, he waltzed into Vanity Fair's exclusive Academy Awards party with a fake Oscar statue and four hot chicks he'd met on the street, claiming to be part of the King Kong special effects team.

Karger says he honed his creative, chameleonlike qualities early in life out of necessity—to hide his sexual orientation. As a young man, being gay was his "deepest, darkest secret. I grew up thinking I was much less of a person than my friends and counterparts. Twenty-seven years ago, no one was out. Growing up I had two choices: I was either going to be like Liberace or like Paul Lynde [a.k.a. Uncle Arthur on Bewitched and a regular on Hollywood Squares]. Neither was out."

He did have long-term relationships, including one that lasted 11 years. But his partner had to hide all the photos and flee the house when Karger's family came to visit. His father, a Chicago stockbroker, had expected him to work in the family firm, but Karger—figuring his secret would get out eventually if he stayed in town—moved to Los Angeles. Through contacts he made at the 1972 Academy Awards (which he'd crashed as part of a frat fundraiser), he ended up appearing in commercials, including a famous shaving cream ad directed by the late John Hughes. He appeared in '70s shows like Owen Marshall: Counselor at Law and was on the verge of minor stardom when he won a top role in the pilot for a spin-off of Welcome Back, Kotter. But the show was canned, so Karger found work on a political campaign and discovered his true calling.

KARGER INSISTS there was never conflict between his sexual orientation and his campaigning for Republicans. The Dolphin Group, he says, worked mostly with socially moderate candidates. Even Ronald Reagan, he recalls, had an inner circle that was "very gay. Nancy was very gay-friendly.
He was a wonderful politician, a wonderful man." Today Karger considers himself a "Schwarzenegger Republican," noting that the California governor supports gay marriage, "unlike President Obama."

Still, it was only after leaving Dolphin in 2004 that Karger became involved in gay causes—or, to be precise, the cause of a historic gay bar in Laguna Beach, the Orange County surfer town where he lives part time. In 2005, billionaire Steven Udvar-Hazy was seeking to shut down the 43-year-old Boom Boom Room to build a luxury hotel. Karger thought it might be fun to try to save the Boom. It wasn't an easy decision—he'd only been out to family and close friends, and by joining this fight he'd announce his sexuality to the world.

Still, after some soul-searching, he threw himself into the project with the flair of an actor and the chops of an oppo-research man. He took out a Variety ad appealing to George Clooney and Brad Pitt to save the club (the actors had been rumored to be interested in buying it) and picketed their appearances at Grauman's Chinese Theatre. To help engage the locals and raise a little money, he ran a male calendar contest, using Laguna Beach's primary natural resource: hot young guys. He delivered wheelbarrows full of petitions to the City Council and finally won its support for the campaign. In the end, the Boom ended up closing anyway, but Udvar-Hazy's hotel plans have stalled and the building is up for sale, leaving the possibility of a resurrection.

The Boom campaign taught Karger an unexpected lesson. After years of depending on gobs of money and powerful allies for his campaigns, it turned out that all he really needed was the Internet. "I didn't have to raise any money! Which is of course the least enjoyable part of politics," he says with a laugh.

A few months later, Karger read a newspaper story about the push to put Prop 8 on the California ballot. A couple of wealthy San Diego businessmen had contributed a lot of money to NOM, the Mormon-connected group that had been largely responsible for gathering the signatures to qualify the initiative. One of them, Terry Caster, who the paper reported had given more than $160,000 to the effort, said that "without solid marriage, you are going to have a sick society."

For Karger, who's happily single, gay marriage is a bit of a theoretical concept. But he thinks a lot about the hurtful messages gay kids hear growing up, and Caster's comment made him mad. So he fired up his laptop and launched Californians Against Hate, a one-man shop dedicated to publicizing the names of major Prop 8 donors. "I wanted to make it socially unacceptable to take away the rights of a minority," he explains—to, as it were, push such behavior into the closet.

In July 2008, he held his first rally in front of a San Diego hotel owned by Prop 8 donor Doug Manchester, calling for a boycott. It caught on, and soon major clients were moving their meetings away from Manchester's properties. By last spring the hotel's new, gay PR guru let it be known that Manchester would be donating $25,000 in cash and up to $100,000 in hotel credits to any LGBT group that applied. (Few takers so far.)

As the battle over Prop 8 raged, Karger continued to expose donors and work the press. He tipped off the Wall Street Journal about the Mormons' involvement, and in September 2008 the paper broke the story.
And he kept finding new ways to hound his adversaries: In monitoring post-election campaign finance reports, he noticed that the Mormon church was only reporting $2,078 in nonmonetary contributions to the Prop 8 effort. That didn't square given that the church had mobilized a huge number of volunteers (many of them former missionaries with ample door-knocking experience), brought in busloads of supporters from Utah, arranged satellite broadcasts of church leaders, and produced a host of slick ads plus a top-notch website. Karger filed a formal complaint with the California Fair Political Practices Commission, a move that prompted a spokesman to claim that the church had spent "zero dollars" on Prop 8. Two months later the church filed a new report saying it had given $190,000 worth of nonmonetary contributions in the few days before the election (after the filing deadline for the earlier report). California election officials are continuing to investigate.

As he made a name for himself in the Prop 8 fight, Karger began getting anonymous tips about the church leadership. One of those tips led him to a treasure trove of internal church documents that laid out a remarkably organized campaign to fight gay marriage nationwide. The church, Karger realized, had been involved in this fight—quietly, but very effectively—for much longer than he'd thought.

THE FAITH of a persecuted people, many of whom starved to death on their trek to Utah, Mormonism has always emphasized the role of marriage and childbearing (hence its early practice of polygamy) to boost its numbers. Mormons must marry and have children to achieve the highest levels of divinity. There's not much room in that scheme for same-sex marriage, at least not among a leadership dominated by men in their 70s and 80s. In 1995, the church made its position official by issuing a proclamation carrying the weight of scripture that declared marriage between a man and a woman the bedrock of society. Even before that, the church had been working behind the scenes to block gay marriage nationwide—and aligning itself with the Catholic Church, which, elders noted in internal memos, had "more respect" than the Mormons.

To execute that vision, the church used its public affairs committee, a body organized much like a political consulting firm. Its leadership has included high-ranking church elder Richard B. Wirthlin, a legendary California political consultant who was Ronald Reagan's pollster. Wirthlin was a major player in the Prop 8 fight (some of his relatives even appeared in ProtectMarriage.com's TV ads). The public-affairs committee for decades tracked gay-marriage efforts in every state, almost single-handedly blocked it in Hawaii in the 1990s, and had a significant role in killing it in Alaska. The documents Karger obtained, some of which he has posted at mormongate.org, show that in Hawaii, the church went to the trouble of creating a front group to hide its role. Memos detail how the church looked for an "articulate middle-age mother who is neither Catholic nor LDS" to represent the organization—which would claim to also focus on prostitution and gambling, but would, in fact, be devoted solely to abolishing gay marriage.

The documents convinced Karger that the Mormons had also created a front group to fight gay marriage in California. That group, he believes, is NOM, which has also been active on the issue in Massachusetts and Maine, and which was primarily responsible for putting Prop 8 on the ballot.
Its board had deep connections to the church, including a former Brigham Young University professor whose family is part of the top church hierarchy. NOM's president is Maggie Gallagher, the family-values activist who was exposed in 2005 for failing to disclose payments she received from the Bush administration.

Karger began hounding NOM for information about its finances, such as the tax forms every nonprofit must make available to the public. He contacted the IRS, various NOM offices, even sent an ally to the group's headquarters in New Jersey—where, despite repeated visits, no one answered the door. He struck out.

Brian Brown, NOM's executive director, says Karger is guilty of "religious bigotry." There is, he says, no factual basis for his claims that NOM is a front for the Mormon Church. "Fred Karger has a history of being untruthful and making false attacks on NOM and in general trying to intimidate and harass [NOM] supporters. Frankly he's an embarrassment to those who want to civilly debate the same-sex-marriage issue. He has no basis in reality. We see Fred Karger as someone who is wasting our time."

Yet Karger's muckraking has clearly struck a nerve with NOM. In January 2009, with the help of the Indiana-based Christian right law firm Bopp, Coleson & Bostrom, the group sued the state of California, challenging the law that requires disclosure of ballot-initiative donors. NOM alleges that the requirements prompt harassment of donors—in good part, court documents suggest, via lawn-sign theft. It's a serious case from a group of lawyers who have an excellent track record at overturning campaign finance laws. (James Bopp, one of the firm's name partners, brought the original lawsuit in Citizens United v. FEC, the Supreme Court case that in a seismic January ruling led the court to throw out federal limits on corporate spending in elections.) The California lawsuit could have implications far beyond the state, striking at the heart of more than 40 years of transparency legislation.

In connection with the suit, NOM has subpoenaed Karger—demanding, ironically, the exact same kind of financial information it's refusing to give him. Karger is fighting the subpoena. While the lawsuits proceed, Karger is continuing to poke NOM with a stick. When news of Carrie Prejean's sex tapes leaked, he demanded to know if NOM would fire the former Miss California, who had appeared in one of its ads. He's sent out press releases calling on Maggie Gallagher to take a lie-detector test. But if his opponents took the gimmicks to mean Karger wasn't serious, they were in for a surprise.

AFTER PROP 8, the gay-marriage battle moved to Maine, whose legislature legalized same-sex marriage last spring. A ballot initiative to void the law soon followed, and Karger started tracking Maine campaign filings. Right away, he noticed that few individuals from Maine had given any money to the local group working to get the initiative on the ballot. Instead, the group, Stand for Marriage Maine, was getting a huge share of its money from NOM. Maine law requires any organization raising more than $5,000 for a ballot initiative to register with the state and report the names of donors who give more than $100. But NOM never registered in Maine. Karger suspected that the boycotts had scared donors, and that NOM was trying to funnel their money to the Maine campaign anonymously.
Sure enough, he intercepted 79 emails NOM sent out to supporters after the success of Prop 8 in California and found that 16 of them were essentially fundraising appeals for NOM's work in Maine. "Every dollar you give…is private, with no risk of harassment from gay marriage protesters," one promised. Another read, "Donations to NOM are…NOT public information."

Armed with those emails, Karger asked Maine election officials to investigate what he called NOM's "money laundering." And on a gloomy day in October, he traveled to Augusta to testify on his complaint. He was going up against some of the nation's top campaign finance lawyers, and he was pumped. "I've spent 30 years in politics, managing campaigns," Karger told the ethics board, a bipartisan commission appointed by the governor and the legislature. "I've filed and read literally thousands of campaign reports in probably 25 states. I've never seen this type of blatant disregard for election laws." The commissioners sat in stony silence. Karger was convinced he'd lost.

A career in politics has trained Karger to make realistic assessments. The night before the hearing he predicted, correctly, that gay marriage would go down to defeat in Maine. Putting a minority's civil rights on the ballot is always a dicey proposition, and the movement, in his view, was "not ready for prime time." His strategy now is effectively the same one he'd once deployed in the tobacco fight: slow down the other side, force them to spend their money, and embarrass them where you can. (In that spirit, Karger recently launched an ad campaign encouraging viewers to ask Mitt Romney "to urge the Mormon Church to stop its nasty campaign against gay marriage.")

It's likely to be a long fight—in Maine, too, the anti-gay-marriage measure prevailed at the polls last fall—but Karger did score at least a tactical victory. At the conclusion of the Maine ethics hearing, the commissioners voted to investigate NOM, and a federal judge cleared the way for the state to release NOM's donor list. After the announcement, NOM director Brown ended up side by side with Karger among the TV cameras.

Karger may be a gay man fighting a movement that considers him an offense to God, but he is first and foremost a political operator. He shook Brown's hand and joked with NOM's lawyer about his impending deposition. Afterward, leaving the building, Karger was buoyant. "If I had a budget, I'd be dangerous," he said with a big smile.
{ "pile_set_name": "OpenWebText2" }
Story highlights
- Ivanka Trump says she was 'glad' her father issued an immediate apology
- Trump has defended her father against allegations of sexism in the past

(CNN) Ivanka Trump says her father's lewd and sexually aggressive comments from a leaked 2005 Access Hollywood video were "clearly inappropriate and offensive."

"My father's comments were clearly inappropriate and offensive and I'm glad that he acknowledged this fact with an immediate apology to my family and the American people," Trump's eldest daughter said in a statement to Fast Company published Monday.

The statement was sent to the magazine after Trump participated in a profile that touched on the intense media scrutiny of the campaign trail and how she shrugs off reports she knows are wrong.

"The greatest comfort I have is the fact that I know my father. Most of the people who write about him don't. I do," Trump said. "So that gives me an ability to shrug off the things that I read about him that are wrong."
{ "pile_set_name": "OpenWebText2" }
Yeddyurappa has 18 cases against him, of which 14 are punishable with life imprisonment.

Names of the chief ministers of Punjab and Karnataka — Captain Amarinder Singh and H D Kumaraswamy — along with some former chief ministers found mention in the list of 4,122 criminal cases pending against serving and former legislators that was submitted before the Supreme Court on Tuesday.

Former Karnataka chief minister B S Yeddyurappa also figured in the list, along with serving Kerala minister M M Mani and sitting NCP MLA from Gujarat Kandhalbhai Sarmanbhai Jadeja.

A report containing the list said nine cases are registered against former Karnataka BJP MLC and mining baron G Janardhana Reddy. Of these, eight are punishable with life imprisonment while one is punishable with imprisonment for seven years.

The apex court, which is dealing with a PIL on the issue, was informed that an FIR registered in 2017 with the SIT Lokayukta against Kumaraswamy, for an offence punishable with life imprisonment, is awaiting a final report, pending probe.

The report said a corruption case against Captain Amarinder Singh has been pending since 2007 and charges have not yet been framed.

The submissions were made in the report filed by senior advocate Vijay Hansaria, who is assisting the court as amicus curiae in the matter along with advocate Sneha Kalita.

It said Yeddyurappa has 18 cases against him, out of which 14 are punishable with life imprisonment.

In the case against Kerala Electricity Minister M M Mani lodged in 1982, charges have not yet been framed even though the charge sheet was filed on November 18, 2015.

The report said that in Gujarat, six criminal cases are pending against Jadeja, of which three offences were registered between 1994 and 1998 under TADA and/or the Arms Act, punishable with imprisonment for life or 10 years. "In all these three cases charges have not been framed and the matter is under stay either before the Supreme Court/ High Court," it said.

In Uttar Pradesh, it said, all 992 cases have been transferred to the special court for MPs and MLAs at Allahabad, both triable by Sessions as well as Magistrate. Charges have not been framed in 395 of these cases, and the High Court has put a stay on 14.

It said 22 FIRs are registered against former Samajwadi Party MP Ateeq Ahmed, out of which 10 are for offences punishable with imprisonment for life or death sentence, and are pending at various stages.

Twelve FIRs are registered against former Shiv Sena MLA Pawan Kumar Pandey between 1989 and 2017, out of which three are for offences punishable with imprisonment for life/death sentence, and are pending at various stages, it said.

It said six FIRs are registered against former Samajwadi Party MLA Khalid Azeem between 2003 and 2011, out of which five are for offences punishable with imprisonment for life/death sentence and are pending at various stages.

Six FIRs are registered against sitting (BJP) MLA Upendra Tiwari between 1996 and 2011, out of which three are for offences punishable with imprisonment for life/death sentence and are pending at various stages.

An FIR was registered against former MLA Chote Lal Gangwar, in which a charge sheet was filed on December 14, 1999, and the present status is not known, it said.

In Bihar, there are a total of 23 cases pending which are punishable with a life term, registered between 1991 and 2018.
The report said FIRs were registered against sitting legislators Shamim Akhtar, Anant Kumar Singh and Randhir Kumar Soni of the JD(U) and Sarfaraz Alam of the RJD for offences punishable with imprisonment up to life, but charges have not yet been framed.

The report said an FIR was registered against sitting BJP MLA Shashikant Mohabat Ram Pandya in 1998 for offences punishable with imprisonment up to life; the charges were framed in 2014 but the case is at the stage of recording prosecution evidence.

In Karnataka, a total of 58 cases punishable with imprisonment for life are pending against MLAs, most of them arising from the SIT constituted by the Lokayukta. "Out of these 58 cases, 34 cases are against 9 sitting MLAs and 24 cases are against 6 former MLAs. Most of these cases were registered in 2015 and the final report is awaited," it said.

In Madhya Pradesh, all 168 cases have been transferred to the special court for MPs/MLAs at Bhopal, both those triable by Sessions courts and those triable by Magistrates, the report said, adding that a total of 36 cases punishable with imprisonment for life, registered between 1998 and 2018, are pending. Three FIRs are registered against Congress leader Digvijay Singh, all of which are punishable with imprisonment for life, it said.

In Maharashtra, 31 magisterial trial cases have been transferred to the Chief Metropolitan Magistrate, Mumbai, who has been designated for cases of MPs/MLAs in terms of the orders of this court; they involve 23 sitting legislators, but in 21 cases charges have not been framed.

Thirty-three criminal cases are pending against former Congress MLA Ramesh Chandra Jena in Odisha, of which five were registered under Section 302 IPC (murder), punishable with a death sentence or imprisonment for life.

In Tamil Nadu, 321 cases are pending, of which only 71 are pending trial, and in 188 cases even charges have not been framed. In West Bengal, 231 of 269 cases have been transferred to the special court for MPs/MLAs, while 38 sessions trial cases of MPs/MLAs have been transferred to a special court in Andhra Pradesh.
{ "pile_set_name": "OpenWebText2" }
In one of her more controversial appearances in the Wasilla church, Palin told a group of ministry students in June to pray that sending troops to Iraq was part of "God's plan." In a speech this month at a deployment ceremony for her Iraq-bound soldier son, Palin called the conflict a "righteous cause." The linked article suggests that Palin's political instincts kept her from translating some of her more controversial fundamentalist beliefs into policy. That is hardly reassuring given the greater power she would wield as vice president, and potentially as president. Douglas Wead misses the point when he asks: "Are we saying [evangelical Christians] can't participate in public life?" No. We're asking how, if at all, those beliefs shape the candidate's view of appropriate public policy.

"It's legitimate to ask questions about candidates who come from a fundamentalist environment with a black-and-white worldview, and want to know how it would affect their approach on all kinds of issues," said Paul S. Boyer, a retired University of Wisconsin history professor who has written about the role of religious prophecy on public policy.

Palin can hold whatever religious beliefs make sense to her, but when those beliefs inform her view of public policy, it's important to understand them. We've seen where a president with a "black and white worldview" takes the country. Palin's belief that creationism should be taught in science classes, and that God has a plan for the United States to fight righteous wars against oil-rich countries, demonstrates that Palin (like George Bush) is out of touch with a reasonable, mainstream approach to governance.
{ "pile_set_name": "OpenWebText2" }
MEXICO CITY, April 23. Alejandra Benítez, the new Minister of Sports in Nicolás Maduro's government, caused a sensation with her appointment. She is a dentist and an Olympic fencer who competed at Athens 2004, Beijing 2008 and London 2012.

Alejandra Benítez draws attention for her physical beauty, which has allowed her to pose sensually in several photo shoots. Before accepting the post, she served as an alternate deputy in the National Assembly for the Caracas region.

Yesterday, after being named to the post, Benítez recalled: "I have been a fighter for sport, for the vindication of athletes and coaches, and for the social struggle."

The new Minister of Sports said she has a great commitment, "like when you carry the country on your shoulders in a competition. Now I am going to carry the country on my shoulders again, together with all those sports collectives that wanted an athlete at the head of the Ministry."
{ "pile_set_name": "OpenWebText2" }
Silvio Pettirossi Airport, Paraguay (Shutterstock.com)

An Aerolíneas Argentinas flight that was scheduled yesterday, Sunday, to connect Asunción, Paraguay, with Buenos Aires had to make an emergency landing minutes after takeoff because of a failure in one of its turbines. It is the second malfunction in recent hours on the flag carrier's aircraft, after the cabin depressurization on flight AA2667, which had departed yesterday at 1:25 p.m. from Bariloche and was due to arrive at Aeroparque.

Fernando Gallardo, administrator of Silvio Pettirossi Airport, confirmed to the newspaper ABC that the aircraft landed without major problems. Meanwhile, the president of Paraguay's National Civil Aeronautics Directorate (DINAC), Édgar Melgarejo, said: "It was confirmed that there were problems in a turbine; the cause is being determined. It could have been birds, something sucked into the turbine; it could also have been stones, which is not unusual." The authorities also said that the technical report on the malfunction would be released today.

Flight AA2667, for its part, which had departed Bariloche at 1:25 p.m. on Sunday and was due to arrive at Aeroparque, had a problem and the pilot decided to descend shortly after takeoff. A video recorded by a passenger from their seat circulated on social media. In the footage, the oxygen masks can be seen dropping, although few passengers put them on; their use was not necessary since the plane was below 14,000 feet.

[Video: emergency landing of an Aerolíneas Argentinas plane]

"We are now at the Neuquén airport after an emergency landing due to cabin depressurization (that is what we were told). It was 15 minutes of real fright and use of oxygen masks," one of the passengers who lived through the episode explained on Twitter. The woman said the situation was "well handled by the captain and the crew," that all the airline's staff "behaved very professionally," and that, a few hours later, "all the passengers were rebooked" on other flights departing the same day for the Federal Capital.

For its part, the Civil Aviation Accident Investigation Board (JIAAC) reported that its head office is investigating what happened and why the cabin depressurized and the plane had to descend at Juan Domingo Perón International Airport. Aerolíneas Argentinas stressed that "there was no panic or any disturbance, since the crew kept passengers informed of the situation at all times" and that "most of the passengers — 98 in total — were rebooked."
{ "pile_set_name": "OpenWebText2" }
[Photo gallery, 15 images: Lyle Jeffs, Preston Yeates Barlow, Kimball Dee Barlow, Federal Judge Ted Stewart, Rulon Barlow, Nephi Steed Allred, Winford Johnson Barlow, Kristal Meldrum Dutson, John Wayman, Seth Steed Jeffs and Lyle Steed Jeffs; courtesy photos from the Washington County Sheriff's Office and the Davis County Jail, and Salt Lake Tribune file photos by Trent Nelson, including Lyle Jeffs leaving the federal courthouse in Salt Lake City on Jan. 21, 2015, and speaking to FLDS followers.]
{ "pile_set_name": "OpenWebText2" }
Following up on a segment from last night's show, it appears the U.S. House of Representatives, just nine months into the current Congress, can't think of anything to do. The Republican leadership hasn't scheduled many work days for the remainder of 2013, and they're now considering a plan to scale back even further:

For the first time in months, House Republicans are facing no immediate cataclysmic deadlines, and GOP leaders are struggling to come up with an agenda to fill the 19 legislative days that are left in 2013.

Need evidence? The House votes Monday evening and will finish its work week Wednesday. After that, the House is out of session until Nov. 12. Internally, Speaker John Boehner (R-Ohio) and senior Republicans aren't discussing coming back early from the scheduled recess, but instead, they are wondering if they'll cancel some of the remaining days in session.
{ "pile_set_name": "OpenWebText2" }
“Lady Catelyn, you are wrong.” Brienne regarded her with eyes as blue as her armor. “Winter will never come for the likes of us. Should we die in battle, they will surely sing of us, and it's always summer in the songs. In the songs all knights are gallant, all maids are beautiful, and the sun is always shining.” –George R.R. Martin, A Clash of Kings Fantasy Flight Games is proud to announce Called to Arms, the second Chapter Pack in the War of Five Kings cycle for A Game of Thrones: The Card Game! The banners have been called in the War of the Five Kings. In the north, Robb Stark gathers his armies around Riverrun, preparing to strike out at Tywin Lannister’s forces. Joffrey Baratheon has ascended the Iron Throne in King’s Landing, Renly Baratheon has been crowned in Highgarden, and on the islands of Dragonstone and Pyke, Stannis Baratheon and Balon Greyjoy plot their own rise to power. Before long, great armies will march onto the field of battle, while subtle intrigues and assassins’ daggers decide the fates of countless others. Like Across the Seven Kingdoms before it, the Called to Arms Chapter Pack continues to follow the events of A Clash of Kings. A new King version of Balon Greyjoy challenges all enemies of the Iron Isles, even as two new attachments give you ways to crown your other characters and make them Kings. You’ll also find iconic characters like Dolorous Edd and Shae entering the game for the first time, even as other cards lend new importance to loyal cards. Finally, this pack continues to introduce a focus on the seasons of Westeros with two new agendas! Kings of Summer, Kings of Winter In Westeros, seasons can last for years. A long summer means bounteous harvests and prosperity, even for the smallfolk. A harsh winter can doom hundreds to death through cold and starvation. As A Clash of Kings begins, white ravens fly out from the Citadel in Oldtown, bringing word of summer’s end. Soon, everyone in Westeros, from the greatest lord to the lowliest servant will need to contend with the howling winds of winter. It’s fitting, therefore, that summer and winter begin to play a larger role in your games of A Game of Thrones: The Card Game. Called to Arms introduces two new agendas to the game: Kings of Summer (Called to Arms, 37) and Kings of Winter (Called to Arms, 38). Each of these agendas calls you to champion the season of Winter or Summer and gives you benefits in keeping with your chosen season. For instance, you may choose the Kings of Summer agenda. With this as your agenda, you are prevented from including any Winter plots in your plot deck, and the reserve on each player’s revealed plot card is increased by one. Furthermore, while there are no Winter plot cards revealed, you can increase the gold on your revealed Summer plot by one. The Kings of Winter agenda, on the other hand, diametrically opposes the Kings of Summer agenda. Kings of Winter prevents from including any Summer plots in your plot deck, and it reduces the reserve on each player’s plot card by one. Furthermore, while you have a Winter plot revealed, you can reduce the gold on each opponent’s non-Summer plot card by one. Ultimately, these two agendas institute a season—either the balmy heat of summer, which blesses both players with increased reserve, or the freezing chill of winter, which may force you to discard cards you would have rather kept. Of course, the agenda wouldn’t be very useful if it didn’t give you some kind of benefit over your opponent. 
Deciding if your deck cares more about benefitting itself or hurting your opponent can be a useful way of determining which agenda best suits your deck. It’s also important to consider the greater impact of increased or reduced reserve. For instance, the Kings of Summer agenda increases each player’s reserve, but that won’t do your opponent much good if you’re playing House Lannister and discarding your enemy’s cards as fast as he draws them. Holding extra cards in your own hand can prove extremely valuable, however, and the extra gold from Kings of Summer is always useful for ambushing characters or fueling Tywin Lannister (Core Set, 90). Alternatively, you might want to use the Kings of Winter agenda to reduce both players’ reserve and attack your opponent’s gold supplies. House Stark has already shown its preference for Winter with cards like Winterfell (Wolves of the North, 17) and As Hard as Winter (Wolves of the North, 22). A reduced reserve only hurts if you’re close to your reserve in the first place, and factions like House Stark and House Greyjoy are well known for running out of cards relatively quickly. In these circumstances a reduced reserve and less gold can hurt your opponent much more than yourself. Of course, you’ll need plenty of Summer and Winter plots to gain the full benefit of these agendas, and this Chapter Pack introduces a new plot for each season. On one side, you gain Summer Harvest (Called to Arms, 39). This plot has a gold value of X, and when it’s revealed, you can choose an opponent. The value of X is then two higher than the printed gold value on your opponent’s revealed plot—almost guaranteeing that you come out ahead, at least economically. On the Winter side, we encounter Winter Festival (Called to Arms, 40). These seasonal celebrations advance you much more directly towards victory, but they can also be easily disrupted by the opposite season. If you have Winter Festival in play, when the challenges phase ends, you immediately gain two power for your faction, provided there are no Summer plots revealed. When you get close to victory, Winter Festival may just give you the tools you need to seal the game in your favor, especially for factions like the Night’s Watch that prefer to sit back and defend challenges. The Ravens Are Flying Ravens have been loosed from the Citadel, bringing news of a change in season. Will your faction bask in the sun and take full advantage of summer’s bounty? Or will you harden yourself to the winter snows and destroy your enemy in his time of weakness? With the Called to Arms Chapter Pack, you can call upon the power of these agendas for any faction. Look for the Called to Arms Chapter Pack in the third quarter of 2016!
{ "pile_set_name": "OpenWebText2" }
About This Game

We have re-imagined Touhou as an ARPG, creating this "Danmaku-Shooting-ARPG". Fight against different kinds of enemies, and defeat the powerful bosses of Gensokyo! We hope this game will be a good experience for those who are both new and knowledgeable of Gensokyo - the world of Touhou Project!

- Dodge, shoot! (We know you want more playable characters~!)
- A variety of enemies
- Exciting Boss Combat
- Explore Gensokyo
- Solve the Incident!

A game from MyACG Studio, a Chinese doujin game fan circle.

================================================================

A brand new "Danmaku-Shooting-ARPG" game. We have kept the Danmaku features of Touhou, and re-imagined it as an ARPG! Control characters from Gensokyo with interesting and unique abilities. Use their special abilities to defeat a variety of enemies and bosses.

An "Incident" happens again as usual. Strange people are appearing in Gensokyo! Who are they? Where did they come from? Why did they come here? But no matter, you have to defeat them!

We spent a lot of time polishing the boss fights. Fight against famous Touhou characters, find their weaknesses, and save Gensokyo!

You will explore Heaven, the Hakurei Shrine, Misty Lake, the Scarlet Devil Mansion, and more famous places in Gensokyo! Explore the world of Gensokyo and find out what's going on!

It's an Incident! But this time, "Incident resolver -- Hakurei Reimu" doesn't appear! Where is she? Gensokyo is in danger!

================================================================

Touhou Project Creator: ZUN
Developer: MyACG Studio

This game complies with the guidelines for releasing Touhou doujin games on Steam.

Enjoy!
{ "pile_set_name": "OpenWebText2" }
"We struck a kind of verbal deal with them, telling them: I don't want any more attacks on French soil, and in exchange, I'll let you come to France and guarantee that nothing will happen to you." This shocking sentence, delivered on January 30 in the office of the judge investigating the rue des Rosiers attack, comes from a man in his eighties. But not just any man. His face ringed with wrinkles but his memory intact, Yves Bonnet, former head of the Direction de la surveillance du territoire (DST), opened up about the attack that bloodied a Jewish establishment in the heart of the Marais in 1982, a few months before he took charge of the secret service. For the first time, the former spymaster acknowledged before the courts the existence of a secret agreement between France and Abu Nidal, a terrorist group potentially responsible for the killing. An oral pact unknown to the many investigators and magistrates who have worked on this unsolvable case over three decades.

On the official record, Yves Bonnet confirmed an "undertaking given to representatives of Abu Nidal that they would not be prosecuted in France." The Abu Nidal group was at the time an armed Palestinian movement, a splinter of Yasser Arafat's Fatah, which carried out massacres in France and abroad. Thirty-seven years ago, on August 9, 1982, at 1:15 p.m., at least three terrorists armed with submachine guns sowed death in the restaurant Jo Goldenberg, a fixture of the Jewish quarter of Paris. Three minutes later, after throwing a grenade and firing in bursts, they fled. Six dead and 22 wounded lay on the ground. Very early in the investigation, the responsibility of the Abu Nidal group was raised. The bullets found at the scene came from Maszynowy wz. 63 models, a signature of the extremist organization.

'Attacks in Italy were none of my business'

Despite these strong suspicions, the former head of the DST agreed to arrange a clandestine meeting with the Abu Nidal group shortly after the attack. "It was my staff who saw them at the time," Yves Bonnet explained to the judge. "I am not going to denounce them. I am the one who takes responsibility for the agreement." The retired senior official did not detail the identity of the terrorists seen by his staff, but according to him, they were not the rue des Rosiers killers but their "accomplices." The pact was sealed: members of Abu Nidal sheltering abroad were authorized to "come to France, without risk" of being prosecuted; in return, they undertook "not to engage in any violent action." The DST is even said to have allowed two of the organization's terrorists to visit, in prison in France, the two men who murdered a representative of the Palestine Liberation Organization in Paris. "And it worked: there were no more attacks from late '83, through '84 and until late 1985," Yves Bonnet said with satisfaction during his hearing, rejecting the term "collaboration" in favor of "non-aggression." "If they then committed attacks in Italy, for example, that was none of my business as long as nothing happened on French soil."

Yves Bonnet, former director of the DST. /AFP/Jacques Demarthon

What credence should be given to this belated confession, 37 years on? Contacted by Le Parisien, Yves Bonnet stands by this pact, intended, according to him, to "ensure the safety of the French people."
To try to form an opinion, the investigating magistrate also summoned, on February 6 and 14, Jean-François Clair and Louis Caprioli, two former heads of counterterrorism at the DST. But both took refuge behind "defense secrecy" regarding the agreement. "I do not deny that there were contacts [with Abu Nidal]; that would be lying," was all the first would say. Was the presidency of the Republic aware of this secret agreement? Yves Bonnet says he "told everything" to Gilles Menage, then director of François Mitterrand's cabinet, but that officially "the Élysée knew nothing"...

'This is becoming an affair of state'

The victims, for their part, say they are shocked that France could have negotiated with those responsible for the rue des Rosiers attack. "If such a covert agreement was made, this becomes an affair of state," says Me Avi Bitton, a lawyer for civil parties. "A parliamentary inquiry must be set up, and not only into the rue des Rosiers case. Have such pacts been made with other organizations? It's possible, when you look at the conduct of the company Lafarge in Syria..." "It's a disgrace," thunders Yohann Taieb, a relative of a victim. "Can you imagine the intelligence services negotiating with Daesh today?"
{ "pile_set_name": "OpenWebText2" }
Transcript

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world's most pressing problems and how you can use your career to solve them. I'm Rob Wiblin, Director of Research at 80,000 Hours. Before we get into it, just a few quick announcements. If you think of yourself as part of the effective altruism community you should fill out the 2018 effective altruism survey. This helps keep track of who is involved, how they're trying to improve the world, and what they believe. I'll put a link in the show notes and associated blog post. If you want to get a high impact job you should check out our job board, which was recently updated with new vacancies. You can find it at 80000hours.org/job-board/. It's where we list the positions we're most excited about filling. Finally I just wanted to give a shout out to our producer Keiran Harris who has been doing a great job editing the episodes and generally helping to improve the show. And without further ado, I bring you Eva Vivalt. Robert Wiblin: Today I'm speaking with Dr. Eva Vivalt. Eva is a lecturer in the Research School of Economics at the Australian National University and the founder of AidGrade, a research institute that pools together hundreds of global development studies in order to provide actionable advice. Eva has a PhD in Economics and an MA in Mathematics from UC Berkeley, and an MPhil in Development Studies from Oxford University. She's also previously worked at the World Bank. She's a vegan, a Giving What We Can member, and principal investigator on Y Combinator Research's randomized control trial of the basic income. Thanks for coming on the podcast. Eva Vivalt: Thank you. Great to be here. Robert Wiblin: So, we're going to talk a bit about your career as an economist and the various findings that you've had in your research over the last five years. But first, what are your main research interests these days? Is there any way of summarizing it? Is there a core topic that you're looking into? Eva Vivalt: So a lot of my work is on really how to make better evidence-based policy decisions. And part of that, that I've recently gotten into, is looking more at priors that people may have, both policy makers and researchers. And there's lots, actually, to say about priors. But I think that's a direction that my research has gone recently that actually relates quite well to some of the previous stuff, the linkage being evidence-based policy. Robert Wiblin: There's a lot of heavy material to cover there later on in the show. But to warm up let's talk first about Y Combinator's basic income study – what is the study looking at and what motivates it? Eva Vivalt: Yeah, no, I'm really excited by this study. So essentially the study is to give out $1000 per month for either three or five years to a bunch of individuals who are randomly selected. So the randomization is at this individual level, it's not actually like giving, for example, everybody in an area the program. There's a control group as well that still gets some nominal amount too, hopefully, so that they continue to answer surveys and such. We're looking at a variety of outcomes. Things like time use, for example, like most economists would say that if you give people money they should actually work a little bit less, that's completely a rational thing to do. But if they are working less, what are they doing with their time instead?
Because it could be actually really good for people to work less if they are, for example, getting more education so they can get a better job in the future. Or taking care of their kids, et cetera, et cetera. There's all sorts of productive uses of time that one might find otherwise adding a lot of value. There's health outcomes, education outcomes. I should say this program is targeted to relatively poorer individuals and relatively younger individuals because the thought is it could actually change people's trajectory over time. Those are kind of the areas where we might expect the money to go a bit farther and to see slightly larger effects. Robert Wiblin: Interesting. Okay. So, given that it's from Y Combinator, which is a tech startup accelerator, is it kind of motivated by the concern that everyone's going to lose their jobs because of technology? Or is it just more prosaic issues around equality and lack of opportunity in the United States? Eva Vivalt: I think there's a variety of motivations here. So I think in the background somewhere there is this concern about technology potentially displacing workers. I think there's also some genuine utopian ideal of people should be able to do … Robert Wiblin: They shouldn't have to be wage slaves. Eva Vivalt: Yeah, yeah, yeah. It's not like it's all negative if people lose their jobs, because people could lose jobs in a good way. Nobody actually wants hard work in some regards. To be fair it's not really a great test of what happens if people lose jobs per se, because to do that what you'd want is a randomized control trial in which you fire people, which is not likely to happen anytime soon. Robert Wiblin: Not going to get past the ethics board. Eva Vivalt: Yeah. But I think this is more motivated by the idea that you can imagine some worlds in which what you would want to do is expand the social safety net. And if you are expanding the social safety net, this could be one relatively efficient way of doing so, and so let's look at what the effects of this particular kind of program would be. And you might imagine that some kind of program like this would probably start out with targeting relatively poorer individuals even though a true basic income program would target everybody. Robert Wiblin: So what do you expect to find, given past studies that are similar? And also, how many people are in this study? Eva Vivalt: We have about 1000 people in the treatment group, 2000 control and then this larger super control group for which we just have administrative data. It's actually a decent-sized experiment and there've not been … the most similar studies in the States are some of the negative income tax experiments and the EITC from the '70s; there's also I guess the Alaska Permanent Fund. The other similar ones I would say would be Moving to Opportunity and the Oregon Health Insurance Experiment. But these are all like … they've all got quite a lot of differences actually. So, Alaska Permanent Fund; everybody just gets a certain transfer. So, that one actually is universal. It's not very much of a transfer and you've got to use different approaches to evaluate it since everybody gets it. Oregon health insurance, well obviously that's health insurance. Negative income tax experiments, those were quite old and had a lot of differential attrition issues. Like I say, by now I think most economists would expect some effects on labor supply. There's loads of papers on labor supply elasticity.
I think there's a little bit less on what people do with their time otherwise. One thing we're doing is designing this custom time use app that people can put on their phones so we can sort of ping them and ask, "Hey, what are you doing right now?" Robert Wiblin: Is there a key uncertainty that it's trying to resolve? Like will people quit their jobs? Or will they become happier? Or will they spend more time on leisure with their family? That kind of thing. Eva Vivalt: Yeah, so rather than one key outcome, we've got like lots of different families of outcomes. So we've got health outcomes, we've got education outcomes, we've got financial health, we've got subjective wellbeing, we've got this kind of employment/time use/income stuff. We've actually even got some more behavioral things like political outcomes, do people have more or less inter-group prejudice and other-regarding preferences, that kind of thing. So, we've got actually quite a lot of things. Also doing some things relating to work on scarcity, that people under a lot of economic pressure might make worse decisions. Is that a short-term effect? A long-term effect? That kind of thing. So there's actually quite a lot of outcomes and sometimes when I talk to people about it they get a little bit confused. We're looking at so many different things, but I think for a study of this kind of cost it's actually really good to get a lot of different outcomes from it. Robert Wiblin: I just quickly did the maths and it looks like it should cost like 100 million dollars. Eva Vivalt: Not quite, but still quite high up there. Yeah. Robert Wiblin: I was just thinking, if you've got 1,000 people and you're giving them $12,000 each, that would come to 12 million each year of the study, plus then the control group and all the other on-costs and so on. It depends how long you run it, but it's a pretty serious expense. Do you worry about having too many outcome variables? Or I suppose, you'll be smart enough to adjust for the multiple testing problem. Eva Vivalt: Yeah, we're adjusting for that. We're basically — within a type of thing, so like health, we'll consider these as sort of like separate subject areas. So, there'll be like a paper on health, a paper on financial health, et cetera. And then within each of those papers we'll do all the appropriate family-wise error corrections, et cetera. Robert Wiblin: Yeah. Are you going to preregister the analysis do you think? Eva Vivalt: Yes, we will. Robert Wiblin: Excellent. That's great. So what's your role in the whole thing? There's quite a significant number of people involved, right? Eva Vivalt: Yeah, no, this is a great project. For the PIs it's myself and Elizabeth Rhodes, who's a recent PhD grad from Michigan, David Broockman, who's a Stanford GSB assistant professor, and Sarah Miller, who's a health economist at the business school at Michigan. So those are the PIs and then we've got like a larger advisory board. We're trying to keep in touch with both relevant academics, a bunch of senior researchers, as well as people obviously who are involved in other similar projects that we try to continue to talk with. Robert Wiblin: And what's your niche? Eva Vivalt: Well I'm just one of the PIs. "Just", with quote marks. I think I was originally brought on board partially for experience with impact evaluations and sort of these large-scale trials. Robert Wiblin: Yeah. When might we hope to see results from it? It'd be some years out. Eva Vivalt: Yeah it will. The shortest treatment arm, that's three years out.
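[Note: to make the multiple-testing discussion above concrete, here is a minimal sketch, in Python, of the kind of family-wise error correction Eva describes: running the adjustment within one family of outcomes, one paper per family, rather than across every outcome in the whole study. The outcome names and p-values are invented for illustration, and Holm's method is just one standard choice; the study's actual pre-registered procedure may differ.

from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for one family of outcomes (e.g. "financial health").
outcomes = ["savings", "debt", "credit_score", "food_security"]
pvals = [0.012, 0.034, 0.210, 0.047]

# Holm's step-down procedure controls the family-wise error rate:
# the chance of even one false positive within the family stays below alpha.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for name, p_raw, p_c, r in zip(outcomes, pvals, p_adj, reject):
    print(f"{name}: raw p = {p_raw:.3f}, adjusted p = {p_c:.3f}, reject = {r}")

Doing the correction family by family matches the paper-per-family structure Eva outlines: each paper controls its own error rate rather than paying a penalty for every outcome measured anywhere in the study.]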
Actually we'd be gathering data slightly before the very end of it because what we don't want to do is do the survey at the end of the three years and then we get the effect of people coming off the program, that kind of transition effect. We've got a baseline survey, midline survey and endline survey, and we've got a bunch of little intermediate surveys along the way that people can do just quickly by themselves on mobile. And for the big surveys, we're going to do the last of those like two and a half years in or so. And even if we get like some early results, we're not going to release the bulk of things until at least the end of that three-year arm because things can always change and we don't … because it's a very high-profile study, what we don't want is people to come away with some idea of how things went a year in and then three years in things have changed a lot but nobody listens to it. And it could also like affect some of the narrative. We don't want the subjects to hear about themselves in the media, right? That would not be great. Robert Wiblin: That would be disastrous really. Another exciting thing you're working on outside of your core research agenda is how to get people to accept 'clean meat', which we've recently done a few episodes on. That paper is called Effective Strategies for Overcoming the Naturalistic Heuristic: Experimental Evidence on Consumer Acceptance of 'Clean' Meat. What did you look at in that study? Eva Vivalt: Yeah, so we were interested in a few things. We were interested in looking at … I assume you've covered clean meat; clean meat is essentially, you can think of it as lab-grown meat or synthetic meat or some other kind of unpalatable terms, if you like. Robert Wiblin: It's the rebranding of that. Eva Vivalt: Yeah, it's the rebranding of that. So meat not from animals directly. Some people have got a knee-jerk reaction that, "Ew, this is disgusting. It's not natural," and so this is what we're calling this naturalistic heuristic, that sort of prevents people from being interested in clean meat. And we're looking at ways of overcoming that. We tried various methods like directly saying "look, things that are natural aren't necessarily good and vice versa." We tried another appeal that was more trying to get them to think about things that they are quite happy with even though they are unnatural. So maybe, prompt some sort of cognitive dissonance there. Like if they don't like clean meat they should also not like a lot of other things that they do like. Robert Wiblin: Vaccines. Eva Vivalt: Yeah, yeah, and I mean there's lots of foods that something has happened to them. Like they're fermented or they just changed a lot from the past anyways. Like corn nowadays looks nothing like corn a long time ago, chickens nowadays look nothing like chickens a long time ago, et cetera. And we also looked at giving people sort of a descriptive norms type of approach of; other people are very excited about clean meat so maybe you should be, too. It's a little bit tentative but it seemed like the approach that was sort of trying to prompt cognitive dissonance by telling them about how there's all these other unnatural goods that they like was maybe doing the best. The downside though is it did seem like quite a lot of … more people than I would have thought were actually quite negative towards clean meat. And especially, almost nothing did as well as — we had one treatment where we didn't know, a priori, how poorly people would respond to it.
So we thought we're going to prime some people with negative social information so that at least there's some people for whom they've got some kind of anti, you know, they've got some kind of naturalistic [crosstalk 02:02:14]. Robert Wiblin: Some prejudice against it. Eva Vivalt: Yeah, exactly. And it turned out that priming effect was pretty much bigger than anything else we found, which is kind of disappointing because you can imagine that the very first thing that other companies who produce conventional meat products will do, most likely, is to try to attack clean meat as like- Robert Wiblin: Gross. Eva Vivalt: Yeah. So that was a little bit unfortunate. And we also did another study where we were looking at the effects of knowing about clean meat on ethical beliefs because we thought actually if the … to some extent your ethical beliefs could be a function of what you think is like fairly easy to do. And so if you think that there is a good alternative out there, it could actually potentially change your views towards animals more generally, or the environment. So we were using this negative priming as an instrument for people thinking more or less positively towards clean meat and then looking at the effect on ethical beliefs, and there was actually some evidence that people were changing at least their stated ethical beliefs. I think we need to do a few more robustness checks there, but it was still quite surprising. Robert Wiblin: Yeah. Why do you think the 'embrace unnaturalness' message worked the best? Do you have a theory there? Eva Vivalt: My best guess is that it had something to do with cognitive dissonance and the fact that it was a relatively mild way of putting things. People don't tend to like fairly strong messages against what they hold dear. We weren't really undermining or trying to undermine what they were valuing, we were just saying, "Look, even by your own judgements here, to be consistent with your own things" … Robert Wiblin: 'You're right about these other things, so why not be right about this one too'. Eva Vivalt: Exactly. Robert Wiblin: 'You're so smart'. Eva Vivalt: It's a very positive message in a way. Robert Wiblin: How clear cut was the result? Are you pretty confident that that was the best one? Eva Vivalt: You know I'm not 100% confident. So this is why I don't want to oversell it because one could say this one was the one that sort of lasted the longest. We had like some follow ups. But at least in the short run, it could have also been — the descriptive norms might have done pretty well as well. So like it depends on whether you think — how we should weight the different rounds of data that we collected, right? And so we kind of pre-specified we were interested in the follow up but if you weren't interested in that, if you thought that actually the early data should be somewhat informative about the later data, maybe the later data was just a bad draw, for example, then, you know. So I wouldn't lean too, too hard on it. Robert Wiblin: Yeah. I mean I think that the naturalist heuristic is one of the most consistently harmful heuristics that people apply because it causes them to, in my view at least, reach the wrong answer on so many different issues. And I wonder if there's potential to just have a non-profit that just like pursues relentlessly this point that being unnatural is not bad, being natural is not good. They would help with clean meat, but also just so many other things as well.
Eva Vivalt: That’s a fair point, and while doing this we got introduced to so many people who are doing so much interesting work on vaccines, et cetera, that, you know …. Yeah, I think that especially in the future as biotech in general becomes better, et cetera, et cetera, there’s going to be so many new products that are unnatural that plausibly benefit from such a message. Robert Wiblin: We just need a generic pro-unnaturalness organization that can kind of be vigilantes and go to whatever new unnatural thing people don’t like. Eva Vivalt: Yes, exactly. Robert Wiblin: Well, it sounds like clean meat is just kind of being developed now so there’s probably going to be … we’ll want to try out a whole lot of other messages, because you’ve only tried out three here. Were there any other messages that you considered including that you would like to see other people test? Eva Vivalt: Hmm, that’s a good question. Things don’t come to mind at this moment but I do think there’s a lot more room for further research here. Especially, one thing I don’t know about … I’m imagining that people are using unnaturalness … they seem to also think it’s unnatural and therefore it’s not healthy and therefore it’s all this other stuff. But I think there could be more done to break that down a little bit more because presumably you could in fact, at least theoretically, think that something is unnatural without thinking it’s necessarily unhealthy. Robert Wiblin: So, you’ve written a paper that’s been pretty widely cited in the last few years called “How Much Can We Generalize From Impact Evaluations?” That was your job market paper, right? Eva Vivalt: Yep. Robert Wiblin: So that’s the work that you did during your PhD that you’re using to try and get a job, which we might talk about later. But, what question were you trying to answer with this paper? Eva Vivalt: Yeah. So at the time that I was writing it, there was quite a lot of impact evaluation being done on various topics like de-worming, bednets, et cetera. But not so much of an effort to synthesize all the results. And so I’d started this non-profit research institute, AidGrade, to gather all the results from various impact evaluations and try to say something more systematic about them. But in the course of doing so I was kind of shocked to see how much results really varied. And I think if you talk to researchers they’ll say, “oh yeah, we know that things vary. Of course, they vary. There’s obviously all these sources of heterogeneity.” But I think that the language people use when talking to the general public or to funders is actually quite a bit different. And there, you know, things get really simplified. So I think there’s a bit of a disconnect. And anyways, I was investigating a little bit some of the potential sources of heterogeneity. I mean, it was, at that point, what I’m looking at is observational data. Even if the data are coming from RCTs, because I’m just looking at the results that the various papers found. So I can’t definitively say the sources of the heterogeneity, but I could at least look for correlates of that and also try to say something about how, in a way, we should be thinking about generalizability. And how there are some metrics that we can use that can help us estimate the generalizability of our own results. 
Robert Wiblin: So basically, you’re trying to figure out if we have a study in a particular place and time that has an outcome, how much can we say that that result will apply to other places and times that this same question could be studied. Is that one way of putting it? Eva Vivalt: Yeah, because you’ll never actually have exactly the same setting ever again. Even if you do it in the same place, things hopefully would have changed from the first time you did it. So we might naturally expect to have different results. And then the issue is, well by how much? And how can we know that? Robert Wiblin: All right. So I’m the kind of guy who, when they load up a paper, skips the method section, skips straight to the results. So, how much can we generalize from studies in development economics? Eva Vivalt: Not terribly much, I’m afraid to say. This was really disheartening to me at the time. Gotten over it a bit, but yeah. I guess one main takeaway as well is that we should probably be paying a little more attention to sampling variance in terms of thinking of the results of studies. Sampling variance is just the kind of random noise that you get, especially when you’ve got very small studies. And some small studies just happen to find larger results. So I think if we try to separate that out a bit and a little bit down-weight those results that are coming from studies of small sample sizes, that certainly helps a bit. Another thing that came out, and this is just an observational correlation, but one of the more interesting ones and I think it’s now part of the dialogue you hear from people, is that results from smaller studies that were done with an NGO, potentially as a pilot before government scale-up, those ones were initially more promising. And then the scale-ups didn’t live up to the hype as it were. Like the government-implemented larger versions of the same programs, or similar programs, they didn’t seem to do so well. So that’s a little bit disconcerting, if we think that generally we start as researchers by studying these interventions in smaller situations in the hopes that when we scale it up we’ll find the same effects. Robert Wiblin: Hmm. So is the issue there that NGOs do these pilot studies and for those pilot studies they’re a bit smaller and the people who are running them are very passionate about it, so they run them to a very high standard? Or they offer the intervention to a very high standard. But then when it’s scaled up, the people who are doing it they don’t have a much money or they don’t know what they’re doing. And so the results tend to be much worse? Eva Vivalt: Yeah, I think that’s part of it. There could also be like a targeting aspect of this. You start with the places where you think there’s going to be particularly high effects. And then, as you scale it up, you might end up incorporating expanding the treatment to some people who are not going to benefit as much. And that would be, actually, completely fine. The worst story is where the initial NGO, or the initial study, everybody was very excited about it and put a lot of effort into it. And then maybe their capacity constraints worsened it when it was trying to be scaled up. So, that’s a little more disconcerting I guess. Robert Wiblin: Right. So let’s just back up a little bit. You said the answer is that we can’t generalize very much from these development studies. What is your measure of generalizability, statistically? And on a scale between zero and one, where do we stand? 
Eva Vivalt: Yeah, so that’s an excellent question. One of the things I argue for in my paper is that we should be caring about this true inter-study variance term. Which, I and some other people like Andrew Gelman call tau-squared. Which one has to estimate, you don’t know that up front. But that this is a pretty good measure of, well, the true inter-study variance. And there’s also a related figure that that ties into, which is called the I-squared. Where you’ve got essentially the proportion of the variance that’s not just sampling error. And that’s nice because it’s a unitless metric that’s well established in the meta-analysis literature. And it kind of ranges from zero to one and it’s very much related to this pooling factor, where if you’re trying to think about how much to weight a certain study, you might think of putting some weight on that study and some weight on all the other studies in that area. And if you’re doing that, there’s some weight that you can put on one individual study’s result and that would range between zero and one. And similarly, for the weight you put on all the other studies’ results. I’m not sure if that completely answered your question. Robert Wiblin: Yeah. Eva Vivalt: But there are these metrics you can use, and I would completely agree, and I was trying to push for initially that … I mean, I’m still trying to push for it, but I think it’s now more accepted that we should be thinking of generalizability as something that is non-binary that lies somewhere between zero and one. Robert Wiblin: So, what is tau-squared? I saw this in the paper, but to be honest I didn’t really understand what it actually is. Is this some kind of partition of the variance that’s due to … I just don’t know. Eva Vivalt: Yeah, no worries. So essentially, yeah, you can think of it as some measure of …. Okay, you’ve got a whole bunch of different results from different studies. Some of that variation is just due to sampling variance. So if you think of these studies as all replications, I mean they’re not, but if you were to think of them as replications then the only source of variance would be the sampling variance because you’d be drawing an observation from some distribution. And you’d be drawing a slightly different observation, so you’d get a little bit of noise there naturally … Robert Wiblin: So that’s just some studies get lucky and some studies get unlucky in a sense. So they have higher or lower numbers just because of what individuals they happened to include? Eva Vivalt: Yeah, exactly. And so if you’re then thinking okay well we’re not actually really in a case of replications. We’re actually in a case where there is a different effect size in every place that we do the study because there’s so much heterogeneity. Like, there’s other contextual factors or whatnot. Well, then you’ve got not just this sampling variance, but also some additional sort of true latent heterogeneity that you need to estimate. Robert Wiblin: That the effect was different in the different cases. Eva Vivalt: Exactly. Exactly. So, I’m just arguing for separating the two of these things out. And then trying to say, well this is the true heterogeneity. And you could go even a step further and say well, maybe we can model some of the variation. And maybe we want to think that the important thing in terms of generalizing is how much unmodeled heterogeneity there is. Like how much we can’t explain. 
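[Note: for readers who want the algebra behind these terms, a standard random-effects meta-analysis setup, consistent with what Eva describes here (though not necessarily the exact estimator used in her paper), is:

y_i = \theta + \mu_i + \varepsilon_i, \quad \mu_i \sim N(0, \tau^2), \quad \varepsilon_i \sim N(0, \sigma_i^2),

where y_i is the effect size reported by study i, \sigma_i^2 is its sampling variance, and \tau^2 is the true inter-study variance. With inverse-variance weights w_i = 1/\sigma_i^2 and Cochran's heterogeneity statistic

Q = \sum_i w_i (y_i - \bar{y})^2, \qquad \bar{y} = \frac{\sum_i w_i y_i}{\sum_i w_i},

one common estimator (DerSimonian-Laird) is

\hat{\tau}^2 = \max\left(0, \frac{Q - (k - 1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right),

and the share of observed variation not attributable to sampling error is

I^2 = \max\left(0, \frac{Q - (k - 1)}{Q}\right).

The pooling factor Eva mentions follows directly: the weight a single study's own estimate gets, relative to the pooled mean of the other studies, is \lambda_i = \tau^2 / (\tau^2 + \sigma_i^2). When \tau^2 is small, studies are close to replications and the pool is your best guess; when \tau^2 is large, each setting mostly has to speak for itself.]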
Like if we can say that, for example, well I’ve got a conditional cash transfer program and I want to know the effects on enrollment rates and maybe I think baseline enrollment rates are really important in determining that. Because it’s probably easier to do a better job in improving the enrollment rate from 75% than from 99%, right? It’s just a little bit easier. So, you can say okay well then I’ve got some model where baseline enrollment rates are an input into that model. And then after accounting for baseline enrollment rates, what’s sort of the residual unexplained heterogeneity in results. Because that’s going to be the limiting factor on how much I can actually extrapolate from one setting to another accurately. Robert Wiblin: Okay. So a tau-squared of one would indicate that all of them had the same effect in every case that they were implemented. And a zero would indicate that it was totally random, the effect that it would have in each different circumstance. Is that right? Eva Vivalt: Not quite, actually. Sorry, I might have explained this a little bit funny. So there is something that ranges between zero and one, which is either the I-squared or this pooling term. But the tau-squared itself, you can think of it as a kind of variance. It’s going to really be in terms of the units of whatever the thing was initially. So if it’s conditional cash transfers on enrollment rates, enrollment rates are maybe in percentage points. So then the variance would relate to those units of enrollment rates. And so that’s actually a great point because it’s going to be very difficult to compare the tau-squared of one particular outcome to the tau-squared of a completely different intervention’s effect on a completely different outcome because those things are going to be in different units entirely. That’s one advantage of I-squared relative to tau-squared, is that I-squared is unitless. It kind of scales things. So that does run between zero and one, and does not depend on the units. Although it’s not 100% straightforward either. I mean, that has also got some drawbacks. I’m trying to summarize the paper here, but I guess if one’s really super interested in these issues I would just recommend reading the paper. Robert Wiblin: Taking a look at it. Eva Vivalt: It goes in much greater detail. I’m simplifying a bit here. Robert Wiblin: Sure, okay. We’ll definitely stick up a link to it. So let’s say that we had a new intervention that no one really knew anything about. And then one trial was done of it in a particular place, and it found that it improved the outcome by one standard deviation. Given your findings, how should we expect it to perform in a different situation. Presumably less than one standard deviation improvement, right? Eva Vivalt: Yeah. I mean, to be honest, one standard deviation improvement is just huge. Enormous. Robert Wiblin: I was just saying that because one’s a nice round number. Eva Vivalt: Oh yeah. But the typical intervention is going to be more like 0.1 rather than one. So if I saw one somewhere, I’d be like, wow, that’s got to be a real outlier. That was a very high draw. So I would be skeptical just for that reason. Robert Wiblin: Okay, so I’ve got 0.1. What might you expect then if it was done somewhere else? Eva Vivalt: Well, it’s going to depend a lot on the intervention and the outcome. And if I’m using some more complicated model. 
I think the best way to answer those questions is to look at a specific intervention and a specific outcome and try to model as much of the heterogeneity as possible. And there's not going to be any substitute for that, really. What I'm looking at in my paper is trying to say something like, well that might be so. But still, what can we say about looking across all the interventions, across all the outcomes? And that's where I pick up patterns like if it's done by an NGO, if it's a relatively smaller program it tends to have higher effects. But that's a little bit hand-wavy. I think the best way to answer those questions in terms of what do I really find is to go to that particular intervention, that particular outcome. But what I can say is that even with one study's results, and now this is pretty weak but it's still true, there's still a relationship, is that if you look at the heterogeneity of results within the study, that actually does predict the heterogeneity of results across studies. I mean, weakly. And there's no reason for it to necessarily be true, but it is a stylized fact that one could use. Robert Wiblin: Hey, I just wanted to interject that I later emailed Eva to see if there was any rule of thumb we could use to get a sense of how bad the generalisability is from one study to another. One option is to say that: The median absolute amount by which a predicted effect size differs from the true value given in the next study is 99%. In standardized values, the average absolute value of the error is 0.18, compared to an average effect size of 0.12. So, colloquially, if you say that your naive prediction was X, well, it could easily be 0 or 2*X — that's how badly this estimate was off on average. In fact it's as likely to be outside the range between 0 and 2X as inside it. This wouldn't be rigorous enough to satisfy an expert in the field, but it's good enough for us here. Back to the interview. Robert Wiblin: Okay. So did you find out under what circumstances results are more generalizable and when they're less generalizable? Eva Vivalt: Yeah. So again this is a little bit hand-wavy and I think a little bit less the point of the paper, because like I say, even though these studies are mostly RCTs, when I'm looking at them, at that point it's as though I've got observational data. Because the studies are selected in various ways that … where people even choose to do the studies is selected and I'm just looking at this data. But despite that, if you do the naïve thing of doing ordinary least squares regression of your effect sizes on various study characteristics…. So I mentioned bigger programs and government-implemented programs tend to do worse. There's not much of a general trend in other things. In particular, it doesn't seem to matter so much if it's an RCT or not. Or where it was done. Actually, one thing I did find is, you can't even necessarily just say …. So often you hear from policy-makers and researchers, "well we've got results from one particular country. So at least we know how it works in that country." And actually, I would disagree with that. Because even within a country, if you've got multiple results from the same country, they don't predict each other very well. And it makes sense if you think about, you know, I don't think anybody would say within the US, "oh yeah, well results from Massachusetts are going to be very similar to results from Texas," or something like that. Right?
Even within a country there's so much variation that maybe it's no better than taking results from a completely different area of the globe. But it's still not that great, and I can't actually even find any kind of statistically significant relationship within a country.
Robert Wiblin: Isn't this pretty damning? Why would we bother to do these studies if they don't generalize to other situations? It seems like we can't learn very much from them.
Eva Vivalt: Yeah, so that's a great devil's advocate type question. I'm still, despite all this, an optimist that we're learning something. Right? Because part of it is that this way of looking at it doesn't model all the little factors. I mean, I am actually quite skeptical of most of the stories that people tell about why an intervention worked in one place and why it didn't work in another place. Because I think a lot of those stories are constructed after the fact, and they're just stories that I don't think are very credible. But that said, I don't want to say that we can learn nothing. I would just say that it's very, very hard to learn things. But what's the alternative?
Robert Wiblin: Well, I guess, potentially using one's intuition. But one thing you could say, looking at this, is that it's not really worth running these studies. An alternative view would be that because each study is less informative than we thought, we have to run even more of them. Do you have a view between those two different ways of responding?
Eva Vivalt: Yeah. I would argue for running more of them, but not in a completely senseless manner. I think we can still say something about which ones to run. There are ones which are higher variance, where we could learn more, where the value of information of doing another study is going to be higher. So, I guess part of this depends on, sorry to get into technical details, but …
Robert Wiblin: No, go for it.
Eva Vivalt: … the decision problem we think people are faced with. Right? Because if you think that what a policy-maker really cares about in making their decision is whether some result is statistically significant and better than some other result in a statistically significant way, well, okay, then that's a different problem from if they're okay with something where there's a 20% chance it works better than the alternative. So think of this all in terms of: there is some problem that a policy-maker is trying to solve, and within that problem you've got the ability to run studies or not run studies. And the value of information of running each of those things is going to be different depending on how much underlying heterogeneity there is. Just to be a little bit simpler about this, the intuition is that the studies that are the most valuable to run would be the ones where you don't know very well a priori what's going to happen. You've got a higher degree of uncertainty up front. But where you think there is good upswing potential, as it were, right? Like it could overtake the best possible outcome.
Robert Wiblin: A lot of value of information, I think, is the …
Eva Vivalt: Yeah, exactly.
Robert Wiblin: Okay. We'll come back to some of those issues later, because you have other papers that deal with how these RCTs can inform policy-makers. But let's just talk a little bit more about your method here. So, how did you collect all of this data on all these different RCTs? It sounds like an enormous hassle.
Eva Vivalt: Yeah. I wouldn't recommend it.
I mean, obviously one has to do it. But, oh, my goodness. I think I was very lucky actually to have a lot of great help from various RAs over the course of several years, through AidGrade, who were gathering and double-checking and sometimes triple-checking some of this data. All the data was gathered by two people, and if their inputs disagreed in some way, then a third person would come in and arbitrate. So that's how we got all of the characteristics of the different studies coded up. All the effect sizes. I am hopeful that in the future we're going to be able to do a lot more with automated reading of these papers. You would think that's absolutely crazy, but I think it works pretty well so far. I mean, not for the actual results tables. I think the results tables are actually the hardest task in a way, because you need to really know what a particular result represents. Is this a regression with controls, or with whatever else? What methods, et cetera. But for basic characteristics of studies, like where was it done, was it an RCT or not, those kinds of things, we've actually had pretty good success with some pilot studies trying to read that automatically through natural language processing. And that, I think, is really the best hope for the future. Because studies are coming out so quickly these days that it's hard to keep abreast of all of the literature and all the various topics. I mean, it's even more of a constraint for the medical literature, where there are loads of studies and new ones coming out all the time. Meta-analyses can go out of date quite quickly, and they're not really incentivized properly in the research community, so the only way to get people to actually do them and keep the evidence up to date in some sense is by at least making the process easier. I don't think that it can ever be 100% done by computer. I think you're still going to need some inputs from people. But if you can reduce the amount of effort it takes by 80% or 90% and just have people focus on the harder questions and the harder parts of that, that would be a huge benefit.
Robert Wiblin: Do you think there's enough of this data aggregation? Or are there too few incentives for people to do this in academia?
Eva Vivalt: No, I think the incentives are all wrong. Because researchers want to do the first paper on a subject. Or ideally, if not the first, then the second. The third is even worse than that. And by the time you get to a meta-analysis, well, that's kind of the bottom of the bin in some regards. You'd think it would be more highly valued, but it's not.
Robert Wiblin: Wouldn't you get a lot of citations from that? Because people would trust the results of a meta-analysis more than the individual papers.
Eva Vivalt: I think that's fair. And you can get some fairly well cited meta-analyses. Unfortunately, citations are just not the criterion that's really used for evaluating research in economics. I know it is more so in other fields, but not so much in economics, where it really is the journal that matters.
Robert Wiblin: So the journals that publish that kind of thing just aren't viewed as the most prestigious?
Eva Vivalt: Yeah, that's exactly right.
Robert Wiblin: I've also heard that in fields where collecting a big data set, especially an historical data set, is what enables you to ask a lot of new questions, there are perhaps too few incentives to put it together.
Because you do all of the work of putting it together, then you publish one paper about it, and then other people will use the same dataset to publish lots of papers themselves. And in a sense you don't get the full fruit of all of the initial work that you did. Is that a possibility here, where other people can now access this dataset of all of these different RCTs that you've compiled, and so you don't … kind of, they drank a bit of your milkshake, in a sense.
Eva Vivalt: I wouldn't put it that strongly, both because I'm actually quite happy if other people do things with the data, and also because … it depends, I guess, where you are at in the process. I think for people who are just finishing up their PhD, for example, it's actually very good to show that you can compile a very large dataset, because a lot of research depends on having very good data, and if you can show that you can collect really good data then that's great for you. Obviously you also want to publish well based on that. That's, I guess, a separate question.
Robert Wiblin: So, what are the biggest weaknesses of this study? Do you think that we should trust this result, that results aren't that generalizable? Or is this something that could be overturned with future research?
Eva Vivalt: I don't think it's really in danger of being overturned per se. That's just a function of the fact that we're doing social science, and there are all sorts of things that can change and that matter for your treatment effects. So, yeah, I'm not tremendously concerned about that.
Robert Wiblin: So what kinds of studies did you include in this particular dataset? For example, you were looking at development studies.
Eva Vivalt: Yeah.
Robert Wiblin: If you looked instead at, say, education studies in the developed world, might you get different results if you were looking at a different domain or field?
Eva Vivalt: Maybe. I think the bigger difference, though, would probably be with things that are less, at least intuitively, context-specific. Things like health …
Robert Wiblin: Medicine.
Eva Vivalt: Yeah, exactly. So for example in our data, actually, the things that varied almost the most were the health interventions. But that's because we weren't controlling for things like baseline incidence of disease or any of those kinds of things.
Robert Wiblin: Right.
Eva Vivalt: And if you do control for those … I mean, we weren't doing that in the general analysis, but if you do control for them, then actually the heterogeneity is a lot smaller. So, for things that have a clearer, more straightforward causal effect, there we might expect to see slightly different results.
Robert Wiblin: Hmm. So kind of antibiotics will usually treat the same disease anywhere. But I suppose in these studies they actually have different impacts, because in different places people have the underlying disease at different levels.
Eva Vivalt: Yeah, exactly. Yeah. I mean, everybody, I think, at this point would agree that things like deworming et cetera depend on what the baseline prevalence of the worms, or whatever, is. And once you control for those things, then you actually … because there are some very clear mechanisms through which these things work, there are fewer things that can go wrong. Whereas with the more general social science type thing, there are so many factors that feed into what the treatment effects ultimately are, so it's a little bit messier.
Robert Wiblin: So you wrote another paper called "How Much Can Impact Evaluations Inform Policy Decisions?", which, I can imagine, was partly informed by this other paper. Do you want to explain what you found there?
Eva Vivalt: Sure. So that paper is looking a bit at the fact that, if we do try to put this into some kind of framework where a policy-maker is deciding between different options, they're always going to want to choose the thing that has the highest effect. Well, given the heterogeneity we observe, how often would they actually change their mind? You know, if the outside option takes some particular value. So, yeah, it's quite related. We also tried to use some priors that we had collected. Some predictions that policy-makers had made about the effects of particular programs.
Robert Wiblin: So just to see if I've understood the set-up correctly, you've got this modeled agent, which I guess is a politician or a bureaucrat or something. And they've got some background thing that they could spend money on; perhaps this is spending more money on schools or whatever else. And they think that they know how good that is. And so that's somewhere they could stick the money. And then you're thinking of the value of a study on another thing, that might be better, or might be worse. And the bureaucrat, even though there haven't been any studies done yet, or not many, has some belief about how good this other option is, this new option. But they're not sure about it, and they would somewhat change their mind if a randomized controlled trial were done. And then you want to see, well, how often would that trial cause them to actually change their decision and go for this alternative option?
Eva Vivalt: Yeah, that's exactly it. You're putting it much better than I did.
Robert Wiblin: So, what did you find? Is there any way of communicating how often people do change their mind? And maybe perhaps what's the monetary value of these studies?
Eva Vivalt: That's an excellent question. So, we didn't actually connect it to actual monetary value, because that depends a bit upon what you think the value of some of these outcomes is. We did this a little bit abstractly, trying to compare two programs where one was 90% of the value of another one, or 50%. But we weren't actually making assumptions on the final, last-mile type part of "well, yeah, but what is this actually worth?" I mean, that's going to depend a bit on what the actual outcomes and the values of the outcomes are. So, I wish I had a better answer, is what I'm trying to say.
Robert Wiblin: Okay. So in the abstract you wrote, "We show that the marginal benefits of a study quickly fall and when a study will be the most useful in making a decision in a particular context is also when it will have the lowest external validity," which is a bit counter-intuitive. And then also, "The results highlight that leveraging the wisdom of the crowds can result in greater improvements in policy outcomes than running an additional study." Did you want to explain those sentences?
Eva Vivalt: Sure. So, yeah. I think one of the interesting things is the statement that when a study will be most useful is when it will have the lowest external validity. That is relating to the point that, in a sense, when's the study going to be most useful? It's going to be most useful when it surprises us and was really different. And when's it going to be the most different?
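To make the set-up just described concrete, here is a minimal sketch, with invented parameters, of a policy-maker who updates in a standard normal-normal Bayesian way: how often a noisy study would flip their decision, and why the expected gain from running a study grows with prior uncertainty. None of the specific numbers come from the paper.

```python
import numpy as np
from scipy import stats

prior_mean, prior_sd = 0.12, 0.06   # beliefs about the new program's effect
outside_option = 0.10               # known effect of the status quo option
study_se = 0.05                     # precision of the hypothetical new study

# Posterior mean after a study is a precision-weighted average of the prior
# mean and the estimate (standard normal-normal updating).
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / study_se**2)

# The policy-maker currently prefers the new program (0.12 > 0.10). Solve
# for the study estimates that would push the posterior mean below the
# outside option and flip the decision.
threshold = (outside_option / post_var - prior_mean / prior_sd**2) * study_se**2

# Before running the study, the estimate is distributed around the prior
# mean with the prior and sampling variances added.
pred_sd = np.sqrt(prior_sd**2 + study_se**2)
p_flip = stats.norm.cdf(threshold, loc=prior_mean, scale=pred_sd)
print(f"P(study flips the decision) = {p_flip:.2f}")

# Value-of-information intuition from earlier in the conversation: the
# expected gain from a study rises with prior uncertainty, so
# high-heterogeneity settings are where an additional study buys the most.
rng = np.random.default_rng(0)
sims = 200_000
for psd in [0.02, 0.06, 0.15]:
    true_effect = rng.normal(prior_mean, psd, sims)
    estimate = true_effect + rng.normal(0, study_se, sims)
    pvar = 1.0 / (1.0 / psd**2 + 1.0 / study_se**2)
    pmean = pvar * (prior_mean / psd**2 + estimate / study_se**2)
    with_study = np.where(pmean > outside_option, true_effect, outside_option).mean()
    without = np.where(prior_mean > outside_option, true_effect, outside_option).mean()
    print(f"prior sd {psd:.2f}: expected gain from the study = {with_study - without:.4f}")
```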
Well, when we’re not going to able to generalize more from it, when it’s got some underlying factors that make it a little bit weird in some way. It’s going to be the highest value in that setting, but if you try to think about extrapolating from it…. Robert Wiblin: So is it not so much that that study can’t be generalized to other things that makes it valuable. But rather that other things can’t already be generalized to this one? So this is a more unique case? Eva Vivalt: Yeah. And I mean, it could go either way in the sense that if you think that the other studies haven’t found this particular thing, and this particular thing is a bit unique, well, likewise, you wouldn’t expect this unique thing to say much about those other ones either. So, again, this is a little bit abstract because you can try to think about, “well, yes, but does this new thing tell us something about some other, more complicated underlying models of the world as to why this one happened to be so surprising?” But yeah, that’s just the general intuition. And then with respect to leveraging the wisdom of the crowds, well, we did look at different kinds of ways of making decisions. We looked at a dictator making a decision all by themselves versus a collective of various bureaucrats voting and just using a majority voting rule to try to decide which particular intervention to do. And there, because people can frequently be wrong, actually adding additional people to the set of people who are making the decision can lead to substantial benefits in terms of the actual … in choosing the right program afterwards. There were actually some simulations in which it performed better. Robert Wiblin: Are you saying that running these broad surveys is potentially more informative than an RCT? And I guess also presumably cheaper as well? Or at least in the model. Eva Vivalt: Yeah, so I guess … So in the model it’s more a matter of how many people are making the decision and how many people’s inputs are being fed into this process. So, I guess if you’ve got a more democratic decision making process or you involve more people, their priors are more likely to be correct in that case. Sort of like their aggregate prior. And the benefits of just doing that can be higher than the benefits of doing an RCT. I mean, it depends a little bit on all sorts of underlying parameters here. But there were at least some simulations for which that was definitely true, where adding additional people helping to make the decision resulted in better decisions than running an additional study. Robert Wiblin: So, what surprised you the most from these simulations that you were running? Was there anything that you didn’t expect? Eva Vivalt: Well I don’t think I was expecting that result, to be honest. Also, obviously, it does depend on the quality of the priors that people initially have, right? Like if you actually do have very highly uninformed individuals, then aggregating more highly uninformed priors is not going to help you. Robert Wiblin: Shit in, shit out. Eva Vivalt: Yeah, basically. Robert Wiblin: I get to swear on my own show. Eva Vivalt: Well, I could just say that you said it. Robert Wiblin: So, do you think that we should run more studies, or less, on the basis of this paper? Eva Vivalt: Well, I don’t think that’s the right … It’s not like we … There’s not a real trade-off here. Have more democratic decision making processes or run additional studies. We can do both. 
So I think more studies still is going to help, but so is actually taking that evidence into consideration, and also having more people help to make decisions and hopefully balance out some of the errors that are made. Because, actually, a lot of … I mean, I've also done some work looking at how policy-makers interpret evidence from studies and update.
Robert Wiblin: So you modeled bureaucrats or politicians as these Bayesian agents who I guess update perfectly. Was that right?
Eva Vivalt: At least in this paper. There's another paper that does not do it, but yeah.
Robert Wiblin: Yeah. What kind of deviations might you expect? Do you think they might update too much or too little in the real world?
Eva Vivalt: Well, I think, actually … so I've got this other paper with Aidan Coville of the World Bank where we are looking at precisely some of the biases that policy-makers have. And one of the bigger ones is that people are perfectly happy to update on new evidence when it goes in a nice, positive direction, when it's good news. But people really hate to update based on bad news. So for example, suppose you think that the effect of a conditional cash transfer program on enrollment rates is that maybe it'll increase enrollment rates by three percentage points. And then we can randomly show you some information that either says it's five or it's one. Well, if we show you information that says it's five, you're like "great, it's five." If we show you information that says it's one, you're like "eh, maybe it's two." So we see that kind of bias. We also-
Robert Wiblin: It's interesting, because if you update downwards then you're creating a much greater possibility for future exciting positive updates. You can't have positive updates without negative updates as well.
Eva Vivalt: Well, that's fair, I guess.
Robert Wiblin: I guess they're not thinking that way.
Eva Vivalt: Present bias or something. No, I don't know. And it kind of makes sense intuitively, because one of the initial reasons for why we're considering this particular bias in the first place is … I think a situation that will be very familiar to people who engage with policy-makers is, you know, you're asked to do an impact evaluation. You come back saying, "oh yeah, this thing showed no effect." And people are like, "oh really? It must be the impact evaluation that's wrong."
Robert Wiblin: I wonder. It's notorious that impact evaluations within bureaucracies that want to protect their own programs are too optimistic. But I wonder, it's a bit like how everyone overstates how tall they are on dating sites, but at the end of the day, you end up knowing how tall someone is, because everyone overstates by the same amount. And I wonder if, looking at these impact evaluations, you kind of figure out what's the truth, or what's right on average, just by saying "well, was it extremely good or was it merely good?" You just adjust everything down by a bit.
Eva Vivalt: That's a good point. That's a good point. Yeah, no, fair enough. I mean, the other thing that …
Robert Wiblin: I suppose that would just end up rewarding even more extreme lying.
Eva Vivalt: Yeah. And that's not the only bias that people have got, either, right? So another thing that we were looking at is how people were taking, or not taking, the variance into consideration. So in the simplest idea, you can think of this as just sampling variance. But you can also look at heterogeneity across studies.
And basically, people were not updating correctly based on confidence intervals. That might be the easiest way of framing it. And we did try to break that down a bit, and try to say, "well, okay, but why is that? Are they misinterpreting what a confidence interval is? Is it some kind of aggregation failure? Is it just that they're ignoring all new information, and so obviously they're going to be caring less about confidence intervals than somebody who actually does take information into consideration and does actually update at all?" So we did try to break it down in several ways. And yeah, it does seem like people are not taking the variance into account as a Bayesian would.
Robert Wiblin: Oh, hold on. So you're saying they just look at the point results and not at how uncertain it was?
Eva Vivalt: Yeah, pretty much. I mean, they do look a little bit at how uncertain it was, but not as much as they should if they were fully Bayesian. If they were actually Bayesian then they would care more about the confidence intervals.
Robert Wiblin: Right. So if it were a small study that kind of gets a fluky extreme result, people over-rely on that kind of thing.
Eva Vivalt: Yeah, exactly.
Robert Wiblin: That doesn't surprise me. So what is the latest on your work on priors? Is that related to this paper?
Eva Vivalt: So, it is. This is one of the things that I've been up to. So for this particular one, we were looking at biases that policy-makers might have and biases in updating. So, you start out with a Bayesian model and say, "okay, well, look, people aren't Bayesian. How can we modify this model and have some kind of quasi-Bayesian model?" And so we were looking at two biases: this kind of optimism I was talking about, and this variance neglect, which you can think of as some kind of extension neglect more broadly, related to the hot hand fallacy or gambler's fallacy for people who are into the behavioral economics literature. And we basically … it was a really simple study. We just collected people's priors. We then showed them some results from studies, and then we got their posteriors. And we presented information in different ways, because we were also interested in knowing if the way in which we present information can help people overcome biases, if they are biased. So if you've got a problem, what's the solution? And we did this not just for policy-makers, but also for researchers, for practitioners like NGO operational staff, that kind of thing. We also got a side sample of MTurk participants. And these biases actually turned out to be pretty general. And the big thing on the solution side is that more information will encourage people to update more on the evidence. So I guess if you're in that situation of having some bad news, come bearing a lot of data, and that should help at least a little bit. So, you know, more quantiles of the data, that kind of thing. Maximum, minimum values, you know, the whole range of as many statistics as you can, really.
Robert Wiblin: Hold on. So your main finding was that in order to accept a negative result, people have to be confronted with overwhelming evidence so that they can't ignore it?
Eva Vivalt: Yeah, at least it should help.
Robert Wiblin: Were there any other discoveries?
Eva Vivalt: The other kinds of things that we've been doing … we have actually collected priors in a whole bunch of different settings, so I'm also in the process, with a grad student, of trying to look at some additional biases that policy-makers may have.
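Here is a sketch of what a quasi-Bayesian update with the two biases just described might look like. The parameterization below, one dial for asymmetric optimism and one for variance neglect, is my own illustration rather than the model in the Vivalt and Coville paper, but it reproduces the pattern described: bad news gets discounted relative to the Bayesian update, and imprecise studies get more weight than they should.

```python
def quasi_bayes(prior_mean, prior_sd, estimate, se,
                optimism=0.0, variance_neglect=0.0):
    """One illustrative quasi-Bayesian update (not the paper's exact model).

    optimism in [0, 1]: how much downward (bad news) updates are discounted.
    variance_neglect in [0, 1]: 0 = fully Bayesian weighting by the study's
    standard error; 1 = ignore the standard error entirely.
    """
    effective_se = (1 - variance_neglect) * se + variance_neglect * prior_sd
    weight = (1 / effective_se**2) / (1 / prior_sd**2 + 1 / effective_se**2)
    update = weight * (estimate - prior_mean)
    if update < 0:
        update *= (1 - optimism)   # bad news gets shrunk
    return prior_mean + update

# Prior: the CCT raises enrollment by about 3 percentage points, sd 1.
print(quasi_bayes(3.0, 1.0, estimate=5.0, se=1.0))                 # 4.0: good news
print(quasi_bayes(3.0, 1.0, estimate=1.0, se=1.0))                 # 2.0: Bayesian bad news
print(quasi_bayes(3.0, 1.0, estimate=1.0, se=1.0, optimism=0.5))   # 2.5: "eh, maybe it's two"
print(quasi_bayes(3.0, 1.0, estimate=5.0, se=3.0, variance_neglect=1.0))  # noisy study over-weighted
```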
Like omission bias, status quo bias, where people don't want to actually change, to deviate, from decisions that were made in the past, where they would have to do something differently, or take action. Like there might be some bias towards inaction.
Robert Wiblin: Or at least not changing your action. Not shutting down the program.
Eva Vivalt: Yeah. Yeah, yeah. I mean, the kinds of things that bureaucracies are typically criticized for. But more specifically, on the priors, we've also asked experts to predict effects of various impact evaluations. One thing that I'm really excited about is trying to more systematically collect priors in the future. And so I've been talking with many people, actually, including Stefano DellaVigna and Devin Pope, who've got these great papers on expert predictions, about setting up some larger websites so that in the future people could more systematically collect priors for their research projects. I'm getting an email roughly every week at this point asking for advice on collecting priors, because I think researchers are very interested in collecting priors for their projects. It makes sense from their perspective. They're highly incentivized to do so, because it helps with not just all this updating work, but also, for them personally, it's like, "Well, now nobody can say that they knew the results of my study all along. I can tell them: this is what people thought beforehand, and this is the benefit of my research." And also, if I have null results, then it makes the null results more interesting, because we didn't expect that. So the researchers are incentivized to gather these things, but I think that, given that, we should be doing it a little bit more systematically, to be able to say some interesting things about, well, for example … one thing is that people's priors might, on average, be pretty accurate. So this is what we saw when we gathered our researchers' priors: they were quite accurate on average. Individually, they were off by quite a lot. There's the kind of wisdom of the crowds thing. But if you think that you could get some wisdom of the crowds, and that people are pretty accurate overall if you aggregate, well, that actually suggests that it could be a good yardstick to use in those situations where we don't have RCTs. And it could even help us figure out where we should do an RCT, where we're not really certain what the effect will be and we need an RCT to come in and arbitrate, as it were. So I think there's a lot more to do there that could be of pretty high value.
Robert Wiblin: Right, okay. So, I've got a number of questions here. I guess the question we're trying to answer, well, at least one of them, is: how good are experts as a whole at predicting the likeliest outcome of a study that you're going to conduct? Or, to put it another way, the impact of an intervention. And, I guess, the stuff that I've read is that experts, at least individual experts, are not very reliable. But you're saying that if you systematically collect the expectations of many different experts, then on average they can be surprisingly good.
Eva Vivalt: Yeah. Yeah. I would say that. I think that, again, it sort of depends a bit … this is why it would be really nice to get systematic data across many, many different situations.
Because it could just be that the ones that we've looked at so far are not particularly surprising, but there probably are some situations in which people are able to predict things less well, and it would be nice to know: are there some characteristics of studies that can help to tell us when experts are going to be good or bad at predicting this kind of thing? But I would agree that any one individual expert is going to be fairly widely off, I think.
Robert Wiblin: So how do you actually solicit these priors, or these expectations, from these experts? Have you figured out the best way of doing that?
Eva Vivalt: Yeah, so that's an excellent question. And we tried several different things. By now, I think I've got a pretty good idea of what works. So, in some sense the gold standard, if people can understand it, which is a big if, is to ask people to put weights in different bins, because then you can get the distributions of their priors as well. Not just a mean, but how much uncertainty is captured in that. But that's quite hard for most people to do. People aren't really used to thinking of their beliefs as putting weights in bins.
Robert Wiblin: Not even people in this field of social science?
Eva Vivalt: Not really. I mean, the researchers are a bit better at it, but in any case, at least in what we've done, even when talking with researchers it's better to try to be perfectly clear about what the bins mean and go through all that kind of thing beforehand. The other thing is, if you are asking more of the lay public, it's probably better to move to asking them to give ranges, as it were. So, you know, what is a value such that you think there's less than a 10% chance the effect will fall below it, or less than a 10% chance it will fall above it, or different quantiles … You then have to make some assumptions about the actual distribution, because people can give you a range, but if you really want to get at some of the updating questions, you need to know a little bit more. Like, you want to know whether those distributions are normal or not. And you don't know whether things are normally distributed if you just have three points, right?
Robert Wiblin: Yeah, yeah. So that sounds like a really exciting research agenda, but we've got to push on, because there are quite a lot of other papers that you've published in the last few years that I want to talk about. Another one that you've written up, which is a bit more hopeful, is "How Often Should We Believe Positive Results? Assessing the Credibility of Research Findings in Development Economics." And of course, most of social science is facing a replication crisis, where we're just finding that many published results in papers don't pan out when you try to do the experiment again. What did you find in development economics?
Eva Vivalt: Yeah, so actually the situation was a lot better than I would have initially thought. So I think this is actually quite a positive result. It could be biased by the kinds of studies that we included. Like, we had a lot of conditional cash transfers in there. They tend to have very large sample sizes, so they're kind of like the best-case scenario. But nonetheless, the false positive report probabilities are actually quite small.
Robert Wiblin: Are you able to describe the method that you applied in that paper? Obviously, you weren't replicating lots of these studies; you must have used some other method to reach this conclusion.
Eva Vivalt: Yep.
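To illustrate the elicitation approach described above: once you have a respondent's quantiles, you can back out a full prior only by assuming a distributional shape, which is exactly the caveat raised about having just three points. A minimal sketch, assuming normality, with invented elicited values:

```python
import numpy as np
from scipy import stats

# Hypothetical elicited values, in percentage points of enrollment.
q10, q50, q90 = 1.0, 3.0, 5.5

mu = q50                              # under normality the median is the mean
z90 = stats.norm.ppf(0.90)            # roughly 1.2816
# Average the two tails so a slightly asymmetric answer still yields one sigma.
sigma = ((q90 - q50) + (q50 - q10)) / (2 * z90)

prior = stats.norm(mu, sigma)
print(f"fitted prior: N({mu:.2f}, {sigma:.2f}^2)")
print(f"implied P(effect > 4pp) = {1 - prior.cdf(4.0):.2f}")
```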
Well, there’s quite a lot of nice literature here that I can refer people on to. The false positive and false negative report probabilities, the equations for how to calculate those are coming from out of a paper by Wacholder et al. There’s some other people who’ve also looked at this. Where essentially the probability that you’ve got a false positive or a false negative depends a bit on the priors that you’ve got. So for example, if you think of some study that is looking at, I don’t know, something we really don’t believe to exist, like extra sensory perception or something, right? And if you found some positive result for that well, nobody’s going to trust a study that shows that ESP is real. And to really show that credibly, you would need to have lots of studies with really, precisely estimated coefficients. Again, your priors are going into it, the statistical significance or your p-values that you’ve found would go into it and that’s just an equation you can sort of write out. The other thing is that there are these type S and type M errors that Andrew Gelman and some co-authors talk about. And these are the probability that if you’ve got a statistically significant result, it’s actually of the right sign- Robert Wiblin: So it’s positive rather than negative, or negative rather than positive. Eva Vivalt: Yeah, yeah. Because you would be surprised, but it’s actually true that if you’ve got low-powered results, then even if you find something statistically significant, there is some probability that the true value is negative when you see something that says it’s positive, or vice versa. Robert Wiblin: Yeah, and then there’s type M errors? Eva Vivalt: Yeah so this is same kind of thing except for magnitude. So, you’ve found some significant result and it has certain magnitude, but chances are that’s actually incorrect in some way. Like it’s most likely inflated in value, so the truth is likely to lie lower than that. Robert Wiblin: So how did you put together this information to try to figure out what fraction of results were accurate? I’m not quite understanding that. Eva Vivalt: Sure, sure, sure. So, the main source of data that we used here is we had to get a whole bunch of expert beliefs, because these were inputs into the equations. And to get the expert beliefs we did one thing that’s not 100% kosher, but is the best kind of approximation we could do, which is that we didn’t want to wait until a lot of impact evaluations were over. Like a lot of the other work that I’ve done on priors, also with Aiden, we are actually waiting until all the results of the real studies come out. But for this we wanted a bunch of results to use already, as it were. So what we did was we used AidGrade’s database of impact evaluation results and we said, “Okay let’s go to topic experts,” like people who have, for example, done a study on a conditional cash transfer program, and then ask them “which of all these other programs have you heard about?” They were also all conditional cash transfers programs but, you know, ones by other people. And then for the ones that they hadn’t heard about, we asked them to make up to five predictions about the effects that those studies would find. We’d describe the studies to them in great detail and then got their best guess. Then, using this data we could say something about the false positive report probability, because then we’ve got the p-value that each study found and we’ve got what we’re considering to be the prior probability of some kind of nominal effect. 
We actually needed them to also give a certain value below which they would consider the study to have not been successful. Like, if the conditional cash transfer program doesn't improve enrollment rates by, I don't know, 5 percentage points, then it's not successful. Because all these equations deal with the likelihood that some particular hypothesis is true, and for us there's some critical threshold above which we would think that it had an effect, versus not having an effect. Some meaningful effect. The minimum meaningful effect, kind of like the minimum detectable effect size. So we create this probability of attaining this non-null effect, given the distribution of priors and given this particular cut-off threshold. And those are just inputs to this equation, along with the power of the study.
Robert Wiblin: Right. Okay. I think I understand now. So, you've got all of these different studies looking at the effect size on different outcomes, and they have different levels of power. So different sample sizes and different variances in them. And then you're collecting priors from a bunch of different subject matter experts, and then you're thinking, "Well, if we took those priors and updated appropriately based on the results in those studies, how often would we end up forming the wrong conclusion?" Or is it actually just: if you took the point estimate from that study, how often would you be wrong relative to if you'd updated in a Bayesian way? Is the second right? Or am I totally wrong?
Eva Vivalt: So I would think of it in a different way. If you see a positive, significant result, there's some probability that it just happened to be that way by chance, and there's some probability that it's a true thing.
Robert Wiblin: And especially if it was unlikely to begin with, then it may well still probably be wrong, because of, kind of-
Eva Vivalt: Yes.
Robert Wiblin: Regression to the mean effect.
Eva Vivalt: Yeah. If you think that it's really unlikely a priori and you observe it, it's more likely to be a false positive. If you're underpowered to begin with, it's more likely to be a false positive. If it's got a p-value of 0.049, it's more likely to be a false positive. So, these are all just factors that go into it, and you could do the same kind of thing for false negatives, actually. Yep.
Robert Wiblin: Okay. Well, let's push on. You did another paper on specification searching, which is the practice where people who are writing a paper try out a whole lot of different specifications to try to, I guess, get the answer that they'd like, and then publish just those results. And you were trying to figure out how common this practice is in different disciplines and among researchers using different methods. How did you try to do that, and what did you find?
Eva Vivalt: Yeah, this paper is similar in methodology to some papers by Gerber and Malhotra and others, and also there's some work by Brodeur et al., looking at essentially the distribution of statistics. Say you've got a bunch of different studies, and you've got a bunch of different t-statistics from each of those results. What you would expect is that there's going to be some smooth distribution of those statistics. I mean, hopefully.
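Returning to the false positive report probability for a moment: the Wacholder et al. equation combines the prior probability that the hypothesis is true with the significance level and the study's power, just as described. A minimal sketch with illustrative inputs:

```python
def fprp(pi, alpha=0.05, power=0.80):
    """False positive report probability, in the style of Wacholder et al.

    pi: prior probability that the hypothesis is true.
    """
    false_pos = alpha * (1 - pi)   # null hypotheses that luck into significance
    true_pos = power * pi          # real effects correctly detected
    return false_pos / (false_pos + true_pos)

print(fprp(pi=0.50, alpha=0.05, power=0.80))  # plausible, well-powered: ~0.06
print(fprp(pi=0.05, alpha=0.05, power=0.20))  # long shot, underpowered: ~0.83
```

The second case is the ESP-style situation from earlier: a long-shot hypothesis plus an underpowered study means most "positive" findings are false.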
But what you actually observe in the data is some lumpiness, and in particular there tends to be a slightly lower density of results that are just marginally insignificant than you would expect, and some sort of bump in the distribution just above the threshold for statistical significance, which is usually at the 0.05 level. So 1.96. So you'll see relatively few results around 1.95 and relatively more results than you would have anticipated around 1.97. That's the general intuition, right?
Robert Wiblin: Yeah. And that's an indication that people were fishing around to find the specification that would just get them over the line to be able to publish.
Eva Vivalt: Exactly. But I mean, it's not as straightforward as just that, because you can imagine … what is that distribution supposed to look like in reality? And there are other reasons why you might expect to see some more statistically significant results. For example, people design the studies such that they can find significant results in the first place. So, it's not 100% straightforward to just say, "Oh yeah, well, we've got a lot of significant results and therefore it must be specification searching." I think it becomes more credible that it is specification searching if you can say, "Yeah, but it's within a really small band, right around the threshold for significance." As you expand the band out a little bit, I think you could try to argue-
Robert Wiblin: There are other possible explanations.
Eva Vivalt: Yeah, exactly. That people are designing the study very cleverly just to get significance. Although, honestly, to be fair, I think it's difficult to swallow that people are designing the study perfectly appropriately to just barely get statistical significance, right? I mean, it's so hard to predict what the effects will be anyway, and then your hands are a little bit tied by the fact that generally when you're doing this you've got a given budget and you can't really exceed that budget. So you're dealing with a certain sample size and having to adapt your study accordingly. It's not like you've got free rein to perfectly maximize.
Robert Wiblin: Okay, so the alternative innocent explanation is that people can anticipate ahead of time what the effect size will be, and then they choose the sample size that will allow them to get a p-value just below 0.05, so they'll be able to publish the paper at minimum cost.
Eva Vivalt: Yeah.
Robert Wiblin: But in reality it's just a bit hard to believe that that explains most of what's going on, especially given that we just know that lots of academics in fact do do specification searching.
Eva Vivalt: Yeah. It's just that people don't have as fine control over the design of a study as you would perhaps anticipate, because funding is somewhat out of their hands. Also, any one given paper is going to be looking at so many different outcomes, so how can you really design a study so that you are just barely significant for outcome A and B and C, you know? It becomes a little bit implausible. But that would be the best case for the contrary view.
Robert Wiblin: Yeah. Okay. So you looked for this suspicious clumping of p-values or effect sizes across a whole lot of different methods and disciplines, and what did you find?
Eva Vivalt: Yeah, actually the situation seemed a lot better for RCTs than non-RCTs, which is kind of understandable if you think about it, because I think RCTs generally have an easier time getting published these days anyway.
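The bunching just described can be checked with a caliper test in the spirit of Gerber and Malhotra: under a smooth distribution, narrow bins just below and just above 1.96 should hold roughly equal mass. This sketch fabricates a literature with a little excess mass nudged over the bar and then tests for it; the construction is purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# A fake literature: a smooth distribution of |t|-statistics, plus a handful
# of results pushed just over the significance threshold.
t_stats = np.abs(rng.normal(1.0, 1.0, 2000))
t_stats = np.concatenate([t_stats, rng.uniform(1.96, 2.06, 40)])

def caliper_test(t, threshold=1.96, width=0.10):
    below = int(np.sum((t >= threshold - width) & (t < threshold)))
    above = int(np.sum((t >= threshold) & (t < threshold + width)))
    # Under smoothness, a result landing in the combined band is roughly
    # equally likely to fall on either side; test for excess mass above.
    p = stats.binomtest(above, above + below, 0.5,
                        alternative="greater").pvalue
    return below, above, p

print(caliper_test(t_stats))  # excess just above 1.96 -> small p-value
```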
It could be reflecting that you don't need to engage in specification searching if you've got an RCT, because people are more likely to publish your results anyway, even if they're null. The other thing is that things do seem to be changing a little bit over time. In particular, the non-RCTs, as time goes on, become more and more significant, as it were. Let's not lean too hard on this explanation, but it could be that in the old days, maybe you would lie and say, "Well, I've got a non-RCT and it found a value of 1.97." People would be like, "Oh, okay. 1.97, I believe that." And nowadays if you see 1.97 everybody's like, "Wait a second." So now you'll see values that are more like 2.1 or something, right? Values that are a little bit farther out there and more significant.
Robert Wiblin: I see. Okay, so you're saying that because people have learned that this is kind of an indication of specification searching, people have to go even further and find specifications that get them an even more significant result, so it doesn't look suspicious.
Eva Vivalt: Yeah, maybe, yeah. That would be the intuition. Again, I can't 100% say, but it would be consistent with that at least.
Robert Wiblin: It sounds to me like you've been doing quite a lot of work on this Bayesian approach. Looking into priors and updating based on those. Does it feel like development economics is becoming more Bayesian? And is that a good thing?
Eva Vivalt: You know, actually, honestly, I believe it is, and that's really exciting. These days I don't have to worry quite so much about … I'm definitely hardcore Bayesian, and I think that it's a little bit easier for me to talk about things that rely on a Bayesian interpretation.
Robert Wiblin: Do you think there are any downsides to the Bayesian method being applied more often? I guess one thing I worry about is people kind of fiddling with the priors in order to get the outcome that they want. Or perhaps there's a bit more flexibility and there's more possibility for specification searching.
Eva Vivalt: Hmm. Honestly, we're probably not going to go down the route of being … I don't see the discipline becoming fully Bayesian any time in the near future. I just don't see the likelihood of that. What I do think, though, is that … so it is true that what researchers do and what policy-makers do could be a bit different. It might be fine for them to be different. I've heard the argument that researchers should be very concerned about getting unbiased estimates, and policy-makers … there's this bias-variance tradeoff that I actually care very passionately about, and that others care passionately about as well, I believe.
Robert Wiblin: Did you want to explain what that is?
Eva Vivalt: Sure. The bias-variance tradeoff is essentially saying that you've got several sources of prediction error. You've got some error due to possible biases, you've got some error due to variance, and you've got some other idiosyncratic error. And this is something that is generally true in all contexts, in all ways, and comes up in different ways. An example is nearest neighbor matching: if you want, you can include more neighbors, and if you include more neighbors you've got more observations, so you've got more precise estimates. Like lower-variance estimates. But on the other hand, if you're including more neighbors, you've got some worse matches. So you're increasing your bias. And so all estimation approaches are going to have some error due to bias and some error due to variance.
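The nearest-neighbor example makes the tradeoff easy to simulate: as k grows, variance falls but bias grows, and total prediction error is minimized at an intermediate k. Everything below, the target function, the noise level, and the design, is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_error(k, sims=2000, n=200, noise=1.0, x0=0.25):
    """Bias^2 and variance of a k-nearest-neighbor prediction at x0."""
    errors = []
    for _ in range(sims):
        x = rng.uniform(0, 1, n)
        y = np.sin(2 * np.pi * x) + rng.normal(0, noise, n)
        nearest = np.argsort(np.abs(x - x0))[:k]      # k closest "matches"
        errors.append(y[nearest].mean() - np.sin(2 * np.pi * x0))
    errors = np.array(errors)
    return errors.mean()**2, errors.var()

for k in [1, 5, 25, 100]:
    b2, v = knn_error(k)
    print(f"k={k:3d}  bias^2={b2:.4f}  variance={v:.4f}  total={b2 + v:.4f}")
```

With these settings, k=1 is nearly unbiased but very noisy, while k=100 averages over poor matches and picks up substantial bias; total error is lowest in between, which is the tradeoff being described.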
And economists have focused really narrowly on producing unbiased estimates, and if all you care about is prediction error … I know Andrew Gelman takes this view, and so do I, and so do other people, like Rachael Meager, I think. We're like, "Well, hang on, why do we care just so much about getting unbiased estimates?" You also care about having precise estimates, too. For prediction error, it would help to maybe accept a little bit of bias. And the argument I've heard is that maybe researchers should be unbiased, but for policy-makers interpreting the evidence, it's okay to accept a bit more bias there. Maybe you don't need every person at every layer to be reducing prediction error as much as possible. I think that in practical terms, if you're an effective altruist, et cetera, you do care about minimizing prediction error regardless of the source. But then it's a slightly separate question to say what researchers should be doing per se.
Robert Wiblin: So I'll stick up links to both Andrew Gelman's blog and a description of the bias-variance tradeoff. As I understand it, you're saying that there are different statistical methods you could use that would be systematically too optimistic or pessimistic, but would be more precise, is that right? And in general, people go for something that's neither too optimistic nor too pessimistic, but is not as precise as it might be. It has larger average mistakes, and it's just not clear why we've chosen that particular approach.
Eva Vivalt: Yeah. So, there's a nice diagram that you can throw up if you're putting links to things, which shows the bias-variance tradeoff really, really nicely, I think. You've got prediction error on one axis, and you've got different curves of error for if you've got biased estimates or if you've got estimates with high variance, low precision. Your total prediction error is going to be some function of both of these things, as well as some other error. And economists have focused really quite a lot on getting unbiased estimates. You would think that if anywhere this consideration might come up a little bit, it's in the process of using machine learning, because there there are a lot of techniques that are biased that people accept. Like lasso or ridge regressions and all sorts of other things. But even there, if you talk to people who are actually involved with these kinds of methods, they're highly focused on getting unbiased estimates so that the rest of the profession accepts them, which I think is kind of a shame in some regards. But again, I want to be a little bit agnostic, because I'm not 100% sure actually myself what is the best way of going about it. I just feel that at least at the time of making a policy decision, we should be minimizing overall prediction error regardless of the source of that error. Whether it's bias or variance. I'm not sure what the researchers should do. That's, I think, like I said, a slightly separate problem. But I do think we're not paying attention to prediction error as much as we should.
Robert Wiblin: Alright. Let's turn now to some of the implications of this work, and some research that we've done, for people involved in the effective altruism movement. So we wrote this article, "Is It Fair To Say That Most Social Interventions Don't Work?" Ben Todd worked on that and put it up last year. It's one of the articles on our site that I like the most, I think, out of all of them.
And the reason we looked into it is that in a lot of our talks, for many years, we've been saying that most social interventions, if you look at them, don't work. On the basis of looking at lots of randomized controlled trials and seeing that most of them seem to produce null results: the interventions that they're looking at don't seem to be helping. But then we had some doubts about that, because we were thinking, "It's possible you're getting false negatives, for example, and it's possible that an intervention works in some circumstances and not others." So, is there anything that you want to say about that article? We could walk through the various different moves that we make and then try to reach a conclusion about it.
Eva Vivalt: Yeah, it's a really difficult question, because, like you say, there are lots of things that go into it. Null results could just be underpowered. The other big thing is that, unfortunately, we tend to do impact evaluations in some of the better situations in the first place, and this would work in the other direction. Like, so many impact evaluations just fall apart and never happen, and we don't actually observe their outcomes because the study just fell apart. So yeah, it's hard to say, to be honest, but happy to walk through-
Robert Wiblin: Sure, sure. Okay. So one of the things is: only some interventions are ever evaluated, and they're probably ones that are better than others, because you would only bother spending the money on an RCT if it looked really positive. Do you have any sense of how big that effect is?
Eva Vivalt: Honestly, I don't, but I will say that there have been some people looking at the impact evaluations that don't end up happening. David McKenzie and some other people were trying to pool together some estimates of this. And I think that problem is actually quite large. It's a little bit distinct from the problem that we only try to study those things that have some chance of being really highly effective. It's also that even within a particular topic that is highly effective, or that we suspect is highly effective, the ones that end up happening are the better instantiations of that particular program. Like, the government in that particular area had it more together, or whatever else. So we're getting biased estimates that way as well.
Robert Wiblin: Okay. So we kind of start with this quote from David Anderson, who does research in this area, and he says it looks like 75% of social interventions that he's seen have weak or no effects. And this suggests that it might even be worse than that, because there are all of these programs that aren't even being evaluated, which are probably worse. Maybe it's 80 or 90% of social interventions that have small or no effects. But there are other things that we need to think about. So, there are lots of different outcomes that you could look at when you're studying an intervention. You might think, "You've got this change in a school; should it be expected to improve their math scores or their English scores or how much they enjoy being at school?" All of these different things. Which I guess pushes in the direction of being over-optimistic, because the papers that get published can kind of fish for whichever one they found a significant effect on. But even if we were honestly reporting the results, it then just becomes kind of unclear which were the things that you expected to have an effect on anyway. It just makes it quite confusing.
What actually are we saying when we say 75% of things have weak or no effects? Was it just a primary effect, or on many of them?
Eva Vivalt: Yeah. That's totally fair, because oftentimes a study will throw in all sorts of random other things that they don't actually honestly anticipate there being an effect on. But if you're doing the study anyway, why not?
Robert Wiblin: Yeah, yeah, yeah. So it turns out that this change at the school didn't make the students happier; would you expect it to anyway? Maybe they were just curious about that. So it's really unclear what you're sampling across. Then there's this issue of: "no effect or weak effects" is often how this quote is given, but then what is a weak effect? That's just kind of a subjective judgment. Is it relative to the cost? Is it relative to the statistical significance? Is it material? Again, that just kind of muddies the water, and if you think about it, it becomes a much more subjective kind of claim. Do you have anything to add to that?
Eva Vivalt: Not really. I mean-
Robert Wiblin: Does this come up in your own research?
Eva Vivalt: I mean, to me, the important question is actually, in some ways … I realize that obviously for the purposes of this post that you've put together with Ben Todd, et cetera, the question of which interventions have any effect whatsoever is really interesting. But I would think that another important question is which ones matter relative to some outside option. I guess "which matter at all" is a good question, but I always think about what the outside option is, and what the outside option is really matters. So when you were talking about weak effects, yeah, probably they are talking about statistical significance, but you can also think of weak effects as, "Sure, it has an effect, but so what? We can do so much better."
Robert Wiblin: Mm-hmm (affirmative), yeah. And then, I think the part of the article that you helped with was moving from talking about individual studies, where very often you get null results, to meta-analyses where you combine different studies. And then more often, I think, you find that an intervention works, at least on average. Do you want to talk about that?
Eva Vivalt: Yeah. If you've got some underpowered studies, then combining them does tend to improve the situation slightly. It depends a little bit on exactly how you're doing it and what kinds of things you're including, but I'd say by and large you do end up with … because you're essentially adding some power when you do a meta-analysis, by at least partially pooling results from different studies.
Robert Wiblin: And so you can pick up smaller effects.
Eva Vivalt: Yeah.
Robert Wiblin: Which means that, I guess, more of them become … like, just jump over the line of being positive or material or observable.
Eva Vivalt: Becoming significant, not necessarily-
Robert Wiblin: Statistically.
Eva Vivalt: Yeah, exactly. It could be a very small effect, but …
Robert Wiblin: Well, there are a bunch of other moves that we make here, or adjustments up and down, but what we were trying to get at is how much of a gain you get by picking the best interventions, or trying to be evidence-based, rather than just picking something at random. And I think the conclusion that we reached after looking at all of this is that it's perhaps not as much as people who are extremely supportive of doing more empirical work might hope. Because, one, the measurements are somewhat poor.
So there’s a good chance often of you think that you’ve the best intervention from a pool but in fact you’ve gotten it wrong. But also that even if there’s like a small fraction of the interventions that you might be sampling from that are much more effective than others, even if you choose at random, you still have a reasonable chance of picking one of those anyway. Which means that, let’s say that there’s like 10 different interventions and only one of them works. If you pick at random, you can’t do worse than a tenth as well as definitely picking the best one because you have a one in ten chance of picking it anyway. Which I guess is perhaps something that I think effective altruism hadn’t thought as much about. We often tended to compare the very best interventions with the very worst ones, but it’d be a very peculiar strategy to try to find the very worst ones and do those. Instead you should really compare your attempt at picking the best intervention with kind of picking at random among things that have been studied. In which case the multiple ineffectiveness that you get probably isn’t going to be huge. Do you have any comments on that? Eva Vivalt: Yeah. I mean this is a little bit similar to when I was trying to look at like how much we can learn from an impact evaluation. I had to make assumptions about what that outside option is that the policy makers are considering. And just sort of based on the distribution of effects that I saw in AidGrade’s database, it’s actually reasonable that a lot of these projects, a lot of interventions have got somewhat similar effect sizes, at least without taking cost-effectiveness into consideration. Obviously I’d love to take costs into consideration but it’s very hard to because like 10% of studies say anything about costs and then it’s not very credible when they do say it. But things were pretty tightly distributed. So I tried some different specifications. Like I was saying, trying out 50% of the effect of another program or 90% of the effect of another program, like how well can you distinguish between two programs, one of which is 90% of the value of the other one, as it were. You have to make some pretty strong assumptions there. Things do seem to be … so, I don’t know. That’s how I’ve gone about it in the past. Robert Wiblin: Things seem to be fairly clumped together, you’re seeing? Eva Vivalt: Well, out of the ones in AidGrade’s database, and again without taking costs into consideration. I’m not trying to make a broader claim than that because there’s just no data. Robert Wiblin: Right. Okay, so I was just about to bring this up next, which is like four years ago or so, Robin Hanson responded to one of your graphs from AidGrade, which seemed to suggest that if you looked at effect sizes in terms of standard deviation improvements then you kind of found a normal distribution of effect sizes and it wasn’t that widely dispersed, as you’re saying. And he was saying “well this was a bit in conflict with the standard line that people in effective altruism give, which is that there’s massive distributions in how cost effective different approaches are. That it’s not just normal, but it’s lognormal or power law distributed, or something like that. Which gives you much greater dispersion between the best and the average and the worst.” Did you ever respond to that? Because I think we ended concluding it might be a bit of a misunderstanding. Eva Vivalt: I think that, yeah … so there’s two things that are certainly not included. 
One thing I just alluded to is costs. That's saying nothing about the cost-effectiveness of a particular intervention and I would love to have been able to produce those graphs for the cost-effectiveness. But, like I say, the thing is that papers just don't report costs, and they should. But they don't. So it's really hard for me to come in as an outsider to each of these papers and say, "Oh yeah, but actually I know what the costs are." One could make strong assumptions about those and try to infer what costs are from other studies, et cetera, but it's quite hard to do and not very credible. So I'm sure one could do it, but probably not in an academic setting. I haven't been pursuing it but I would love for other people to pursue it and I'm sure that other people are pursuing it.

Robert Wiblin: Well the other thing, if you want to move to cost-effectiveness you also have to think about the actual welfare gain from the different improvements.

Eva Vivalt: Exactly. So, that's the other thing I was going to say, is then how can you actually value these outcomes? Because the outcomes are pretty … they don't have intuitive value to them, right? How do you value an extra year in school versus a centimeter of height, right? How do you think about that kind of thing? What does that actually mean in terms of value? So then you need some additional mapping from the outcomes to something that we value.

Robert Wiblin: So, yeah. Is it possible that we start with this normal distribution of standard deviation changes and then, because costs per recipient are so wildly distributed and the benefits per standard deviation improvement are so wildly distributed, you still get very wide dispersion in the cost-effectiveness of different interventions?

Eva Vivalt: You could do.

Robert Wiblin: Mm-hmm (affirmative), you could. Yeah.

Eva Vivalt: I just have not a very clear sense of that because I don't have a clear sense of the costs.

Robert Wiblin: Okay, it's not just positive. Other people could look at this and try to figure that out.

Eva Vivalt: Yeah, yeah, and I really hope somebody does.

Robert Wiblin: I guess there's also the Disease Control Priorities Project, of course, has produced cost-effectiveness estimates for lots of different health treatments and finds that they're extremely widely dispersed. But I think that their resourcing per intervention that they're looking at isn't so good, and very often they rely on modeling rather than empirical results, which might be causing them to overstate the variance because some of it is just mistakes on their part.

Eva Vivalt: I see. Yeah, no, I've heard a little bit about that. That makes a lot of sense. I think that one thing that is certainly necessary and I hope happens in the near future is some attempt at also adding values to these other things that we might care about. Like all the educational stuff, et cetera, to sort of be able to compare them with health interventions, et cetera. Because the same kind of way that they do the disability adjusted life years, et cetera, they could do for some kind of more general well-being.

Robert Wiblin: Right, yeah. So I really want to try to pin you down a little bit on how valuable is being empirical? Because it seems like you've got some positive results and some negative results, you've got the generalizability doesn't seem so good so can we really learn so much? On the other hand it looked like some of your research suggests that in fact most of the results that show positive effects are kind of right about that.
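[Note: the dispersion question Wiblin raises here is easy to illustrate by simulation, again with hypothetical numbers: even if effects per recipient are tightly clustered around 0.15 standard deviations, dividing by widely varying (here lognormal) costs per recipient produces a heavy-tailed distribution of cost-effectiveness.

import random

random.seed(0)
n = 10_000
effects = [random.gauss(0.15, 0.05) for _ in range(n)]       # tightly clustered
costs = [random.lognormvariate(3.0, 1.5) for _ in range(n)]  # dollars per recipient
ce = sorted(max(e, 0.0) / c for e, c in zip(effects, costs)) # effect per dollar

median = ce[n // 2]
p99 = ce[int(n * 0.99)]
print(f"median = {median:.5f}, 99th percentile = {p99:.5f}, ratio = {p99 / median:.0f}x")
# Even with near-normal effect sizes, the 99th-percentile intervention
# comes out tens of times more cost-effective than the median one, purely
# because the assumed costs are so dispersed.
]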
And then we've got to consider I guess the cost of doing these different studies and whether people actually respond to it in government. Did you have … you've been working in this area for five or ten years now, have you updated in favor of empirical social science or against it?

Eva Vivalt: I think it's the only game in town to be honest. As much as we may criticize some of the things that come out of standard research, I guess the only answer in terms of what to do next is more of the same. And with some improvements, but more is better. And I think people are a little bit more aware of and focused on addressing some of the limitations in past research, both in terms of — people are thinking more now about the differences in scale up. People are thinking a bit more now about how results actually feed into the policy process. So, for me I think there's incremental change, but I'm certainly pro-empirical work because what's the alternative? It's not-

Robert Wiblin: Well, I think there are alternatives. One is, as you were saying, just survey people on their expectations about what works, even before you've run any studies. And it could just be that that gets you a lot of the way and it costs very little, so maybe we should just do that and then screw the RCTs, or only do them occasionally.

Eva Vivalt: I don't want to rule out the possibility that we can learn something from … I think we can learn more using observational data, which priors would also be similar to. And I don't want to rule out that we can learn something from those, I just, maybe this is just a matter of semantics. Like I would still consider that, in some sense, empirical work because what you could do is try to say, "well yes but I want to try to"-

Robert Wiblin: A systematic survey.

Eva Vivalt: "Figure out which are the … yeah. Figure out the situations in which this is actually relatively okay," and then it's some approximation strategy that's still not quite valid but better than
NECA's BEST GODZILLA FIGURE! I own a lot of Godzilla figures and I can honestly say, NECA put a lot of time and effort into this figure. It's also one of my favorites because I grew up watching this classic movie. The skin, claws, fins and teeth are finely detailed and really make the figure come to life. Not only is he film-accurate, he's able to articulate pretty well and isn't just a statue. Overall I would definitely recommend this figure to any classic Godzilla fans out there who are looking for a film-like representation from the classic Godzilla 1954 movie. Verified purchase: No
Heathers the Musical is set to transfer into the West End following a sold-out run at The Other Palace. The show, based on the cult 1988 teen movie starring Winona Ryder, opened in London in June. Carrie Hope Fletcher plays Veronica, a high school nobody who falls in with elite girl gang the Heathers and a misfit called JD. It will open at the Theatre Royal Haymarket on September 3, with Fletcher reprising her role. Jamie Muscato will also return to play JD, the role made famous by Christian Slater in the original film. Despite not having a press night, the production had to introduce a ticket lottery after all tickets for the run sold out. Laurence O'Keefe and Kevin Murphy, the songwriting team behind the show, previously worked on Legally Blonde the Musical and the TV series Desperate Housewives respectively. Work began on the Heathers musical in 2010, with an off-Broadway production taking place in 2013. Heathers will run at Theatre Royal Haymarket from September 3 to November 24.
To learn more about our company's namesake, Nikola Tesla, visit his entry on Wikipedia, or "Tesla: Master of Lightning" at pbs.org. Why the Name "Tesla"? The namesake of our company is the genius Nikola Tesla, an inventor, electrical engineer, and scientist. Among his life's many inventions (and more than 700 patents) are the induction motor and alternating-current power transmission. Without Tesla's vision and brilliance, our car wouldn't be possible. We're confident that if he were alive today, Nikola Tesla would look over our 100 percent electric car and nod his head with both understanding and approval. A Bright Moment to Honor the Man Who Lit the World: The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has declared 2006 the Year of Nikola Tesla, in celebration of the 150th anniversary of his birth. We are delighted that the world unveiling of the Tesla Roadster fell on July 19, 2006 — just days after his birthday. "Were we to seize and eliminate from our industrial world the result of Mr. Tesla's work, the wheels of industry would cease to turn, our electric cars and trains would stop, our towns would be dark, and our mills would be idle and dead. His name marks an epoch in the advance of electrical science." — B. Behrend, Vice President of the Institute of Electrical Engineers
If there is one minister who is being targeted by the media it is Smriti Irani. She is being simultaneously vilified as an intellectual lightweight, an RSS plant, a Modi favourite, an Ambani agent and worse. But why this special hatred for Irani? Here I explain why Irani is facing a combined onslaught from almost all sections opposed to the BJP. Academic credentials and astrologer controversy: Suffice to say that Irani had cleared the air on her degree from Yale almost immediately. Her words were torn and twisted and she was harangued as a liar. She is being portrayed as a bimbo who has been airlifted into a high-profile ministry with little administrative experience and without a typical Oxbridge education. In fact, this, clubbed with her previous stint as a model and actress in television soaps (shown as regressive by intellectuals), has made her an object of ridicule. Derisive comments on her past by paragons of liberalism are common. The latest photograph of her consulting an astrologer has added to her image. Almost all politicians consult astrologers. It is a reality, and flippant comments about an HRD minister believing in mumbo jumbo don't wash. This is because paramount rationalist and secular-e-hind Shri Arjun Singh himself was a big believer in astrology. Previous HRD minister Kapil Sibal too was associated with PV Narasimha Rao's astrologer NK Sharma. Not too many comments by media personalities then! It is obvious standards are different for BJP ministers and Congress ones! Gandhi plot: Most in the media are closet Gandhi family sympathisers. Not only are they feudal in thinking but also dependent on the crumbs the family has thrown at them in the past. Some are also enamoured by their star power, just as many of us get star-struck when we meet famous Bollywood actors. (I confess to meeting Govinda in a lift once and not being able to muster the courage to ask him for an autograph. Indeed I was almost bowing!) Smriti Irani waged a spirited battle in Amethi and almost tripped up Rahul Gandhi. Since her loss she has adopted Amethi and has been nursing the constituency, showing her seriousness in sticking to the area. She would be a made-for-TV opponent to Priyanka Vadra should she launch herself in the next elections. This has quite obviously rattled the Gandhi family and, therefore, Irani is bearing the brunt of covert opposition attacks. The Gandhi family has ordered surreptitious hit jobs on her, since any direct attack on her would be charged as motivated. For example, there were some journalists and media commentators who were poking fun at her for distributing 12,000 sarees in Amethi. First, almost all MPs distribute such gifts in their seats. Second, compare and contrast her saree distribution programme with that of Azam Khan. Third, it is my contention that had Priyanka Gandhi distributed such gifts there would have been adulatory by-lines on "Didi giving return gifts" etc. Hitjob through innuendo: In the Lok Sabha campaign Modi made his brother-sister relationship with Irani known in public. She too made no bones about the fact that she was grateful he had given her a second chance to prove herself. Malicious voices in the media started disgusting rumours regarding their pure relationship. People who claim to be liberal in public showed themselves up when they questioned how a man and a woman could work closely together without any illicit relationship. The cat was set among the pigeons by Modi hater-turned-fan-turned-cautious-critic Mrs.
Madhu Kishwar, when she alleged the worst of relations between the Prime Minister and Irani. It has been quite obvious to many of us that Kishwar has some deep personal grudge against Irani and, as has been proven by her frequent outbursts, Kishwar made the allegation in an unstable frame of mind. However, this was a signal to more mainstream critics to taint by rumour. Almost all female politicians have had to face such dirty tricks. So there is nothing new in the manner in which such a campaign is being conducted against Irani. What is repulsive, however, is that Irani has a family who too are affected, unlike many other successful women politicians, and the fact that the Prime Minister's personal life has been above reproach in such matters. Left-liberal and media caucus: The HRD ministry is of special interest to media people and their close friends in the academic world. This is because most of them are either related to each other or developed friendships and bonds in college and university days, as media persons and academicians tend to be the intellectual ones in colleges. They are dependent on the HRD ministry for their sinecures and retirement postings. Visiting faculty appointments, tenures, VC-ships, scholarships etc. can be directly influenced by the HRD ministry. Most importantly, future minds can be altered through education. So it is clear anyone appointed as HRD minister would have been vilified almost immediately. The RSS is trying to correct the anti-Hindu, anti-nationalist literature that is passed off as intellectual output in our education system. For the BJP, governance is not just about grabbing power but about shaping India. Education is central to this effort. Irani is a casualty of this tug of war between the nationalists and the left-liberal caucus. Proof of the media's over-focus is the countless columns on fringe elements like Dinanath Batra and on the PM's comments in jest about plastic surgery. More serious is the effort to paint Sanskrit as a communal, outdated language. Equally serious is the exertion in defending the discretionary imposition of German as a third language rather than a more practical foreign language like Spanish, Mandarin or even Japanese. Contentious issues: The HRD ministry deals with contentious issues like language policy, education programmes, universities etc. These are closely debated and hotly contested by state governments, students, bureaucrats, political parties and, last but not the least, parents. These issues are so divisive that no national consensus has emerged on even one of them in the last 60 years. An HRD minister tries to keep these issues on the back burner and takes decisions on less contentious policies while hoping that these issues will get resolved by market forces. Unfortunately, with a hostile media Irani has not had such luck. Consequently, everything she has done is marked by controversy and the media too rakes up hot issues to put her on the back foot. A whisper campaign has started across staff rooms and corridors of educational institutes that she is not up to it. The successful implementation of the PM's talk on Teachers Day too has rattled leftists, who fear an entire generation will come up with little reverence for Nehru and Indira Gandhi and the name of Modi on its lips. Add to this the heavy presence of left-friendly bureaucracy in the ministry and we know why almost all decisions are leaked beforehand.
It will take time to clean up the ministry of vested interests who are not implementing the government's programmes faithfully. For more information on how the media is selectively playing up controversies, please read articles by Kartikeya Tanna, who has emerged as a truth-o-meter on all matters contentious. Cabinet infighting: As a loyal supporter I hesitate to bring this up, but as Atal ji had said, "Kinchit nahin bhaybhit main, Kartavya path par jo bhi mile, Yeh bhi sahi woh bhi sahi". Smriti Irani's rise in politics has been virtually vertical. This has obviously created many rivals within the party who want to pull her down. Moreover, a weighty ministry going to a first-time MP has rankled many who fear she could become the senior-most woman leader in the party. However, I am less concerned about this because Modi is known for his no-nonsense approach to infighting. A necessary condition to be on Team Modi is willingness to work in a team. To conclude, it is now clear why Smriti Irani is being targeted. This blog is for those right-wing supporters who were wavering in their support for her. Hopefully, it will now be clear why she is the victim of a plot rather than of any inefficiency of her own.
A van has ploughed into the crowds walking along Barcelona's Rambla, killing 13 people and injuring more than a hundred in an afternoon of panic in the heart of Barcelona. The Catalan police confirmed that it was a terrorist attack and that it was connected to the explosion at a house in Alcanar (Tarragona) in the early hours of that morning. The van rented by the terrorists began its run where the Rambla meets Plaça Catalunya and covered 600 metres, as far as the Joan Miró mosaic (Carrer Hospital), running down everyone in its path and zigzagging to cause as much harm as possible. The president of the Generalitat, Carles Puigdemont, said two people had been arrested in connection with the terrorist massacre. Although Puigdemont announced 12 deaths, the SEM emergency service updated the figure to 13 people killed. The terrorist attack carried out with a van has become, for now, the second deadliest in Barcelona after the Hipercor bombing of 1987. The toll, according to Emergències, is 13 dead, 15 seriously injured, 23 less seriously injured and 48 slightly injured. Joaquim Forn, the Catalan interior minister, confirmed the 13 deaths and more than a hundred injured, counting those who made their own way to hospital, which raised the initial count of around eighty. Major Josep Lluís Trapero, the head of the Mossos d'Esquadra, reported the two arrests and said that neither of those detained was the driver of the van, although both are "directly" linked to the attack. According to his account, the driver abandoned the vehicle near the Liceu after the attack, and there is no evidence that the person who got out of the van was visibly armed. Trapero said the attack was connected to another incident in the early hours of Thursday, the explosion of a house in the town of Alcanar. There are "clear" links, he said. The police spoke of a terrorist cell and suspect that the fugitive driver was one of the three people involved in the Alcanar explosion, where some twenty butane cylinders were found. Mariano Rajoy announced three days of official mourning and said the Anti-Terrorism Pact would soon be convened. The head of the Mossos said the two detainees are from Melilla and Morocco. He also confirmed that the Moroccan is Driss Oukabir Soprano, resident in Ripoll. According to the Efe news agency, Driss Oukabir spent a month in Figueres prison in 2012, accused of sexual abuse. A couple of hours after the attack, a car ran over two Mossos d'Esquadra officers at a checkpoint on Avinguda Diagonal heading out of Barcelona, police sources said. But Josep Lluís Trapero ruled out any link between that incident and the attack on La Rambla. The driver fled and later died, in San Just Desvern, of causes yet to be clarified, although the police ruled out gunshots. Two people are under arrest in connection with the attack, but the driver of the van remains at large. Driss Oukabir (Facebook). The van entered the pedestrian walkway at the top of the Rambla and drove down the central promenade, where, from the Font de Canaletes onwards, it began mowing down the people walking along the great pedestrian boulevard. The vehicle was abandoned at the Pla de la Boqueria. Moments earlier it had hit one of the kiosks. The vehicle used to carry out the attack was rented.

The attackers had reportedly hired two vehicles and planned to flee in the second one: a white Fiat van with the registration 7082JWD, picked up in Santa Perpètua de la Mogoda like the one used in the attack, and which later turned up in the city of Vic. In an institutional statement at nine in the evening, the president of the Generalitat, Carles Puigdemont, counted 12 dead and 80 injured admitted to various hospitals, 15 of them in a serious condition. The Catalan leader said the number of casualties could change because this was an attack of "very serious dimensions" that had shocked the whole world. He also confirmed two arrests linked to the attack. A few hours after the attack, Islamic State claimed responsibility in a statement published by the Amaq agency, which has links to the terrorist organisation. The Spanish prime minister, Mariano Rajoy, arrived in Barcelona at eleven at night and announced a decree of three days of official mourning. "We are united in grief and in the will to put an end to this senselessness and barbarity," said Rajoy, who added that he would shortly convene the Anti-Terrorism Pact. According to the mayor of Ripoll, Jordi Munell, speaking to La Vanguardia, a man called Driss Oukabir Soprano, whose photo and name match those of one of the men identified as alleged perpetrators of the attack, presented himself voluntarily at a police station, where he explained that his documents had been stolen and said that at the time of the attack he was in Ripoll. That version was denied by the head of the Mossos. At the moment of the attack La Rambla was packed with people, at the height of the tourist season. A hundred million visitors walk the Rambla every year. "The van came down the middle, sweeping away everything," one witness explained. The injured were taken to hospitals across the city's network. Ambulances kept moving through the surrounding streets towards the scene for more than an hour, with some forty units deployed. After the attack, police evacuated and sealed off the area, which was overflown by helicopter. The Cronos protocol was activated, an unprecedented deployment, with police posted at all the main exits from the city to catch the perpetrators. The entire Rambla remained closed, from Plaça Catalunya to Passeig de Colom, with the streets empty and all the nearby metro and train stations, such as Catalunya, Liceu and Drassanes, shut. The central sections of metro lines L2 and L3 were closed. At midnight the area was reopened to pedestrians. People were advised not to go to the area, which was completely sealed off. Moments after the attack, police went through the shops, restaurants and hotels of the area, ordering owners to lower their shutters and let no one out into the street. Dozens of badly frightened customers shut themselves inside the premises, according to accounts gathered by LaVanguardia.com. A couple of hours later, at around 7.30pm, their evacuation began. Mossos officers and several bystanders attend to an injured person on the Rambla in Barcelona (LV). Other businesses acted sooner and closed up when they saw police running and pedestrians screaming. Shops on other nearby streets, such as Portal de l'Àngel and Passeig de Gràcia, also closed with customers and staff inside.

Witnesses in the area said the crowd on the Rambla began screaming and running towards Plaça Universitat. People who approached the scene said that "people covered in blood" could be seen. Emergències advised people not to go out into the streets of the area. The interior minister, Joaquim Forn, said that by midnight the evacuation of the people confined in shops near the site of the attack was complete. Barcelona City Council said in a statement that it had activated the municipal Basic Emergency Plan for incidents with multiple victims. All emergency services were activated and working on the case, under the direction of the newly constituted Coordination Centre. Barcelona's urgent and emergency care centre (CUESB), which provides psychosocial care to victims of various kinds, moved to the Rambla to attend to those affected. The events scheduled for Thursday as part of the Gràcia festival were suspended, the interior minister, Joaquim Forn, announced. The mayor, Ada Colau, deputy mayors Jaume Collboni and Janet Sanz, and the security commissioner, Amadeu Recasens, had already travelled to the coordination centre.
RUTH Davidson was paid £7500 for a single night's work, she has revealed. The former Scottish Tory leader received the money for appearing as a political pundit on ITV's general election night coverage in December. The SNP called on her to hand the money back, otherwise she would be "laughing all the way to the bank" while her party's "heartless" policies forced families into poverty. At the election, the Scottish Tory manifesto boasted Boris Johnson's government was raising the National Living Wage to £10.50 an hour by 2024, around 1 per cent of Ms Davidson's hourly rate that night. The NLW is currently £8.21 for over-25s and is due to rise to £8.72 in April. The fee, which was rumoured to have deterred other broadcasters from inviting her on, has now been declared in Ms Davidson's Holyrood register of interests. The latest entry states: "On 24 January 2020 I was paid a £7,500 fee by ITV (of 200 Grays Inn Rd, Holborn, London WC1X 8XZ) for participating in the network's election night coverage. [Registered 12 February 2020]". The Herald on Sunday revealed in November that Ms Davidson had been offered an "unprecedented" sum to appear as a pundit on ITV. She was also approached by the BBC, but the corporation was unable to match ITV's offer and backed off. Ms Davidson refused to say at the time how much she was being paid, but has now been obliged to disclose the sum under Holyrood propriety rules. Last year Ms Davidson was criticised for trying to start a new career as a £2000-a-day consultant to a City of London PR firm while still an MSP. She pulled out of the job with Tulchan Communications within days after a ferocious backlash at Holyrood. Ms Davidson, 41, who earns a basic salary of £63,579 as the MSP for Edinburgh Central, is standing down at the next Holyrood election in 2021. She is expected to be elevated to the House of Lords like her predecessor as Scottish Tory leader, Baroness Annabel Goldie. Labour MSP Neil Findlay, who has campaigned for MSPs to be banned from taking second jobs, accused Davidson of bringing politics into disrepute. He said: "She is a mercenary whose sole motivation now appears to be making as much money as possible from her political connections - it brings politics into disrepute." SNP MSP Rona Mackay said: "Ruth Davidson's priority appears to be picking up thousands in outside earnings while neglecting her actual job. "Serving politicians who appear on election night broadcasts do so to represent their party - not to pick up a pay cheque. This payment is unprecedented – and she should now hand it back. "Public outrage might have forced Ms Davidson to u-turn on her attempt to take a job with a lobbying firm – but now she plans to take a seat in the House of Lords where, unfortunately, voters will be denied the chance to vote her out. "While she's laughing all the way to the bank, Ruth Davidson might want to stop to consider the thousands of families forced into poverty because of her party's heartless policies." A Scottish Conservative spokesman said: "Ruth was invited to appear on ITV's election night coverage and agreed. The programme set the fee structure for all of its presenters and pundits, and Ruth had no input into that process."
Montreal city council passed a motion Monday making it the latest Canadian jurisdiction to declare itself a "sanctuary city" for non-status immigrants. The designation means undocumented refugees will have full access to local services regardless of their situation, with the city following in the footsteps of Toronto, Hamilton and London, Ont. Mayor Denis Coderre told reporters he felt compelled to act because of events south of the border. "One of the reasons I've done that is clearly because of what's happening in the United States and what I'm witnessing in Europe," Coderre said. In recent weeks, more and more people have flowed illegally across the U.S. border into Canada as President Donald Trump cracks down on illegal immigration and imposes new restrictions on refugees. Canada Border Services Agency says 452 people filed a claim for refugee asylum at Quebec-U.S. land border crossings in January. Given that current context, several Canadian cities have expressed interest in adopting similar motions, including Ottawa, Saskatoon and Regina. Toronto became Canada's first sanctuary city in 2013. Coderre, a former federal immigration minister, assured the measures will go beyond symbolism and help those who need it the most. Available services would include access to municipal programs and buildings, including libraries and recreation centres, while Coderre said he wants to discuss major issues such as health, housing and education with provincial and federal authorities. "The bottom line is to integrate them," he said. "And if you don't have a criminal case (or pose a security risk), we will normalize your situation. You will be able to remain here." Montreal Mayor Denis Coderre speaks to reporters in February 2016. File photo by The Canadian Press. But some migrant rights groups called the measure largely symbolic as Montreal joined other North American cities such as San Francisco, Boston, New York and Chicago as designated sanctuary cities. A number of groups told a news conference a few hours before the motion passed that while the gesture would be in good faith, it wouldn't provide the tangible changes to make Montreal truly a sanctuary city. "He's coming from a good place, I'm not going to deny that," said Jaggi Singh, a spokesman for Solidarity Across Borders. "But it doesn't go far enough." Singh said the city should at least ensure that Montreal police and transit officials will not collaborate with Canada Border Services Agency and hand over undocumented migrants. Singh said there are countless instances where an arrest on a minor infraction can lead to deportation, while the representative of a sex-workers' rights group told the news conference that undocumented women working at massage parlours are routinely handed over to immigration officials. "Honestly, in many ways, having a symbolic motion can be worse than having no motion at all," said Singh. "What it does is creates a false sense of security and false sense of protection and the moment where the police are deporting people, you destroy any sense of trust." Coderre said after the motion was adopted the city's public security committee would study the matter of how police and transit officials deal with the migrants. Opposition Leader Valerie Plante of Projet Montreal said how police work with undocumented people will be key. "I think this is a great decision, but we have to be cautious not to create a false sense of security for those vulnerable people," she said.
Latitude 360 shut its doors in January after significant financial troubles. Now, a local company is auctioning off everything - even the building. You can head to 10370 Philips Highway on Saturday, June 25 at 10:10 a.m. to try and get some of the items. Related - Jacksonville's Latitude 360 closes; owner settles eviction lawsuit from landlord over rent (1/7/2016) - Landlord sues Latitude 360 for millions in overdue rent (10/20/2015) Some of the items up for auction include the building, bowling balls and shoes, a wide range of commercial kitchen equipment, pots, pans and over 30 flat-screen TVs. An online auction is slated for Sunday, June 26 for the arcade games, the movie theater, stage lighting and more. You can go to proxibid.com beginning at 1:01 p.m. to try and get some of those items. Luman E. Beesley Auctioneers is hosting the event.
The warm spring weather brought more than just humans to the White Rock beach area Thursday afternoon. A pod of five humpback whales were spotted about two kilometres from the White Rock pier. White Rock Sea Tours and Whale Watching offered to take people, including Peace Arch News, out to view the whales, which were feeding in the Strait of Georgia between White Rock and Point Roberts. Resident Susan Barbos Solym, who was spending the afternoon on the waterfront, noticed the whales off in the distance and got a pair of binoculars. "I could see them breaching from the pier, with my binoculars," she told PAN. About five boats, both from the Canadian and American side, took spectators to see the whales feed. White Rock Sea Tours and Whale Watching owner Andrew Newman told PAN that it's the first time he's seen the whales feeding so close to the city. Humpback whales were spotted feeding near White Rock Thursday. (Aaron Hinks photo)
You glance toward Lower Manhattan and expect to see a single tower where two once stood. You delight in the spectacle of sunlight glinting off its slivered facade. Suddenly, you realize, the new 1 World Trade Center — the Freedom Tower — has become familiar. And 15 years after the twin towers disappeared abruptly from the skyline, they have begun to fade from popular consciousness. They once nearly rivaled the Statue of Liberty and the Empire State Building as simple, graphic representations of the complex idea of New York. In movies and logotypes, on knickknacks and letterheads, two parallel strokes meant only one thing. Now, a shaft of slender, alternating isosceles triangles — so simple a child could draw it — is coming to mean the same thing. Campagna & Sons of Brooklyn, which makes boxes for pizzerias around Lower Manhattan and nearby New Jersey cities like Hoboken and Weehawken, carries a Freedom Tower design, in 10-, 12-, 14-, 16- and 18-inch sizes. Instagram currently counts nearly 200,000 posts tagged #oneworldtradecenter. Fishs Eddy, an imaginative housewares store in the Flatiron neighborhood of Manhattan, has introduced the new 1 World Trade Center to its popular “212” line.
Shamshad Ahmad performs rudrabhishek at a temple of Lord Shiva in the city on Monday. ALLAHABAD: Reciting Vedic mantras like the 'Shiva Tandava' and the 'Bagalmukhi mantra' early every morning and seeing patients during the day: that has been 59-year-old Muslim doctor Shamshad Ahmad's daily routine for the last 15 years. A resident of the Rasoolpur locality of the city, Ahmad's devotion to Lord Shiva is an example of the Ganga-Jamuni tehzeeb which Allahabad is famous for. Ahmad often recites the Mahamrityunjaya mantra and performs 'rudrabhishek' every year during the Hindu month of Shravan. On Monday, TOI spotted him performing the rudrabhishek at the Takasaknath temple of Lord Shiva in Meerapur. Passers-by were surprised to see Ahmad pray with folded hands and chant shlokas along with the priests of the temple. Ahmad said that his being a Muslim had nothing to do with his devotion to Lord Shiva or chanting Vedic mantras. "Since my childhood, I was never comfortable with eating non-vegetarian food. In my 20s, I used to shiver at the thought of sacrificing a goat during Bakreid." The homoeopathic doctor said that he was drawn to the Hindu deity after meeting a devotee of Lord Shiva at the Padela Mahadev temple in Phaphamau when he was 35 years old. "I also learnt Ravana was a very learned man and was a devotee of Shiv. I was curious for a long time and then just out of curiosity started reciting the 'Shiva Tandava'," he added. A father of two daughters and a son, Ahmad said, "During Bakrid, my family leaves the city for a couple of days to avoid seeing goats being slaughtered or getting the meat as a gift." It is a part of Muslim tradition to distribute the meat of the sacrificial goat among friends and family during Bakrid. He said that his family plans to visit his eldest daughter, who is pursuing a PhD in New Delhi, this Bakrid. "When will people of my community understand that rather than killing an animal and spending around Rs 10,000-15,000 on a sacrificial goat, they can feed a needy person if they want to perform a kurbaani at all," he said.
Pushing the so-called “Democratic Socialism” is all the rage on the Left, including among some Democrat candidates for the party’s 2020 nomination. However, not all Dems are fully embracing the sales pitch for socialism, including Rep. Stephanie Murphy of Florida: Rep. Stephanie Murphy (D-FL), the first Vietnamese-American to serve in Congress: "The idea that in the greatest democracy, the greatest capitalist system in the world, we’re having casual conversation about socialism, offends me."https://t.co/PXVPW0sGPk — Josh Kraushaar (@HotlineJosh) April 3, 2019 U.S. Rep. Stephanie Murphy, a democrat, is speaking on capitalism from the perspective of someone who grew up in Vietnam. She knows where socialism goes. pic.twitter.com/miGdO49n10 — ?Diana? (@dianathecaswell) April 3, 2019 Paging Bernie Sanders and Alexandria Ocasio-Cortez! Perhaps she can walk down the hall and have a chat with her colleague AOC. https://t.co/ARFzem3ZW5 — m/ -=EdVT=- m/ (@CargoShortLife) April 3, 2019 If only. Rep.Stephanie Martin (D.FL): Murphy – the first Vietnamese-American woman of Congress. Her opposition to socialism is rooted in her youth, growing up in socialist Vietnam. Immigrated to U.S. (refugee) recalls mailing medicine, fabric, bandages, etc. to Vietnam unavailable there. — Gayle McLaurin (@mamamac03) April 3, 2019 The left is going to find itself in a flashback issue of immigrants who came from socialist systems or whose parents did not being on board with their Ivory White Tower theories about how wonderful socialism is. Gonna have a blast with Cubans and Eastern Europeans too — Orange Muppet Energy (@sunnyright) April 3, 2019 She’s done. Will have to re register as a Republican. — Eric (@Ericb1980) April 3, 2019 We’ll have to wait and see if comments like that get Rep. Murphy kicked out of the modern Democrat Party.
On the 31st, Google launched "YouTube Kids", its video-viewing app for children, in Japan. Android and iOS apps are available, and the service is free. Its distinguishing feature is that it filters YouTube for child-oriented video content, so it can be used with peace of mind. Since launching in the United States in 2015, the service has expanded to 28 countries and 7 languages. Four categories are offered: "Anime & Dramas", "Music", "Learning" and "Discover". The Anime & Dramas category lines up Japanese anime such as "Shimajiro" and "Yo-kai Watch", global content such as "Sesame Street" and "Shaun the Sheep", picture-book read-aloud videos and educational videos. The Music category offers, for example, dance videos from "Doubutsu Sentai Zyuohger" and "KiraKira☆PreCure A La Mode", nursery rhymes and theme songs from popular anime, while the Learning category provides videos on reading and writing, English, arithmetic, geography and science. The Discover category gathers videos that stimulate intellectual curiosity, covering vehicles, animals, origami instructions, experiments and more. Child-oriented content from YouTube creators is also available. The app shows a large thumbnail for each video and keeps text to a minimum, producing a UI that is easy for children to use. In addition to text input, videos can also be searched by voice, making it easier even for preschoolers to find video content. Parental management features are included as well: a timer that limits viewing time, a function for blocking videos and channels parents do not want watched, and a switch for turning search on or off. If search is turned off, new content can no longer be searched for, but the app still shows the home-screen videos it has selected and recommendations based on past viewing. The target age for videos shown on the home screen can be set to one of three levels: "Preschool", "School age" or "All kids". An arbitrary passcode can also be set, preventing children from accessing the parental restrictions and other settings. The app supports Chromecast, Apple TV, game consoles and smart TVs, so videos can also be enjoyed on a big screen. According to Malik Ducard, YouTube's Global Director of Family & Learning Content, family-oriented content is one of the fastest-growing categories on YouTube, and since YouTube Kids launched in the US it has recorded 30 billion views and 8 million weekly active users. The app also holds ratings of 4 to 5 in app stores in each country, suggesting it is doing well. Ducard says that today's children "transcend geographic borders and all sorts of other frameworks, and don't perceive boundaries". He adds that they "are extremely curious and tend to learn actively by watching a variety of content and to create their own projects". For that reason, YouTube Kids is an effort not only to provide family-friendly content but also to improve the YouTube experience for children and their families.
Vatican-owned church in Rome opens as homeless dormitory in cold snap. ROME, Jan 13 (Reuters) - The Vatican has allowed a church it owns in a central Rome neighbourhood to open at night as a dormitory for the homeless while a cold snap persists. The 17th century church of San Callisto, which is used for Mass and religion classes for the elderly during the day, has been filled with beds and electric space heaters at night. A statement from Pope Francis' alms-giving office on Friday said the church can accommodate about 30 people. The church is part of a complex of Vatican offices and residences that has "extra-territorial" diplomatic status, meaning it is part of the sovereign state of Vatican City even though it is about a mile away, in the Rome neighbourhood of Trastevere. The program is run by the Sant'Egidio Community, a lay Catholic group of mostly volunteers that runs soup kitchens in the crowded neighbourhood and offers assistance, including Italian language classes, to refugees and immigrants. As unseasonably cold weather hit Rome this month, Pope Francis also ordered that Vatican cars and vans be parked in the neighbourhood around St. Peter's Square at night to shelter the homeless. In the past two years, his alms-giving office has opened locales near the Vatican where the homeless can wash, get haircuts and receive information on where to get medical help.
Dragon Ball Xenoverse 2 is the game that just keeps on giving. Hot on the heels of the Hero Colosseum update, new content is on the horizon! First off, a mysterious new original character will be joining the game! This character is apparently named Fuu, and he seems to be a member of the demon clan, though we can't confirm that at this time. On the bottom right, we see Goku with a Journey to the West-inspired costume, as well as Super Saiyan Hercule from the Dragon Ball Super anime! More details should be here soon with translations, but check out the raw scans for now!
The proposal from the Remuneration Commission (Vederlagskommissionen) for higher salaries for members of parliament, ministers and mayors will come to nothing. According to Politiken, the commission was preparing pay rises of between 15 and 30 per cent. But that is far too drastic, runs the unanimous message from Venstre and the Social Democrats, who have now pulled the emergency brake after both SF and the Conservative People's Party earlier on Wednesday announced, respectively, that they are backing out and that they are deeply sceptical. "I do not see room for pay rises for members of parliament, mayors or ministers. Politicians are now asking for restraint across the public sector in general, and then that must also apply to ourselves," says Venstre's group chairman Søren Gade. Similar notes are struck by his fellow group chairman in the Social Democrats, Henrik Sass Larsen. "I say thank you very much, but no thank you. We do not think the time is right to grant such large increases," Henrik Sass Larsen (S) told TV 2 on Wednesday, after Politiken reported that pay rises of between 15 and 30 per cent were on the table. S: Bound only by common sense. Instead, the Social Democrats want to continue with the regulation of politicians' pay that already takes place today. If the reports of the commission's work hold true, the proposal would be far too drastic, Henrik Sass Larsen believes. "We have just come out of a crisis, and then such large pay rises for politicians are to be presented. We do not want that," says Larsen, who denies being bound by the 'musketeer oath' that the Social Democrats entered into with Venstre, the Conservatives, Radikale Venstre and SF back in 2014. There the parties agreed to vote for whatever the Remuneration Commission put forward, regardless of its content. "You have to take stock of what the commission comes up with. And I am bound only by my common sense. That is what you must listen to," says Henrik Sass Larsen, who has, however, not yet seen the commission's complete proposal. V: You must be able to look at yourself in the mirror. Nor does Søren Gade feel bound by the agreement Venstre entered into before the election, when he himself was not in the Folketing. "My recommendation to Venstre's parliamentary group will be that if the report comes out with pay rises of 15 per cent, Venstre must say no. We would be breaking the musketeer oath made two years ago, but once in a while you have to be able to look at yourself in the mirror. What you say should preferably be consistent with what you do," says Søren Gade. Only one of the parties behind the musketeer oath now remains: Radikale Venstre. Its message is that it is awaiting the final report, but does not intend to walk away from the original agreement. What politicians are paid today: The 179 members of the Folketing receive a salary of DKK 624,887 a year. The prime minister receives DKK 1,470,744.45 a year, while numbers 2, 3 and 4 in the order of precedence of the Council of State receive DKK 1,294,255.12 a year; these are currently the finance minister, the foreign minister and the culture minister. The other ministers receive DKK 1,176,595.56 a year. The country's 98 mayors are paid according to the size of their municipality. Mayors in municipalities with fewer than 12,500 inhabitants receive DKK 527,889 a year, while mayors in the larger municipalities, those with more than 80,000 inhabitants, receive DKK 825,939 a year. Sources: the Folketing, the Prime Minister's Office and Ritzau.
KrF calls an extraordinary meeting on Wednesday. Venstre breaks off negotiations with the governing parties Høyre and Frp. KrF will decide on Wednesday whether it too will break off the negotiations. Updated 29 November 2016. In a written statement on Tuesday afternoon, Venstre announced that it was breaking off the budget negotiations with the governing parties, according to NTB. "Venstre's parliamentary group has today decided to break off negotiations with Høyre and Frp on the national budget for 2017," the statement reads. "The reason for the break is that the governing parties have not been willing to negotiate on important parts of the national budget, especially on measures to reduce Norway's greenhouse gas emissions in line with our commitments under the Paris Agreement," Venstre writes. Signalled earlier in the day: Venstre leader Trine Skei Grande confirmed to VG and other media earlier in the day that the party was considering withdrawing from the budget negotiations. She told NTB: "We have no conclusion now. We will probably reach one during the day. We are working to clarify what has actually changed since last night, and it doesn't seem to be very much." Hareide: Not breaking off now. But KrF leader Knut Arild Hareide now offers a different assessment: "We are not currently considering breaking off the negotiations," Hareide tells VG. "If Venstre chooses to break off the negotiations, we will call an extraordinary group meeting to decide what KrF should then do. I am in close contact with Trine Skei Grande, and I am familiar with the thoughts and assessments Venstre is now weighing." This may reinforce the impression that a split between KrF and Venstre could be on the way. BREAKING OFF: Trine Skei Grande and Venstre announced today that they are breaking off the budget negotiations. Photo: Erlend Daae, VG. Emergency meeting: Frp has called an emergency meeting of its national executive. The meeting will take place by telephone and in person at the party office in Karl Johans gate in Oslo at 4 pm on Tuesday afternoon, VG has learned. According to NTB, party leader Siv Jensen wants to brief the executive on developments and the situation surrounding the national budget. Meanwhile, the centrist parties KrF and Venstre received fresh feedback from the governing parties Høyre and Frp on Tuesday morning. "We don't have much to report right now. I think everyone understands that we are in a critical phase. I am more uncertain now than at any earlier point in this process. We are now assessing the input we have just received from the government this morning," says KrF leader Knut Arild Hareide. Not a single new offer: On Monday evening the leaders of the four centre-right parties met for the first time in six days. The meeting took place at the prime minister's residence behind the Royal Palace, after Prime Minister Erna Solberg had to abruptly cut short her long-planned Christmas tour of northern Norway on Monday afternoon. But, VG has learned, Prime Minister Erna Solberg (H) did not put a single new proposal for a solution on the table when the four parties met for fresh budget negotiations on Monday evening. In the meeting, KrF and Venstre attacked the government's car package, which they have raised before, demanding that it be open for negotiation. But Høyre and Frp flatly rejected any attempt to adjust the package, sources tell VG. Published: 29.11.16 at 10:03. Updated: 29.11.16 at 16:38.
In my youth, Druim Fraoich – or Heather Ridge – was often full of noises. At night, there might be the twanging of guitars or the pounding of an accordion coming from Ness Hall, built in the 1960s on the site of a former quarry. During the day, its stone cliffs – where my father started work with pickaxe and hammer at the age of 14 – were home to twites and sparrows. Their chirping would accompany me as I dawdled home from Cross primary school in the neighbouring village. For all that the ceilidh music has long hushed, Ness Hall having closed a few years ago, the birds still perch there, an insistent chorus as I walk my childhood route to school. I stride through the village of North Dell, aware that some are still crofting there, though in a different way from their predecessors. Polytunnels are now as common as byres and outhouses, and pigs, rather than cattle, are churning up soil near old school walls. It would be easy to be negative about some of these changes, to note that, according to a recent study by Donald Macritchie, a local maths teacher, the population of north-west Lewis has declined from 2,445 in 1979 to 1,610 in January 2019. More than a third of its residents are over the age of 65. ‘This isle is full of noises’ – the annual charity tractor race makes its way through the village of South Dell. Photograph: Ali Finlayson Yet that would be to overlook the district’s spirit, the way its residents have tried to stem the outward drift of people from these shores. The community took over the running of the Galson estate, between the Butt of Lewis lighthouse and the stone monolith at Ballantrushal, in 2007, employing more than 30 people where no jobs existed before, and finding a new use for Cross primary, which closed due to falling rolls in 2011. At present, the old school building is again full of noises. Drills and hammers are transforming it into a new museum for Comunn Eachdraidh Nis, the Ness historical society. When it opens in early 2020, there will be photographs and exhibits displayed in the rooms where I used to sit and peer out of windows, watching twites and sparrows, hordes of starlings clouding stretches of moor and croftland nearby.
With the 2016-17 NBA season just a month away, rumors are spreading that significant amounts of players will remain seated during the national anthem to protest the recent tensions between police and black communities. Reigning champion and 12-time all-star LeBron James made it clear to reporters on Monday that he would not partake in these types of protests. "Me standing for the national anthem is something I will do," James said at the annual media day. "That's who I am. That's what I believe in." James did admit that he was worried about race relations in America, at times fearing for his 12-year-old son's life. "For me, my personal feelings is that I got a 12-year-old son, a 9-year-old son and a 2-year-old daughter, and I look at my son being four years removed from driving his own car and being able to leave the house on his own, and it's a scary thought right now to think if my son gets pulled over," James said. "You tell your kids if you just apply [the lessons you teach them] and if you just listen to the police that they will be respectful and it will work itself out. And you see these videos that continue to come out, and it's a scary-ass situation that if my son calls me and says that he's been pulled over that I'm not that confident that things are going to go well and my son is going to return home. And my son just started the sixth grade." He admitted that finding the perfect answers to the complex situation will not be easy. He said all lives matter, not just black or white. "We just wanted the conversation to continue to keep going, and I don't have the answer," James said. "None of us have the answer. But the more times that we can talk about it and the more times that we can [converse] about it [the better]. Because I'm not up here saying that all police are bad, because they're not. I'm not up here saying all kids are great or all adults are great, because they're not. But at the same time, all lives do matter. It's not just black or white, it's not that. It's everyone."
Our products are crafted by a top American manufacturer with decades of experience in pant making. We take pride in supporting our domestic partnerships. The New Standard: better fit, better quality, and made to last.
Taking his cue from the BJP, his biggest political adversary, Bihar Chief Minister Nitish Kumar on Wednesday opened a 'war room' at his residence, 7 Circular Road, to take on the party in the coming Assembly polls. Prime Minister Narendra Modi had set up one before the general election. Not only that, Mr. Kumar has even poached Prashant Kishor, the strategist Mr. Modi had hired. Mr. Kishor on Tuesday asked the JD(U) to open the 'war room' and improve the media cell. He also had a long meeting with three party spokespersons at the official residence of Mr. Kumar and gave them important lessons on how to issue statements and counter the arguments of BJP leaders. There will be three commanders at the well-equipped 'war room'. Rajya Sabha member K. C. Tyagi will be the chief commander, assisted by fellow RS members Pawan Verma and Harivansh. JD(U) strategist counsels 'balanced response' to charges: Janata Dal (United) insiders told The Hindu that JD(U) spokespersons Ajay Alok, Rajeev Ranjan and Dr. Nihora Prashad were present at the meeting held by strategist Prashant Kishor. Two other spokespersons — Sanjay Singh and Neeraj Kumar — could not make it for various reasons. Mr. Kishor asked the JD(U) spokespersons to give a "balanced response" when allegations are hurled against the party and to counter the BJP Ministers on an "individual basis". Party spokespersons were also advised to be in touch with each other on a daily basis. "Only those who have a complete grasp on an issue should make a statement," Mr. Kishor advised the spokespersons. Hi-tech initiative: On Tuesday, the JD(U) launched an ambitious and high-tech "Badh Chala Bihar" initiative to reach out to the people and seek their ideas on issues of governance. "Our target is to reach out to 40,000 villages across Bihar in the next six to seven weeks to get feedback on the State's development and also to involve them in governance in the next 10 years with their suggested ideas," said Chief Minister Nitish Kumar on the occasion. Under the programme, altogether 400 trucks equipped with TV sets, music systems, microphones and speakers, each led by a representative of Mr. Kishor's team, will move from village to village and showcase the government's achievements.
{ "pile_set_name": "OpenWebText2" }
GEELONG coach Chris Scott rates Patrick Dangerfield only a 40 per cent chance of playing in Friday night's blockbuster against Adelaide. However, the Cats superstar will make the trip with the team to Adelaide and will be given until the last minute to prove his fitness. Dangerfield injured his left foot in a collision with Hawthorn captain Jarryd Roughead last Saturday. Scott said he was prepared to wait as long as possible before a decision was made on the 27-year-old's availability, but he was not getting minute-by-minute updates on Dangerfield's progress. He reiterated the club would take no risks. "If there is any risk that he (Dangerfield) is going to compromise himself towards the end of the season, then it is unlikely he will play," Scott said. Dangerfield put in a memorable performance last Saturday playing as a deep forward after suffering the foot injury, kicking 5.6 against the Hawks. Scott said last year's Brownlow Medal winner had previously shown the damage he might cause playing as a deep forward and it was always an option for the Cats. "In simple terms, that idea of keeping a really dangerous forward ahead of the ball makes a bit of sense. The more complicated part is that it does impact you in other parts of the game," Scott said. Scott said the Crows' challenge would only grow larger if Dangerfield couldn't play; however, the Cats would receive a boost in confidence if they managed to win without him in the team. Tom Stewart will return after an operation to repair a fractured eye socket and Andrew Mackie will be back after missing last week with a wrist injury. Scott Selwood is also expected to return from a hamstring injury, but the Cats will not risk Nakia Cockatoo. Geelong has a good recent record against the Crows, but Scott said any sense that the Cats had the wood over Adelaide was misguided. "I don't subscribe to the theory that we have got a formula that works," he said. "If they play as well as they possibly can and we are a bit off, we will get beaten comprehensively."
{ "pile_set_name": "OpenWebText2" }
Trainers, Germignon, Héricendre, Kaiminus and many other Pokémon are arriving soon! Starting at the end of this week, you will be able to catch more than 80 Pokémon originally discovered in the Johto region in the Pokémon Gold and Pokémon Silver games. We have also introduced new features to improve your Pokémon GO experience. Additional Pokémon: More than 80 Pokémon first discovered in the Johto region in the Pokémon Gold and Pokémon Silver games, as well as gendered Pokémon, are coming soon to Pokémon GO. New evolutions: Now more than ever, enjoy many opportunities to evolve your Pokémon in Pokémon GO. Some Pokémon originally from the Kanto region will soon be able to evolve... into Pokémon that live in the Johto region! Keep an eye out for the new Evolution items available at PokéStops, which let you evolve certain Pokémon. New encounter system: When you come across Pokémon in the wild, don't be surprised to see them react in new ways when you try to catch them! You will also notice new item selectors that let you choose Berries and Poké Balls directly from the encounter screen. Refine your technique to catch those elusive Pokémon! New Berries: Pokémon love Berries! Spin the PhotoDisque at PokéStops for a chance to get the two new Berries: Nanab Berries and Nanana Berries! Nanab Berries slow the movements of Pokémon that eat them, making them easier to catch, while Nanana Berries double the number of Candies you receive if your catch attempt succeeds. New avatars and new clothing: You can now fully customize your avatar! Personalize your look with all-new hats, shirts, pants and other items. Don't forget to use the hashtag #PokémonGO on Twitter to share your experiences with family and friends as you explore your neighborhood. We can't wait to see which Pokémon you catch! —The Pokémon GO team
{ "pile_set_name": "OpenWebText2" }
The grand accomplishments of our genomic age—which are reliant to a large extent on unheralded, bleary-eyed graduate students staring at seemingly infinite bytes of data on their screens for hours on end—might never have come to pass were it not for the copious amounts of coffee fueling said students. So it's only fitting that some of them have now analyzed the genome of the coffee plant itself. An international team of researchers spanning both coffee growing and coffee consuming regions of the globe sequenced Coffea canephora, one of the parent strains of the heavily cultivated C. arabica. They found that the plant has extra copies of genes called N-methyltransferases (NMTs), which encode a class of enzymes that mediates the late steps in caffeine biosynthesis. Coffee has a total of 23 NMT genes, which arose primarily via a series of gene duplication events. The collection of duplicated genes is distinct from the ones found in tea and cacao, two other caffeine-producing plants that are more closely related to each other. That suggests that these two lineages evolved the ability to give humans a jolt separately. Coffee's NMTs also exhibited evidence of positive evolutionary selection, indicating that caffeine biosynthesis may serve an adaptive purpose only in coffee. The function of its convergent evolution in the other drinks was not explored. The coffee plant is also enriched in a class of enzymes that makes linoleic acid, a polyunsaturated fatty acid that contributes to the aroma and flavor retention of coffee beans after roasting. There are also a lot of genes involved in secondary metabolites other than caffeine, like flavonoids, isoflavones, and alkaloids, including quinine. The quinine might explain the unfortunate inspiration for the coffee tonic. Science, 2014. DOI: 10.1126/science.1255274 (About DOIs).
{ "pile_set_name": "OpenWebText2" }
Travel Six distilleries in three days. Can you handle it? All photos by Adam Robb unless otherwise indicated The Kentucky Bourbon Trail is basically like Vegas, except instead of swimming pools, they've got distilleries, and almost every one of them welcomes visitors to stop by and get wet. With seven big boys and another seven craft operations, there's no way to hit all 14 official stops in one trip, so we picked six, and added in some fine dining and barroom drinking to round out your pilgrimage. Day 1: Louisville Drinking, Dining, Art-Looking-At, and Your First Distillery Welcome to Kentucky! First tip: don't buy one of those wax-dipped Maker's bottles in the airport. They're barbecue sauce. You now have a choice. Head straight to your lodgings in the Pip Mobile, 21c Museum Hotel's rhinestone-caulked limousine, designed to resemble the interior of a pomegranate, or... Photo by Mint Julep Tours ... get on a Mint Julep Tours bus and head to the closest distillery, about 20 miles South of the airport. Distillery #1: The Jim Beam American Stillhouse, Clermont, KY Jim Beam just built this visitor center last year. It's a replica of a post-Prohibition building from the '30s, when Col James Beauregard "Jim" Beam resumed production immediately after America admitted, "I've made a huge mistake." The distillery is famous for its unorthodox methods of teaching martial arts. Beam produces half the world's bourbon supply. To give you a sense of the scope of their global operation: that pickup truck will probably have to make a second trip. The bourbon's history goes back seven generations to 1795, starting with Jacob Beam and his distillery, Old Tub, and moving through David, David M., The Colonel, T. Jeremiah, Booker Noe, and Frederick Booker Noe III. Normal-sized scrolls can't handle that much heritage. Booker Noe wanted his 100-proof small batch Knob Creek barreled nine years, as was the custom before Prohibition made extended aging prohibitively expensive. Every bottle's hand-dipped in hot black wax, a surprise to people who assume small-town bottles "wouldn't be into that kind of thing". Sample any marque you want from their Trail-exclusive enomatic taps, from Knob Creek's 100-proof rye, to the Jacob's Ghost white whiskey they encourage you to "drink any damn way you please" -- including out of an enomatic tap! The new tour encourages guests to relieve this Fowler washing machine of its duties and hand-bourbon-rinse the same bottles they can later buy. Book a VIP tour with Master Distiller Fred Noe, and he'll teach you how to smell whiskey by raising the glass to your nose and then inhaling through your mouth. Sometimes it's good to be a mouth-breather. Fred'll also teach you how to sample a barrel. 125-proof Booker's is bottled uncut straight out of these suckers, which is why the stuff is so strong. That doesn't mean you can't cut it: in his book American Still Life, F. Paul Pacult has Booker Noe himself saying, "Do I add water ta Booker's? At a hunnert twenty-six percent alcohol, you kiddin'? Don't an' it'll blow the top o' your head right off." Back to Louisville Okay, back to your hotel to prep for the night. On Museum Row, 21C occupies five former bourbon and tobacco warehouses and doubles as a contemporary art museum open to the public 24/7, with pieces like conceptual artist Serkan Özkaya's double-height David... ... and this really cool secret door. 
Not the secret door to the secret penthouse apartment where everyone from Justin Bieber to Bruce Springsteen's stayed, but still pretty secret. Dine at the hotel's Proof on Main. These Creole-buttered fried oysters come straight from the pan... ... the very same Greek god/flute manufacturer whose statue here keeps a loose grip on the bar's Death's Grip cocktail, made with Old Grand-Dad and house-made Dark Star Porter jam syrup. Stroll down the block to Actors Theatre and take a front-row seat at MilkWood, a Top Chef vet's basement bar/resto where booze is served by a sommelier/fetish model you'll be running into again at Woodford Reserve. Hopefully tonight she'll say something like, "Oh, you want to breathe, wine? I don't think so. You've been a bad, bad wine." No matter what you order here, you'll definitely be stimulated. Another great thing about Louisville: you can wander the galleries with a drink in your hand. Just know that not everybody will be as impressed by that as you are. Day Two: Hit the Trail! Breakfast time. Score a farm table at farm-to-table Harvest on E Market St, where everything that goes into your mouth -- from the scratch biscuit w/ chorizo gravy to the muddled-peach Old Forester Old Fashioned -- is sourced from within 100mi. This girl is also locally sourced and old-fashioned. Distillery #2: Limestone Branch, Lebanon, KY, aka "Kind of Like the Paris, TX of Kentucky" This microdistillery is so new, its whiskey isn't even bourbon yet (it's moonshine). This is Steve Beam. His great-great-grandfather was Joseph Washington Dant, and his great-grandfather on the other side was Minor Case Beam, a name that prevented him from ever appearing on Law & Order: Criminal Intent. Minor Case sold Old Trump to the Dant family before Prohibition, then died before he could open another distillery. Also, Revenge is a moonshine best served with a glass finger loop. The sugar whiskey's uncooked in the old Appalachian style, with 50% corn and 50% cane sugar. It's distilled with a bucket and... another bucket. That jar of "Heads" contains the first stuff that's cut during distillation: acetone (nail polish) and methanol ("stuff that makes you go blind"). "Hearts" comprise the center part of distillation -- good alcohol whose quality's gauged strictly by taste. "Tails" are congeners (the sugary compounds that cause hangovers). The flavor's made distinct based on how much Tail the Heart gets. Poetry. Steve's grandfather was a master distiller for Yellowstone and Seagram's. Steve swabbed the inside of his grandfather's old yeast jug -- which is stored in the Bourbon Museum -- and found two strains from which to cultivate what will eventually be Limestone's bourbon. Until then... ... they'll peddle eight varieties of sugar whiskey and moonshine cordials (apple & pumpkin pie, strawberry...), but for now only in Kentucky and Indiana, yet another reason to get down here. Or Indiana. (Also, before you leave, pick up some Moonshine Balls -- you'll find out why later.) Distillery #3: Maker's Mark, Loretto, KY This is the only reception you'll find on the road to Loretto, so save directions to your phone. The Maker's Mark campus was designed by the current COO/distillery GM's grandmother, an engineer who insisted on brown, red, and cream to reflect the bottle's bourbon, label, and wax. She also provided the name -- inspired by the maker's marks on her collection of handmade English pewter -- and designed the bottle, which hasn't changed since. 
The label and wax are hand-torn and hand-sealed respectively, making each bottle unique. Here's the key to reading the label: *S IV: Maker's was founded by the fourth (legal) generation of Samuels bourbon distillers. *The Star: Star Hill Farm is the site of the original distillery. *The breaks in the circle around S IV: These represent the periods when it was illegal to produce whiskey in the US (Prohibition, WWI, and the Civil War). *"Whisky": They spell it in the e-less style as a tribute to their Scottish heritage (Samuels have distilled there since the early 1500s). Besides the aforementioned COO and GM duties, the current face of Maker's Mark makes mean bacon-jalapeño wings. He's named for Robert Samuels, Jr., who fought in the Whiskey Rebellion before moving his family to Kentucky back when it was still Bourbon County, VA. Just like his namesake, he's still full of whisky and rebellion. Not all of the family history's been dipped in hot wax. This pistol was surrendered to Dr. Reuben Samuels by his stepson Frank James, brother of Jesse. To maintain flavor consistency, a majority-female (women have more taste buds) panel of fifteen staffers samples every batch five times from still to barrel. This dude is not in the majority. Why do Scotches spend longer in the barrel than bourbons? Because intense Kentucky Winters and Summers speed up aging -- one year here's equivalent to four years in Scotland, making a 5yr bourbon equal to an 18-22yr old scotch. Maker's White moonshine is only for sale at the distillery. You can even dip your own bottle, but you can't dip then sip then dip again. This isn't fountain soda, son. Your lodgings: the historic Beaumont Inn, Harrodsburg, Kentucky The rooms inside the Beaumont's Greystone Lodge have whirlpool tubs, electric fireplaces, and no minibars... ... because Mercer's a dry county. No worries, because after check-in you've still got a few more stops. You can either grab a six-pack by heading nine miles down one of these unmarked roads or cutting across the property and walking down the Route 127 bypass, or... Distillery #4: Alltech Lexington Brewing Co. ... drive on to the brewery popularly known as Kentucky Ale, which also opened the first new distillery in Lexington in 100 years. Besides beer, they make Town Branch Bourbon and Pearse Lyons Reserve: a malt whiskey distilled in Scottish-built copper pots (pick up a bottle -- you'll need it for a surprise you'll find out about in a few minutes). The story is wacky: In 1999, the president of the Irish animal food & nutrition company Alltech, Dr. Pearse Lyons, called the owner of what was then just a brewery to get his son an internship. The owner said sure, but since the place was closing for good that next Friday, he'd better hurry. So Dr. Lyons bought the brewery, and a year later started making Kentucky Ale -- a cross between an Irish Red and English Pale available in 13 states, Ireland, Canada, and of course China. Eight years later, they built out the distillery. Beer yeast. You can tell where you are in the fermentation process by its sweetness or sourness. This bottling line dates back to the 1940s, when all beer was made by chimney sweeps. After the Haiti earthquake, Alltech started marketing the Haitian bean coffee Café Citadelle, sending proceeds back to the island. Their latest beer, Kentucky Bourbon Barrel Stout, is brewed with the coffee, then aged six weeks in said barrels.
Because Kentucky Ale's too popular to be considered a micro-brewery, you can't sample their brews in their in-house pub... until mid-June! The KY legislature just legalized it. Don't criticize it. You can revive yourself with a Bluegrass Sundown: roasted Haitian beans + Town Branch bourbon + sugar & cream. Snack time. As the parking meter outside Lexington's Jonathan at Gratz Park powers down, hit up the bar for low country bites like Country Ham Potstickers and something off their bourbon menu. Which literally is a bourbon menu, in that it's glued to a charred oak barrel stave. You like how we used "literally" there? Cool. Sometimes people get angry. Before heading back to the hotel, stop by Guy Fieri-favorite The Parkette Diner Drive-in Dive. Pick up a long, lonely night's worth of fried chicken & gravy to go -- all will be explained in a minute. Get back to the Beaumont Inn's main house by 9p, when complimentary glasses of Beaumont Cocktail are left beside the ladies' room door. Those in the know refer to it as "ice water". Okay, now get in the bathtub, and lay out your Limestone Branch moonshine balls and a bottle of Pearse Lyons Reserve, both of which have notes of brown sugar and vanilla that go great with... ... Tub Chicken. Or as you'll frantically call it on Twitter, #tubchicken. God your life is good right now. Day Three: Oh man, you shouldn't have eaten so much #tubchicken. You have 15min to listen to Rosetta Stone before you reach Four Roses. Distillery #5: Four Roses, Lawrenceburg, KY Listed on the National Register of Historic Places, Roses' Spanish Mission-style buildings date back to 1910. They were built by Louisville architecture firm Brown & Brown, which now has absolutely no idea why they designed the former Frankfort Distilling Co buildings Mission-style. Despite studying marketing, chemistry and physics, by the time Jim Rutledge became Four Roses' master distiller all he remembered from college is where he went. In 2001 Jim convinced new parent company Kirin to sell Americans the premium version of Four Roses that by then was the most popular whiskey in Asia. In 2002 Kirin bought up all the inferior blended whiskey that'd carried the Four Roses label in the US and either dumped it, or hid it under Jim's desk. By 2004, the superior version of Four Roses you're probably enjoying right now was no longer under lock and key. Four Roses is made from GMO-free Yellow Corn #2 grown in south-central Indiana. They pay surrounding farmers over commodity price to not grow anything that could contaminate it through cross-pollination. The resulting grain flavors are so smooth and mellow that people think they're drinking 80 proof, but it's actually 100 proof. Psych. To gauge aroma, Jim'll pour just a drop into a tasting glass, cover it up, and let it vaporize overnight. He'll smell the vapors the next day, then do the same to a full glass and see if he can detect the same notes through the alcohol. To ascertain the worthiness of grain samples, he'll pop 'em in the microwave just long enough so they don't pop back. After Four Roses couldn't find any more Red Cypress for their mash tubs, a Florida dude convinced them he could pull logs from river beds and dry them off, even though he'd never done it before. The first attempt failed, but then... great success! Distillery #6: Woodford Reserve, Versailles, KY How nice is this backyard? The Pepper family first distilled here in 1812.
They sold it to Labrot & Graham, who sold it to Brown Forman, who sold it to a farmer who just kind of farmed while the buildings sat there. Then Brown Forman bought it back from the same farmer when they decided to make a bourbon that appealed to single malt & cognac drinkers -- named Woodford for the county the property rests in. The original operation was pot still. They switched to these column stills, then dialed down the corn and brought up the rye and malt ratios. The water's pumped in direct from a 95ft limestone well, which doesn't bode well for that chick from The Ring. Science. What happens in the Yeast Culture Fridge, stays in the Yeast Culture Fridge. Woodford mutated 27 generations of the oldest yeast strain in the industry, Old Forrester, to get the fruitier 27-B strain. Master Distiller Chris Morris followed his parents into the bourbon game at 18. He has a business degree and two masters, but there's no Bachelor of Distilling -- you just have to get as much experience as you can. Morris says "Bourbon's born in a barrel for a reason", which is why he doesn't market the moonshine this lovely lady's drinking. Hey, isn't that the sommelier/fetish model from Milkwood? Yup. According to Morris, Woodford's essentially a 16yr-old bourbon aged in half that time. The barrels get "ricked up" and aren't moved until they're tested and selected for bottling, providing for 14-15yrs of oxidation despite only seven years of tannin absorption. ricked: past participle, past tense of rick (verb). 1. Form into a rick or ricks; stack. 2. Strain (one's neck or back) slightly. The barrels heat cycle all Winter long, which is much better for them than spin class. Back to Louisville Craving more brown? Check into the Brown: a 16-story, 1923 landmark that's smoke-free except for when the ghost of James Graham Brown fills a room with smoke (he loved to wander around greeting guests while puffing a cigar). This is the four-room Muhammad Ali suite, where you can float like a butterfly and sleep like a champ. Time to hit industry hangout The Silver Dollar. How seriously do they take their bourbons & ryes? Well, they have 70 of them, and only one gin, one rum, and one vodka (Tito's, "from Texas, of course"). Silver Dollar's mint julep. It should be renamed the Susan Bourbon Anthony. The Dollar's also famous for its house-made hot sauces. Slogan: "So hot, they're horny". Move on to Rye on Market. FYI, the capacity at Rye is 76 persons... ... though they're always making room for more. The chef comes from New York restaurant The Breslin. He grills up a mean hanger steak with cipollini peperonata and castelvetrano olive puree. Shown here mixing two drinks at once, Rye's bartender really knows his sh*t. He'd better, because this spicy chile/lemon/lime/gin number is actually called "The Sh*t". Back at the Brown. If you're still hungry, order a room-service Hot Brown: a Welsh Rarebit-inspired open-faced baked turkey & Texas Toast sandwich smothered in Mornay sauce and piled with tomato and bacon, invented by the staff in 1926 when they got tired of late-night ham & eggs. This soaking tub might be too nice to eat in... ... but lucky for you, the suite comes with a dining room. Day Four: (Almost) Time to Say Goodbye. Good morning! You're probably exhausted after several days of bourbon and a few hours waiting for that cow to move out of the way. To perk up before hitting the airport, follow the tracks to Frankfort Ave for Blue Dog Bakery coffee. At Blue Dog, you always know exactly what you're getting. You ready?
{ "pile_set_name": "OpenWebText2" }
Don't play much anymore so thought I'd share this if anyone is looking for a new look :) Spent a lot of time messing around with this one! Love my warrior so much :) Really love the medium armor sets but unfortunately as warrior you don't get much similar to that and I like trying to keep her looking a bit lightweight - at least as much as a warrior can. If you're looking to be a bit more warrior-like than ranger, I'd recommend tier 3 human shoulders. Chest is pretty interchangeable with banded, tier 3 human or aetherblade. Boots are a pain as it seems only Aether is tall enough to compensate for the braham legplates without looking chunky. Thanks for looking!
{ "pile_set_name": "OpenWebText2" }
President Trump on Wednesday said the country should find out the identity of the person who provided information to a whistleblower who raised concerns about his phone call with the Ukrainian president. Trump railed against the whistleblower during an Oval Office meeting with the president of Finland, telling reporters that the individual portrayed his July 25 call with Ukrainian President Volodymyr Zelensky in a "vicious" way. "In other words, he either got it totally wrong, made it up, or the person giving the information to the whistleblower was dishonest," Trump said. "And this country has to find out who that person was, because that person's a spy, in my opinion." Trump's attacks on the whistleblower come as members of both political parties have voiced support for upholding legal protections for the individual. "I think a whistleblower should be protected if the whistleblower is legitimate," Trump said when asked about those comments from lawmakers. Trump has gone on the offensive against the whistleblower, who anonymously filed a complaint with the intelligence community inspector general in August after becoming concerned about Trump's conduct on a July 25 call with the Ukrainian president. The whistleblower complaint, which was made public last week, matches with a rough White House transcript of the call. It alleges that Trump urged the Ukrainian president to look into Democratic presidential candidate Joe Biden and that the White House sought to contain access to the contents of the call. The complaint was based on firsthand information and information from other sources, the inspector general of the intelligence community said this week. During a conversation with U.S. diplomats last week, Trump reportedly suggested that those behind the whistleblower complaint should face severe punishment like spies did decades ago. Trump has blasted the whistleblower as a partisan and questioned their loyalty to the country. He has claimed he has a right to interview the individual, even though protections exist to keep a whistleblower's identity anonymous. The Whistleblower Protection Act makes it a violation for federal agencies to threaten retaliation against individuals who come forward to raise concerns of wrongdoing within the government. "No one should be making judgments or pronouncements without hearing from the whistleblower first and carefully following up on the facts," Sen. Chuck Grassley (R-Iowa) said in a statement on Tuesday. "Uninformed speculation wielded by politicians or media commentators as a partisan weapon is counterproductive and doesn't serve the country." Rep. Paul Mitchell (R-Mich.), a member of House GOP leadership, said late Tuesday that the law protecting whistleblowers should be respected. House Intelligence Committee Chairman Adam Schiff (D-Calif.) said at a press conference Wednesday morning that Trump's attacks on the whistleblower amounted to an "incitement of violence."
{ "pile_set_name": "OpenWebText2" }
Opinion Listen to FBI, not bogus science, on gun safety A recent story (“Connecticut among five safest states for gun violence,” October 12) cites a sham study by the Center for American Progress, CAP, as evidence that Connecticut’s restrictive gun control laws are responsible for making Connecticut “the fifth-safest state with respect to shootings.” That bogus conclusion is based on junk science that would not pass muster in a middle school science fair. The people of Connecticut deserve better than that. Remarkably, the “study,” in its very first paragraph, describes many of the factors which influence a state’s violent crime, murder and suicide rates and then ignores all those factors in its analysis, instead focusing on only one factor: gun control laws. The nation’s top law enforcement agency, the FBI, is adamant that its crime data NOT be used to create these misleading state “rankings.” The FBI says, “These incomplete analyses have often created misleading perceptions which adversely affect geographic entities and their residents. For this reason, the FBI has a long-standing policy against ranking participating law enforcement agencies on the basis of crime data alone. Despite repeated warnings against these practices, some data users continue to challenge and misunderstand this position.” Here are some factors the FBI says are “known” to affect crime rates: • Population density and degree of urbanization. • Variations in composition of the population, particularly youth concentration. • Stability of the population with respect to residents’ mobility, commuting patterns, and transient factors. • Economic conditions, including median income, poverty level, and job availability. • Modes of transportation and highway systems. • Cultural factors and educational, recreational, and religious characteristics. • Family conditions with respect to divorce and family cohesiveness. • Climate. • Effective strength of law enforcement agencies. • Administrative and investigative emphases on law enforcement. • Citizens’ attitudes toward crime. • Crime reporting practices of the citizenry. The FBI has this exactly right. If you are truly interested in promoting effective laws to protect the public, you must consider all the factors that influence crime rates — not the one factor the FBI doesn’t even consider, gun control laws. The CAP report is junk science meant to provoke a political response, not answer a legitimate research question. This approach is known in the social sciences as a “bivariate” analysis. This is where a researcher only examines the relationship between two variables, when there are in fact other factors that could influence the outcome of interest. For example, roosters crow frequently when the sun rises. There is indeed a strong correlation between these two variables. But could anyone credibly say that the sun rises because roosters crow? Of course not. One of the gun control laws currently being pushed nationwide is legislation to criminalize virtually all private firearms transfers. Gun control advocates call these measures “universal background check” laws and claim they help make communities safer by keeping guns out of the hands of criminals. The facts don’t support that conclusion. In a recent column in the Las Vegas Review Journal, economist John Lott, Jr. notes that states with these so-called universal background checks experienced “a post-2000 increase of 15 percent in per capita rates of mass public shooting fatalities.
They also saw a 38 percent increase in the rate of injury.” He goes on to note that “there is no evidence that expanded background checks reduce rates of any type of violent crime.” Put simply, Lott’s conclusions are based on conducting research the right way — considering all the variables which contribute to violent crime and looking over an extended time frame to find answers. His findings, not CAP’s, are consistent with extensive studies by the Centers for Disease Control and Prevention and by the National Academy of Sciences that found no evidence that gun control actually reduces crime, despite what gun control proponents have to say. Our Second Amendment Rights in Connecticut are under assault. New York billionaire Michael Bloomberg is funding this well-coordinated effort. As he continues to peddle these pernicious lies about the Second Amendment, the NRA will fight back every time to debunk the junk science he is funding. Christopher G. Kopacki is the NRA legislative liaison for the state of Connecticut. He received his doctorate in public policy and administration from Virginia Commonwealth University.
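As an aside on the methodology point above, the bivariate pitfall is easy to demonstrate with a toy simulation. The following sketch is illustrative only: the variable names and effect sizes are invented, and it is not drawn from the CAP study, Lott's data, or any real crime statistics. It generates a confounder that drives both a hypothetical "policy" variable and a "crime" variable, so the two correlate strongly even though the policy has no direct effect, which is exactly the rooster-and-sunrise trap described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder (think urbanization) that drives both variables.
urban = rng.normal(size=n)

# Both the "policy" score and the crime rate depend on the confounder;
# the policy has NO direct effect on crime in this simulation.
policy = 0.8 * urban + rng.normal(size=n)
crime = 0.8 * urban + rng.normal(size=n)

# Bivariate view: policy and crime appear strongly related.
print("bivariate correlation:", np.corrcoef(policy, crime)[0, 1])

# Multivariate view: regress crime on policy AND the confounder.
X = np.column_stack([np.ones(n), policy, urban])
beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
print("policy coefficient, confounder controlled:", beta[1])
```

Run as written, the naive correlation lands near 0.4, while the policy coefficient, once the confounder is controlled for, hovers near zero: the bivariate view manufactures an effect that the multivariate view dissolves.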
{ "pile_set_name": "OpenWebText2" }
A section of Wenger’s squad would like to do more work on the opposition and believe it could help them to start stronger when faced by the Premier League’s better teams.
{ "pile_set_name": "OpenWebText2" }
New Broncos quarterback Joe Flacco has been testing the team’s defense with his big-arm throws at practice this offseason. Flacco hasn’t been lighting up the defense on every play, though, as Denver has some stars on the other side of the ball as well. Practicing against pass rushers like Von Miller and Bradley Chubb will only help Flacco learn to get the football out of his hands quickly. “I felt like I had to slide a little bit at times and get the ball out of my hands [at practice],” Flacco said on May 13. “Guys had to get open quick. Then there were times where it was hit your foot, get the ball out because the guy is open. I thought it was good. We’re going to get tested every single day out there. We have a couple of guys on the edge that obviously can play football. “Everybody else to go along with that — that’s a good defense over there. Having experience throughout my career, it’s an awesome test and an awesome advantage that we have to be able to go against a good defense, a good scheme and good pass-rushers. Everywhere you look, you have guys that can play. I think that’s going to prepare us big time.” Flacco is used to going up against tough defenses in practice from his time with the Ravens. He used that experience to help him win a Super Bowl, something fans in Denver hope he can do again with the Broncos.
{ "pile_set_name": "OpenWebText2" }
Best Alternative To Trade Bitcoin For Cash Without prior notice, cash traders on localbitcoin.com were denied the service of fiat-to-bitcoin trades, as reported by coindesk. All pending fiat-to-bitcoin trades on the platform were cancelled without any form of warning or announcement by localbitcoin.com to its users. Users of the platform have been forced to seek the same or better service elsewhere after the incident. We are unaware of the reasons behind this action by localbitcoin, but the fact is cash still rules in this emerging world and still has high relevance in local markets because local traders still make use of cash to buy or sell goods. In a country like Nigeria where crypto knowledge and adoption isn’t well established, people still use fiat for transactions, thus causing cryptocurrency users to exchange their crypto for fiat to settle transactions in their local markets. CoinCola as an alternative to trade bitcoin for cash One of the advantages that coincola.com has over other exchanges or trading platforms like Paxful, LocalBitcoins, etc., is that users can carry out all forms of transactions that involve the trading or exchange of cryptocurrencies and even iTunes gift cards. BONUS1: Get $5 Bitcoin Instantly After You Buy Bitcoin With A Gift Card On CoinCola. Here are the steps to carry out a successful BTC-to-fiat exchange on the CoinCola trading platform: Once your account has been created successfully, you automatically get a secure web wallet to store and manage your bitcoin. Get your initial account verification done and bind your phone number to your account for extra security. Check your bitcoin balance or deposit cryptocurrency to your wallet if empty. Go to the OTC page and click on the search icon to filter advertisements by your preferred currency, location and payment method. Select an offer based on the trader with the highest trust score and a large number of trades. Click on the “SELL BTC” button to display more information about the advertisement. The above step comes with the terms of the trade. Read the terms carefully before you proceed. If the terms suit you, you can go ahead and make a trade request; otherwise, go back and choose another advertisement. Start the trade by typing the amount of bitcoin you want to sell, or enter how much money you want to receive. Click on the “SELL NOW” button to receive payment. Also, get ready to release BTC once payment has been confirmed. Once the trade has started, your bitcoin will be transferred from your wallet to the trade escrow. After the chat negotiation, once you’ve sent a trade request, the buyer sends payment. Click on “Release BTC” once you’ve confirmed the payment, thus transferring the BTC from the trade escrow to the buyer’s wallet. If you don’t release the bitcoin within the allotted time of 15 minutes, the trade will be automatically cancelled. In addition, CoinCola offers referral rewards. Using your referral link, for every friend you refer, you get to earn 20% of completed trades during the 1st month, and 15% for the remaining 5 months. And each user is eligible to earn 0.0002 BTC after the first trade. An exchange without an escrow system is likely to lose customers because traders could be robbed of their funds. So it’s advisable to trade on platforms with an escrow system, which is far more secure and safer to use. With CoinCola, your funds and transactions will always be cheap, fast and secure.
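For readers curious about the mechanics, the escrow flow described above can be sketched as a small state machine: BTC moves into escrow when the trade opens, the buyer marks payment sent, and the seller either releases within the 15-minute window or the trade auto-cancels. This is a minimal illustration under stated assumptions; the class, states and method names below are hypothetical and are not CoinCola's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum


class TradeState(Enum):
    OPEN = "open"            # seller's BTC moved from wallet into escrow
    PAID = "paid"            # buyer marked fiat payment as sent
    RELEASED = "released"    # seller confirmed payment; escrow pays buyer
    CANCELLED = "cancelled"  # window expired; escrow returns BTC to seller


@dataclass
class EscrowTrade:
    btc_amount: float
    opened_at: datetime = field(default_factory=datetime.utcnow)
    window: timedelta = timedelta(minutes=15)  # auto-cancel window from the article
    state: TradeState = TradeState.OPEN

    def mark_paid(self) -> None:
        # Buyer reports the fiat payment as sent.
        if self.state is TradeState.OPEN:
            self.state = TradeState.PAID

    def release(self, now: datetime = None) -> TradeState:
        # Seller releases only after confirming the payment actually arrived.
        now = now or datetime.utcnow()
        if self.state is TradeState.PAID and now - self.opened_at <= self.window:
            self.state = TradeState.RELEASED
        return self.state

    def expire_if_due(self, now: datetime = None) -> TradeState:
        # A background job would call this; stalled trades return the coins.
        now = now or datetime.utcnow()
        if self.state in (TradeState.OPEN, TradeState.PAID) and now - self.opened_at > self.window:
            self.state = TradeState.CANCELLED
        return self.state
```

The design point is that the seller's BTC leaves their wallet before any money changes hands, but can only reach the buyer once the seller confirms payment, and a stalled trade returns the coins to the seller instead of stranding them.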
Until the world fully adopts these new digital currencies, users will continue to seek secure and trustworthy platforms where they can comfortably carry out crypto-to-fiat transactions at a cheap rate, fast, and of course with no complications whatsoever. BONUS2: Start Your Bitcoin Trading On CoinCola Today And Get 0.0001BTC Instantly!
{ "pile_set_name": "OpenWebText2" }
This is what Albert Einstein wrote in his letter to philosopher Eric Gutkind, in response to receiving the book "Choose Life: The Biblical Call to Revolt". The letter was written on January 3, 1954, in German, and explains Einstein's personal beliefs regarding religion and the Jewish people; it was sold one year later and has remained in a personal collection ever since. Now the letter is up for auction again in London, with a starting price of 8,000 pounds sterling. The letter states pretty clearly that Einstein was by no means a religious person - in fact, the great physicist saw religion as no more than a "childish superstition". "The word god is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this", Einstein wrote. Einstein was Jewish, which is why the people of Israel once asked him to become Israel's second president. Also, Einstein felt uncomfortable with the idea that the Jews are God's favored People. "For me the Jewish religion like all others is an incarnation of the most childish superstitions. And the Jewish people to whom I gladly belong and with whose mentality I have a deep affinity have no different quality for me than all other people. As far as my experience goes, they are no better than other human groups, although they are protected from the worst cancers by a lack of power. Otherwise, I cannot see anything 'chosen' about them", said Einstein. Although neither Einstein nor his parents were religious people, he did in fact attend a Catholic primary school. But at the age of 12 he was already questioning the truth of the stories written in the Bible. "The consequence was a positively fanatic orgy of freethinking coupled with the impression that youth is being deceived by the state through lies; it was a crushing impression", Einstein wrote. Einstein may not have believed in God, but he felt that faith was a must. This is probably why he never accepted the quantum theory and its random nature. He once said that "God does not throw dice", meaning that quantum randomness was out of the question for him. This belief in faith is probably also why his position towards religion was often misinterpreted. "Like other great scientists he does not fit the boxes in which popular polemicists like to pigeonhole him. It is clear for example that he had respect for the religious values enshrined within Judaic and Christian traditions... but what he understood by religion was something far more subtle than what is usually meant by the word in popular discussion", said John Brook of Oxford University, a leading expert on Albert Einstein. Einstein was often associated with atheism because of his views on conventional religion, but he never liked being called an atheist.
{ "pile_set_name": "OpenWebText2" }
Japan's Nihon Keizai Shimbun reported Tuesday that Sega will stop production of the Dreamcast game console by the end of March to focus on developing and marketing game software for other companies' devices. A Sega representative in San Francisco denied the report, saying plans for the device are "huge and long term." "There is no public comment that would endorse that story," Sega spokesman Charles Bellfield said. In October, Sega executives in Japan said they were restructuring the company to focus more on networked entertainment, but indicated they would continue developing Dreamcast. Another Japanese newspaper reported this week that Sega is looking to develop games for rival consoles, including Sony's PlayStation 2 and Microsoft's upcoming Xbox. "We have no acknowledged plans in terms of those platforms," Bellfield added. Sega has said it will develop games and content for set-top boxes, cell phones and handheld devices. Bellfield said Sega is set to announce enhancements for Dreamcast next week. "Dreamcast is very much the core of our business going forward," he said. However, one analyst predicted that Sega's exit from the console business is definitely coming. Following an initial rush in 1999 after its introduction in the United States, sales of the Dreamcast petered out. By getting out of the console business, Sega could cut its costs dramatically. Brian O'Rourke, senior analyst at Cahners In-Stat Group, said it's more a question of when rather than if Sega leaves the hardware business. As the No. 3 console, he said, Dreamcast has little chance of surviving Microsoft's entry into the market. "It's questionable whether they have the resources to develop a next-generation console," O'Rourke said, adding that in recent conversations, Sega executives have emphasized plans for software and online gaming through the SegaNet service. "There's a question as to whether they really think online gaming is the future or whether they really can't afford to stay in the hardware business, but that's where they're heading," he said. "I don't know if they can make money on the online service. It's something that's never been tried by a games manufacturer." Gartner analyst P.J. McNealy said he wouldn't be surprised if Sega stops making the Dreamcast. "If they are getting out of the console business, it's because consoles lose money," McNealy said. Sega was the subject of speculation in November when The New York Times reported that Nintendo was in talks to buy Sega for $2 billion. Executives at both companies denied that report. If Sega does decide to exit the hardware business, it would not be the first. Game maker 3DO launched its own machine in October 1993, but the $700 price tag and a poor game selection hurt sales. However, after the company halted production, it successfully transitioned to a software-only company. In the wake of this week's press reports, Sega issued a statement Tuesday that it "globally reaffirms its commitment to Dreamcast" and said it has more than 100 games coming out worldwide for the console in the next year. "It is not Sega's policy to comment on rumors and the company has not made any statement regarding ceasing manufacturing of Dreamcast or development for other videogame platforms," the company said in a statement. News.com's David Becker and staff writers Robert Lemos and Richard Shim contributed to this report.
{ "pile_set_name": "OpenWebText2" }
BOSTON, Sept. 28, 2017 /PRNewswire/ -- Residents in parts of Jamaica Plain and Hyde Park are now able to order Verizon Fios Internet, TV and phone services as the company announces the start of expansion of its 100% fiber-optic network into these Boston neighborhoods. Residents can visit www.verizon.com/BostonFios to check Fios availability at a specific address, and to sign up for email updates as the build out continues. Verizon is bringing superfast Fios Internet, Custom TV packages, and Digital Voice services to more neighborhoods just days after approval of a video franchise by the Mayor of the City of Boston. This marks the next step forward in the company's $300 million investment to build a 100% fiber-optic network platform across Boston over six years. Verizon began its Fios rollout within the City of Boston in December of 2016, offering service initially in parts of Dorchester, Roslindale, West Roxbury and Roxbury (including the Dudley Square Innovation District), and continues construction and expansion in these neighborhoods. View from Verizon "Boston has welcomed Fios, and the superior service, competition and choice Verizon delivers, with open arms," said New England Region President Donna Cupelo. "Since we introduced Fios in Boston late last year we've almost doubled available data speed, and now offer our Fios Gigabit Connection service. Our teams are outside every day transforming Boston's technology foundation." Investing in the City of Boston The Verizon Foundation has awarded a $50,000 grant to non-profit Tech Goes Home to support entrepreneurs in Boston's underserved neighborhoods with small business training at the Codman Square Neighborhood Development Corporation (NDC) Computer Learning Center and Dorchester Bay Economic Development Corporation (EDC). The program includes digital tools to enhance management, marketing, and sales; the option to purchase a new tablet or laptop for $50; and assistance improving English-language skills. Connected devices deserve a 100% fiber-optic network Connected devices can perform only as well as the network they're on - from TVs to tablets, everything is amazing on Verizon's 100% fiber-optic network. Imagine the brilliant picture quality of 4K UltraHD delivered over a 100% fiber-optic network. For a limited time, Verizon is offering residents of the City of Boston Fios Triple Play Bundles, featuring Fios Gigabit Connection internet with speeds up to 940/880 Mbps, TV and phone at a promotional price of $69.99* per month for 2 years. Additionally, for customers in contract with cable or satellite providers, Verizon Fios is offering up to $500 credit to help offset any early termination fee. Fios is the most awarded network for internet speed and customer satisfaction over the past 10 years. Fios Internet is ranked "Highest Residential Internet Satisfaction in the East"** by J.D. Power four years running, and rated #1 in satisfaction with speed for 12 years in a row in PC Magazine's Readers Choice Survey.*** *Pricing for two years + Taxes, equip. charges, RSN, FDV & other fees. ** Verizon received the highest numerical score among 6 companies in the East in the J.D. Power 2013-2016 U.S. Residential Internet Service Provider Satisfaction Study. 2016 study based on 24,203 total responses, measuring the opinions of customers with their internet service provider, surveyed November 2015 - July 2016. Your experiences may vary. Visit jdpower.com. *** Reprinted from www.pcmag.com with permission. ©2017 Ziff Davis, Inc. All Rights Reserved.
Based on PCMag.com's Readers' Choice Survey customer ratings of ISP performance. Verizon Communications Inc. (NYSE, Nasdaq: VZ), headquartered in New York City, has a diverse workforce of 163,400 and generated nearly $126 billion in 2016 revenues. Verizon operates America's most reliable wireless network and the nation's premier all-fiber network, and delivers integrated solutions to businesses worldwide. Its Oath subsidiary houses more than 50 media and technology brands that engage about 1 billion people around the world. VERIZON'S ONLINE MEDIA CENTER: News releases, stories, media contacts and other resources are available at www.verizon.com/about/news/. News releases are also available through an RSS feed. To subscribe, visit www.verizon.com/about/rss-feeds/. Related Links http://www.verizon.com/ https://www.verizonwireless.com/ http://www.verizonenterprise.com/ http://www.verizon.com/about/ Media contact: Mike Murphy 781-932-1213 [email protected] Twitter: @mikemurphypr SOURCE Verizon Related Links http://www.verizon.com
{ "pile_set_name": "OpenWebText2" }
The slow-motion disintegration of the Conservative and Labour parties is the key political fact of 2019. Brexit is the immediate detonator, but the underlying causes run deeper and create the opportunity to reshape politics. Both of the large parties prospered because they were coalitions; they found space for liberals who argued for economic and social progress based on respect for individuals and communities, and for Britain’s role in the community of nations. However, this approach has broken down. Although many social democrats and liberal Conservatives still instinctively prefer to argue their case within the larger parties, there are several reasons why this approach no longer works. First, there is no time. The traditional parties are both committed to carrying through different versions of Brexit, but
{ "pile_set_name": "OpenWebText2" }
FILE - In this Jan. 24, 2018, file photo, the Alaska Marijuana Control Board meets in Juneau, Alaska. Alaska regulators, once on the cusp of allowing onsite use of marijuana at authorized retail stores, will take another run at the issue. Sitting from far left are members Loren Jones, Brandon Emmett, Nicholas Miller, Travis Welch and Mark Springer. (AP Photo/Becky Bohrer, File) JUNEAU, Alaska (AP) — Alaska regulators, once on the cusp of allowing on-site use of marijuana at authorized retail stores, plan to take another run at the issue this week. The Marijuana Control Board is scheduled to discuss proposed rules for allowing on-site consumption, but whether the board will reach a final conclusion isn’t clear. The board is down one member; Travis Welch resigned less than two months after his appointment to the public safety seat after being dismissed from his job as a police chief. The board’s director has recommended that the panel put the draft rules out for public comment. The board is scheduled to meet in Nome for three days, starting Wednesday. Regulators have been mulling on-site use for several years, adopting rules in late 2015 to allow for people to use marijuana at authorized stores, but never finalized how that would work. One of the ideas behind on-site use was providing a place for tourists to partake. The board last year rejected a set of proposed rules, with member Mark Springer, a frequent swing vote, suggesting moving slowly, citing uncertainty with how President Donald Trump’s administration might view marijuana. Weeks later, though, the board decided to give it another try. The latest iteration was worked on by a subcommittee consisting of board members Brandon Emmett and Loren Jones, who have been on opposite sides of the issue. Emmett holds an industry seat on the board; Jones, the public health seat. The proposal would allow for on-site use in a designated area, separated from the rest of the store either by a secure door, in an area with a separate ventilation system, or outdoors. The board would have to evaluate whether an outdoor consumption area was compatible with other uses in the area. A person could only use marijuana purchased from the on-site retail store — they couldn’t bring any from home. The proposal sets daily rather than transactional limits on what a store can sell to a person, which the board’s director, Erika McConnell said would be similar to alcohol tasting rooms. It would allow for local governments to protest on-site use applications and to pass local ordinances prohibiting on-site consumption or certain elements of on-site use, such as smoking. California permits marijuana smoking at marijuana retailers with specially designed lounges. But it also allows cities to ban those kinds of shops. Emmett, who has expressed frustration with how long the debate over on-site use has gone on, said he thinks the board will be presented with a clearer, more fleshed out set of rules than they’ve considered before. “I was able to put my own personal frustrations aside and really look at, OK, what are the issues here?
Why has the board been so unwilling to discuss this?” he said. Jones said the process for working on the draft was a good one, but said supporting the proposal is “a bridge too far for me at this point.” If any rules were to pass the board, he said he wanted something in place that would be workable and enforceable. But he said he’s still not convinced that the initiative passed by voters in 2014 legalizing recreational use of marijuana permitted on-site use. “I think the attorneys disagree with me, but I’m sort of stuck on that,” he said.
{ "pile_set_name": "OpenWebText2" }
By Megan Lee There are no dodgeball games, cookouts or other rushing events at Virginia Commonwealth University's campus in Richmond, but fraternities and sororities are still recruiting new brothers and sisters. The Greek chapters at VCU, and many other Virginia schools, are using Zoom to recruit new members. Some fraternities and sororities believe the challenge of social distancing has strengthened bonds among their members as well as their philanthropy efforts. "Not being able to meet in-person this semester as a whole chapter has been hard, but it has given us more time to focus on our priorities," said John Rudolph, VCU Pi Kappa Alpha recruitment chair. "Those being our grades, community service and philanthropy." The Greek chapters at VCU have given around 8,400 hours of time to charities in the last academic year, said LaDarius Thompson, associate director of Civic Engagement and Fraternity and Sorority Life at VCU. VCU Pi Kappa Alpha is finding alternative methods for their usual events like Bowling Buddy, community clean-ups and food drives, according to Rudolph. Rudolph said the organization is preparing virtual fundraisers using Instagram and Venmo for its annual Fireman's Challenge, benefitting the Evans-Haynes Burn Center in Richmond. Bingo donation boards, orders of Campus Cookies, and raffles are just a few of the virtual fundraising challenges Virginia Tech's Kappa Alpha sorority is circulating through Instagram and Snapchat stories. Mojdeh Nourbakhsh, Panhellenic director of risk at Virginia Tech, said that most fraternity and sorority causes "are in greater need now more than ever" due to the pandemic. Kappa Alpha and Kappa Delta at Virginia Tech are raffling Airpods and a TV on social media to fundraise for NRV CARES, a nonprofit advocating for children involved in Juvenile and Domestic Relations Court proceedings. William & Mary Kappa Sigma President Danny Driscoll said the chapter has raised $600 worth of food this semester for the Williamsburg House of Mercy, a local homeless shelter, and plans to raise another $10,000 over this academic year for philanthropy while following COVID-19 guidelines. These philanthropy efforts are sometimes overshadowed by the notorious social life fraternities and sororities can bring to college campuses. But these social and recruiting events play a large role in establishing the sense of brotherhood and sisterhood that is an integral part of the organizations, Driscoll said. Nourbakhsh said that when recruiting in person, "you get a better feel of their energy and how they would benefit your chapter best." Since most of these in-person events cannot happen, many Greek leadership boards decided to decrease semester fees. "I think it's so wrong to charge someone $300 when what are they really getting besides to say, 'I'm in Kappa Sigma?'" Driscoll said. Although rectangles of faces on a Zoom call have replaced real-life meetings, there is no substitute for the brothers and sisters that live together — a common standard for many fraternities and sororities. Students have tested positive in Virginia Tech's Kappa Delta house, said Claudia Wrenn, the sorority's vice president of membership. Wrenn said that the organization's positive students were quarantined off campus until they were well. She said that resident assistants conduct walk-throughs of all on-campus Greek housing, ensuring that masks are worn at all times in common areas and social distancing measures are in place.
Potential issues arise in off-campus housing, where universities do not have much control. Thompson said that VCU, the Interfraternity Council and VCU Police have met with sororities and fraternities to reiterate state orders and find ways to prevent COVID-19 within Greek-populated houses. Some of the Greek chapters that ignored college COVID-19 guidelines have suffered the consequences. Radford University suspended the Iota Zeta chapter of its Theta Chi fraternity for not following health measures during the COVID-19 pandemic, according to The Roanoke Times. James Madison University also brought Harrisonburg to the top of a USA Today list of college towns with the worst U.S. COVID-19 outbreaks. Twenty percent of JMU students participate in Greek organizations, according to the university’s Fraternity and Sorority Life page. There are no confirmed links between JMU Greek life and the school’s COVID-19 cases, but Driscoll said that people are connecting the two. “We don’t want to be the scapegoats that they’re making Greek life into at JMU right now,” Driscoll said. About 800,000 undergraduate students participated in Greek life across the country in 2018, but VCU Pi Kappa Alpha and William & Mary Kappa Sigma have seen lower recruitment numbers than usual with this year’s freshman class. Virginia Tech Kappa Delta is anticipating even lower numbers for spring recruitment. Driscoll said Kappa Sigma has not reached half of the number of potential new members he would typically like to have at this point in the semester. However, he said the unique challenges of this year have created a space for a different bond amongst the incoming class as they “navigate the pandemic together.” Driscoll decided to have a “year-long pledge class” for the 2020-2021 school year to create a longer acclimation period for potential recruits and a greater reach to interested students. As fraternities and sororities reevaluate chapter goals, they also have time to reflect on Greek life’s impact beyond the college campus, Nourbakhsh said. Nourbakhsh serves on a national committee created by the National Panhellenic Conference to address underlying racism and noninclusive policies within Greek life. “There is so much work to be done in our country and starting off with organizations like Greek life and fixing systemic issues is a great start to changing the nation’s perceptions and culture,” Nourbakhsh said. Students said they are eager to return to regular life. “Coming from a pandemic and going back to normalcy, whatever normalcy will be, is going to benefit participation, loyalty, interest in these fraternities and sororities because people miss it,” Driscoll said.
{ "pile_set_name": "OpenWebText2" }
Thank you for the awesome assortment of gifts. These two "lite" murder mystery books are right up my alley and I know I can escape for a few hours while reading them and I won't be disturbed because my cat and dogs will be having fun playing with their new toys too! The Squirrel hide-away is so cute I almost hate to let them slobber on it and I know my crazy cat will be chasing her new ROBOTIC CAT TOY! How cool is that? You made my day brighter, thank you kind person!
{ "pile_set_name": "OpenWebText2" }
A recent academic paper (PDF) shows “that Tor faces even greater risks from traffic correlation than previous studies suggested.” In other words, one of the world’s best tools for keeping online speech anonymous is at risk in a previously known—but now even clearer—fashion. In the wake of a recent uptick in Tor usage (whether from a botnet or from people inspired by former National Security Agency [NSA] contractor Edward Snowden), a reminder of these risks is certainly germane to today’s Internet.

The new research has shown that a potential adversary with control of Internet Exchange Points (IXPs) or autonomous systems (ASes) that have large-scale network control (like an ISP) could expose and identify a Tor user, given enough time. That could include a nation in which the Internet is state-controlled, like Iran, or a vast telecommunications company like Level 3, but it could also certainly include a very sophisticated adversary with significant technical and legal resources on its side, like the NSA.

“Essentially what we’re saying is location matters,” Chris Wacek, a researcher at Georgetown University and one of the paper’s authors, told Ars. “If you are a user connecting from Iran and you’re connecting to a destination in Iran, you can plausibly assume that the Iranian government knows who you are. If you have a concern that that type of entity might pose an adversarial threat to you, then you should be aware that they may be able to compromise you given a long enough period of time, even if you’re using Tor.”

Essentially, an adversary can simply wait long enough so that your traffic will turn up on their own network points that are also on the Tor network. Given more time and more traffic, there is greater likelihood that an adversary can figure out who you are.

“If you use Tor as a casual user, your security isn’t going to go down dramatically”

Specifically, a group of five researchers at Georgetown University and the Naval Research Laboratory—the arm of the Navy that originally developed Tor—explained it this way:

An adversary that provides no more bandwidth than some volunteers do today can deanonymize any given user within three months of regular Tor use with over 50 percent probability and within six months with over 80 percent probability. We observe that use of BitTorrent is particularly unsafe, and we show that long-lived ports bear a large security cost for their performance needs. We also observe that the Congestion-Aware Tor proposal exacerbates these vulnerabilities. Some of our results against an adversary controlling ASes or IXPs are similarly alarming. Some users experience over 95 percent chance of compromise within three months against a single AS or IXP. We see that users’ security varies significantly with their location. However, an adversary with additional ASes or IXPs has much higher compromise speed, notably against even those users in “safer” locations. Such an adversary is highly relevant in today’s setting in which many large organizations control multiple ASes or IXPs. Surprisingly, we observe that high diversity in destinations may actually result in improved security against a network adversary.

The folks behind Tor have said that they have long been aware of this vulnerability. “Yes, a big enough adversary can screw Tor users,” Roger Dingledine, the project’s director, wrote on a Tor e-mail list earlier this week. “But we knew that.
I think it's great that the paper presents the dual risks of relay adversaries and link adversaries, since most of the time when people are freaking out about one of them, they're forgetting the other one. And we really should raise the guard rotation period. If you do their compromise graphs again with guards rotated every nine months, they look way different." But that doesn’t mean that everyone should necessarily disconnect from Tor, Wacek added. “What our research shows is that [if] an adversary who is powerful like the NSA is paying attention to you—and is looking for you and trying to deanonymize you—[they are] likely to be able to do it if they have access to the right network locations and the right resources,” he said. “If you use Tor as a casual user, your security isn’t going to go down dramatically. I think what we’re showing is that for a certain type of user, potentially a dissident who always uses Tor to avoid being captured, this may be a significant concern for them. If you don’t want your employer to know that you’re searching for health information and are worried about your health insurance going up, it’s probably not a concern.” The paper will be formally presented at a computer science conference in Berlin in November 2013.
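The “given enough time” dynamic the researchers describe is easy to sketch numerically. The toy model below is not the paper’s actual methodology; it simply illustrates how even a modest adversary compounds across guard rotations. The bandwidth shares and the 30-day rotation period are illustrative assumptions, not figures from the study.

```python
# Toy model: a client picks a new entry guard every `rotation_days`, and an
# adversary observes a fraction `share` of guard-relay bandwidth. If any
# chosen guard is adversary-observed, assume traffic correlation succeeds.

def compromise_probability(share: float, days: int, rotation_days: int = 30) -> float:
    """P(at least one compromised guard is chosen within `days` of use)."""
    rotations = max(1, days // rotation_days)
    return 1.0 - (1.0 - share) ** rotations

for share in (0.01, 0.05, 0.10):   # assumed adversary bandwidth shares
    for days in (90, 180):         # three and six months of regular use
        p = compromise_probability(share, days)
        print(f"share={share:.0%}, days={days}: P(compromise) ~ {p:.2f}")
```

Under these toy assumptions the probability climbs steadily with time, which is consistent with both the paper’s three-month and six-month findings and with Dingledine’s remark that the guard rotation period should be raised: fewer rotations mean fewer chances to land on a compromised guard.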
{ "pile_set_name": "OpenWebText2" }
Between shoots for the movie Sticking Point in the blazing hot sun in Palm Springs, Trenton Ducati found some time to soak up some local culture. Thanks to Bear Wear for letting us shoot in their store. Last modified: Feb 28, 2013
{ "pile_set_name": "OpenWebText2" }
Anyone who has just joined the workforce for the first time has a list of things to spend on—from clothes to gadgets, and more. Saving and investment rarely feature in this list. This may sound boring and even unimportant, but if you don’t want to be financially lost, you must plan your finances. Here are a few things you can do with your income in the early stages of your career.

Start early

When it comes to growing your money, the earlier you start saving and investing, the easier it will be to build a corpus. “You should understand the power of compounding. Unfortunately, people don’t understand it and how starting early will enable lower investment savings,” said Dilshad Billimoria, director, Dilzer Consultants Pvt. Ltd. Say you are 25 years old and plan to retire at 60. If your current annual expense is ₹10 lakh, the expenses in your first year of retirement would be ₹77 lakh, assuming annual inflation of 6%. So, you will need a corpus of ₹10.7 crore at age 60, for which you need to invest ₹28,000 per month till retirement age and earn a return of 10% on it. If you delay and start investing only when you turn 30, you would need to save ₹35,365 per month. So, the later you start, the more you need to save.

Identify goals later

You may be wondering, why invest when you don’t have goals. Imagining retirement or any other kind of long-term goal is difficult when you are in your 20s. “Many financial commitments come in the form of events. The older you get, the more difficult it gets to catch up to the expenses. People don’t think about this in their 20s,” said Leo Puri, managing director, UTI Asset Management Co. Ltd. How does one overcome this difficulty? “It is a simple thing. Generally, your financial goals will include retirement, buying a house, marriage, children, their higher education and marriage, your higher education, travel and spending on gadgets or white goods. Even if none of these make it into your list right now, they will soon creep in,” said Suresh Sadagopan, a Mumbai-based financial planner. Even if you don’t have a goal, keep a part of your salary aside to be used for future needs.

Insure yourself

Once you have decided to save a certain portion of your income, the next step you may assume is to invest. It’s not. The next step should be buying health insurance so that medical liabilities are taken care of. “Life insurance can wait. But you should take medical insurance immediately. You may think that your employer will take care of it. But health issues can occur any time, say, when you are in between jobs. Consider taking health cover of at least ₹3 lakh, which will cost you under ₹4,000 per annum,” said Sadagopan. You don’t want to dip into your savings or investments when you have an option to hedge.

Understand products

After health insurance comes investing. You must remember that over time, money loses value due to inflation and taxes. So, leaving all your money in a savings account is not prudent. Of course, that doesn’t mean that you invest in any product that gives you higher returns than a savings deposit. You should calculate the returns you get after factoring in inflation and tax. “People don’t understand the difference between real return and nominal return. They misunderstand nominal return to be the real return. Always remember to factor in inflation when you are investing,” said Vivek Dehejia, professor of Economics at Carleton University in Ottawa, Canada. So, which product to choose?
Since you have time on your side, you are in a better position to take risk. “Equity-oriented products are a good option. But you should invest at least 40% of your money in lock-in products such as Public Provident Fund as it will help you build financial discipline,” said Sadagopan. You can create a corpus by investing in short-term products such as debt funds or even bank fixed deposits. This will help build financial discipline.

Though you should save and invest regularly, it doesn’t mean that you can’t indulge. “You can buy a new gadget or go for a vacation, but it doesn’t mean that you go overboard with your credit card and spend more than you can afford,” said Sadagopan. If you have a basic understanding of financial products and how they work, you will be able to make the right decisions about your money life. Doing so will help you earn healthy returns.
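The compounding example in the “Start early” section is easy to verify. Here is a minimal sketch using the article’s figures (₹10 lakh of current annual expenses, 6% inflation, a ₹10.7 crore target corpus); monthly compounding of the 10% return is an assumption, not something the article specifies:

```python
# Check the "start early" arithmetic: expenses inflate at 6% a year for 35
# years, and a monthly SIP earning an assumed 10%/yr (compounded monthly)
# must grow to the target corpus by age 60.

def inflated_expense(expense=1_000_000, inflation=0.06, years=35):
    """Annual expenses after `years` of inflation."""
    return expense * (1 + inflation) ** years

def required_monthly_sip(corpus=107_000_000, annual_return=0.10, years=35):
    """Monthly investment needed to reach `corpus`, end-of-month deposits."""
    r = annual_return / 12
    n = years * 12
    fv_of_one_rupee = ((1 + r) ** n - 1) / r  # future value of a 1-rupee-a-month SIP
    return corpus / fv_of_one_rupee

print(f"first-year retirement expense: {inflated_expense():,.0f}")    # ~77 lakh
print(f"required monthly investment:   {required_monthly_sip():,.0f}")  # ~28,000
```

Both outputs land close to the article’s ₹77 lakh and ₹28,000 figures, so the quoted numbers are consistent with monthly compounding at 10% a year.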
{ "pile_set_name": "OpenWebText2" }
The Best Hotels

We want you to focus solely on your worship. In our hotels, which also have Turkish chefs on staff, we offer excellent conditions for the hotel standard you choose, in terms of restaurants, cleanliness and comfort.

Dedicated Guidance

We are a step ahead with our ratio of guides to guests. With our guides, who live in the Holy Lands and place the utmost importance on customer satisfaction, we are always close at hand throughout your Umrah worship and visits.

The Best Airlines

In our Umrah tours you will not find vague “and/or” wording on matters like hotel options, airlines and flight details, because everything is clear, right down to the time of your flight.

Guaranteed-Departure Tours

All of our programs are guaranteed departures. Whether 1 person or 40, the tour goes ahead. We have programs departing from Istanbul and Ankara every week. Our average group size is 20 people.

Happy Customers

Our satisfaction ratings are above 97%. Endless thanks to you, our valued customers, who choose us, recommend us, and send us your thanks at every opportunity.
{ "pile_set_name": "OpenWebText2" }
Over the weekend, a Boeing 737 Max departing Addis Ababa, Ethiopia, crashed and killed all 157 people on board, the second deadly disaster involving the new model in five months. In the last 48 hours, China, Indonesia, Malaysia, Singapore, Australia, and the U.K. have closed their air spaces to the Max, with Europe expected to follow suit, as investigators attempt to piece together why the plane that generates nearly a third of Boeing’s operating profits crashed six minutes after takeoff. Luckily, on Tuesday, aviation specialist Donald Trump weighed in with his assessment: If your initial response to that was “uh, what?” or simply stunned silence, you’re not alone. Per usual, no one seems to know what Trump is getting at here, other than his obsessive nostalgia for the good old days, when men were men, women were women, stewardesses were appreciative of a pinch on the ass, and [checks notes] planes used to crash a lot more. Planes being more automated is, as most people would agree, a good thing, and, as others have pointed out, currently there are no commercial pilots who don’t know how to “take control of a plane.”
{ "pile_set_name": "OpenWebText2" }
The long-awaited Minix Z64 PC has finally appeared after many weeks of speculation as to when it would finally be unleashed. Many, this author included, are excited given Minix’s track record for build quality and customer support. GeekBuying are the first to offer the Z64 for sale. It comes in two variations, the Z64A and Z64W. Whilst both offer the same hardware, the key difference is the pre-installed operating system. The Minix Z64A comes preloaded with Android 4.4.4 KitKat, whereas the Minix Z64W comes with Windows 8.1 with Bing. Given the success of Windows devices such as the PiPO X7, Minix entering the market is certainly exciting.

UPDATE: GearBest have also added both the Z64A and Z64W into their catalogue.

MINIX NEO Z64W/Z64A Technical Specifications

Chipset: Intel Baytrail quad core processor with Intel HD graphics (Z3735F)
RAM: 2 GB DDR3
Storage: 32 GB eMMC + micro SD slot up to 32GB
Video & Audio Output: HDMI 1.4, 3.5mm Audio
Connectivity: 802.11 b/g/n Wi-Fi, Bluetooth 4.0, 10/100 Ethernet
USB: 2 x USB 2.0 ports
Other Features: Power button, IR Remote Control
OS: Android 4.4.4 KitKat (Z64A) or Windows 8.1 with Bing (Z64W)

MINIX NEO Z64 Videos

GeekBuying have put together a number of videos showing various aspects of the device.

MINIX NEO Z64 Unboxing
MINIX NEO Z64 Teardown
MINIX NEO Z64A Review
MINIX NEO Z64 BIOS Walkthrough

MINIX NEO Z64W/Z64A Photos

Getting one

You can purchase the MINIX NEO Z64W and Z64A from GeekBuying here and here respectively. They are currently offering a promotion where you also receive a free 8 GB microSD card and wireless mouse. The Z64A and Z64W are also available from GearBest.
{ "pile_set_name": "OpenWebText2" }
Soon™ IGN - Mitthrawnuruodo One half of the "Frick and Frack of negativity".
{ "pile_set_name": "OpenWebText2" }
by Carol Tilley The business of early comic book publishing in the US is something of a black box: too little data about actual practices, too many secrets in the name of competition, and too much self-aggrandizement in lieu of actual information. What we do know is kludged together lovingly by a small army of devoted comics historians from oral histories of aging comics pros, scant extant company records, occasional legal proceedings, notices in trade publications, and similar bits.[1] It’s a documentary amuse-bouche, but one seemingly without the promise of a main course. Last year in the course of doing some research on comics readership, I stumbled across an interesting work that, while not providing a main course, perhaps at least gets us closer to an entrée. David McKay Company was a Philadelphia-based publisher, founded in the 1880s. McKay published eclectic materials from the complete works of Shakespeare, detective fiction, and children’s books alongside titles on home economics, animal care and parlor games. In the early 1930s, McKay issued a handful of hardcover comics reprints including Mickey Mouse and Popeye. By the mid-1930s, McKay had brokered a deal with King Features Syndicate to repackage their newspaper comics—Katzenjammer Kids, Blondie, The Phantom, and other titles—as floppies. These reprints were published under three titles, Ace Comics, King Comics, and Magic Comics. The first two ran for more than 150 issues each with Magic making it for more than 120 issues. Feature Book, another McKay comics title, accumulated more than 50 issues; unlike its siblings, each issue of Feature included stories from a single property. In the 1940s, a young accountant named Charles Cridland took over as treasurer for McKay. Cridland, who grew up in the Philadelphia area and was himself the son of an accountant, was married to Margery McKay, the granddaughter of David McKay and the daughter of Alexander McKay, who was then the company’s president. He and Margery may have been high school sweethearts, as they were both members of the Class of 1932 at Upper Darby High School. “Charlie,” pictured here in his senior class photo, was ribbed in the yearbook about being “quite a naturalist,” who was especially interested in “Boyds.” Cridland served on the naval destroyer USS Madison for at least part of World War II, and then in 1946, he took on oversight of the comics business at McKay in addition to his regular position as treasurer. Sometime around 1947 when Drexel Institute of Technology (now Drexel University) launched a part-time evening program leading to a Master of Business Administration, Cridland enrolled. He deposited an original 115-page typewritten thesis titled An Analysis of Comic Magazine Sales in the United States from 1935 to 1949 in December 1949. In winter 2015, after finding it in a list of search results and requesting it through interlibrary loan, I got to read it. Cridland’s motivation for choosing this particular topic for his thesis was practical. He wrote that when he took over the comics division for McKay, it “was then publishing five or six titles with an annual sale in excess of twenty-five million copies. I soon found that there was little available in the way of management tools to effect any informed decisions.” He goes on, The main problems facing each publisher are first, to set print orders for each issue so that there are sufficient copies for adequate distribution and at the same time to keep returns of unsold copies to a minimum. 
Second, each publisher must decide on the advisability of adding new titles and of discontinuing old ones. For the industry these two problems have existed since the end of the war and have become increasingly pressing. Excessive returns due to unbalanced production have been increasing. In turn there appear to be too many different titles (pp. 1-2).

Cridland was both a rationalist and an optimist: if he could document and analyze comics sales over time, looking for patterns with regard to seasons, pagination, and frequency, perhaps he could improve the industry’s bottom line. A handful of folks including some of the officers of the recently formed Association of Comics Magazines Publishers (ACMP) were excited about the possibility of wide-scale cooperation. But, as you might suspect, the industry as a whole wasn’t ready for management efficiency. Cridland writes,

Unfortunately, the comic magazine industry by its very nature has not been conducive to cooperation. Specific sales information concerning particular titles gives competitors results of perhaps expensive testing. It also invites immediate competition in the form of similar magazines…Very little information regarding sales is available. Estimates have been largely guesswork and most decisions have probably been of a similar nature (p. 2)

An even more sobering paragraph follows.

Before attempting to gather data, I spoke with many people in the industry. All of them thought that such a study would be extremely useful but most stated that I probably would not be able to obtain material. While I was able to gather material, it was extremely difficult and my being known in the industry probably accounts for this limited success (p. 3, emphasis mine).

After laying out his rationale and method in the opening chapter, Cridland uses the second chapter to provide an overview of the comics industry. Some of the history and explication in this overview is familiar even to the casual student of comics history, but there are some less common insights. For instance, it may seem counterintuitive, but repackaging newspaper strips in comic book form is viewed here as a more costly endeavor than creating wholly original material.

However, to adapt it to comic book form it was necessary to rescale and reballoon it to comic book size. And, since the newspaper strips were often produced for black-and-white daily papers, it was necessary to make color schemes (p. 8).

Likewise there are some useful insights into accounting practices related to distribution and sales.

Some distributors make a first payment of twenty-five percent as soon as distribution is completed…the final payment for an issue of King Comics is made at least sixty days after the off sale date, and then some adjustments will continue for another two or three months until the exact final net sale can be determined (p. 12).

Cridland also outlines the pricing structure. For a 10-cent comic, wholesalers paid 6 cents, which he says had just been negotiated to 5¾ cents for the majority of them, which some urban wholesalers were already paying. Wholesalers charged retailers 7½ cents per issue. For each issue sold, wholesalers kept a brokerage fee of ½ cent. Thus publishers grossed 5¼ cents for each sale, retailers 2½ cents, and wholesalers 2¼ cents. Unsold issues might be resold as premiums (e.g. 25 for $1) for use at movie theaters and similar businesses.
They might also be sold overseas—although Cridland notes this practice had been largely curtailed because of currency devaluation—or donated to veterans hospitals, prisons, or other institutions.

In the third chapter, Cridland begins to provide some specific information about manufacturing and sales. With regard to manufacturing cost, Cridland cautions that his information should be viewed as an approximation.

Including all costs assignable to printing an issue and before considering overhead, the publisher of a magazine must sell from fifty to sixty percent of print order to break even. These break even points are applicable on a printing of from three hundred and fifty thousand to five hundred thousand copies [Earlier he states that a 500,000 printing would be high, and that an 80% sales rate would be viewed as strong, although he also notes later that during the war-era paper restrictions, printing runs sold at nearly 100%] (p. 24).

Using data gathered by Edward and George Dougherty in their Comic Magazine Publishing Report, a monthly bulletin of on-sale dates for comics, Cridland compiles a comprehensive tabulation of the numbers of comics titles on sale each month between January 1942 and October 1949, both as aggregates and by periodicity (or frequency). The low point in the range is 99 titles in June 1942, and the high end is October 1949 with 326 titles. With a few exceptions, the number of titles between 1942 and 1945 stays between 120 and 150, but in 1946, the number of titles begins to balloon. Here the range is 175 to 222. In 1947, it moves up: 198 to 226; 1948 sees it tick higher, 237 to 299. The ten months reported for 1949 give a range of 292 to 326. Bi-monthly and quarterly titles accounted for most of the growth during the decade.

Cridland documents a similar trend in shrinking page numbers. Whereas in May 1943 more than 80% of comics were 64 pages, that figure began to fall rapidly: by December 1943, only about 5% of comics still had 64 pages, with most at 56. Within another six months, page counts had shrunk even more, so that by July 1944, more than 80% of comics were at 48 pages. Significant growth in the 32-page format didn’t happen until 1948, and by late 1949, there were more 32 than 48-page titles for sale.

The fourth chapter rewards the reader with some specific sales figures. Cridland uses Ace Comics as a single case. Full-year sales fluctuated for this title from a low of 2,004,000 (223,000 average per issue) in 1937 to a high of 5,761,000 (476,000 per issue) in 1945. Between 1947 and 1948, sales dropped by nearly 50%, a drop made more vivid by an accompanying chart that depicts an almost cliff-like edge. Cridland writes,

Sales during 1947 and 1948 show a very rapid decline from this unusual and highly satisfactory condition. Generally, when a title is made into a bimonthly from a monthly some increase in issue sales can be expected. In the case of Ace Comics, this was barely discernible. Actually then, Chart X represents the life cycle of this magazine; and, while the growth from 1941 through 1946 may indicate the situation for many other comic magazines, the subsequent period does not (p. 67, emphasis mine).

The final issue of Ace bore the October-November 1949 date on its cover. In fact, all of McKay’s comics ended their runs that fall. More phenomenal sales growth awaited publishers that could or chose to hang on, but McKay was finished with its comics gambit. Cridland offers more sales data, though, in this chapter.
Here I include a photograph of the table, Table XI, in which the sales of ten titles are compared against sales Cridland calculated for all titles reported by the Audit Bureau of Circulation (ABC). He does not name the ten titles, but three of them are likely Ace, King, and Magic, as he notes that three were being suspended at the end of the calendar year. Few of the ten were viable:

The magazines comprising this combination are of a very similar content. Two titles were suspended in 1948. Several were changed to a bimonthly status early in 1949. Three are scheduled for suspension by the close of 1949, and the publisher of another is considering what change he will make (p. 69).

Cridland delves deeper into the data available in the ABC reports. The ABC (which is now the Alliance for Audited Media) compiles figures from periodicals publishers to help establish reasonable advertising charges. Comics publishers typically reported their sales to ABC not by title, but by group, making it difficult to determine how well particular comics sold. Cridland, though, laboriously figured the numbers of titles represented in the ABC data, weighted them by frequency, and calculated means. Plus, he compared the total number of issues as represented in the ABC data to the number of issues that the Doughertys tallied each month.

The key takeaways are that the mean sales per issue – as calculated from the ABC data – ranged from a low of 256,000 in 1940 to a peak of 559,000 in 1945. For the first half of 1949, the mean sales were 342,000 per issue.[2] While there was a decline in sales per issue after the end of the war, the number of comics on sale increased significantly, driving total sales per year higher. Cridland reports what the total sales are according to the ABC data, but I’m skipping that part. Why? Because he also found that the ABC data accounted for only about half of all of the titles that were actually on sale. To account for this disparity, Cridland works out estimated total comics sales for the years 1942 to 1949. Here are his figures:

1942: 231,000,000 issues sold
1943: 407,000,000
1944: 518,000,000
1945: 532,000,000
1946: 644,000,000
1947: 641,000,000
1948: 728,000,000
1949: 750,000,000 (extrapolated from the first 6 months)

These numbers are a little more robust than, but largely consistent with, figures I’ve encountered elsewhere. Certainly they help make sense of the trajectory that led Publishers Weekly in a 1954 article to place annual comics sales above 1 billion.[3] Except to note that Cridland found peak sales for McKay titles in January and August with a dip in the summer months, I’ll omit his discussion of seasonal trends. Instead, let me leave you with one of his concluding observations.

[I]t should be pointed out that, although the industry is enjoying the highest level of sales since it began, it is highly probable that present operations have resulted in the lowest financial return. Probably more comic magazines are returned unsold today [1949] than were produced in 1946. Because of the lack of mutual trust among publishers, little is known of actual results. During those months of the year when sales can reasonably be expected to be lower due to seasonal slack in demand many print orders are not regulated accordingly. Conditions in the field of distribution are not satisfactory…Many times large bundles of magazines never leave the wholesaler’s warehouse but are moved from the receiving platform to the return room…

[T]here appears to be a pressing need for complete information.
Such factual data could then be combined to furnish industry-wide data so essential to management control. In addition, the continued development and improvement of such management tools would undoubtedly be accompanied by more experienced and competent management throughout the industry (pp. 106-107).

About a year and a half ago here on the Beat, Brad Ricca and Sean Howe shared their discovery of a 1942 graduate thesis by cartoonist Paul Cassidy. Although, as I’ll share in the near future, it wasn’t the first graduate level work on comics in the US, it is unique because of its author: someone in the industry studying industry practices. Cridland’s thesis is intriguing for the same reason, and it offers a useful parallel to Cassidy’s work. In Cassidy’s paper we learn about cartooning practices via a survey; in Cridland’s we gain insights into management and sales via a detailed analysis. Are there other insider gems awaiting us in a library’s stacks? Possibly, but I don’t think so. Cridland left McKay, although I’m not certain when, and took a similar job with the publisher Thomas Nelson & Sons. He died in 1972. I wonder what he would think of the comics industry today?

About the Author

Carol L. Tilley is Associate Professor in the Graduate School of Library and Information Science at the University of Illinois Urbana-Champaign. Part of her scholarship focuses on the intersection of young people, comics, and libraries, particularly in the United States during the mid-twentieth century. An in-demand speaker on comics history, you can find her on the web at www.caroltilley.net and on Twitter at @anuncivilphd and/or @comicscrusader. And don’t miss her post about a 1948 anti-Wertham comic created by a teenage fan that she recently discovered in the collections of the Billy Ireland Cartoon Library and Museum.

[1] These historians include Brad Ricca, Sean Howe, Gerard Jones, and Michael Barrier. I have great respect for the work they’ve done and continue to do to tell the story of comics.

[2] By way of comparison, the top selling comic in January 2016 was Walking Dead #150, which sold about 156,000 copies. John Jackson Miller’s Comichron site contains extraordinary information about comics sales during the past half-century.

[3] “Comic Comics.” Publisher’s Weekly 165 (1954): 2042.
{ "pile_set_name": "OpenWebText2" }
On January 21, 2010, the Supreme Court in a misguided and destructive decision ruled in Citizens United v. Federal Election Commission that the longstanding ban on corporate expenditures in federal campaigns was unconstitutional. In reaching this decision, the Court unleashed a flood of unlimited contributions into federal elections through Super PACs and other independent spending entities, and thereby unleashed the corruption of our government. Recently the argument has been made that the Citizens United decision had little responsibility for the torrent of unlimited individual contributions being spent by Super PACs to influence the 2012 elections. This argument is wrong. A little history is in order. Citizens United and Super PACs Super PACs are federally registered political action committees that file disclosure reports with the FEC and spend unlimited contributions from individuals, corporations, labor unions and other entities on independent expenditures to influence federal elections. Super PACs are playing a major role in the 2012 presidential primaries and will continue to be a major factor throughout the 2012 presidential and congressional races. The Citizens United decision explicitly held that corporations could make independent expenditures in federal elections and also implicitly held that corporations could give unlimited amounts to third party groups, such as Super PACs, to make independent expenditures in federal races. Two months later, on March 26, 2010, the D.C. Circuit Court of Appeals in SpeechNow v FEC held unconstitutional the existing $5,000 per year limit on the amount that an individual could contribute to a third party group, such as a Super PAC, to make independent expenditures in federal elections. The SpeechNow decision is explicitly based on the earlier Citizens United decision. In the operative sentence of the SpeechNow decision, Judge David Sentelle writing for the full D.C Circuit Court of Appeals stated: Thereafter, the Supreme Court decided Citizens United v. FEC, 130 S. Ct. 876 (2010), which resolves this appeal. In accordance with that decision, we hold that the contribution limits of 2 U.S.C. § 441a(a)(1)(C) and 441a(a)(3) are unconstitutional as applied to individuals' contributions to SpeechNow. (Emphasis added). Floyd Abrams, an attorney who supported the Citizens United challenge in the Supreme Court, has argued that the right of individuals to make unlimited contributions to Super PACs was established by the landmark Supreme Court decision in 1976 in Buckley v. Valeo. This argument is wrong. While the Buckley decision held that an individual could make unlimited expenditures of his own money in federal elections, the Court did not rule that an individual could make unlimited contributions to a group that is making independent expenditures in federal elections. And until 2010, federal campaign law and FEC regulations limited to $5,000 per year the amount that an individual could give to a PAC making independent expenditures in federal elections. It is this individual contribution limit that was declared unconstitutional in the SpeechNow case, which was based on the Citizens United decision. Mr. 
Abrams also cites the Swift Boat PAC ads in 2004 which attacked Senator John Kerry and the expenditures by two pro-Democratic PACs that supported Senator Kerry, funded in part by multimillion dollar contributions from George Soros, to argue that unlimited individual contributions to PACs making independent expenditures to influence federal elections have long been allowed. This argument is wrong. The unlimited contributions from individuals to these three PACs went to groups that operated illegally in the 2004 presidential election and accepted contributions that did not comply with federal law. The PACs paid substantial fines to the FEC for making combined illegal expenditures of $170 million in the 2004 presidential election. If these PACs had properly complied in 2004 with existing campaign finance laws, the contributions from individuals to the Swift Boat PAC and to the two pro-Democratic PACs would have been limited to $5,000 per donor per year. The bottom line is this: the ability of corporations, labor unions and individuals to give unlimited contributions to Super PACs making independent expenditures to influence federal elections flows directly from the Supreme Court's decision in the Citizens United case. Unlimited contributions in federal elections invariably lead to corruption and scandal, and that is what is unfolding in the 2012 elections. Fred Wertheimer is the President of Democracy 21, a nonprofit, nonpartisan organization that promotes campaign finance reforms and related government integrity measures.
{ "pile_set_name": "OpenWebText2" }
MOSCOW — Ukraine joined the World Trade Organization on Tuesday after 14 years of negotiations, a milestone for the former Soviet republic that helps clear the way for a valuable free trade agreement with the European Union. Officials in the Ukrainian capital, Kiev, suggested that the membership represented a coming of age, of sorts, for trade relations of former Soviet countries. The deal is expected to lift living standards and increase investment in Ukraine, one of Europe’s poorest countries. Ukraine — with a population around 46 million and a gross domestic product estimated by the World Bank at $106 billion — became the largest former Soviet state to join the trade group to date; Russia is negotiating for membership. “We have difficult homework to do,” President Viktor A. Yushchenko of Ukraine said at a news conference in Geneva, where the W.T.O. is based, as quoted by Bloomberg News. “This starts the colossal integration work that lies ahead.”
{ "pile_set_name": "OpenWebText2" }
Introduction

This is a follow-up to a post from earlier this year discussing the likelihood of encountering two identical packs of Skittles, that is, two packs having exactly the same number of candies of each flavor. Under some reasonable assumptions, it was estimated that we should expect to have to inspect “only about 400-500 packs” on average until encountering a first duplicate. This is interesting, because as described in that earlier post, there are millions of different possible packs– or even if we discount those that are much less likely to occur (like, say, a pack of nothing but red Skittles), then there are still hundreds of thousands of different “likely” packs that we might expect to encounter.

So, on 12 January of this year, I started buying boxes of packs of Skittles. This past week, “only” 82 days, 13 boxes, 468 packs, and 27,740 individual Skittles later, I found the following identical 2.17-ounce packs:

Test procedure

I purchased all of the 2.17-ounce packs of Skittles for this experiment from Amazon in boxes of 36 packs each. From 12 January through 4 April, I worked my way through 13 boxes, for a total of 468 packs, at the approximate rate of six packs per day. This was enough to feel like I was making progress each day, but not enough to become annoying or risk clerical errors. For each six-pack recording session, I did the following:

1. Take a pack from the box, open it, and empty and sort the contents onto a blank sheet of paper.
2. Take a photo of the contents of the pack.
3. Record, with pen and paper, the number of Skittles of each color in the pack (more on this later).
4. Empty the Skittles into a bowl.
5. Repeat steps 1-4; after six packs, save and review the photos, recording the color counts to file, verifying against the paper record from step 3, and checking for duplication of a previously recorded pack.

The photos captured all of the contents of each pack, including any small flakes and chips of flavored coating that were easy to disregard… but also larger “chunks” of misshapen paste that were often only partially coated or not at all, that required some criteria up front to determine whether or how to count. For this experiment, my threshold for counting a chunk was answering “Yes” to all three of (a) is it greater than half the size of a “normal” Skittle, (b) is it completely coated with a single clearly identifiable flavor color, and (c) is it not gross, that is, would I be willing to eat it? Any “No” answer resulted in recording that pack as containing “uncounted” material, such as the pack shown below.

The entire data set is available here as well as on GitHub. The following figure shows the photos of all 468 packs (the originals are 1024×768 pixels each), with the found pair of identical packs circled in red.

But… why?

So, what’s the point? Why bother with nearly three months of effort to collect this data? One easy answer is that I simply found it interesting. But I think a better answer is that this seemed like a great opportunity to demonstrate the predictive power of mathematics. A few months ago, we did some calculations on a cocktail napkin, so to speak, predicting that we should be able to find a pair of identical packs of Skittles with a reasonably– and perhaps surprisingly– small amount of effort. Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as “merely abstract” and not concretely useful mathematics.
(As an aside, I think the fact that this particular concrete application happens to be recreational, or even downright frivolous, is beside the point. For one thing, recreational mathematics is fun. But perhaps more importantly, there are useful, non-recreational, “real-world” applications of the same underlying mathematics. Cryptography is one such example application; this experiment is really just a birthday attack in slightly more complicated form.)

Assumptions and predictions

For completeness, let’s review the approach discussed in the previous post for estimating the number of packs we need to inspect to find a duplicate. We assume that the color of each individual Skittle is independently and uniformly distributed among the possible flavors (strawberry, orange, lemon, green apple, and grape). We further assume that the total number of Skittles in a pack is independently distributed with density $q(n)$, where we guessed at $q(n)$ based on similar past studies. We use generating functions to compute the probability $p$ that two particular randomly selected packs of Skittles would be identical, where

$$p = \sum_n q(n)^2 \sum_{k_1 + \cdots + k_5 = n} \left( \binom{n}{k_1, \ldots, k_5} \frac{1}{5^n} \right)^2$$

Given this, a reasonable approximation of the expected number of packs we need to inspect until encountering a first duplicate is $\sqrt{\pi / (2p)}$, or about 400-500 packs depending on our assumption for the pack size density $q(n)$.

Observations

The most common and controversial question asked about Skittles seems to be whether all five flavors are indeed uniformly distributed, or whether some flavors are more common than others. The following figure shows the distribution observed in this sample of 468 packs.

Somewhat unfortunately, this data set potentially adds fuel to the frequent accusation that the yellow Skittles dominate. However, I leave it to an interested reader to consider and analyze whether this departure from uniformity is significant.

How accurate was our prior assumed distribution for the total number of Skittles in each pack? The following figure shows the observed distribution from this sample of 468 packs, with the mean of 59.2735 Skittles per pack shown in red.

Although our prior assumed average of 60 Skittles per pack was reasonable, there is strong evidence against our assumption of independence from one pack to the next, as shown in the following figure. The x-axis indicates the pack number from 1 to 468, and the y-axis indicates the number of Skittles in the pack, either total (in black) or of each individual color. The vertical grid lines show the grouping of 36 packs per box. The colored curves at bottom really just indicate the frequency and extent of outliers for the individual flavors; for example, we can see that every color appeared on at least 2 and at most 24 Skittles in every pack.

The most interesting aspect of this figure, though, is the consecutive spikes in total number of Skittles shown by the black curve, with the minimum of 45 Skittles in pack #291 immediately followed by the maximum of 73 Skittles in pack #292. (See this past analysis of a single box of 36 packs that shows similar behavior.) This suggests that the dispenser that fills each pack targets an amortized rate of weight or perhaps volume, got jammed somehow resulting in an underfilled pack, and in getting “unjammed” overfilled the subsequent pack. This is admittedly just speculation; note, for example, that the 36 packs in each box are relatively free to shift around, and I made only a modest effort to pull packs from each box in a consistent “top to bottom, front to back” order as I recorded them.
So although each group of 36 packs in this data set definitely comes from the same box, the order of packs within each group of 36 does not necessarily correspond to the order in which the packs were filled at the factory. At any rate, if the objective of this experiment were to obtain a representative “truly random” sample of packs of Skittles, then the above behavior suggests that buying these 36-pack boxes in bulk is probably not recommended.

Stopping rule

Finally, one additional caveat: fortunately the primary objective of this experiment was not to obtain a “truly random” sample, but only to confirm the predicted “ease” with which we could find a pair of identical packs of Skittles. However, suppose that we did want to use this data set as a “truly random” sample… and further suppose that we could eliminate the practical imperfections suggested above, so that each pack was indeed a theoretically perfect, independent random sample. Then even in this clean-room thought experiment, we still have a problem: by stopping our sampling procedure upon encountering a duplicate, we have biased the distribution of possible resulting sample data sets!

This can perhaps be most clearly seen with a simpler setup that allows an analytical solution: suppose that each pack contains just 2 Skittles, and each individual Skittle is independently equally likely to be one of just 2 possible colors, red or green. If we collect any fixed number of sample packs, then we should expect to observe an “all-red” pack with two red Skittles exactly 1/4 of the time. But if we instead collect sample packs until we observe a first duplicate, and then count the fraction that are all red, the expected value of this fraction is slightly less than 1/4 (181/768, to be exact). That is, by stopping with a duplicate, we are less likely to even get a chance to observe the more rare all-red (or all-green) packs.

It’s an interesting problem to quantify the extent of this effect (which I suspect is vanishingly small) with actual packs of Skittles, where the numbers of candies are larger, and the probabilities of those “extreme” compositions such as all reds are so small as to be effectively zero.
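For readers who would rather simulate than buy 468 packs, here is a minimal Monte Carlo sketch of the prediction. The uniform pack-size range of 54 to 66 below is an illustrative stand-in for the density $q(n)$, not the distribution actually used in the earlier post.

```python
import random
from statistics import mean

FLAVORS = 5
SIZES = range(54, 67)  # assumed uniform stand-in for the pack-size density q(n)

def random_pack():
    """One pack: uniform size, each Skittle an independent uniform flavor."""
    counts = [0] * FLAVORS
    for _ in range(random.choice(SIZES)):
        counts[random.randrange(FLAVORS)] += 1
    return tuple(counts)

def packs_until_duplicate():
    """Inspect packs until one exactly matches a previously seen pack."""
    seen = set()
    while True:
        pack = random_pack()
        if pack in seen:
            return len(seen) + 1
        seen.add(pack)

trials = [packs_until_duplicate() for _ in range(500)]
print(f"mean packs to first duplicate: {mean(trials):.0f}")
```

Under these stand-in assumptions the simulated mean lands in the same few-hundred-pack range as the $\sqrt{\pi/(2p)}$ estimate, consistent with a duplicate turning up somewhere within 468 real packs.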
{ "pile_set_name": "OpenWebText2" }
1 Do whales produce milk?

The need for milk is an essential part of the development of any young mammal, and being aquatic makes breastfeeding considerably harder. Nursing their young with milk is one of the things that defines mammals, so whales definitely do have mammary glands and they do produce milk. But how do they manage to breastfeed underwater?

2 Do whales have nipples?

Species from three orders – Carnivora (including seals and sea lions), Cetacea (dolphins and whales) and Sirenia (manatees and dugongs) – live and feed at sea, but they’ve evolved different methods for breastfeeding. Seals and sea lions have retractable nipples that tuck inside the body when the baby is not feeding, but animals that are fully restricted to the sea, such as whales and dolphins, have evolved ‘mammary slits’ – special folds of skin that enclose the feeding glands. We’re still not completely sure how they do it, but it is thought that either the calves can curl their tongues to channel released milk, or that specialised muscles actually contract the mammary glands, squeezing milk into the baby’s mouth.

3 What is whale milk like?

As a general rule, whale milk is rich in fats and comes in very large quantities! The blue whale has the largest mammary glands on Earth – each is about 1.5m long and weighs as much as a baby elephant. Blue whale mothers can produce 200 litres of milk per day with a fat content of 35-50%. That enables a blue whale calf to gain weight at the incredible rate of 100kg per day!

Did you know that whale mothers communicate with their calves by ‘whispering’? Find out more here!

Do you have a wildlife question you’d like answered? Email your question to [email protected] or post it to Q&A, BBC Wildlife Magazine, Immediate Media Company, 2nd Floor, Tower House, Fairfax Street, Bristol, BS1 3BN
{ "pile_set_name": "OpenWebText2" }
Following today’s revelation that British people are ‘having less sex’, we felt it was timely to revisit the results of two studies from a few years back. Published by the American Urological Association, they investigate the link between cycling and both sexual health and urinary function… and you’ll be relieved to hear that the news is good! Mostly…

It’s a question that might be on the mind of many regular cyclists — does all that pressure on your delicate nether regions have an effect on your sexual health? Or what about the bacterial breeding ground that is the chamois, a recipe for urinary tract infection (UTI) hell? The American Urological Association, a medical research organisation with a focus on genitourinary health, presented the results of two separate studies on the impact of cycling on men’s sexual and urinary functions, and on the impact of cycling on women’s sexual and urinary functions, at a conference in Boston, MA, USA.

Cyclists compared to runners and swimmers

While the positive effects of cycling are well known, which include improved cardiovascular health, strength, stress reduction and an improvement in mental wellbeing, there have also long been discussions on the link between cycling and various sexual and urinary tract health issues such as erectile dysfunction, perineal numbness, UTIs and saddle sores. This research aimed to take a detailed comparative look at the impacts of cycling on the genitourinary health of male and female cyclists compared to non- or irregular cyclists.

4,000 men took part in the study, of whom 63 percent were cyclists and 37 percent were swimmers or runners who did not cycle. 2,691 women took part, of whom 39 percent were cyclists and 61 percent runners or swimmers who did not regularly cycle. The study focussed on ‘athletes’, though the definition of ‘athlete’ is not mentioned. Since participants were recruited via Facebook adverts and outreach to cycling clubs, with the swimmers and runners recruited as controls, it’s likely though unconfirmed that the term ‘athlete’ is a self-selected title. Participants were asked about their cycling habits, sexual function, urinary symptoms and histories of UTIs and perineal numbness. For the purposes of the study, ‘high-intensity cycling’ was defined as ‘cycling for longer than two years, more than three times per week and a daily average of more than 25 miles.’

Cycling could be good for your sex life

Breathe a sigh of relief, regular cyclists, for the results seem positive! For men, there was no indication that cyclists had worse erectile function than non-cyclists, no impact on the incidence of lower urinary tract infections, and the study found that the cardiovascular benefits outweighed any ‘theoretical deterrent of cycling’, which essentially means that the benefits far outweigh any negatives. That said, male cyclists did have a greater chance of experiencing perineal numbness.

For women, there seemed to be no ‘appreciable’ effect on sexual or urinary functions, and no ‘significant’ urinary symptom differences between cyclists and non-cyclists, though as we haven’t seen the paper in full, we can’t comment on how the researchers are defining ‘significant’, and it may relate to frequency, severity, or a combination of several measures. On the downside, there may be an increased risk of UTIs, and high-intensity cyclists as defined above were more likely to develop numbness and saddle sores, which is hardly revolutionary news.
But possibly the two most interesting elements are the fact that the research summary we’ve seen indicates firstly that ‘bike seat type’, by which we assume the researchers mean saddle, had ‘no significant effect on results’, and secondly that both males and females had higher scores on their respective sexual function tests.

The former is interesting, but without more detailed information on what the researchers mean by ‘seat type’ and ‘significant’ we can’t comment. We’d still suggest that finding a saddle that is comfortable for you, whatever type that might be, is worth the time and money, as anyone who’s tried to ride an uncomfortable saddle will attest.

The second point — higher scores in sexual function tests — is true of both men and women. These tests measure things like arousal, satisfaction (you can work out what that might refer to) etc., and are self-scored. It’s not a comment on experience, knowledge or attitudes. So it seems, according to this preliminary data, that cyclists have a better sex life than non-cyclists, at least marginally. This, of course, doesn’t delve into specifically why this might be, and it’s unlikely to be the magic effect of sitting on a saddle. However, given that factors such as physical health and mental wellbeing all have an effect on sexual function, if cyclists are fitter and happier, that may well lead to a happier sex life. It doesn’t explain why cyclists seem to be happier than the runners or swimmers, though.

Further research needed?
{ "pile_set_name": "OpenWebText2" }
Tony Abbott's ministry reshuffle may appear to be a reset in preparation for 2015, but in reality it is more about the PM's paranoia and tenuous leadership than it is about his Government's rejuvenation, writes Paula Matthewson.

A full 12 months earlier than it's customary to do so, Tony Abbott has reshuffled his ministry. This is what governments usually do one year out from an election to prove they're not stuck in a rut but capable of the regeneration that brings vigour and fresh ideas. PM Abbott brought the activity forward a year as part of his attempt to scrape off the Government's barnacles before Australian voters turn their attention to the beach and the barbie. The move finally brought to an end the PM's insistence that the ministry's continuity was necessary to create a sense of political stability, a stubbornness demonstrated by 20 members of Abbott's ministry having served in the last ministry of the Howard government.

The most intriguing thing about the reshuffle is not Abbott's belated recognition of the need to do it, but his concession to the demands of critics while handing poisoned chalices to dud ministers and potential competitors. The young guns among the Victorian Liberal MPs have essentially been rewarded for their years of agitation and complaint about having to cool their heels on the backbench. This group is responsible for a proportion of the grumbles about the PM's chief of staff Peta Credlin, particularly her reported resistance to an early reshuffle.

While it could be argued that NSW Liberals benefited most from the reshuffle by getting another MP into Cabinet, they also lost a spot in the outer ministry with the resignation of stood-aside Assistant Treasurer Arthur Sinodinos. In fact, the Victorian young guns gained more than those of any other state, with two of their MPs being promoted. Victorian MP Josh Frydenberg was elevated from parliamentary secretary to Assistant Treasurer, while his Victorian colleague Kelly O'Dwyer was brought from the backbench to the rank of parliamentary secretary. In doing so, the PM has made considerable concessions to the ambitious Victorians, even going so far as to make Frydenberg Assistant Treasurer instead of Hockey's preferred candidate, the Queenslander Steve Ciobo. Whether this will be enough to quell the Victorians' noisy agitation over Credlin is yet to be seen.

Many of the other ministerial changes are better understood if viewed through the lens of Abbott's leadership. While the PM made no changes to the stellar Foreign Minister Julie Bishop's portfolio, he did remove her friend and ally, the poorly performing David Johnston, from Cabinet. That leaves Bishop with only one Western Australian colleague, Mathias Cormann, at the big table. No changes were made to Turnbull's portfolio either, suggesting Abbott is content with leaving the former Liberal leader to disappoint his progressive fan-base with the Government's cut-rate NBN.

And then there is Scott Morrison's promotion from Immigration and Border Protection to a revamped Social Services portfolio, which the PM says is essentially a ministry for economic participation. Morrison is also tasked with producing a holistic families package that Abbott described as being "an important part of our political and economic agenda in the first half of next year". Political commentators are calling this a big win for Morrison, who is keen to broaden his experience with an economic portfolio, thereby strengthening his leadership credentials.
But a closer look at the appointment does not bear out this interpretation. Much of Morrison's success in the Immigration portfolio was built on the Australian community's antipathy for asylum seekers. His willingness to do whatever it took, and unwillingness to talk about it, essentially gave Australian voters permission to turn a blind eye to the human cost of border protection while giving him kudos for "solving" the asylum seeker issue. However, Morrison will not be able to deploy the same tactics in Social Services. While asylum seekers are for most voters a distant concept, pretty much everyone knows someone who is dependent on the welfare system. As a result, the impacts of welfare reform are seen, felt and known, and there will be no glory for Morrison in having "stopped the dole" in the way he "stopped the boats". It's therefore likely Morrison's promotion is a poisoned chalice, and a way for Abbott to push through one of his toughest reform agendas while also reducing the appeal of one of his competitors. Curiously, Morrison was not the only minister to receive a dubious and potentially career-limiting promotion in the reshuffle. Kevin Andrews' move to Defence will likely see him begging to be let go by the next election, for the Department is known for chewing up and spitting out its civilian "masters". The future doesn't look particularly rosy for former Health Minister Peter Dutton either. Dutton may be a retired policeman, but it's difficult to see him bringing the same steely resolve that served Morrison so well in the Immigration and Border Protection portfolio. And then there is the welcome appointment of NSW's Sussan Ley to Cabinet, thereby doubling the number of women to two. Clearly the representation of women in the Cabinet is unacceptably low, and not due so much to a lack of merit as to the arcane balance of states, factions, and parties that make up the Coalition's ministry. Abbott at least did the right thing in appointing two more women as parliamentary secretaries, so they can become ministers-in-training. Prime ministers usually reshuffle their ministry to put a fresh face on their government while hopefully also evoking a sense of stability through the regeneration. But with one or two exceptions, like the promotion of Ley, Abbott's reshuffle is characterised by concessions to antagonists, throwing competitors in the deep end, and leaving the deadwood to atrophy. Abbott's reshuffle may superficially appear to be a reset in preparation for 2015, but in reality it is more about the PM's paranoia and tenuous leadership than it is about his Government's rejuvenation. Paula Matthewson is a freelance communications adviser and corporate writer. She was media advisor to John Howard in the early 1990s. She tweets and blogs as @Drag0nista. View her full profile here.
{ "pile_set_name": "OpenWebText2" }
Tape Deck King presents THAT'S MY WORD, an EP dedicated to the memory of Craig Jamieson Mack (10/05/1970 – 12/03/2018). British DJ Tape Deck King has compiled an 8-track EP made up of tracks Mack dropped in 2000, alongside snippets from an interview with Tim Westwood and Marley Marl from the same year. During the interview, which was part of a 'New York Live' edition of the Radio 1 Rap Show, Mack announced the name of a project he was set to drop that summer, but it never came out. That project was 'That's My Word', hence the title. That's My Word EP by Craig Mack: purchase via Bandcamp. Tape Deck King has also asked if anyone has details for Mack's widow, Roxanne, as he would like to give a percentage of any sales to her to show some love for her late husband's contribution to Hip Hop culture. https://twitter.com/CraigMackOffic1 www.facebook.com/unitedelementzmedia/ https://twitter.com/UnitedElementz https://soundcloud.com/unitedelementzmedia
{ "pile_set_name": "OpenWebText2" }
Internal company documents from IBM show that medical experts working with the company's Watson supercomputer found "multiple examples of unsafe and incorrect treatment recommendations" when using the software, according to a report from Stat News. Stat reviewed documents that were included in two presentations given in June and July 2017 by IBM Watson's former deputy health chief Andrew Norden. The documents were reportedly shared with IBM Watson Health management. According to Stat, those documents provided strong criticism of the Watson for Oncology system, and stated that the "often inaccurate" suggestions made by the product bring up "serious questions about the process for building content and the underlying technology." One example in the documents is the case of a 65-year-old man diagnosed with lung cancer, who also seemed to have severe bleeding. Watson reportedly suggested the man be administered both chemotherapy and the drug bevacizumab. But the drug can lead to "severe or fatal hemorrhage," according to a warning on the medication, and therefore shouldn't be given to people with severe bleeding, as Stat points out. A Memorial Sloan Kettering (MSK) Cancer Center spokesperson told Stat that they believed this recommendation was not given to a real patient, and was just a part of system testing. According to the report, the documents place the blame on the training provided by IBM engineers and doctors at MSK, which partnered with IBM in 2012 to train Watson to "think" more like a doctor. The documents state that—instead of feeding real patient data into the software—the doctors were reportedly feeding Watson hypothetical patient data, or "synthetic" case data. This would mean it's possible that when other hospitals used the MSK-trained Watson for Oncology, doctors were receiving treatment recommendations guided by MSK doctors' treatment preferences, instead of an AI interpretation of actual patient data. And the results seem to be less than desirable for some doctors. "This product is a piece of shit," a doctor at Florida's Jupiter Hospital said to IBM, according to the documents reviewed by Stat. "We bought it for marketing and with hopes that you would achieve the vision. We can't use it for most cases." That doctor was reportedly one of many whose complaints were included in the internal documents. Within days of when Norden gave one of these presentations, Gizmodo spoke with a Jupiter Hospital oncologist, Shah, for a report on the overzealous hype and shortcomings of Watson. During the interview, which was arranged by IBM Watson Health, Shah said Watson sometimes served as an extra opinion when Jupiter doctors could not agree on treatment. Shah did not provide a ringing endorsement or the sort of harsh criticism that the unnamed Jupiter doctor apparently candidly shared with IBM executives, as shown in the internal documents. Responding to Stat's report, an IBM spokesperson told Gizmodo that Watson for Oncology is trained to help oncologists treat 13 cancers, is being used by 230 hospitals around the world, and has "supported care for more than 84,000 patients." "At the same time, we have learned and improved Watson Health based on continuous feedback from clients, new scientific evidence, and new cancers and treatment alternatives," the spokesperson said.
"This includes 11 software releases for even better functionality during the past year, including national guidelines for cancers ranging from colon to liver cancer." Norden told Stat he could not comment since he is no longer working for IBM. He left weeks after the presentations to work for Cota, a health care data-analytics company that had partnered with IBM. Memorial Sloan Kettering spokesperson Caitlin Hool told Stat that the criticisms in the internal documents are a reflection of "the robust nature of the process" of developing software like Watson. "While Watson for Oncology provides safe treatment options, treatment decisions ultimately require the involvement and clinical judgement of the treating physician," Hool told Stat. "No technology can replace a doctor and his or her knowledge about their individual patient." [Stat News]
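The mechanism behind the "synthetic case" criticism is worth spelling out. The sketch below is purely illustrative and entirely hypothetical: the features, labels and model have nothing to do with Watson's actual architecture or MSK's data. It simply shows how a model fitted to hypothetical cases labelled with one institution's preferred treatments can only echo those preferences back, and never learns about a contraindication that the synthetic cases never varied.

```python
# Hypothetical sketch, not Watson's schema: a classifier trained only on
# "synthetic" cases labelled with one institution's preferred treatments
# learns to reproduce those preferences.
from sklearn.tree import DecisionTreeClassifier

# Synthetic training cases: [age, stage, has_bleeding] -> preferred treatment.
# Note that no synthetic case has has_bleeding = 1.
X_train = [
    [65, 3, 0],
    [70, 4, 0],
    [55, 2, 0],
    [60, 3, 0],
]
y_train = ["chemo+bevacizumab", "chemo+bevacizumab", "surgery", "chemo+bevacizumab"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A real patient presenting with severe bleeding: the model has never seen
# this feature vary, so it cannot learn the contraindication and falls back
# on the labellers' dominant preference.
print(model.predict([[65, 3, 1]]))  # -> ['chemo+bevacizumab']
```

Because has_bleeding is constant in the training set, the tree never splits on it, so the contraindication is invisible to the model: exactly the failure mode the internal documents describe.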
{ "pile_set_name": "OpenWebText2" }
Google Chrome, the browser Google released on the 2nd, scored higher on the Acid3 test than the stable builds of both Firefox 3 and Internet Explorer (IE) 7. The Acid tests check how closely a browser complies with web standards. All three browsers (Chrome, Firefox and IE) pass Acid2; on Acid3, Chrome scores 78 out of 100, against 71 for Firefox 3 and 14 for IE7. The only production-quality build that beats Chrome is Opera, which scores 83. Although Google has shipped a stable build, it should not be forgotten that Chrome is still under development. Several in-development "unstable" builds outscore Chrome: Firefox 3.1 Beta 1 gets 85 points, while the Safari 4 Developer Preview and Opera's public Acid3 build both reach 100. Given that Safari and Chrome are both based on the WebKit framework, it is interesting that the Safari 4 Developer Preview outscores Chrome. Whenever a new browser or an update to an existing one is released, one of the things computer enthusiasts check is its Acid test score. Acid3, the latest version, is the most demanding of the tests; the Safari 4 Developer Preview has reached 100 points, but no "stable" browser release has yet achieved a perfect score. Passing the Acid3 test is an important goal for browser developers, and it is impressive that Chrome has performed this well from the outset. After this article was published, readers reported that running the Acid3 test on Vista SP1 produced Chrome scores fluctuating between 74 and 79 points, so we re-tested and obtained a score of 79. The fluctuation may be due to load on the Acid3 test servers following Chrome's release.
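For readers who want to reproduce these numbers, scoring a browser amounts to loading the test page and reading off the result. The sketch below uses Python with Selenium as one modern way to automate that; the element id "score" is an assumption about the Acid3 page's markup (inspect http://acid3.acidtests.org/ before relying on it), and the fixed sleep is a crude stand-in for waiting until all 100 subtests have finished.

```python
# Sketch: automate an Acid3 run and read back the score.
# Assumption: the test page renders its result in an element with id "score";
# verify against the live page, as the markup may differ.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # swap in webdriver.Firefox() to compare browsers
try:
    driver.get("http://acid3.acidtests.org/")
    time.sleep(10)  # crude wait for all 100 subtests to complete
    score = driver.find_element(By.ID, "score").text  # assumed element id
    print(f"Acid3 score: {score}")
finally:
    driver.quit()
```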
{ "pile_set_name": "OpenWebText2" }
In the executive seats, alongside Patrice Evra, was Ed Woodward, wearing a big pair of sunglasses. Beneath those shades, surely, the Manchester United executive vice-chairman will have been wincing. This was as far from what he would expect from United, what he wants from Jose Mourinho, as was possible. As well as West Ham played, United were, quite frankly, too easy to beat and too quick to lose heart. They were far too negative, and that started the moment the team sheets were delivered: a back three, with Scott McTominay on the right of it, Nemanja Matic and Marouane Fellaini in midfield, and Alexis Sanchez not even in the squad. It looked, it was, a mess. Where was the creativity? Where was the pace? Where, damningly, were the motivation, the leadership, the game plan? Mourinho's frustration with his team was clear, but he has to look at himself. As vast as the technical area is at the London Stadium, he strayed beyond it as he tried to cajole and turn his team around, but this is a pale shadow of United; as pale and insipid as those off-colour pink shirts they wear. Not only have they now lost three of their seven Premier League games – including defeats to Brighton and West Ham – but they have a negative goal difference and seem solely reliant on trying to score goals from long balls, crosses or set-pieces. For all the hundreds of millions of pounds spent there is no incision.
{ "pile_set_name": "OpenWebText2" }
Simpson said ordinarily, when a large number of people support an issue, more of them will say they somewhat agree, instead of strongly agree. In the case of bike lanes, the situation is reversed, with 43% strongly supporting additional bike lanes and 38% of respondents saying they somewhat support the proposal. More than half of all university graduates surveyed said they strongly supported bike lanes, with another 34% saying they somewhat supported them. The results show that younger people are more likely to support bike-lane expansion, but only by a margin of 5%. The poll's findings also suggest that while the public is in favour of truck side-guards to protect cyclists from becoming trapped under trucks in the event of a collision, they feel the financial cost is too high at a time when the economy is fragile. New Democrat MP Olivia Chow proposed legislation that included making truck side-guards mandatory after Toronto cyclist Jenna Morrison, 38, collided with a truck and died Nov. 7. Both Morrison and the driver of the vehicle were attempting to make a right-hand turn onto a downtown Toronto street at the same time when they collided. Morrison was pinned under the truck. Police laid no charges. A month earlier, on Oct. 11, Ottawa cyclist Danielle Nacu died after riding into the open door of a parked car; she subsequently was run over by another vehicle. Chow's proposal to install guards — estimated by Ipsos to cost between $600 and $2,600 depending on the type of guard and truck — would prevent a person from being stuck in the space between the vehicle's wheels. While 58% of those surveyed said they did support the mandatory use of guards, 60% said the cost was too high at a time when the economy is at risk from outside pressures. Support for the guards is highest in Quebec, at 70%, but lowest in Alberta with only 40% support. The Internet-based poll was conducted Nov. 16-20 and surveyed 1,017 people. The poll has an estimated margin of error of 3.1 percentage points, 19 times out of 20. Postmedia News
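The quoted margin of error matches the standard formula for a simple random sample at 95% confidence ("19 times out of 20"), using the worst-case proportion of 50%. The check below is a sketch; note that online panels of this kind are not true random samples, so the figure is an estimate rather than a strict guarantee.

```python
import math

n = 1017   # poll sample size
p = 0.5    # worst-case proportion, which maximises the error
z = 1.96   # z-score for 95% confidence ("19 times out of 20")

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{moe * 100:.1f} percentage points")  # +/-3.1
```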
{ "pile_set_name": "OpenWebText2" }
The coach company that transported the 83 coronavirus-quarantined passengers has now revealed why its bus drivers were not wearing any protective gear. Members of the public were left confused and concerned when pictures of the Horseman Coaches drivers wearing just their normal uniforms emerged on Friday morning, January 31. The drivers were making the trip to Arrowe Park in Wirral, Liverpool, from Brize Norton after collecting the evacuees when they were seen to be dressed normally rather than in medical masks and suits. At least seven coaches from Horseman Coaches Ltd were seen arriving at the Brize Norton RAF base in Oxfordshire at around 10am. While the journey was taking place a number of pictures made their way onto social media and people were left questioning the safety of the drivers. When the story went live on the Liverpool Echo Facebook page, a number of people commented saying that the drivers should have been given something to stop the spread of any germs on the coach. Margz said: "This is beggars belief. Accompanying escorts wearing bright yellow Hazmat suits and potential carriers of the virus wearing masks but the poor drivers have nothing, not even a mask! Disgraceful!" Despite the public's fears, the company revealed why the drivers weren't wearing protective gear: it wasn't necessary. Horseman Coaches was reportedly told by Public Health England that its drivers did not need protective clothing as all of the passengers had been thoroughly screened multiple times and did not present with any signs of the virus. In a statement on its website the company said: "On January 30 2020, Horseman Coaches was contacted on behalf of the British Government to assist with the repatriation of British citizens from China following the coronavirus outbreak. "83 Britons returned to RAF Brize Norton on January 31, 2020. "Every one of the Britons has been quarantined for the past 8 days in China and none of the individuals on the plane presented any signs of the virus. "Horseman Coaches was advised by the Deputy Chief Medical Officer for Public Health England that drivers were not required to wear PPE equipment. "Every individual from the plane has been fully screened every day for the past 8 days and has been deemed fit to return to the UK by medical professionals in China. "All passengers were screened prior to boarding the plane in China and have again been screened when they landed in RAF Brize Norton before being allowed on the coaches." The company also explained what would happen to the buses and drivers once the quarantined passengers had been dropped off at Arrowe Park. It said: "PHE (Public Health England) has confirmed that all vehicles used will be subject to a military grade cleansing process and that there will be no risk to any future passengers. "Every driver involved will remain at home for the next 10 working days under quarantine conditions as an additional precautionary measure. "Each vehicle used will remain locked in a secure lock up facility for a minimum of 10 working days after a military grade cleansing process as an additional precautionary measure. "Five Horseman Coaches were used from a fleet of 62." James Horseman, Company Director, said: "The safety and health of our staff and passengers is our number one priority.
"The individuals brought back to the UK have been through unimaginable anguish and we are proud to play a small part in their healthy and safe return to the UK. "Please be assured that all necessary precautions have been taken to ensure the safety of our staff and passengers. We will continue to work closely with the British Government departments to safeguard our drivers and uphold the required decontamination standards."
{ "pile_set_name": "OpenWebText2" }
Betfair just announced an exciting new development for Profit Rush members. You're now able to select a bespoke Betfair rewards plan to suit your needs… This is a big deal. Why? Because Betfair is the largest and most liquid betting exchange. Getting your lay bets matched will always be a lot easier when there is significant volume available! There are 3 different options available so choose carefully; the best option for you could vary depending on your level of activity. You can register to use our premium tools for free here. To select your plan: Visit Betfair.com > Account > Promotions & Rewards > My Betfair Rewards > Select. But before you do, make sure you read the next section of this article. You may find that, depending on your betting activity, a different plan suits you better than the others. Rewards Plan Options (3) Here are the key points you should consider when selecting: Option 1: Rewards+ To recreational punters, this option may look attractive. However, you should carefully consider whether it's for you. At 8% commission, that's going to take a large chunk out of any winnings when placing your matched bets. The potential upside is the extra free bonuses available. To us, this is the least attractive option. Unless you are betting at a low volume to claim the bonuses while not paying large amounts of commission, it's likely to be the worst overall. The 10% rebate option is interesting, although it's largely a gamble because you can't guarantee the exchange bets will lose. This option could appeal to somebody who uses Betfair a few times a month. Option 2: Rewards Those who don't select one of the three options will be enrolled into option two by the end of the month; at 5% there's very little difference from the existing commission structure. For some, this may work out. From our point of view, it's better than option one but isn't the most attractive overall. As noted above, if you don't use Betfair regularly those extra bonuses could be useful. Option 3: Basic This is the one you'll find all the die-hard users raving about. 2% has historically been the lowest flat rate of commission around. However, that usually goes hand-in-hand with poor-liquidity exchanges. Not any more! It's our view that this is the best option for anyone who is a serious exchange user, be it matched betting or trading. Over the course of a month commission soon adds up; the difference between 8% and 2% is huge, as the sketch below shows. For anyone playing 2up offers or rolling over bonuses, it's the choice to make. Not got an account yet? Sign up for the benefits and options below (+£20 free bet) by clicking here!
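To put numbers on that gap, here is a minimal sketch. It assumes, as is standard on Betfair, that commission is charged only on net winnings in each market; the monthly volume and profit-per-market figures are made up purely for illustration.

```python
# Illustrative only: compare take-home winnings under the three plans.
# The 8% / 5% / 2% rates come from the article; the betting volume is invented.

PLANS = {"Rewards+": 0.08, "Rewards": 0.05, "Basic": 0.02}

def net_profit(gross_win: float, rate: float) -> float:
    """Winnings kept after exchange commission on winning markets."""
    return gross_win * (1 - rate)

# Example month: 40 winning markets at £25 profit each.
gross = 40 * 25.0
for plan, rate in PLANS.items():
    print(f"{plan:8s}: keep £{net_profit(gross, rate):,.2f} "
          f"(£{gross * rate:,.2f} commission)")
# Basic keeps £980 of every £1,000 won, Rewards £950, Rewards+ £920.
```

At this modest volume the Basic plan keeps £60 more per month than Rewards+; scale the figures up and the flat 2% rate quickly outweighs any realistic bonus value.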
{ "pile_set_name": "OpenWebText2" }
Born in July 2013 in NYC, Peaceful Dumpling is an online magazine at the intersection of sustainable lifestyle, ecological literature & reportage. We are now headquartered in beautiful Portland, OR and report from around the world. Read more about our ethos and masthead here. Find our submission guidelines here.
{ "pile_set_name": "OpenWebText2" }
While the rise in temperatures this past weekend from last week's cold snap was cause for celebration for those in the Pittsburgh area, many smartphones warned us of a potentially dangerous side to the good weather. Code Orange air quality alerts like the one sent out on Saturday mean that people with heart and lung diseases, children and the elderly may be affected by the worsened air quality. The Pennsylvania Department of Environmental Protection issued another Code Orange action day Monday, and had issued several more on top of that in January. This weekend's air quality concerns were a product of the major shift in temperatures. A warm air mass sitting above a cold air mass creates what's called an inversion, which traps the cold air underneath the warm air. Unhealthy air quality results from pollutants becoming trapped in the cold air. The number of air quality alerts issued lately is cause for great concern. It feels as if we're sliding backwards with our air quality, and this is something we absolutely can't allow to happen. The Mellon Institute of Industrial Research stands as a constant reminder of just how bad Pittsburgh once let its air pollution get. Over two decades, starting in the 1940s, newer, cleaner industries moved into the area and the surfaces of buildings were cleaned, but one side of the Mellon Institute's 62 columns was left untouched. The blackened limestone acts as a reminder of Pittsburgh's dirty history with air quality. That history continues to this day. Last year, a Dec. 24 fire at North America's largest coke plant, U.S. Steel's Clairton Coke Works, damaged gas dispatch stations so that sulfur dioxide exceeding federal standards was released into the air. There have been nine more instances of sulfur dioxide exceedance in the Mon Valley since the fire, including one this Monday at U.S. Steel's Edgar Thomson Steel Mill in Braddock. Allegheny County Health Department officials said Wednesday that enforcement action against U.S. Steel could happen in one to three months and would include fines and conditions on permits on top of existing quarterly enforcement actions. But Jim Kelly, the deputy director for environmental health, said the company has had trouble complying with mitigation strategies. "I had to personally tell U.S. Steel that they needed to start working with the community," he said. The recent problems with sulfur emissions raise the question of how further enforcement actions can truly stop exceedances, and how to deal with the possibility of more air quality warnings in the future caused by changes in weather patterns. Inversions are common in cold-weather climates like Pittsburgh's, and a 2016 Utah Climate Center study says they may get "three times worse" in valley areas. Democratic lawmakers from the Mon Valley, health department officials, U.S. Steel representatives, union leaders and health advocates are holding a hearing today to discuss the situation, although state Sen. Jim Brewster of McKeesport doesn't think the meeting will generate any legislation to solve the air quality problem. "I think it's procedure, you gotta have agreements between U.S. Steel, the Allegheny County Health Department, local elected officials, state elected officials, the Allegheny County emergency management director," he said Wednesday. "Those are the things we hope to leave with tomorrow, that those things get fixed." But if more regulations are all they come up with to fix things, it's likely we can expect more of the same.
More drastic pressure from local officials is needed to combat the cycle of excess emissions and inversions.
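The inversion mechanism described above is easy to spot in data: under normal conditions air cools with altitude, so a layer that warms with height marks the lid that traps pollutants near the ground. The sketch below uses made-up temperature readings purely for illustration.

```python
# Toy example with hypothetical readings: an inversion appears wherever
# temperature *rises* with altitude instead of falling.
profile = [(0, -2.0), (200, -1.0), (400, 3.0), (800, 1.0)]  # (metres, deg C)

def has_inversion(profile):
    """True if any layer is warmer than the layer directly below it."""
    return any(upper_t > lower_t
               for (_, lower_t), (_, upper_t) in zip(profile, profile[1:]))

print(has_inversion(profile))  # True: warm air aloft caps the cold surface air
```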
{ "pile_set_name": "OpenWebText2" }
<1 min read ⌚ "Future Shock" takes you back to the 70s and allows you to see the future through the eyes of Alvin Toffler. About Alvin Toffler Alvin Toffler was a writer, thinker, and futurist. On all of his works, he collaborated with his colleague and wife, Heidi Toffler. "Future Shock Summary" "Future shock" is the term Toffler gave to the trauma that results from going through great changes in a short time. In his book he explores how people can adapt to the changes they face, and in doing so he establishes a new social norm: embracing change. Although it was written in the 70s, when people were not aware that many of the ways they conducted business and the technology they were familiar with would disappear, his thoughts remain relevant even today. Preparing for change and embracing it is not a topic that will lose its importance soon. Today, we are used to this sort of advice, but at the time when Toffler advocated his views, his thoughts were unusual. He did not know much about the Internet, but he did a wonderful job of predicting the future by describing a new mode of organization he called "the Ad-hocracy", which he expected to transform the world into a "free-form world" of kinetic organizations. His notions, which later turned out to be largely accurate, included predictions that many job functions would disappear, that offices would no longer be as necessary, that communication would be constant and conducted over different types of media, and so on. He also tries to find ways to balance vicarious experiences, which are the things you pay others to do for your enjoyment, and non-vicarious experiences, which are the things you do yourself.
{ "pile_set_name": "OpenWebText2" }
A Florida officer was placed on restrictive duty after he was captured on video using excessive force on a Black teen. The incident took place Thursday afternoon near J.P. Taravella High School in Coral Springs, Florida. Video footage shows Broward County sheriff's deputy Christopher Krickovich jumping on the high school student before slamming his head against the pavement as another teen lay on the ground in handcuffs. Various social media users who shared the video have identified the teen as 15-year-old "Lucca." According to the Sun Sentinel, the incident ended with the arrest of two teenagers. One of the teens was charged with simple assault and resisting arrest without violence, while the other was reportedly released to his mom. 15 yr old Black boy, Lucca picked up a cell phone that fell out of the pocket of a Black boy who was being arrested. In response @browardsheriff officers Christopher Krickovich & Greg LaCerra pepper sprayed, brutally beat, and arrested him. He broke no laws.#JusticeForLucca pic.twitter.com/RQLj38GYGN — Bishop Talbert Swan (@TalbertSwan) April 20, 2019 Not long after the video was shared online, the hashtag #JusticeforLucca started to trend. Several Twitter users have been calling on Florida officials to take disciplinary action against the officers involved. LeBron James and Warriors head coach Steve Kerr also reacted to the graphic video. So wrong!! Hurts me to my soul!! To think that could be my sons. Scary times man https://t.co/tRxk6sV7sb — LeBron James (@KingJames) April 20, 2019 Law enforcement officials arrived on the scene after receiving a report of a fight, according to the Sun Sentinel. Krickovich wrote in a police report that the fight had ended right as he and fellow officers approached the scene. This was when he saw a teenager who had been warned against trespassing in the area. "While I was dealing with the male on the ground, I observed his phone slide to the right of me and then behind me," Krickovich wrote. "I observed a teen wearing a red tank top reach down and attempt to grab the male student's phone. [The teen] took an aggressive stance, bladed his body and began clenching his fists." Krickovich wrote that he had feared for his safety. Broward Sheriff Gregory Tony told the Sun Sentinel that a "thorough investigation" into the incident will take place.
{ "pile_set_name": "OpenWebText2" }
25% off LIVEN G-5 Smart Oil-free Air Fryer from XIAOMI YOUPIN (1400W power, 2.5L capacity, fat-free cooking for the home): Banggood coupon promo code
Banggood coupon price: $118.99
Banggood regular price: $158.59
You save: $39.60
Coupon limit: 50 uses
Expires: September 30, 2020
LIVEN Smart Oil-free Air Fryer features:
- No fat, low calories: hot-air circulation replaces frying, and the high-temperature air draws out the food's own oil
- Top-mounted smart glass touch panel, easy to use
- Smart menu with multiple preset choices
- Timer and temperature control: 0-60 minutes, 80-200 °C
- Pull-out split basket: the split baking basket is easy to remove, and the non-stick surface that contacts the food is easy to clean
- Power-off memory for safe use: removing the basket turns the screen off; replacing it turns the screen back on with the temperature setting unchanged
- A4-paper footprint to save space: small volume, large 2.5L capacity
- Elegant appearance with high-temperature hot-gas circulation heating
{ "pile_set_name": "OpenWebText2" }
House Judiciary plans to authorize subpoenas for Mueller's full report. The House Judiciary Committee intends to authorize subpoenas Wednesday morning for special counsel Robert Mueller's full report and his underlying evidence, escalating a fight with the Justice Department as it reviews the special counsel's work. Rep. Jerrold Nadler, the committee chairman, said Monday he's moving ahead with plans to press for the nearly 400-page Mueller report via subpoena despite written assurances from Attorney General William Barr that he plans to release the document by mid-April or sooner. Under the proposal, Nadler would determine when to actually issue the subpoenas, which may depend on the level of cooperation and transparency Democrats get from the Justice Department. Nadler has said he's "disturbed" by Barr's reticence to share the full report immediately and has asked him to join the committee in seeking a judge's approval to release all grand jury information in Mueller's report. Barr is reviewing the document to scrub it of four categories of information — including grand jury information and material he deems derogatory to "peripheral third parties." Barr has also indicated he intends to redact classified information and material relevant to ongoing investigations. Rep. Doug Collins (R-Ga.), the top Republican on the Judiciary Committee, blasted Nadler's move as an unnecessary escalation against Barr. "Judiciary Democrats have escalated from setting arbitrary deadlines to demanding unredacted material that Congress does not, in truth, require and that the law does not allow to be shared outside the Justice Department," Collins said. "It's unfortunate that a body meant to uphold the law has grown so desperate that it's patently misrepresenting the law, even as the attorney general has already demonstrated transparency above and beyond what is required." In a New York Times op-ed Monday morning, Nadler picked apart Barr's decision to withhold elements of the Mueller report from Congress and to issue his own judgment on whether Trump obstructed justice. Though Mueller indicated he did not make a "traditional prosecutorial judgment" on obstruction after his 22-month investigation, Barr quickly issued his own last week, saying Trump could not be charged with the crime in part because there was no underlying crime to obstruct. But Nadler noted in his op-ed that numerous Trump associates were charged in the Mueller investigation — and that Deputy Attorney General Rod Rosenstein, who was previously a federal prosecutor in Maryland, "routinely charged individuals with obstruction without charging the underlying crime." Trump, too, has been implicated — though he remains uncharged — in a campaign finance-related crime that stemmed from Mueller's probe, Nadler noted. Nadler suggested he expects Congress to craft a legislative response to the entire episode. "Put another way: If President Trump's behavior wasn't criminal, then perhaps it should have been," he said.
Democrats on the panel on Wednesday also intend to authorize subpoenas for documents from five former Trump White House officials: former chief of staff Reince Priebus, former senior adviser Steven Bannon, former communications director Hope Hicks and former White House counsel Don McGahn, as well as McGahn's former deputy Ann Donaldson. All of the individuals may hold White House materials relevant to the Mueller probe or may have shared them with their outside lawyers, which Democrats say waives their ability to assert privilege. A DOJ spokeswoman declined to comment on Nadler's latest subpoena move. Attorneys for Priebus, McGahn, Bannon and Hicks also declined to comment. Donaldson did not immediately respond to a request for comment.
{ "pile_set_name": "OpenWebText2" }
The mining and resources giant Adani is being investigated for alleged involvement in a US$4.4bn pricing scandal around coal sales by Indian power companies. Adani Enterprises is one of six Adani Group companies named for the first time in connection with an industry-wide scandal in which Indian energy companies are accused of profiteering on coal imported from Indonesia. The company denies being involved in over-valuing the coal. It comes days after Adani Enterprises' Australian subsidiary, Adani Mining, was granted mining leases by the Queensland government for the country's largest proposed coal project, in the Galilee basin. The Adani Group companies are among dozens of companies targeted in an 18-month investigation by the Indian Directorate of Revenue Intelligence (DRI), the Economic and Political Weekly revealed. The DRI last week issued a "general alert" to customs offices claiming that power companies were exploiting "higher tariff compensation based on [the] artificially inflated cost of the imported coal" from Indonesia, it reported. Profits from the alleged scam by companies supplying state-owned power utilities were being "siphoned" overseas, the DRI alert said. An Adani Group spokesman told Guardian Australia it was "aware of the investigations being conducted by the DRI, and has fully co-operated, and shall continue to co-operate with the investigating agencies". "Adani Group denies the allegations of over valuation and there is no show cause notice received till date," he said. The DRI, an agency attached to the Indian finance ministry, made its first arrest as part of the investigation in February, in a case unrelated to the Adani Group. In court documents following the arrest, the DRI alleged a number of Indian power plants were inflating the prices of their Indonesian coal imports, passing on the costs to customers and hiding the profits overseas, the Economic and Political Weekly reported. The power companies typically used front companies in Singapore, Hong Kong and Dubai to inflate the prices of coal in official billing documents, the DRI alleged. Indian energy minister Piyush Goyal told Economic and Political Weekly that the DRI was "investigating cases related to misdeclaration of value (over invoicing) of coal imported from Indonesia and supplied to power plants of NTPC [the former National Thermal Power Corporation, India's biggest power producer]". The publication named Adani Group companies Adani Enterprises, Adani Power, Adani Power Rajasthan, Adani Power Maharashtra, Adani Wilmar and Vyom Trade Link as targets of the investigation. The Adani Group spokesman said all its coal imports had "taken place at contemporaneous prices prevailing in the international market which all along [have] been accepted by customs authorities across all ports in India. "Adani Group is supplying to different utilities including public sector utilities in the power sector through a transparent bidding process," he said. "The tariff in most cases is also discovered through bidding system and other cases determined by the regulatories in accordance with law." Adani Mining has indicated about half the coal from its Queensland mine, which will produce up to 60 million tonnes a year, would supply the Indian market, including Adani's own generators.
{ "pile_set_name": "OpenWebText2" }