A serious shortage of physicians is expected to hit the United States over the next 10 years unless multipronged interventions are undertaken, according to a "Viewpoint" article appearing recently in JAMA.1 Darrell Kirch, MD, and Kate Petelle, MPhil, representatives of the Association of American Medical Colleges (AAMC) in Washington, DC, say that acute shortages are expected, irrespective of the type of provider setting or geographic location.

Demographic change is an important contributor to workforce shortages, the authors note. According to current projections, the US population will increase by 12% between 2015 and 2030. Notably, the population of individuals age 65 and older is predicted to grow by 55%, with a resultant surge in demand for healthcare services for this age group. They also point out that more than one-third of currently employed physicians will be age 65 or older within a decade. Retirement decisions among these physicians will be an important determinant of physician supply, they say.

Physician supply projections

The 2017 update of physician workforce projections released by the AAMC shows a probable shortage of between 40,800 and 104,900 physicians in the United States by 2030. Shortages have already been noted in both urban and rural communities and affect both specialty and primary care.

Nonphysician practitioners: helpful but not a remedy

Dr Kirch and Ms Petelle say that the employment of nonphysician clinicians, such as physician assistants and advanced practice nurses, has been widely touted as a means of satisfying the increasing demand for physicians. However, they are quick to acknowledge that while an expanded role for nonphysician healthcare professionals "may help mitigate shortages to a certain point," nonphysician practitioners are not qualified to deliver the exact same services as physicians.

A multipronged approach is needed to solve the physician shortage, the authors say. "The US healthcare system is in a transformational moment, representing an important opportunity to develop better practice models, create a culture of interprofessional team-based care, advance medical technology, and develop a diverse healthcare workforce that serves all individuals in the United States," they write.

They also call for medical schools and residency programs to train an adequate number of physicians to satisfy the growing need. At present, however, caps placed almost 20 years ago on federal Medicare funding for residency training make it difficult to expand graduate medical education. Congress, they add, should raise these caps; otherwise, there will not be sufficient growth in the number of physicians.

Reference
1. Kirch DG, Petelle K. Addressing the Physician Shortage: The Peril of Ignoring Demography. JAMA. 2017;317(19):1947-1948. doi: 10.1001/jama.2017.2714 [Epub ahead of print]

This article originally appeared on Medical Bag
Description: This detail of a map of Florida shows railroads and major cities and towns current to 1932 for Nassau County. Major waterways are shown, as well as lakes, towns, islands, and marsh. Other notable features are Seminole Indian Reservations, canals, and railroads. Features included in this detail are Amelia Island, Callahan, and Hilliard.

Place Names: Nassau, Saint Marys River, Kings Ferry, Orange Bluffs, Gross, Crandall, Chester, Fernandina, Nassau, Tisonia, Callahan, Kent, Crawford, Bryceville, Cambon, Dinsmore, Amelia Island, Hilliard

ISO Topic Categories: boundaries, transportation, inlandWaters, oceans

Keywords: Nassau County, physical, political, transportation, swamps, everglades, wetlands, physical features, county borders, railroads, boundaries, transportation, inlandWaters, oceans, Unknown, 1932

Source: US Department of the Interior Geological Survey, 1932

Map Credit: Courtesy the private collection of Roy Winkelman.
Why is this loop heavy code so slow in Python? Possible Project Euler spoilers

Diez B. Roggisch  deets at nospam.web.de
Mon Sep 3 00:29:01 CEST 2007

Wildemar Wildenburger schrieb:
> Martin v. Löwis wrote:
>>>> (2) it is a interpretation language
>>> Not quite. It's compiled to byte-code - just like Java (would you call
>>> Java an 'interpreted language' ?)
>> Python is not implemented like Java. In Java (at least in HotSpot),
>> the byte code is further compiled to machine code before execution;
>> in Python, the byte code is interpreted.
> OK, good. Naive question now coming to mind: Why doesn't Python do the
> latter as well?

Because of the dynamic nature of it. Java is statically typed, so the JIT can heavily optimize. OTOH, psyco IS a JIT-compiler to optimize certain calculations which are mostly of a numerical nature. But this can't be done to the extent it is possible in Java.
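For readers unfamiliar with psyco: it was a just-in-time specializer for CPython 2.x that could remove much of the bytecode-interpretation overhead for numeric, loop-heavy code. The sketch below is not from the original thread; the loop is just a placeholder for whatever Project Euler code is being timed, and it simply shows the usual way psyco was bolted onto a program:

```python
# Python 2.x sketch; psyco itself is an optional (32-bit-only) extension module.
import time

def slow_sum(n):
    # Deliberately loop-heavy, numeric code of the Project Euler kind.
    total = 0
    for i in xrange(n):
        total += i * i
    return total

try:
    import psyco
    psyco.full()            # JIT-specialize every function defined so far
    # psyco.bind(slow_sum)  # ...or target a single hot function instead
except ImportError:
    pass                    # no psyco: fall back to plain bytecode interpretation

start = time.time()
slow_sum(10 ** 7)
print "elapsed: %.2f s" % (time.time() - start)
```

The point of the thread stands either way: CPython re-interprets the same bytecodes on every pass through the loop, which is why a statically typed, JIT-compiled runtime such as HotSpot can run equivalent code much faster.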
WATS & Incident Reports

Before there were "800" numbers there was Wide Area Telephone Service (WATS). A WATS line allowed a company or organization to make unlimited long-distance calls in a specified geographic area for a flat monthly fee. In the early '60s, a call from one county to another was often billed as "long-distance," and long-distance calls were expensive. Cell-phones, of course, did not exist. Many rural communities in the South did not yet have direct-dial for long-distance, so out-of-area calls had to be placed through the local operators, who were all white ("Ma Bell" did not hire Blacks in the South for anything but the most menial positions). Operators in league with the Sheriff or Citizens Council would often block or tap calls from freedom fighters. But a WATS line allowed civil rights workers to bypass the local operators, which meant they could reach their offices when under siege by cops or the Klan. And with a WATS line, long-distance charges were billed to the organization and calls could be made from pay-phones, or the phones of local folk who had little money for phone bills.

It was understood, of course, that Movement WATS lines were tapped and bugged by every law enforcement agency from the FBI down to the local beat constable. And anything said over the WATS line was passed on by the cops to the Klan and White Citizens Council.

At SNCC, CORE, COFO, and SCLC offices in cities such as Jackson, Greenwood, Atlanta and Baton Rouge, the life-saving WATS lines were manned (or, more accurately, woman-ed) around the clock, 24 hours a day, recording incidents of violence and arrest, dispatching doctors and lawyers to aid the injured and incarcerated, alerting organizers of danger and need, notifying media and Justice Department of abuses and outrages, and coordinating support and assistance nation-wide. As the calls came in over the WATS line hour by hour, the substance of each call was added to each day's "WATS Report" and summaries were prepared and distributed to the press and Movement supporters around the country. There is no doubt that there are Freedom Movement activists alive today who would have been killed or maimed had word not gotten out quickly of Freedom Houses under attack by Klan night-riders or of activists being "detained" by southern sheriffs.

James Forman. SNCC. June 24-26, 1964
SNCC? COFO? Undated, possibly 1964
Undated (possibly 1965)

WATS Report Incident Summaries
January 1-8, 1964
MFDP. Oct 18-Nov 2, 1964

Raw Daily WATS Reports
Some of the reports listed below are from the COFO WATS line in Jackson, some are probably from the SNCC office in either Atlanta or Greenwood. We've tried to guess which is which, but we can't be sure we guessed right.
Before Freedom Summer ~ 1964
Freedom Summer ~ 1964 from June 21 & 22

Note: By July of 1964 there may have been three different WATS lines in operation. One in the COFO office in Jackson, one in the temporary SNCC national office in Greenwood, and it's probable that the original SNCC WATS line in Atlanta was also still in operation. We've tried to guess which reports are COFO and which SNCC, but it's all guesswork.
A cherry tomato is a rounded, small-fruited tomato thought to be an intermediate genetic admixture between wild currant-type tomatoes and domesticated garden tomatoes. Cherry tomatoes are believed to date back at least to Aztec Mexico in the 15th century CE. Cherry tomatoes have been popular in the United States since at least 1919, and recipes using them can be found in articles dating back to 1967. The most common modern variety of cherry tomato was developed in Israel in 1973.

Source: Wikipedia Encyclopedia
Camera: Canon P&S PowerShot A720 IS
The Structure of the Chinese Calendar
by Peter Meyer

There are two Chinese calendars, a solar calendar and a lunar calendar (the latter is also a 'lunisolar' calendar, since it more or less stays in sync with the solar year). Both calendars depend on the times of certain astronomical events, such as dark moons and winter solstices. For at least several centuries (according to some scholars, since the 5th C. BCE) the times of these events have been ascertained not by observation but rather by calculation, so these calendars can be classified as rule-based.

The Chinese solar calendar consists of a sequence of solar years which are not divided into months but rather into 24 periods which begin at the "solar terms" (see below). The Chinese lunar calendar consists of a sequence of lunar years which are divided into 12 or 13 lunar months. A solar year begins at the (northern) winter solstice, which is on or around December 22 in the Common Era Calendar. A lunar month begins on the day of a dark moon. The beginning of a lunar year (i.e., lunar new year's day) is more difficult to define (but see below); it always begins from about January 20th to about February 20th, i.e., about a month or so after the start of the Chinese solar year.

The Chinese Calendar assumes a prime meridian of 120 degrees East (120°E). This means that a day (or rather, a nychthemeron, a day and a night) is taken to run from midnight Beijing standard time (BST = CCT = GMT+8) to the next midnight BST. This is in contrast to the Common Era Calendar, where a nychthemeron runs from midnight Greenwich Mean Time (GMT) to the next midnight GMT. The time difference between Beijing and London is eight hours, so nychthemerons (or nychthemera) in the Chinese Calendar begin eight hours earlier than nychthemerons in the Common Era Calendar.

Before proceeding further we define some terms:

A dark moon occurs when the Sun and the Moon are astronomically conjunct (or more exactly, when either the Moon's center lies on the line joining the centers of the Earth and the Sun or the plane defined by the Sun, Earth and Moon is perpendicular to the Earth's orbital plane). The term "new moon" is not used here, since it is ambiguous. It can mean either a dark moon or the phase of the Moon when a crescent is first visible (in which sense a month in the Muslim calendar begins at new moon).

A lunation is a passage of the Moon from one dark moon to the next. A lunation begins at the dark moon (astronomical conjunction of Sun and Moon), and the next dark moon marks the beginning of the next lunation.

An equinox occurs when the angle formed at the Earth's center between its axis of rotation and the line joining the Earth to the Sun is a right angle. At such a point in the Earth's orbit the length of day and night is almost equal (but not exactly equal, due to atmospheric refraction of the Sun's rays near the horizon and the practice of measuring the start and end of the day from the first or last appearance of the Sun). The northern vernal equinox occurs around March 20th of each year, and the northern autumnal equinox occurs around September 21st.

A solstice occurs when this angle reaches a maximum or a minimum. At such a point the duration of the day and the night is either longest or shortest. The northern winter solstice occurs around December 21st of each year, and the northern summer solstice occurs around June 21st.
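As a quick, non-authoritative illustration of the day-boundary rule just described (this sketch is not part of Meyer's article), the UTC time of any astronomical event can be assigned to a Chinese calendar day simply by shifting the instant to UTC+8 and taking the civil date:

```python
# Minimal sketch of the 120°E day-boundary rule; event times are assumed to be given in UTC.
from datetime import datetime, timedelta, timezone

CST = timezone(timedelta(hours=8))  # Beijing standard time, GMT+8

def chinese_calendar_day(event_utc):
    """Return the civil date (at UTC+8) of the nychthemeron containing the event."""
    return event_utc.astimezone(CST).date()

# An event at 18:30 UTC already belongs to the *next* day in the Chinese reckoning,
# because 18:30 UTC is 02:30 of the following day at the 120°E meridian.
print(chinese_calendar_day(datetime(2024, 1, 10, 18, 30, tzinfo=timezone.utc)))  # 2024-01-11
```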
2. Chinese and Western Years

The Chinese Calendar uses cycles of sixty years. A year within a cycle is designated by a combination of an element name (e.g., "Water") and an animal name (e.g., "Rabbit"). For the order in which the various element-animal-designated years occur within a cycle of sixty years (Wood-Rat, Wood-Ox, Fire-Tiger, ...) see Interconverting Chinese and Western Years. The five elements and the twelve animals are:

Elements: Wood, Fire, Earth, Metal, Water
Animals: Rat, Ox, Tiger, Rabbit, Dragon, Snake, Horse, Sheep, Monkey, Chicken, Dog, Pig

A Chinese year is uniquely determined by an element name, an animal name and a cycle number, e.g., the Water-Dragon year in the 21st cycle. Since the years of the Chinese Calendar run concurrently with the years of the Common Era Calendar (although they do not overlap exactly) each year at a certain position in a certain cycle in the Chinese Calendar can be uniquely associated with a year in the Common Era Calendar provided that one such correlation is known. Actually two such correlations are used by different scholars: the first year in the first cycle is correlated either with -2696 CE (i.e., 2697 BC) or with -2636 CE (i.e., 2637 BC). 2004 is a Wood-Monkey year in Cycle 79 (according to the first correlation) or in Cycle 78 (according to the second).

3. The Chinese Solar Calendar

As noted above, a Chinese solar year always begins at the winter solstice. It may be thought of either (i) as running from the exact moment of a winter solstice to the exact moment of the next winter solstice or (ii) as running from midnight (Beijing time) at the start of the day during which the winter solstice occurs to the midnight (Beijing time) of the start of the day during which the next winter solstice occurs. We could call these "astronomical" and "calendrical" solar years.

The astronomical solar year is divided into 24 periods. The times of the start and end of these are called "solar terms". These are denoted by the symbols J1, Z1, J2, Z2, ..., J12, Z12. The two (northern) solstices and the two equinoxes coincide with four of these solar terms, as follows:

vernal equinox (VE): Z2
summer solstice (SS): Z5
autumnal equinox (AE): Z8
winter solstice (WS): Z11

The other eight Z's (the "major solar terms", also known as "zhong qi") occur at equal (or nearly equal) intervals between these four Z's. The major solar terms thus are like the hour numbers on a clock face, with the vernal equinox at 2 o'clock, etc. (and the minor solar terms, the J's, marking the half-hours).

There are two variations on the Chinese solar calendar. It used to be defined so that the period from each solar term to the next was exactly 1/24th of an astronomical solar year, i.e., approximately 15.22 days. This is called the "Mean Sun" variation. In the 17th Century, Chinese calendricists adopted calculations based on the true motions of the Earth and Sun, and in this variation of the solar calendar each solar term consists of the time required for the Earth to move exactly fifteen (= 360/24) degrees in its orbit (starting from a solstice or an equinox). This is called the "True Sun" variation. Since the Earth moves at slightly different speeds at different places in its orbit (it moves slightly faster when it is closer to the Sun) this implies that in the True Sun variation the period from one solar term to the next is not always the same.

Strictly speaking, solar terms are points in time, namely, the times at which the Sun (as seen from the Earth to be travelling along the ecliptic) reaches 0°, 15°, 30°, 45°, ..., measured from a solstice or an equinox.
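To make the 15° spacing concrete, here is a small sketch (not from the original article) that maps the Sun's position, expressed in degrees past the northern winter solstice, to the solar term whose sector it falls in. The J/Z labelling is inferred from the table above (Z11 at the winter solstice, Z2 at the vernal equinox, and so on), so treat it as an illustration rather than a definitive algorithm:

```python
# Illustrative sketch: 'degrees' is the Sun's ecliptic longitude measured from the
# northern winter solstice, so 0 -> Z11, 90 -> Z2, 180 -> Z5, 270 -> Z8.
def solar_term_label(degrees):
    k = int(degrees // 15) % 24          # which 15-degree sector the Sun is in
    if k % 2 == 0:                       # even sectors begin at major terms (Z, zhong qi)
        return "Z%d" % ((11 + k // 2 - 1) % 12 + 1)
    return "J%d" % ((12 + (k - 1) // 2 - 1) % 12 + 1)   # odd sectors begin at minor terms (J)

# Sanity checks against the solstice/equinox table above:
assert solar_term_label(0) == "Z11"    # winter solstice
assert solar_term_label(90) == "Z2"    # vernal equinox
assert solar_term_label(180) == "Z5"   # summer solstice
assert solar_term_label(270) == "Z8"   # autumnal equinox
assert solar_term_label(15) == "J12"   # first minor term after the winter solstice
```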
A solar term may also be understood as a period of time, namely, the period between two such solar terms. We can thus say that (in this sense) the solar year is divided into 24 solar terms. Although there are 12 pairs of adjacent solar terms, a pair of solar terms cannot be regarded as a 'month'. The solar year is divisible into solar terms, but not into months. An attempt to do so (as is done in Wikipedia) flounders on the astronomical facts underlying why the lunar year sometimes has 12 months and sometimes has 13 months.

Just as "solar year" has two meanings, an astronomical and a calendrical, so a "solar term" may be thought of either (i) as running from the exact moment of a solar term as defined above to the exact moment of the next solar term (an "astronomical solar term") or (ii) as running from midnight (Beijing time) at the start of the day during which the solar term (in the first sense) occurs to the midnight (Beijing time) of the start of the day during which the next solar term occurs (a "calendrical solar term"). The day on which a calendrical solar term begins in the Chinese solar calendar is the day in which the astronomical solar term occurs. E.g., if a winter solstice occurs at 23:03 then the calendrical solar term Z11 begins at midnight (Beijing time) at the start of that day.

The 24 calendrical solar terms in a calendrical solar year are numbered 1-24 (1 = Z11, 2 = J12, 3 = Z12, 4 = J1, 5 = Z1, and so on). Within a calendrical solar term the days are numbered 1, 2, ... Thus a date in the solar calendar may be represented by a quadruple of the form cycle-position-solarterm-day, where c-p-s-d denotes day d (1-16) of solar term s (1-24) of the year at position p (1-60) in cycle c. Thus a sequence of dates in the Chinese solar calendar looks like this: 1-59-24-14, 1-59-24-15, 1-60-01-01, ..., 1-60-24-16, 2-01-01-01, ... As noted in the preceding section each position-in-cycle is associated with a unique element-animal combination, so, e.g., "1-59-24-14" can also be expressed as "The 14th day of the last solar term of the Water-Dog year in the 1st cycle." Dates in the Chinese solar calendar may be marked by CHS, as in "2-01-01-01 CHS".

4. The Chinese Lunar Calendar

The definition of the lunar calendar depends on the definition of the solar calendar, but not vice-versa. The first day of a lunar month begins at midnight (Beijing time) on the day in which the dark moon occurs. Thus a lunar month always runs from the day of the dark moon up to but not including the day of the next dark moon. It is thus tautologous (and hence true) to say that the dark moon always occurs on the first day of the lunar month.

This series of lunar months is partitioned into lunar years, which consist of either twelve or thirteen lunar months. Months are labelled with a numeral from "1" through "12" or (when a year contains a thirteenth month) with a numeral-plus-asterisk, e.g., "9*". The way the series of lunar months is partitioned into lunar years is as follows:

A nian is the period of a whole number of lunar months making up a lunar year, beginning with month "1". A nian consists of 12 or 13 months. A related concept is a sui, which is a period of a whole number of lunar months such that the first month of the period contains the winter solstice. A sui also consists either of 12 or 13 lunar months. A sui largely overlaps the solar year, but can begin up to nearly a month before the solar year begins (when the winter solstice occurs close to the end of the first month of the sui).
Consider the series of lunar months partitioned into suis. Consider a particular sui. If it has twelve months then the months are to be numbered "11", "12", "1", "2", ..., "10". The third month will thus be the first month of the nian which largely overlaps this sui. Suppose, on the other hand, that there are thirteen months in the sui. A sui can contain only twelve major solar terms (the Z's, or zhong qi's, described above), so at least one of the months does not contain a major solar term. The first month which does not contain a major solar term is distinguished as a "leap" month (a.k.a. an "intercalary" month). The first month in the sui cannot be a leap month because it contains the solar term Z11. The twelve non-leap months are numbered "11", "12", "1", ..., "10". The leap month has the same number as its preceding month. Leap months are distinguished by an asterisk or a plus sign, so that, e.g., month "4" may be followed by leap month "4*" (or "+4" or "4+"), which is followed by month "5".

A date in the Chinese lunar calendar may be represented by a quadruple of the form cycle-position-month[*]-day, where c-p-m[*]-d denotes day d (1-30) of month m (1-12) — a leap month if this is m* — of the year at position p (1-60) in cycle c. Thus a sequence of dates in the Chinese lunar calendar looks like this: 1-59-11-29, 1-59-11-30, 1-59-11*-01, ..., 1-59-11*-29, 1-59-12-01, ..., 1-59-12-30, 1-60-01-01, ... As with solar dates the position-in-cycle number can be replaced by an element-animal combination. Dates in the Chinese lunar calendar may be marked by CHL, as in "1-60-01-01 CHL".

Overseas Chinese number years sequentially, as in the Gregorian Calendar, with Chinese year 4709 corresponding to Gregorian year 2011. Thus the Overseas Chinese date "4709-07-13 CHL" denotes the same day as the cycle-position date "79-28-07-13 CHL".

5. Comparison with the Gregorian Calendar

New Year's Day in the Chinese Lunar Calendar can occur on any date in the Gregorian Calendar from January 21 to February 21 (though not all dates are equally likely). New Year's Day in the Gregorian Calendar always occurs about a week after the northern winter solstice, whereas on average New Year's Day in the Chinese Calendar occurs approximately midway between that solstice and the northern vernal equinox.

A year in the Gregorian Calendar always has 12 months, whereas a year in the Chinese Calendar usually has 12 but in about one year in three it has 13 months. A month in the Gregorian Calendar may have any number of days from 28 through 31. A month in the Chinese Calendar always has either 29 or 30 days.

A month in the Chinese Calendar always begins at the dark moon, and the full moon always occurs in mid-month. In the Gregorian Calendar the dark moon and full moon can occur at any time during a month.

Traditionally associated with (although not formally a part of) the Gregorian Calendar is a cycle of 7 days ("the week"). There is no such cycle in the Chinese Calendar; instead there are cycles of 60 days, 60 months and 60 years. Each day, month and year in the Chinese Calendar is traditionally associated with one of twelve animals and one of five elements. There is no such association in the Gregorian Calendar, although months are loosely connected with astrological signs of the zodiac (whose periods are offset from the months by about nine days).
The Gregorian Calendar is rule-based (although only a small proportion of people can state its leap year rule correctly), whereas the Chinese Calendar depends on exact calculation of the times of dark moons and solar terms (which must be done by calendrical experts using astronomical methods and data). Reliable conversion between dates in the Gregorian Calendar and dates in the Chinese Lunar Calendar is thus possible only by means of such calculation, which is performed by computer software such as Chinese Calendrics.

Links to other articles on the web about the Chinese Calendar:
- Interconverting Chinese and Western Years
- Messages to CALNDR-L re the Archetypes Calendar and the Chinese Calendar
- Calendar Software
- Hermetic Systems Home Page
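As an appendix to the article above (not part of the original), the sui month-numbering and leap-month rules of section 4 can be sketched in code. The astronomical inputs (the dark-moon dates that begin each month, and whether a major solar term falls within a given month) must come from an ephemeris; the has_zhong_qi argument below is a hypothetical stand-in for such a calculation:

```python
# Schematic sketch of the sui month-numbering rule; not a complete calendar implementation.
# month_starts:   dark-moon dates beginning the months of one sui (the first month is the
#                 one containing the winter solstice); 12 or 13 entries.
# next_sui_start: dark-moon date beginning the following sui.
# has_zhong_qi:   hypothetical ephemeris query - does a major solar term (zhong qi) fall
#                 in the half-open interval [start, end)?

def number_sui_months(month_starts, next_sui_start, has_zhong_qi):
    names = ["11", "12", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
    bounds = list(month_starts) + [next_sui_start]
    labels, leap_used, ordinary = [], False, 0
    needs_leap = len(month_starts) == 13          # 13 lunations -> exactly one leap month
    for i, start in enumerate(month_starts):
        end = bounds[i + 1]
        if (needs_leap and not leap_used and i > 0        # the first month always holds Z11
                and not has_zhong_qi(start, end)):        # first month lacking a zhong qi
            labels.append(names[ordinary - 1] + "*")      # leap month repeats the number
            leap_used = True
        else:
            labels.append(names[ordinary])
            ordinary += 1
    return labels
```

Real implementations (such as the Chinese Calendrics software mentioned above) must of course compute the dark moons and solar terms themselves, in the True Sun variation, before any such numbering can be applied.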
Application layer access to networking is mediated via a series of socket-related hooks, socket_security_ops. When an application attempts to create a socket with the socket(2) system call, the create() hook allows for mediation prior to the actual creation of the socket. Following successful creation, the post_create() hook may be used to update the security state of the inode associated with the socket. Since active user sockets have an associated inode structure, a separate security field was not added to the socket structure or to the lower-level sock structure. However, it is possible for sockets to temporarily exist in a state where they have no socket or inode structure. Hence, the networking hook functions must take care in extracting the security information for sockets.

Mediation hooks are also provided for all of the socket system calls: bind(2), connect(2), listen(2), accept(2), sendmsg(2), recvmsg(2), getsockname(2), getpeername(2), getsockopt(2), setsockopt(2), and shutdown(2). Protocol-specific information is available via the socket structure passed as a parameter to all of these hooks (except for create(), as the socket does not yet exist at this hook). This facilitates mediation based on transport layer attributes such as TCP connection state, and seems to obviate the need for explicit transport layer hooks.

The sock_rcv_skb() hook is called when an incoming packet is first associated with a socket. This allows for mediation based upon the security state of the receiving application and security state propagated from lower layers of the network stack via the sk_buff security field (see section 3.7.2).

Additional socket hooks are provided for UNIX domain communication within the abstract namespace, as binding and connecting to UNIX domain sockets in the abstract namespace is not mediated by filesystem permissions. The unix_stream_connect() hook allows mediation of stream connections, while datagram-based communications may be mediated on a per-message basis via the unix_may_send() hook.
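To make the hook mechanism more concrete, here is a schematic C sketch (not taken from the paper) of what a module's socket hooks might look like. Hook names and signatures have varied across kernel versions, and the policy shown is invented purely for illustration, so treat this as pseudocode rather than a drop-in implementation:

```c
/* Illustrative fragment only: simplified signatures, invented policy. */

static int example_socket_create(int family, int type, int protocol, int kern)
{
        /* Example policy: refuse unprivileged creation of raw packet sockets. */
        if (family == PF_PACKET && !capable(CAP_NET_RAW))
                return -EACCES;
        return 0;                 /* 0 means "no objection" */
}

static int example_socket_bind(struct socket *sock, struct sockaddr *address, int addrlen)
{
        /* Per-socket security state lives on the socket's inode, as described
         * above (e.g. SOCK_INODE(sock)->i_security in this era of the framework). */
        return 0;
}

static int example_unix_may_send(struct socket *sock, struct socket *other)
{
        /* Datagram sends in the UNIX-domain abstract namespace can be checked
         * per message here, since filesystem permissions never apply. */
        return 0;
}
```

A return value of 0 simply means the module raises no objection; returning a negative errno vetoes the operation, so hooks can only further restrict what the base kernel would allow.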
Source: (2003) Paper presented at Building a Global Alliance for Restorative Practices and Family Empowerment, Fourth International Conference on Conferencing, Circles and other Restorative Practices, set for 28-30 August, 2003. Downloaded 11 September 2003.

From Lode Walgrave's perspective, restorative justice is characterized by its aim of restoration of the harm caused by crime and not by the process which typically favors restorative outcomes. Restorative justice in this definition is goal-oriented not process-oriented. Thus Walgrave's perspective differs from many other restorative justice proponents who see voluntary deliberative processes in an informal context as the key to restorative justice. Walgrave's point is that the processes do not have intrinsic legitimacy but derive their value from the outcomes they aim for and help to produce. Thus the aim or goal is the key to restorative justice; processes have value insofar as they pursue and enable a restorative aim or goal. That being said, Walgrave sees deliberative processes as crucial tools for restorative justice. This leads to his examination of a conferencing experiment in Belgium to deal with serious juvenile offenders.
Image 18. Photograph of fountain taken shortly before it was dismantled.

The Pacific Basin Fountain
A History and Description
By Anne Schnoebelen

Pacific House, Golden Gate International Exposition, 1939-1940
Created in 1938 at Gladding McBean, Lincoln, California
Glazed terra cotta, 950 square feet, 361 sections
Width at equator: 43'2"
Center meridian: 27'2"
Perimeter wall: 114 linear feet
Weight: 30 tons (estimated)
Colors: Shades of blue, brown, yellow, green, white, aqua

The relief map of the Pacific Basin was designed by San Francisco artist Antonio Sotomayor and architect Philip Newell Youtz, and executed in 1938 and 1939 by Sotomayor and assistants at Gladding McBean and Company (GMcB) in Lincoln, California. It is oval in shape and approximately the size of a backyard swimming pool. When installed in Pacific House at the Golden Gate International Exposition, it was filled with water and served as a fountain. [Image 1]

The relief map represents the islands, continents and waters of the Pacific, hand modeled in great detail. The highest mountain peak is approximately 16 inches higher than the lowest ocean trench. Lines of latitude and longitude are represented by the joints between the map's 361 individual sections. Four three-dimensional ceramic whales, approximately 25 inches in length and covered with a teal glaze, spouted at the center of the fountain just north of Hawaii. The map itself is surrounded by a perimeter wall, approximately two feet in height and eight inches wide, finished with an aqua glaze. Compass points are marked with a compass rose and the letters N, S, E and W on the top surface of the wall. On the vertical exterior surface of the wall at each compass point is an animal figure in low relief.

A Fountain of the Pacific for the Pageant of the Pacific

The map was displayed in Pacific House, the "theme building" of the Golden Gate International Exposition on Treasure Island in 1939 and 1940—"The Pageant of the Pacific." The man-made island itself was initially conceived as the site of an airport which would serve Pan American Airways trans-Pacific "Clipper" flying boats, the first commercial, regularly-scheduled air transport between the United States and Asia. The fair's theme, "Pacific Unity," celebrated peaceful interdependence among the countries of the area that we now call the Pacific Rim. Pacific House, designed by architect William Merchant, was the central building of the fair's "Pacific Area" where most of the Pacific countries participating in the fair had their pavilions.

The inauguration of air travel to Asia helped spark the fair's theme. But it was the influence of the local chapter of a non-governmental organization that provided the deeper stimulus for the Pacific theme. The Institute of Pacific Relations (IPR) was established in 1925 to provide a forum for discussion of problems and relations between nations of the Pacific Rim. Headquartered in Honolulu, the IPR had chapters throughout the Pacific, and its San Francisco members included many influential citizens, including former Secretary of the Interior and Stanford President Ray Lyman Wilbur, who was the president of Pacific House. Other IPR members headed some of the powerful boards that governed the GGIE during its planning years. As president of Pacific House, Wilbur was the leader of a distinguished board which included scholars and professors from universities around the Bay Area, explorers, anthropologists, and business leaders.
Pacific House offered daily, ongoing programming designed to educate the public about Pacific affairs, including lectures, readings, exhibits, and concerts. Its library housed a fine collection of printed materials on Pacific themes, and Pacific House published a series of bibliographies and educational guides. The Pacific House board of directors chose the map of the Pacific as the symbolic theme of Pacific House as a matter of policy. In its publicity materials, Pacific House claimed that all world map projections prior to 1939 had presented the Pacific broken up at the perimeters of the map. While this is something of an exaggeration, it is true that world maps, then and now, usually feature the Atlantic Ocean at the center, with the Pacific at the margins. The goal of Pacific House was to present the possibilities of a united Pacific, and maps were a graphically dramatic illustration of this. [The map shown on the right, which uses the Aitoff projection, is a more typical representation with the Pacific literally “marginalized.” Sorry–map not included here yet.] The Pacific Basin fountain used the same projection, but rotated horizontally so that the Atlantic appears on the margins. Pacific House director Philip Youtz was an architect and the former director of the Brooklyn Museum. San Francisco artist Antonio Sotomayor, a native of Bolivia, was known locally as an author, illustrator, caricaturist and painter of murals at the Palace Hotel and Grace Cathedral. He was chosen for the Pacific Basin Fountain project through his affiliation with Pacific House board member Carl Sauer, an eminent geographer and anthropologist from the University of California, Berkeley, who provided the map projection. [Images 4 and 5] The fountain was located on the floor of the airy, three-story atrium of Pacific House. Natural light poured in from floor to ceiling windows on four sides, and the room was filled with tropical foliage. Guests could view the fountain from the main floor as well as from a second-story balcony. Six large, colorful mural maps by Sotomayor’s friend, Miguel Covarrubias, representing themes from Pacific cultures, covered the adjacent walls. A large backlit stained glass map of Pacific trade routes by artist Edgar Dorsey Taylor was featured high on the back wall of Pacific House’s main display area, and two decorative maps of Pacific countries by artist Hilaire Hiler were displayed on the balcony. [Images 2, 3, 6] Making the Fountain Information and stories about the fountain are hidden away in a vault at GMcB, where the company has been storing its records for more than a century. The records for Job 2873, “The Relief Map of the Pacific,” reveal a complex and difficult project, plagued with delays and errors, but managed with diplomacy and good humor. According to a GMcB memo, Youtz and Sotomayor had great difficulties with the first stage of the project—a full-scale clay model of the map. Originally, the plan was to do as much work as possible in San Francisco. This proved so difficult that GMcB offered to send professional modelers to San Francisco to help. Sotomayor and Youtz estimated that they would need his services for two months. A GMcB supervisor wrote back, “Sixty days? Good heavens, what are you going to do with him for that long? Only thought you would need him for a week or so, he has a family.” Eventually the entire project was transported to GMcB, and Sotomayor lived in Lincoln for almost six months. 
Fashioning the enormous relief map took nearly twice as long as anyone had originally projected. There were innumerable delays because, “As you know, artists are apt to underestimate the time it will take them to do the work.” (Everyone who knew Sotomayor agreed that he was not very practical.) And there were errors. On February 9, 1939, less than ten days from the scheduled opening of the fair, the tile setter installing the fountain in Pacific House noticed a glaring design error. “Has there been anything said about letter “S” on south side in center when you stand in front of it. It is turned upside down. Mr. Youtz and Mr. Newhall know about it, but I think they will let it go.” But a GMcB memo dated February 13 reads: “We are re-making the piece with the letter “S”.” Mr. Sotomayor must have modeled this letter upside down.” Sotomayor designed an animal mascot to represent each region located at the fountain’s compass points. For the South Pole, he fashioned a penguin and for the North Pole, a polar bear; in the west, near the Indian subcontinent at the equator, was a water buffalo and in the east, near the equator in Brazil, a llama. Sotomayor also designed the four whales that spouted at the center of the fountain. His wife, Grace, described them: “Everyone knows that whales are mammals and they have horizontal flippers. But Soto’s whales have fins, like fish!” Sotomayor’s whales look like flukes, but they are oriented vertically. “He never looked at models of the things he drew—he just made what he saw in his imagination,” according to Grace. The whales, with their fierce grins and disoriented tails, were “the real Soto.” As the project neared completion, Philip Youtz wrote in a letter to Atholl McBean, president of GMcB, “I feel confident that this terra cotta fountain will be one of the most beautiful and educational features of the entire exposition. As far as I know, this is the first time that terra cotta has been used for such a piece of sculpture, and the successful completion of this project will therefore make art history.” After the Fair is Over The Pacific Basin Fountain, made of terra cotta and weighing thirty tons, was not a piece of world’s fair ephemera; it was made to last for a very long time with reasonable care. It was also designed in individual sections that could be transported from Lincoln to Treasure Island in relatively manageable loads. This modular design would also make it possible to move the fountain after the fair ended for display elsewhere. It couldn’t stay in Pacific House forever, since Pacific House was a temporary building (as were most of the fair’s buildings). During the fair, a number of potential permanent locations were considered. Those who ran Pacific House fully intended for the organization to continue with a new facility in San Francisco, perhaps to be joined with a new Museum of Pacific Cultures. The Pacific House maps, especially the fountain and the Covarrubias murals, would be the centerpieces of a Pacific cultural and educational center. The fair closed in September of 1940 and the Navy began moving onto the island almost immediately. Within another fourteen months, the United States was at war with Japan. The Pacific House Board of Directors continued to meet and arranged to loan the six Covarrubias murals to the Natural History Museum in New York, where they remained on display for more than a decade. 
In 1942, the Navy demolished Pacific House and moved the Pacific Basin Fountain to another former GGIE location dedicated to the Pacific theme: “The Court of Pacifica,” site of the colossal goddess Pacifica and the twenty Pacific Unity Sculptures. [Image 7] The goddess came down in 1942, but the sculptures remained. This site, at the current intersection of 9th Street and the central corridor through Treasure Island’s Job Corps campus, remained intact from 1942 until 1994. The Navy left the fountain and sculptures in the middle of 9th Street [see map], diverting east and westbound traffic around the site. Fifty years later, this detail would become important [sorry–the map referred to will be added later]. After the demolition of Pacific House, Edgar Dorsey Taylor’s illuminated stained glass map and Hilaire Hiler’s maps disappeared. The area around the fountain and sculptures on 9th Street was made into an attractively landscaped garden, with a lawn and flowering gum trees. A wooden plaque explained the origin of the fountain and sculptures. This ad hoc world’s fair sculpture garden was used as a setting for many official and unofficial Navy photographs and was a source of pride on the base, but rarely seen by the public. [Images 8, 9, 10, 11, 12] World War II came and went. Pacific House never did get a new site in San Francisco—its temporary headquarters in the Palace Hotel closed in 1945—although, curiously, the corporation it formed still exists in California corporate records. Five of the six Covarrubias murals came back to California and were exhibited in the remodeled Ferry Building from 1959 until 2001, when the Ferry Building was again remodeled and the murals removed. The fate of the missing mural, “Art Forms of the Pacific,” has plagued art detectives for more than five decades. The remaining five murals underwent restoration in Mexico and have been exhibited occasionally over the last decade. One of the murals, “Flora and Fauna of the Pacific,” has been on display at San Francisco’s De Young Museum since 2008. Fate of the Fountain According to witnesses who lived on Treasure Island, the world’s fair sculpture garden remained in good condition and well tended for almost three decades. Maintenance began to falter during the Vietnam War when the fountain and statues became targets for vandalism. Rocks, bottles, and automobile parts were hurled at the fountain, the area’s identifying plaque faded and was not repaired, dirt accumulated and weeds grew in the cracks between the sections of the fountain. By the late 1970s, all of the whales had disappeared*. The tiles featuring the decorative compass points had been stolen and only one of the compass point animals, the polar bear, remained. Dozens of retaining wall tiles disappeared, and virtually every tile sustained some degree of damage. [Images 13, 14, 16] “Save the Fountain!” Concurrent with the founding of the Treasure Island Museum in 1976, a campaign to “Save the Fountain” came to life through the efforts of Walter Morris and Bob Giesar, naval personnel stationed on the island. Their efforts evolved into a master plan to move the fountain and the sculptures to the front of the former Administration Building on Treasure Island (now Building One), the home of the Treasure Island Museum. The fountain and sculptures would become the focal points of a new garden commemorating the history of Treasure Island and the Golden Gate International Exposition. 
[Image 17] In his ROHO interview in 1981, Antonio Sotomayor referred to the fountain as the most difficult, challenging project of his career. He was aware of the restoration plans and expressed the belief that the fountain was under restoration. When Sotomayor died in 1985, the San Francisco Chronicle named him “San Francisco’s Artist Laureate.” The restoration project ultimately foundered for many reasons, including lack of funding, the difficulty of moving the fountain, and fears that the area in front of Building one was not strong enough to hold the thirty ton fountain. The “fountain problem” was magnified by the fact that it was now one massive object, not moveable in sections. To the best of anyone’s knowledge, when the fountain was originally installed in Pacific House, the sections of the map were not attached to each other, or to their retaining wall; they were simply placed, one at a time, in a waterproof basin. But in its new outdoor location on 9th Street, the sections of the fountain were cemented together, meaning that the thirty-ton mass would have to be moved all in one piece unless a method of separating the sections could be devised. The plan to create a sculpture garden at the front of Building One was abandoned. But a new campaign, entitled “The 50th Anniversary Art Treasures Restoration Project,” spearheaded by the Treasure Island Museum and the Art Deco Society of California, was begun in 1989 and focused on restoration of the Pacific Unity Sculptures. Enough money was raised to move six of the sculptures to the front entry area of Building One and hire a professional restoration team. This was accomplished in 1991, and the sculptures can still be seen there today. The remaining ten sculptures were moved into storage on the island. The fountain, meanwhile, continued to languish in the Court of Pacifica. After fifty years of diverting traffic around the fountain, the Navy decided that it was time to straighten out 9th street and build a parking lot in its place. But what to do with the fountain? The fountain posed such a problem that, according to Treasure Island Museum staff, the Navy had even proposed filling in the below-grade section of the court, thereby “preserving” the fountain under tons of sand. [Image 15–note stairs leading down to fountain] One last photograph of the intact fountain was taken by award-winning Art Deco architecture photographer Randy Juster. (He wrote an amusing blog post about the experience here: http://www.decopix.com/treasure-island-map/ Also please note, this photo was shot with a lens which distorts the shape; the fountain was an oval). [Image 18] The fountain was then cut up into its component parts so that it could be moved into storage. The intention was to cut precisely along the joints where cement bonded the terra cotta sections. A professional team using hydraulic jets and circular masonry saws performed the task, but the cuts were not made precisely and many of the cuts sliced through the terra cotta. [Image 19] Once the cuts were made, the fountain pieces were moved into storage with the sculptures. The fountain is now stored on pallets in an empty building on Treasure Island, along with the sculptures. Stored for close to twenty years in the former “Art Palace” of the exposition (Building Three), the sculptures and pieces of the fountain were recently moved to another building on the island, which happens to be the site where Pacific House once stood. Can the fountain be restored? Yes, it can. 
However, the damage to the fountain is significant, and restoration will be expensive. Those who know and love the fountain are hopeful that efforts will be made to ensure that the fountain will not be forgotten when public art is chosen for the redeveloped Treasure Island.

*At the time of the production of the fountain, enough copies of the whales were made to provide one to individuals involved in the project. Mrs. Sotomayor thinks eight were made in addition to the four displayed on the fountain. Mrs. Sotomayor kindly gave hers to me.

References and sources:
- Mrs. Antonio Sotomayor
- Lamar Schuler, Chief Draftsman, Gladding McBean
- Bill Wyatt, Company Historian, Gladding McBean
- Visit to Gladding McBean in May of 1990 by Anne Schnoebelen, Michael Gray and Mrs. Antonio Sotomayor
- Archives of American Art, Papers of Philip Youtz
- Regional Oral History Office, Interview with Antonio Sotomayor
- Bancroft Library, Papers of Pacific House Participants
- Green Library, Stanford University, Papers of Pacific House Participants
- UCLA Library, Special Collections
- The Masthead, Publication of Naval Station Treasure Island
- San Francisco Art Institute, Archives
- San Francisco Examiner, Photo Archives
- The Treasure Island Museum, Archives
Mood swings are a common experience many of us encounter at some point in our lives. They can range from minor fluctuations in emotions to more extreme mood swings, but any form can impact our well-being. This article will explore coping techniques to help you through a mood swing. Incorporating these strategies into your daily routine can cultivate a healthier mindset. Whatever the cause of your mood swings, the coping techniques discussed here will help. Let's dive in and discover practical ways to cope with mood swings and regain a sense of stability and control together.

Self-awareness allows you to recognize and acknowledge your emotions, including the symptoms of a mood swing. This will help you create a space for understanding and compassion towards yourself. Being self-aware also allows you to make better decisions during a mood swing. A high level of self-awareness lets you recognize that your emotions are temporary and not reflective of reality. This helps you make choices that align with your long-term goals and values. Being self-aware and mindful fosters self-compassion. When experiencing a mood swing, you can respond with kindness towards yourself. Self-compassion allows you to embrace your emotions without self-criticism, which will help reduce the impact of mood swings on your well-being.

Stress can trigger or worsen mood swings. Engaging in relaxation techniques can help ease stress and promote a sense of calm. By reducing stress levels, you create a more favorable emotional environment, which makes it easier to navigate through mood swings. Regular practice of relaxation techniques has a cumulative effect on your well-being. These techniques can do a lot when you incorporate them into your daily routine: they help you manage stress levels, promote relaxation, and create a foundation of emotional balance. This approach reduces the frequency and intensity of mood swings.

Physical activity stimulates the release of endorphins, the natural mood-boosting chemicals in the brain. Endorphins promote feelings of happiness and reduce stress, and thus ease symptoms of depression and anxiety. The natural mood elevation they provide can help counteract mood swings' adverse effects. Mood swings can sometimes leave you feeling tired or lethargic. Engaging in physical activity can boost energy and combat the feelings of low energy associated with mood swings. Exercising will increase blood flow and oxygen circulation to the brain, which enhances your alertness and sense of vitality. Physical activity also serves as a healthy distraction from negative thoughts and emotions. When you immerse yourself in any physical activity, your attention shifts toward the present. This redirection of focus allows you to break free from the cycle of negative emotions.

Plenty of professionals can help you with mood swings. They can conduct thorough assessments to find what causes mood swings, and they can even identify mood disorders in children through diagnostic evaluation. This assessment helps tailor treatment approaches and interventions to your specific needs. Professionals can teach you specific skills and techniques to cope with mood swings better, helping you develop healthy coping mechanisms. These tools empower you to navigate mood swings with greater resilience, which will improve your emotional well-being. In some cases, professionals may prescribe medications to help manage your mood. This is the case if mood swings are associated with underlying mental health conditions. Psychiatrists or other medical professionals can check your symptoms and prescribe appropriate medications. Medication management can be a valuable tool in stabilizing mood swings when combined with therapy.

A well-balanced diet supports optimal brain function and promotes emotional well-being. Adequate intake of nutrients from whole foods can help mood stability and resilience, so eat fruits, vegetables, lean proteins, and healthy fats. Getting enough sleep is also crucial for emotional well-being and mood regulation. Sleep deprivation or irregular sleep patterns can disrupt mood and increase the risk of mood swings. Establish a consistent sleep routine by creating a relaxing sleep environment and practicing good sleep hygiene. This will help ensure quality sleep and promote emotional stability.

A support system provides a space for emotional validation, where you can express your feelings and experiences without judgment. Sharing your feelings with trusted people makes you feel understood and accepted. This validation reinforces that your emotions are valid and eases the sense of isolation that mood swings can bring. Supportive individuals in your network can also offer a different perspective on mood swings. They may have experienced similar challenges or have insights that can shed light on your situation. Their viewpoint can help you gain a fresh perspective, consider alternative coping strategies, and hear suggestions that you may not have considered on your own. Having a support system means having people who believe in you. They provide encouragement and motivation during difficult times and help you boost your confidence and resilience. Their unwavering support can inspire you to keep pushing forward, and their presence can help you maintain a positive mindset even when facing mood swings.

Journaling provides a safe and private space to express your thoughts, feelings, and emotions. It serves as an emotional outlet, allowing you to release pent-up emotions associated with mood swings. Writing down your experiences can provide relief, catharsis, and release of emotional tension. Keeping a journal also allows you to track your progress over time, letting you observe patterns of improvement or identify recurring challenges; this is why documenting your mood swings is helpful. Journaling also lets you record your wins: you will see what your strengths are and celebrate milestones. Seeing this positive progress will do wonders for your mental health. But if you don't notice any improvement in your mood swings, it may be time to seek professional help.

Mood swings can be challenging to deal with, but understanding coping techniques can help lessen a mood swing's severity and frequency. If you're still struggling, consider speaking with a licensed mental health professional. They can help you identify the best techniques based on your particular needs.

If you're struggling to get your kid to think more positively, consider enrolling them in my online program Tweak A Week. This course is designed to make it fun and easy to add new small tweaks to your habits each week.
Aerial view of lava fountains along a 250-m-long fissure during the September 1977 eruption of Kilauea Volcano. Lava fountains from the steep spatter cone in lower left reached 100 m high, but no significant flow came from this vent. The middle vent would soon build a large rampart and feed the main flows of the eruption. During the next four days, continued fountaining resulted in coalescence of all the vent deposits to form a large spatter cone, Pu`u Kia`i (Hawaiian for Guardian Hill). The 1977 eruption began on September 13 when a discontinuous, en-echelon fissure broke out in the middle east rift zone between Kalalua and Pu`u Kauka. The initial fissure was 5.5 km long. The eruption lasted only 18 days.
<urn:uuid:ae700da5-8c64-4aab-a9be-d90453cd244f>
{ "dump": "CC-MAIN-2016-50", "url": "http://hvo.wr.usgs.gov/gallery/kilauea/erz/kiai.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541525.50/warc/CC-MAIN-20161202170901-00179-ip-10-31-129-80.ec2.internal.warc.gz", "language": "en", "language_score": 0.9323883056640625, "token_count": 185, "score": 3.453125, "int_score": 3 }
Science and inventions of Leonardo da Vinci
Leonardo Da Vinci
Born: Leonardo di ser Piero da Vinci, April 15, 1452, Vinci, Italy
Died: May 2, 1519, Amboise, Indre-et-Loire, France
Known for: Polymath: painter, sculptor, architect, musician, scientist, mathematician, engineer, inventor, anatomist, geologist, cartographer, botanist and writer
Notable work: Paintings including the Mona Lisa and The Last Supper; many scientific drawings including The Vitruvian Man
Leonardo da Vinci (1452–1519) was an Italian polymath, regarded as the epitome of the "Renaissance Man", displaying skills in numerous diverse areas of study. Whilst most famous for his paintings such as the Mona Lisa and the Last Supper, Leonardo is also renowned in the fields of civil engineering, chemistry, geology, geometry, hydrodynamics, mathematics, mechanical engineering, optics, physics, pyrotechnics, and zoology. While the full extent of his scientific studies has only become recognized in the last 150 years, he was, during his lifetime, employed for his engineering and skill of invention. Many of his designs, such as the movable dikes to protect Venice from invasion, proved too costly or impractical. Some of his smaller inventions entered the world of manufacturing unheralded. As an engineer, Leonardo conceived ideas vastly ahead of his own time, conceptually inventing an improved version of the helicopter (building on the ancient Chinese helicopter toy), an armoured fighting vehicle, the use of concentrated solar power, a calculator, a rudimentary theory of plate tectonics and the double hull. In practice, he greatly advanced the state of knowledge in the fields of anatomy, astronomy, civil engineering, optics, and the study of water (hydrodynamics).
Contents: Condensed biography; Approach to scientific investigation; Leonardo's notes and journals; Publication; Natural science; Mathematical studies; Engineering and invention; Leonardo's inventions made reality; Leonardo's projects; Models based on Leonardo's drawings; See also; Notes; References; Further reading; External links.
NOTE: This is a brief summary of Leonardo's early life and journals with particular emphasis on his introduction to science. Leonardo da Vinci (April 15, 1452 – May 2, 1519) was born the illegitimate son of Messer Piero, a notary, and Caterina, a peasant woman. His early life was spent in the region of Vinci, in the valley of the Arno River near Florence, firstly with his mother and in later childhood in the household of his father, grandfather and uncle Francesco. His curiosity and interest in scientific observation were stimulated by his uncle Francesco, while his grandfather's keeping of journals set an example which he was to follow for most of his life, diligently recording in his own journals both the events of the day, his visual observations, his plans and his projects. The journals of Leonardo contain matters as mundane as grocery lists and as remarkable as diagrams for the construction of a flying machine. In 1466, Leonardo was sent to Florence to the workshop of the artist Verrocchio, in order to learn the skills of an artist. At the workshop, as well as painting and drawing, he learnt the study of topographical anatomy. He was also exposed to a very wide range of technical skills such as drafting, set construction, plasterworking, paint chemistry, and metallurgy.
Among the older artists whose work stimulated Leonardo's scientific interest was Piero della Francesca, then a man in his 60s, who was one of the earliest artists to systematically employ linear perspective in his paintings, and who had a greater understanding of the science of light than any other artist of his date. While Leonardo's teacher, Verrocchio, largely ignored Piero's scientifically disciplined approach to painting, Leonardo and Domenico Ghirlandaio, who also worked at Verrocchio's workshop, did not. Two of Leonardo's earliest paintings, both scenes of the Annunciation show his competent understanding of the linear perspective. Leonardo Da Vinci was profoundly observant of nature, his curiosity having been stimulated in early childhood by his discovery of a deep cave in the mountains and his intense desire to know what lay inside. His earliest dated drawing, 1473, is of the valley of the Arno River, where he lived. It displays some of the many scientific interests that were to obsess him all his life, in particular geology and hydrology. Approach to scientific investigation During the Renaissance, the study of art and science was not perceived as mutually exclusive; on the contrary, the one was seen as informing upon the other. Although Leonardo's training was primarily as an artist, it was largely through his scientific approach to the art of painting, and his development of a style that coupled his scientific knowledge with his unique ability to render what he saw that created the outstanding masterpieces of art for which he is famous. As a scientist, Leonardo had no formal education in Latin and mathematics and did not attend a university. Because of these factors, his scientific studies were largely ignored by other scholars. Leonardo's approach to science was one of intense observation and detailed recording, his tools of investigation being almost exclusively his eyes. His journals give insight into his investigative processes. A recent and exhaustive analysis of Leonardo as a scientist by Fritjof Capra argues that Leonardo was a fundamentally different kind of scientist from Galileo, Newton, and other scientists who followed him, his theorizing and hypothesizing integrating the arts and particularly painting. Capra sees Leonardo's unique integrated, holistic views of science as making him a forerunner of modern systems theory and complexity schools of thought. Leonardo's notes and journals Leonardo kept a series of journals in which he wrote almost daily, as well as separate notes and sheets of observations, comments and plans. He wrote and drew with his left hand, and most of his writing is in mirror script, which makes it difficult to read. Much has survived to illustrate Leonardo's studies, discoveries and inventions. On his death, his writings were left mainly to his pupil Melzi with the apparent intention that his scientific work should be published. This did not take place in Melzi's lifetime, and the writings were eventually bound in different forms and dispersed. Some of his works were published as a Treatise on Painting 165 years after his death. Leonardo illustrated a book on mathematical proportion in art written by his friend Luca Pacioli and called De divina proportione, published in 1509. He was also preparing a major treatise on his scientific observations and mechanical inventions. It was to be divided into a number of sections or "Books", Leonardo leaving some instructions as to how they were to be ordered. Many sections for it appear in his notebooks. 
These pages deal with scientific subjects generally but also specifically as they touch upon the creation of artworks. In relating to art, this is not science that is dependent upon experimentation or the testing of theories. It deals with detailed observation, particularly the observation of the natural world, and includes a great deal about the visual effects of light on different natural substances such as foliage. Begun at Florence, in the house of Piero di Braccio Martelli, on the 22nd day of March 1508. And this is to be a collection without order, taken from many papers which I have copied here, hoping to arrange them later each in its place, according to the subjects of which they may treat. But I believe that before I am at the end of this [task] I shall have to repeat the same things several times; for which, O reader! do not blame me, for the subjects are many and memory cannot retain them [all] and say: ‘I will not write this because I wrote it before.’ And if I wished to avoid falling into this fault, it would be necessary in every case when I wanted to copy [a passage] that, not to repeat myself, I should read over all that had gone before; and all the more since the intervals are long between one time of writing and the next. The lights which may illuminate opaque bodies are of 4 kinds. These are: diffused light as that of the atmosphere... And Direct, as that of the sun... The third is Reflected light; and there is a 4th which is that which passes through [translucent] bodies, as linen or paper or the like. For an artist working in the 15th century, some study of the nature of light was essential. It was by the effective painting of light falling on a surface that modelling, or a three-dimensional appearance was to be achieved in a two-dimensional medium. It was also well understood by artists like Leonardo's teacher, Verrocchio, that an appearance of space and distance could be achieved in a background landscape by painting in tones that were less in contrast and colours that were less bright than in the foreground of the painting. The effects of light on solids were achieved by trial and error, since few artists except Piero della Francesca actually had accurate scientific knowledge of the subject. At the time when Leonardo commenced painting, it was unusual for figures to be painted with extreme contrast of light and shade. Faces, in particular, were shadowed in a manner that was bland and maintained all the features and contours clearly visible. Leonardo broke with this. In the painting generally titled The Lady with an Ermine (about 1483) he sets the figure diagonally to the picture space and turns her head so that her face is almost parallel to her nearer shoulder. The back of her head and the further shoulder are deeply shadowed. Around the ovoid solid of her head and across her breast and hand the light is diffused in such a way that the distance and position of the light in relation to the figure can be calculated. Leonardo's treatment of light in paintings such as The Virgin of the Rocks and the Mona Lisa was to change forever the way in which artists perceived light and used it in their paintings. Of all Leonardo's scientific legacies, this is probably the one that had the most immediate and noticeable effect. ...to obtain a true and perfect knowledge [of the vascular system]... I have dissected more than ten human bodies, destroying all the other members, and removing the very minutest particles of the flesh by which these veins are surrounded, ... 
and as one single body would not last so long, since it was necessary to proceed with several bodies by degrees, until I came to an end and had a complete knowledge; this I repeated twice, to learn the differences... Leonardo began the formal study of the topographical anatomy of the human body when apprenticed to Andrea del Verrocchio. As a student he would have been taught to draw the human body from life, to memorize the muscles, tendons and visible subcutaneous structure and to familiarise himself with the mechanics of the various parts of the skeletal and muscular structure. It was common workshop practice to have plaster casts of parts of the human anatomy available for students to study and draw. If, as is thought to be the case, Leonardo painted the torso and arms of Christ in The Baptism of Christ on which he famously collaborated with his master Verrocchio, then his understanding of topographical anatomy had surpassed that of his master at an early age as can be seen by a comparison of the arms of Christ with those of John the Baptist in the same painting. In the 1490s he wrote about demonstrating muscles and sinews to students: Remember that to be certain of the point of origin of any muscle, you must pull the sinew from which the muscle springs in such a way as to see that muscle move, and where it is attached to the ligaments of the bones. His continued investigations in this field occupied many pages of notes, each dealing systematically with a particular aspect of anatomy. It appears that the notes were intended for publication, a task entrusted on his death to his pupil Melzi. In conjunction with studies of aspects of the body are drawings of faces displaying different emotions and many drawings of people suffering facial deformity, either congenital or through illness. Some of these drawings, generally referred to as "caricatures", on analysis of the skeletal proportions, appear to be based on anatomical studies. As Leonardo became successful as an artist, he was given permission to dissect human corpses at the hospital Santa Maria Nuova in Florence. Later he dissected in Milan at the hospital Maggiore and in Rome at the hospital Santo Spirito (the first mainland Italian hospital). From 1510 to 1511 he collaborated in his studies with the doctor Marcantonio della Torre. I have removed the skin from a man who was so shrunk by illness that the muscles were worn down and remained in a state like thin membrane, in such a way that the sinews instead of merging in muscles ended in wide membrane; and where the bones were covered by the skin they had very little over their natural size. In 30 years, Leonardo dissected 30 male and female corpses of different ages. Together with Marcantonio, he prepared to publish a theoretical work on anatomy and made more than 200 drawings. However, his book was published only in 1680 (161 years after his death) under the heading Treatise on painting. Among the detailed images that Leonardo drew are many studies of the human skeleton. He was the first to describe the double S form of the backbone. He also studied the inclination of pelvis and sacrum and stressed that sacrum was not uniform, but composed of five fused vertebrae. He also studied the anatomy of the human foot and its connection to the leg, and from these studies, he was able to further his studies in biomechanics. Leonardo was a physiologist as well as an anatomist, studying the function of the human body as well as examining and recording its structure. 
He dissected and drew the human skull and cross-sections of the brain, transversal, sagittal, and frontal. These drawings may be linked to a search for the sensus communis, the locus of the human senses, which, by Medieval tradition, was located at the exact physical center of the skull. Leonardo studied internal organs, being the first to draw the human appendix and the lungs, mesentery, urinary tract, reproductive organs, the muscles of the cervix and a detailed cross-section of coitus. He was one of the first to draw a scientific representation of the fetus in utero. Leonardo studied the vascular system and drew a dissected heart in detail. He correctly worked out how heart valves ebb the flow of blood yet he did not fully understand circulation as he believed that blood was pumped to the muscles where it was consumed. In 2005 a UK heart surgeon, Francis Wells, from Papworth Hospital Cambridge, pioneered repair to damaged hearts, using Leonardo's depiction of the opening phase of the mitral valve to operate without changing its diameter, allowing an individual to recover more quickly. Wells said "Leonardo had a depth of appreciation of the anatomy and physiology of the body - its structure and function - that perhaps has been overlooked by some." Leonardo's observational acumen, drawing skill, and the clarity of depiction of bone structures reveal him at his finest as an anatomist. However, his depiction of the internal soft tissues of the body is incorrect in many ways, showing that he maintained concepts of anatomy and functioning that were in some cases millennia old, and that his investigations were probably hampered by the lack of preservation techniques available at the time. Leonardo's detailed drawing of the internal organs of a woman reveals many traditional misconceptions. Leonardo not only studied human anatomy, but the anatomy of many other animals as well. He dissected cows, birds, monkeys and frogs, comparing in his drawings their anatomical structure with that of humans. On one page of his journal Leonardo drew five profile studies of a horse with its teeth bared in anger and, for comparison, a snarling lion and a snarling man. I have found that in the composition of the human body as compared with the bodies of animals, the organs of sense are duller and coarser... I have seen in the Lion tribe that the sense of smell is connected with part of the substance of the brain which comes down the nostrils, which form a spacious receptacle for the sense of smell, which enters by a great number of cartilaginous vesicles with several passages leading up to where the brain, as before said, comes down. In the early 1490s Leonardo was commissioned to create a monument in honour of Francesco Sforza. In his notebooks are a series of plans for an equestrian monument. There are also a large number of related anatomical studies of horses. They include several diagrams of a standing horse with the angles and proportions annotated, anatomical studies of horses' heads, a dozen detailed drawings of hooves and numerous studies and sketches of horses rearing. He studied the topographical anatomy of a bear in detail, making many drawings of its paws. There is also a drawing of the muscles and tendons of the bear's hind feet. Other drawings of particular interest include the uterus of a pregnant cow, the hindquarters of a decrepit mule and studies of the musculature of a little dog.
All the branches of a tree at every stage of its height when put together are equal in thickness to the trunk [below them]. The science of botany was long established by Leonardo's time, a treatise on the subject having been written as early as 300 BCE. Leonardo's study of plants, resulting in many beautiful drawings in his notebooks, was not to record in diagramatic form the parts of the plant, but rather, as an artist and observer to record the precise appearance of plants, the manner of growth and the way that individual plants and flowers of a single variety differed from one another. One such study shows a page with several species of flower of which ten drawings are of wild violets. Along with a drawing of the growing plant and a detail of a leaf, Leonardo has repeatedly drawn single flowers from different angles, with their heads set differently on the stem. Apart from flowers the notebooks contain many drawings of crop plants including several types of grain and a variety of berries including a detailed study of bramble. There are also water plants such as irises and sedge. His notebooks also direct the artist to observe how light reflects from foliage at different distances and under different atmospheric conditions. A number of the drawings have their equivalents in Leonardo's paintings. An elegant study of a stem of lilies may have been for one of Leonardo's early Annunciation paintings, carried in the hand of the Archangel Gabriel. In both the Annunciation pictures the grass is dotted with blossoming plants. The plants which appear in both the versions of The Virgin of the Rocks demonstrate the results of Leonardo's studies in a meticulous realism that makes each plant readily identifiable to the botanist. As an adult, Leonardo had only two childhood memories, one of which was the finding of a cave in the Apennines. Although fearing that he might be attacked by a wild beast, he ventured in driven "by the burning desire to see whether there might be any marvelous thing within." Leonardo's earliest dated drawing is a study of the Arno Valley, strongly emphasizing its geological features. His notebooks contain landscapes with a wealth of geological observation from the regions of both Florence and Milan, often including atmospheric effects such as a heavy rainstorm pouring down on a town at the foot of a mountain range. It had been observed for many years that strata in mountains often contained bands of sea shells. Conservative science said that these could be explained by the Great Flood described in the Bible. Leonardo's observations convinced him that this could not possibly be the case. And a little beyond the sandstone conglomerate, a tufa has been formed, where it turned towards Castel Florentino; farther on, the mud was deposited in which the shells lived, and which rose in layers according to the levels at which the turbid Arno flowed into that sea. And from time to time the bottom of the sea was raised, depositing these shells in layers, as may be seen in the cutting at Colle Gonzoli, laid open by the Arno which is wearing away the base of it; in which cutting the said layers of shells are very plainly to be seen in clay of a bluish colour, and various marine objects are found there. This quotation makes clear the breadth of Leonardo's understanding of geology, including the action of water in creating sedimentary rock, the tectonic action of the Earth in raising the sea bed and the action of erosion in the creation of geographical features. 
In Leonardo's earliest paintings we see the remarkable attention given to the small landscapes of the background, with lakes and water, swathed in a misty light. In the larger of the Annunciation paintings is a town on the edge of a lake. Although distant, the mountains can be seen to be scored by vertical strata. This characteristic can be observed in other paintings by Leonardo, and closely resembles the mountains around Lago di Garda and Lago d'Iseo in Northern Italy. It is a particular feature of both the paintings of The Virgin of the Rocks, which also include caverns of fractured, tumbled, and water-eroded limestone. In the early 16th century maps were rare and often inaccurate. Leonardo produced several extremely accurate maps such as the town plan of Imola created in 1502 in order to win the patronage of Cesare Borgia. Borgia was so impressed that he hired him as a military engineer and architect. Leonardo also produced a map of Chiana Valley in Tuscany, which he surveyed, without the benefit of modern equipment, by pacing the distances. In 1515, Leonardo produced a map of the Roman Southern Coast which is linked to his work for the Vatican and relates to his plans to drain the marshland. Recent research by Donato Pezzutto suggests that the background landscapes in Leonardo’s paintings depict specific locations as aerial views with enhanced depth, employing a technique called cartographic perspective. Pezzutto identifies the location of the Mona Lisa to the Val di Chiana, the Annunciation to the Arno Valley, the Madonna of the Yarnwinder to the Adda Valley and The Virgin and Child with St Anne to the Sessia Valley. All the branches of a water [course] at every stage of its course, if they are of equal rapidity, are equal to the body of the main stream. Among Leonardo's drawings are many that are studies of the motion of water, in particular the forms taken by fast-flowing water on striking different surfaces. Many of these drawings depict the spiralling nature of water. The spiral form had been studied in the art of the Classical era and strict mathematical proportion had been applied to its use in art and architecture. An awareness of these rules of proportion had been revived in the early Renaissance. In Leonardo's drawings can be seen the investigation of the spiral as it occurs in water. There are several elaborate drawings of water curling over an object placed at a diagonal to its course. There are several drawings of water dropping from a height and curling upwards in spiral forms. One such drawing, as well as curling waves, shows splashes and details of spray and bubbles. Leonardo's interest manifested itself in the drawing of streams and rivers, the action of water in eroding rocks, and the cataclysmic action of water in floods and tidal waves. The knowledge that he gained from his studies was employed in devising a range of projects, particularly in relation to the Arno River. None of the major works was brought to completion. The earth is not in the centre of the Sun’s orbit nor at the centre of the universe, but in the centre of its companion elements, and united with them. And any one standing on the moon, when it and the sun are both beneath us, would see this our earth and the element of water upon it just as we see the moon, and the earth would light it as it lights us. Claims are sometimes made that Leonardo da Vinci was an alchemist. He was trained in the workshop of Verrocchio, who according to Vasari, was an able alchemist. 
Leonardo was a chemist in so much as that he experimented with different media for suspending paint pigment. In the painting of murals, his experiments resulted in notorious failures with the Last Supper deteriorating within a century, and the Battle of Anghiari running off the wall. In Leonardo's many pages of notes about artistic processes, there are some that pertain to the use of silver and gold in artworks, information he would have learned as a student. Leonardo's scientific process was based mainly upon observation. His practical experiments are also founded in observation rather than belief. Leonardo, who questioned the order of the solar system and the deposit of fossils by the Great Flood, had little time for the alchemical quests to turn lead into gold or create a potion that gave eternal life. Leonardo said about alchemists: The false interpreters of nature declare that quicksilver is the common seed of every metal, not remembering that nature varies the seed according to the variety of the things she desires to produce in the world. Old alchemists... have never either by chance or by experiment succeeded in creating the smallest element that can be created by nature; however [they] deserve unmeasured praise for the usefulness of things invented for the use of men, and would deserve it even more if they had not been the inventors of noxious things like poisons and other similar things which destroy life or mind." And many have made a trade of delusions and false miracles, deceiving the stupid multitude. The art of perspective is of such a nature as to make what is flat appear in relief and what is in relief flat. During the early 15th century, both Brunelleschi and Alberti made studies of linear perspective. In 1436 Alberti published "della Pittura" ("On Painting"), which includes his findings on linear perspective. Piero della Francesca carried his work forward and by the 1470s a number of artists were able to produce works of art that demonstrated a full understanding of the principles of linear perspective. Leonardo studied linear perspective and employed it in his earlier paintings. His use of perspective in the two Annunciations is daring, as he uses various features such as the corner of a building, a walled garden and a path to contrast enclosure and spaciousness. The unfinished Adoration of the Magi was intended to be a masterpiece revealing much of Leonardo's knowledge of figure drawing and perspective. There exists a number of studies that he made, including a detailed study of the perspective, showing the complex background of ruined Classical buildings that he planned for the left of the picture. In addition, Leonardo is credited with the first use of anamorphosis, the use of a "perspective" to produce an image that is intelligible only with a curved mirror or from a specific vantage point. Those who are in love with practice without knowledge are like the sailor who gets into a ship without rudder or compass and who never can be certain whether he is going. Practice must always be founded on sound theory, and to this Perspective is the guide and the gateway; and without this nothing can be done well in the matter of drawing. While in Milan in 1496 Leonardo met a traveling monk and academic, Luca Pacioli. Under him, Leonardo studied mathematics. 
Pacioli, who first codified and recorded the double entry system of bookkeeping, had already published a major treatise on mathematical knowledge, collaborated with Leonardo in the production of a book called "De divina proportione" about mathematical and artistic proportion. Leonardo prepared a series of drawings of regular solids in a skeletal form to be engraved as plates. "De divina proportione" was published in 1509. All the problems of perspective are made clear by the five terms of mathematicians, which are:—the point, the line, the angle, the superficies and the solid. The point is unique of its kind. And the point has neither height, breadth, length, nor depth, whence it is to be regarded as indivisible and as having no dimensions in space. Engineering and invention He made designs for mills, fulling machines and engines that could be driven by water-power... In addition he used to make models and plans showing how to excavate and tunnel through mountains without difficulty, so as to pass from one level to another; and he demonstrated how to lift and draw great weights by means of levers, hoists and winches, and ways of cleansing harbours and using pumps to suck up water from great depths. Practical inventions and projects Leonardo was a master of mechanical principles. He utilized leverage and cantilevering, pulleys, cranks, gears, including angle gears and rack and pinion gears; parallel linkage, lubrication systems and bearings. He understood the principles governing momentum, centripetal force, friction and the aerofoil and applied these to his inventions. His scientific studies remained unpublished with, for example, his manuscripts describing the processes governing friction predating the introduction of Amontons' Laws of Friction by 150 years. It is impossible to say with any certainty how many or even which of his inventions passed into general and practical use, and thereby had impact over the lives of many people. Among those inventions that are credited with passing into general practical use are the strut bridge, the automated bobbin winder, the rolling mill, the machine for testing the tensile strength of wire and the lens-grinding machine pictured at right. In the lens-grinding machine, the hand rotation of the grinding wheel operates an angle-gear, which rotates a shaft, turning a geared dish in which sits the glass or crystal to be ground. A single action rotates both surfaces at a fixed speed ratio determined by the gear. As an inventor, Leonardo was not prepared to tell all that he knew: How by means of a certain machine many people may stay some time under water. How and why I do not describe my method of remaining under water, or how long I can stay without eating; and I do not publish nor divulge these by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them. And although I will impart others, there is no danger in them; because the mouth of the tube, by which you breathe, is above the water supported on bags of corks. Bridges and hydraulics Leonardo's study of the motion of water led him to design machinery that utilized its force. Much of his work on hydraulics was for Ludovico il Moro. 
Leonardo wrote to Ludovico describing his skills and what he could build: …very light and strong bridges that can easily be carried, with which to pursue, and sometimes flee from, the enemy; and others safe and indestructible by fire or assault, easy and convenient to transport and place into position. Among his projects in Florence was one to divert the course of the Arno, in order to flood Pisa. Fortunately, this was too costly to be carried out. He also surveyed Venice and came up with a plan to create a movable dyke for the city's protection against invaders. In 1502, Leonardo produced a drawing of a single-span 240 m (about 790 ft) bridge as part of a civil engineering project for Ottoman Sultan Beyazid II of Istanbul. The bridge was intended to span an inlet at the mouth of the Bosphorus known as the Golden Horn. Beyazid did not pursue the project, because he believed that such a construction was impossible. Leonardo's vision was resurrected in 2001 when a smaller bridge based on his design was constructed in Norway. Leonardo's letter to Ludovico il Moro assured him: When a place is besieged I know how to cut off water from the trenches and construct an infinite variety of bridges, mantlets and scaling ladders, and other instruments pertaining to sieges. I also have types of mortars that are very convenient and easy to transport.... when a place cannot be reduced by the method of bombardment either because of its height or its location, I have methods for destroying any fortress or other stronghold, even if it be founded upon rock. ....If the engagement be at sea, I have many engines of a kind most efficient for offence and defence, and ships that can resist cannons and powder. In Leonardo's notebooks there is an array of war machines which includes a vehicle to be propelled by two men powering crank shafts. Although the drawing itself looks quite finished, the mechanics were apparently not fully developed because, if built as drawn, the vehicle would never progress in a forward direction. In a BBC documentary, a military team built the machine and changed the gears in order to make the machine work. It has been suggested that Leonardo deliberately left this error in the design, in order to prevent it from being put into practice by unauthorized people. Another machine, propelled by horses with a pillion rider, carries in front of it four scythes mounted on a revolving gear, turned by a shaft driven by the wheels of a cart behind the horses. Leonardo's notebooks also show cannons which he claimed "to hurl small stones like a storm with the smoke of these causing great terror to the enemy, and great loss and confusion." He also designed an enormous crossbow. Following his detailed drawing, one was constructed by the British Army, but could not be made to fire successfully. In 1481 Leonardo designed a breech-loading, water-cooled cannon with three racks of barrels, which allowed the re-loading of one rack while another was being fired, thus maintaining continuous fire power. The "fan type" gun with its array of horizontal barrels allowed for a wide scattering of shot. Leonardo was the first to sketch the wheel-lock musket c. 1500 AD (the predecessor of the flintlock musket which first appeared in Europe by 1547), although as early as the 14th century the Chinese had used a flintlock 'steel wheel' in order to detonate land mines. While Leonardo was working in Venice, he drew a sketch for an early diving suit, to be used in the destruction of enemy ships entering Venetian waters.
A suit was constructed for a BBC documentary using pigskin treated with fish oil to repel water. The head was covered by a helmet with two eyeglasses at the front. A breathing tube of bamboo with pigskin joints was attached to the back of the helmet and connected to a float of cork and wood. When the scuba divers tested the suit, they found it to be a workable precursor to a modern diving suit, the cork float acting as a compressed air chamber when submerged. His inventions were often so far ahead of their time that they would have been very expensive to build and of little practical use in his own day. In Leonardo's infancy a hawk had once hovered over his cradle. Recalling this incident, Leonardo saw it as prophetic. An object offers as much resistance to the air as the air does to the object. You may see that the beating of its wings against the air supports a heavy eagle in the highest and rarest atmosphere, close to the sphere of elemental fire. Again you may see the air in motion over the sea, fill the swelling sails and drive heavily laden ships. From these instances, and the reasons given, a man with wings large enough and duly connected might learn to overcome the resistance of the air, and by conquering it, succeed in subjugating it and rising above it. The desire to fly is expressed in the many studies and drawings. His later journals contain a detailed study of the flight of birds and several different designs for wings based in structure upon those of bats, which he described as being less heavy because of the impenetrable nature of the membrane. There is a legend that Leonardo tested the flying machine with one of his apprentices, and that the apprentice fell and broke his leg. Experts Martin Kemp and Liana Bortolon agree that there is no evidence of such a test, which is not mentioned in his journals. One design that he produced shows a flying machine to be lifted by a man-powered rotor. It would not have worked, since the body of the craft itself would have rotated in the opposite direction to the rotor. In another design, for a mechanical musical instrument, Leonardo's original idea, as preserved in his notebooks of 1488–1489 and in the drawings in the Codex Atlanticus, was to use one or more wheels, continuously rotating, each of which pulled a looping bow, rather like a fanbelt in an automobile engine, and perpendicular to the instrument's strings. Leonardo's inventions made reality In the late 20th century, interest in Leonardo's inventions escalated. There have been many projects which have sought to turn diagrams on paper into working models. One of the factors is the awareness that, although in the 15th and 16th centuries Leonardo had available a limited range of materials, modern technological advancements have made available a number of robust, lightweight materials which might turn Leonardo's designs into reality. This is particularly the case with his designs for flying machines. A difficulty encountered in the creation of models is that often Leonardo had not entirely thought through the mechanics of a machine before he drew it, or else he used a sort of graphic shorthand, simply not bothering to draw a gear or a lever at a point where one is essential in order to make a machine function. This lack of refinement of mechanical details can cause considerable confusion. Thus many models that are created, such as some of those on display at Clos Luce, Leonardo's home in France, do not work, but would work with a little mechanical tweaking.
- Leonardo da Vinci Gallery at Museo Nazionale della Scienza e della Tecnologia "Leonardo da Vinci" in Milan; permanent exhibition, the biggest collection of Leonardo's projects and inventions. - Models of Leonardo's designs are on permanent display at Clos Luce. - The Victoria and Albert Museum, London, held an exhibition called "Leonardo da Vinci: Experience, Experiment and Design" in 2006 - Logitech Museum - "The Da Vinci Machines Exhibition" was held in a pavilion in the Cultural Forecourt, at South Bank, Brisbane, Queensland, Australia in 2009. The exhibits shown were on loan from the Museum of Leonardo da Vinci, Florence, Italy. - The U.S. Public Broadcasting Service (PBS), aired in October 2005, a television programme called Leonardo's Dream Machines, about the building and successful flight of a glider based upon Leonardo's design. - The Discovery Channel began a series called Doing DaVinci in April 2009, in which a team of builders try to construct various da Vinci inventions based on his designs. Models based on Leonardo's drawings - Studies of the Fetus in the Womb—two colored annotated sketches by Leonardo da Vinci - List of works by Leonardo da Vinci - Leonardo da Vinci's personal life - Topographical anatomy is the anatomy that is visible on the surface of the body. - Liana Bortolon, The Life and Times of Leonardo, Paul Hamlyn, 1967 - Capra, Fritjof. The Science of Leonardo; Inside the Mind of the Genius of the Renaissance. (New York, Doubleday, 2007) - Jean Paul Richter editor 1880, The Notebooks of Leonardo da Vinci Dover, 1970, ISBN 0-486-22572-0. (accessed 2007-02-04) - - "Da Vinci clue for heart surgeon". BBC News. 2005-09-28. Retrieved 2013-07-18. - Martin Kemp, Leonardo, Oxford University Press, (2004) ISBN 0-19-280644-0 - E.g. Theophrastus, On the History of Plants. - The London painting of the Virgin of the Rocks is denounced by the geologist Ann C. Pizzorusso, of New York, as largely by the hand of someone other than Leonardo, because the rocks appear incongruous and the lake looks like a fjord. Pizzorusso says "Fjords do not exist in Italy and it is highly unlikely the glacial lakes of the Lombard region would have such steep relief surrounding them." In fact, the glacial lake, Garda, has just such steep geological formations. The sedimentary red limestone which appears in the picture is also typical of Italy. - Pezzutto, Donato (2012-10-24). "Leonardo's Landscapes as Maps". OPUSeJ. Retrieved 2012-11-07. - See Da Vinci's notebooks on astronomy. - Bruce T. Moran, Distilling Knowledge, Chemistry, Alchemy and the Scientific Revolution, (2005) ISBN 0-674-01495-2 - "Quicksilver" is an old name for mercury. - Irma Ann Richter and Teresa Wells, Leonardo da Vinci - Notebooks, Oxford University Press (2008) ISBN 978-0-19-929902-7 - "Animations of anamorphosis of Leonardo and other artists". Illusionworks.com. Retrieved 2013-07-18. - L. Murphy Smith, Luca Pacioli: The Father of Accounting - "Leonardo da Vinci (1452–1519)". Nano-world.org. Retrieved 2013-07-24. - "Da Vinci war machines "designed to fail"". The Age. Melbourne. December 14, 2002. - Needham, Volume 5, Part 7, 199. - "Youtube Video of the BBC documentary". - Liana Bortolon, Leonardo, Paul Hamlyn, (1967) - "The Helicopter » Leonardo Da Vinci's Inventions". leonardodavincisinventions.com. Retrieved 2016-03-21. - see Helicopter for detailed description of solutions and types of functional helicopter. - U.S. Public Broadcasting Service (PBS), Leonardo's Dream Machine, October 2005 - "Leonardo". Museoscienzaorg. 
Retrieved May 16, 2016. - About Doing DaVinci : Doing DaVinci : Discovery Channel Archived April 19, 2009, at the Wayback Machine. - Bsmbach, Carmen (2003). Leonardo da Vinci, Master Draftsman. New Haven: Yale University Press. p. 414. ISBN 0-300-09878-2. - Moon, Francis C. (2007). The Machines of Leonardo da Vinci and Franz Reuleaux, Kinematics of Machines from the Renaissance to the 20th Century. Springer. ISBN 978-1-4020-5598-0. - Capra, Fritjof (2007). The Science of Leonardo; Inside the Mind of the Genius of the Renaissance. New York: Doubleday. - The Art of War: Leonardo da Vinci's War Machines - Complete text & images of Richter's translation of the Notebooks - Leonardo da Vinci: Experience, Experiment, Design (review) - Some digitized notebook pages with explanations from the British Library (Non HTML5 Available) - Digital and animated compendium of anatomy notebook pages - BBC Leonardo homepage - Leonardo da Vinci: The Leicester Codex - Leonardo's Letter to Ludovico Sforza - Animations of anamorphosis of Leonardo and other artists - The Invention of the Parachute - Da Vinci - The Genius: A comprehensive traveling exhibition about Leonardo da Vinci - The technical drawings of Leonardo da Vinci - a high resolution gallery - Leonardo da Vinci: anatomical drawings from the Royal Library, Windsor Castle, exhibition catalog fully online as PDF from The Metropolitan Museum of Art - Leonardo da Vinci, Master Draftsman, exhibition catalog fully online as PDF from The Metropolitan Museum of Art
<urn:uuid:06035b1d-2734-4584-93e6-afdccdd86d2f>
{ "dump": "CC-MAIN-2016-44", "url": "https://en.wikipedia.org/wiki/Science_and_inventions_of_Leonardo_da_Vinci", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719041.14/warc/CC-MAIN-20161020183839-00192-ip-10-171-6-4.ec2.internal.warc.gz", "language": "en", "language_score": 0.9669106602668762, "token_count": 9235, "score": 3.546875, "int_score": 4 }
After receiving the diagnosis of heart disease, one of the first questions is always "Why did I get coronary heart disease?" We all know the traditional risk factors for heart disease: high cholesterol, smoking, obesity, poor diet, diabetes, family history, lack of exercise. Should we add stress and negative emotions to this list? Even after hundreds of scientific studies on this topic, the question still generates controversy and, in some cases, intense emotion. While causality is unproven, there appears to be an association between coronary heart disease and anger, anxiety, and depression. Some suggest this could be due to the damaging behaviors that often accompany negative emotions (e.g., smoking, poor diet, lack of exercise), or the physiological effects that emotions can have on the heart and blood vessels (e.g., high blood pressure). Later, we will delve more deeply into how emotional stress affects the heart. For now, we will present the evidence suggesting an association between development of heart disease and two negative emotions, anger and anxiety. The relationship between the heart and the constellation of emotional traits that includes anger, hostility, and cynicism has been examined extensively in observational studies. Although such studies are not considered the highest level of evidence, the findings are so consistent that they can't be ignored. For some of us, anger and frustration begin early in the day. Cut off by the guy in the Pinto. Then stuck behind a slow truck -- what is wrong with that guy? Got around the truck, but still missed every single light. Almost there, but reached the final intersection just as road construction began -- why do they have to do this during rush hour, anyway? Spilled the coffee reaching for the parking pass. Why can't they have automatic parking gates? Arrived at work ten minutes late, frustrated and mad at the world. If this is your typical morning, you are not going to like what we have to say next. Repeated bouts of anger are associated with thickening of the arteries and development of plaque, possible precursors to heart attacks. Over time, people who are frequently angry appear to have an increased risk of developing coronary heart disease. The stronger and more frequent the bouts of anger, the greater the risk of heart disease. Some scientists argue that expressing anger, rather than holding it in, is better for the heart. This is unproven. Yelling might make you feel better for a moment, but it will probably endanger your job or your relationships -- leading to further stress and anger. A more effective strategy includes avoiding situations that trigger anger, and managing anger when it does occur. People who anger easily are frequently also pessimistic and cynical, and pessimism is bad for your heart. In an observational study of nearly 100,000 women, those with a pessimistic, cynical disposition developed more coronary heart disease, had more heart attacks, and died earlier than optimists. Cynical women were also more likely to develop cancer. Like anger, anxiety may forecast the development of coronary heart disease. We all know the tight feeling that we get in the chest when we become very anxious, so we should not be surprised to learn that this emotion can affect heart health. The circumstantial evidence supporting anxiety as a marker of heart risk is strong. 
In an observational study of 50,000 18- to 20-year-old Swedish men, those with high levels of anxiety substantially increased their risk of developing coronary heart disease over the next 37 years. A recent meta-analysis incorporating twenty studies and nearly 250,000 individuals also found that anxiety is associated with development of coronary heart disease. Once again, the more frequent and intense the anxiety and worry, the more likely the development of heart disease. Veterans with post-traumatic stress disorder, which is characterized by intense anxiety, tend to have more calcium (a marker of coronary blockages) in their hearts' arteries than do soldiers without the disorder. On the civilian front, people who suffer from panic disorders face an increased risk of developing heart problems. Excerpted from Heart 411: The Only Guide to Heart Health You'll Ever Need (Three Rivers Press)
<urn:uuid:4f87c56c-7069-4932-9645-befc7f4a85e3>
{ "dump": "CC-MAIN-2014-52", "url": "http://www.theatlantic.com/health/archive/2012/02/do-negative-emotions-stress-and-anxiety-lead-to-heart-disease/251540/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774899.57/warc/CC-MAIN-20141217075254-00126-ip-10-231-17-201.ec2.internal.warc.gz", "language": "en", "language_score": 0.954206645488739, "token_count": 833, "score": 2.609375, "int_score": 3 }
Video Message by António Guterres, Secretary-General of the United Nations, on the Launch of the Policy Brief on Education and COVID-19 4 August 2020 Education is the key to personal development and the future of societies. It unlocks opportunities and narrows inequalities. It is the bedrock of informed, tolerant societies, and a primary driver of sustainable development. The COVID-19 pandemic has led to the largest disruption of education ever. In mid-July, schools were closed in more than 160 countries, affecting over 1 billion students. At least 40 million children worldwide have missed out on education in their critical pre-school year. And parents, especially women, have been forced to assume heavy care burdens in the home. Despite the delivery of lessons by radio, television and online, and the best efforts of teachers and parents, many students remain out of reach. Learners with disabilities, those in minority or disadvantaged communities, displaced and refugee students and those in remote areas are at the highest risk of being left behind. And even for those who can access distance learning, success depends on their living conditions, including the fair distribution of domestic duties. We already faced a learning crisis before the pandemic. More than 250 million school-age children were out of school. And only a quarter of secondary school children in developing countries were leaving school with basic skills. Now we face a generational catastrophe that could waste untold human potential, undermine decades of progress, and exacerbate entrenched inequalities. The knock-on effects on child nutrition, child marriage and gender equality, among others, are deeply concerning. This is the backdrop to the Policy Brief I am launching today, together with a new campaign with education partners and United Nations agencies called ‘Save our Future’. We are at a defining moment for the world’s children and young people. The decisions that governments and partners take now will have lasting impact on hundreds of millions of young people, and on the development prospects of countries for decades to come. This Policy Brief calls for action in four key areas: First, reopening schools. Once local transmission of COVID-19 is under control, getting students back into schools and learning institutions as safely as possible must be a top priority. We have issued guidance to help governments in this complex endeavour. It will be essential to balance health risks against the risks to children’s education and protection, and to factor in the impact on women’s labour force participation. Consultation with parents, carers, teachers and young people is fundamental. Second, prioritizing education in financing decisions. Before the crisis hit, low- and middle-income countries already faced an education funding gap of $1.5 trillion dollars a year. This gap has now grown. Education budgets need to be protected and increased. And it is critical that education is at the heart of international solidarity efforts, from debt management and stimulus packages to global humanitarian appeals and official development assistance. Third, targeting the hardest to reach. Education initiatives must seek to reach those at greatest risk of being left behind – people in emergencies and crises; minority groups of all kinds; displaced people and those with disabilities. They should be sensitive to the specific challenges faced by girls, boys, women and men, and should urgently seek to bridge the digital divide. 
Fourth, the future of education is here. We have a generational opportunity to reimagine education. We can take a leap towards forward-looking systems that deliver quality education for all as a springboard for the Sustainable Development Goals. To achieve this, we need investment in digital literacy and infrastructure, an evolution towards learning how to learn, a rejuvenation of life-long learning and strengthened links between formal and non-formal education. And we need to draw on flexible delivery models [methods], digital technologies and modernized curricula while ensuring sustained support for teachers and communities. As the world faces unsustainable levels of inequality, we need education – the great equalizer – more than ever. We must take bold steps now, to create inclusive, resilient, quality education systems fit for the future.
<urn:uuid:003b1c2e-4f5d-402b-8b6f-733cb66b53ec>
{ "dump": "CC-MAIN-2020-40", "url": "https://www.en84.com/9569.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00042.warc.gz", "language": "en", "language_score": 0.6769272685050964, "token_count": 862, "score": 3, "int_score": 3 }
Colors and Color Guards Flags are almost as old as civilization itself. Imperial Egypt and the armies of Babylon and Assyria followed the colors of their kings. Ancient texts mention banners and standards. The flags that identified nations were usually based on the personal or family heraldry of the reigning monarch. As autocracies faded or disappeared, dynastic colors were no longer suitable and national flags came into being. These national flags, such as the Union Jack of Great Britain, the Tricolor of France and the Stars and Stripes, are relatively new to history. When the struggle for independence united the colonies, there grew a desire for a single flag to represent the new Nation. The first flag borne by our Army representing the 13 colonies was the grand union flag. It was raised over the Continental Army at Cambridge, Massachusetts, on 2 January 1776. The Stars and Stripes as we now know it was born on 14 June 1777. The flags carried by Color-bearing units are called the national and organizational colors. The Colors may be carried in any formation in which two or more company honor guards or representative elements of a command participate. The Command Sergeant Major is responsible for the safeguarding, care and display of the organizational color. He is also responsible for the selection, training and performance of the Color bearers and Color guards. The honorary position for the CSM is two steps to the rear and centered on the Color guard. Because of the importance and visibility of the task, it is an honor to be a member of the Color guard. The detail may consist of three to eight soldiers, usually NCOs. The senior (Color) sergeant carries the National Color and commands the Color guard unless a person is designated as the Color sergeant. The Color sergeant gives the necessary commands for the movements and for rendering honors. The most important aspect of the selection, training and performance of the Color guard is the training. Training requires precision in drills, manual of arms, customs and courtesies and wear and appearance of uniforms and insignia. A well-trained color guard at the front of a unit's formation signifies a sense of teamwork, confidence, pride, alertness, attention to detail, esprit de corps and discipline. The Color Guard detail should perform its functions as much as possible in accordance with ARs 600-25, 670-1 and 840-10 and FM 22-5.
<urn:uuid:1bd05af3-b16d-4e50-8122-bf19aa78c18e>
{ "dump": "CC-MAIN-2016-50", "url": "http://www.armystudyguide.com/content/army_board_study_guide_topics/nco_history/colors-and-color-guards.shtml", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540698.78/warc/CC-MAIN-20161202170900-00298-ip-10-31-129-80.ec2.internal.warc.gz", "language": "en", "language_score": 0.9566457271575928, "token_count": 482, "score": 3.453125, "int_score": 3 }
Inside ACL Injuries ACL injuries are an ever-present threat to the health of professional athletes who participate in high-demand sports like football, basketball, and soccer. The latest victim is the Houston Texans rookie quarterback Deshaun Watson, who suffered a torn ACL during practice this weekend. However, this type of injury does not only target pro athletes: anyone can suffer an ACL injury regardless of age or gender, from kids playing sports at school to weekend warriors who want to have a little fun outdoors; yes, even couch potatoes can sustain an ACL injury. What is an ACL Injury? The knee joint is made up of three bones: the femur (thighbone), the tibia (shinbone) and the patella (kneecap). These three bones connect to each other via ligaments that hold the bones together and provide stability to the knee. There are two kinds of primary ligaments in the knee: Collateral Ligaments: Found on either side of the knee, these ligaments are responsible for controlling the sideways movement of the knee. Cruciate Ligaments: These are inside the knee; there are two of them, the anterior cruciate ligament (ACL) in front and the posterior cruciate ligament in the back. These ligaments cross each other to form an X and are responsible for controlling the back and forth motion of the knee. The ACL prevents the tibia from sliding out in front of the femur and provides rotational stability to the joint. An ACL injury occurs when there is a tear in the ligament. More than half of ACL injuries also cause damage to other structures in the knee such as cartilage, meniscus, and other ligaments. Injured ligaments are considered sprains and, depending on their severity, they can be: Grade 1 Sprains: Ligaments are slightly stretched but are still able to keep the knee stable. Grade 2 Sprains: In this type of sprain the ligament has stretched to the point where it becomes loose; this is considered a partial tear of the ACL. This type of injury is rare. Grade 3 Sprains: Known as a complete tear of the ligament. In these cases, the ligament is so severely stretched it splits into two pieces, causing the knee to become unstable. Causes and Symptoms of ACL Injuries The most common causes of an ACL injury include: Being hit hard on the side of the knee, as is the case in a football tackle. Overextension of the knee joint. Changing direction rapidly. Slowing down while running. Landing from a jump incorrectly. Pivoting with your foot firmly planted. Symptoms of an ACL injury include: Feeling or even hearing a pop in the knee at the time the injury occurs. Pain on the outside and back of the knee. Swelling within the first few hours of the injury; sudden swelling is usually a sign of a serious knee injury. Loss of full range of motion. Instability of the knee joint, feeling as if your knee is "giving way" or buckling with weight bearing. Preventing ACL Injuries Proper training and exercise are the best way to prevent ACL injuries. The ER Trained Physicians at Altus Emergency Centers recommend: Doing exercises to strengthen your leg muscles, particularly the hamstrings, to ensure overall balance in leg muscle strength. Exercises to strengthen your core, which includes your hips, pelvis, and lower abdomen. Use proper techniques and knee position when jumping and landing. Improve pivoting techniques. Wear footwear and padding that is appropriate for the type of activity you are engaging in.
<urn:uuid:bf04e6c7-272a-4e0b-bc18-183f2ab74516>
{ "dump": "CC-MAIN-2019-13", "url": "https://www.altusemergency.com/tag/causes-and-symptoms-of-acl-injuries/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.64/warc/CC-MAIN-20190320170159-20190320192159-00155.warc.gz", "language": "en", "language_score": 0.9315144419670105, "token_count": 776, "score": 2.578125, "int_score": 3 }
Anthropomorphism is the attribution of human qualities to anything other than a human being – in literature it is typically done with animals or natural events. Aesop’s fables are a good place to start if you want to see anthropomorphism in action. The reason that it is immoral is that animals aren’t humans and can’t talk. As such, it’s a deception. Think for a moment, on what life would be like if your pets really could talk. There is no reason to believe they would be kind or even helpful. Your cat for example, is probably a jerk, potty-mouthed, and likely a bit of a passive-aggressive racist. You would be sitting there on your end of the couch, Mr. Tiddles on his and you’d be watching the news while he licked his anus with that raspy pink tongue of his, buffing that ruddy pink spider-bite to a new-car shine. Something would come on and you’d only be half paying attention when he’d stop his counter-clockwise lingual rotation and say “I just don’t trust those people!” You would look over and say “Did you actually just say that? Did I actually just hear that?” and he’d go back to licking himself, very slowly, clockwise this time – for two complete rotations – and without ever breaking eye-contact with you (a challenge to your authority!) he’d stop and say “Why don’t you take a f*cking picture, Fatty. It’ll last longer.” And that kind sirs and good madams, is your cat. Don’t even get me started on your dog. If your dog was an actual person he’d repeat everything you said right back to you as a question, be obsessed with poop, and be like that kid in class the teacher shuts in a closet because he’d immediately stick his hand down his pants anytime he heard his name called, even if it was just to go into the closet for having his hand down his pants. The above illustrate why we should not anthropomorphisize. I don’t even think anthropomorphisize is a real word. It didn’t pass spellcheck here. So don’t do it. Now go take Mr. Tiddles to the vet and have him put down. He deserves it. Keep the faith brothers and sisters, and I’ll be back after Watership Down is over. LOVE that movie.
<urn:uuid:ace91a58-13ed-4dff-aba7-6aabbaa6fc7a>
{ "dump": "CC-MAIN-2017-26", "url": "https://stevepassey.wordpress.com/2014/03/28/anthropomorphism-its-morally-wrong/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323588.51/warc/CC-MAIN-20170628083538-20170628103538-00111.warc.gz", "language": "en", "language_score": 0.9662528038024902, "token_count": 551, "score": 2.890625, "int_score": 3 }
But the electron affinity refers to an isolated gaseous atom, while the E.N. refers to an atom in a molecule. An electronegativity table of the elements has the elements arranged exactly like in a periodic table, except that each atom is labeled with its electronegativity. NaBr exhibits the classic behavior of ionic substances, whereas the other compound is a gas at room temperature. You can then assess the quality of a bond between 2 atoms by looking up their electronegativities on the table and subtracting the smaller one from the larger one. If we take closed-shell elements into account as well, it is neon. Answer: Fluorine. Which element has the lowest E.N.? In other words, the shared pair of electrons do not lie in the middle of the molecule but shift towards the atom having greater electron affinity. The O is more electronegative than the two Hs, so it holds the electrons more tightly and makes the entire molecule partially negative at the O end and partially positive at the H ends. This changes because, as you move from left to right on the periodic table, each additional electron added is not significantly farther away from the nucleus, but the charge in the nucleus increases, so it attracts that electron with a greater force. This pattern will help when you are asked to put several bonds in order from most to least ionic without using the values themselves. If the difference is between 1. The percent ionic character can be estimated from the E.N. difference between the two atoms by the Hannay and Smith relationship. Since Pauling's formula only calculates differences, it is crucial that we are given the electronegativity of one of the atoms in a compound before being able to perform our calculations. You can not talk about how strongly electrons attract when they join, if they do not join. A value between zero and two represents a polar covalent bond. This bond does not contain atoms at all; it consists of two ions. If only nonmetals are involved, the bond is considered polar covalent. If you're asking because you need homework help, it's a completely different process than what you're used to. In between the two exist the polar covalent bonds. I would say that it depends on the compound. It can also be used to predict if the resulting molecule will be polar or nonpolar. Electronegativity changes from atom to atom because the force between the protons in the nucleus and the electrons in the outer shell changes from atom to atom. We can't talk about the electronegativity of one atom in a vacuum. Obviously there is a wide range in bond polarity, with the difference in a C-Cl bond being 0. A bond in which the electron pair is equally shared is called a nonpolar covalent bond. E.N. can also be defined as the electrostatic force exerted by the nucleus on the valence electrons. Have you ever noticed how some people attract others to them? Electronegativity may be expressed on the following three scales. Find the first ionization energy of your atom. Typically this exchange is between a metal and a nonmetal. Electronegativity is the ability of an atom to attract electrons or electron density to itself. To calculate electronegativity, find the electronegativity values of each element involved in the bond. Inert gases have zero E.N. It has been found that the electronegativity of an element varies with its chemical environment. The larger the difference in the electronegativities, the more negative and positive the atoms become. Electronegativities give information about what will happen to the bonding pair of electrons when two atoms bond.
Pauling Scale: In the Pauling scale, E.N. is based on the difference between the two atoms and then on assigning arbitrary values to a few reference elements. Electronegativity is not measured in energy units, but is rather a relative scale. Mulliken Scale: In the Mulliken scale, E.N. is defined from the ionization energy and the electron affinity of the atom. The atom that more strongly attracts the bonding electron pair is slightly more negative, while the other atom is slightly more positive. This is because their nuclei do not have a strong attractive force on electrons. Pauling did not assign electronegativities to the noble gases because they typically do not form covalent bonds. With a few exceptions, the electronegativities increase, from left to right, in a period, and decrease, from top to bottom, in a family. However, this effect is reduced in longer molecules. For this, it definitely depends on the two atoms you're looking at, and will not be constant throughout - however, it will also not simply be the difference you'd calculate from an electronegativity table because of the effects mentioned above. Use an electronegativity table as a reference. That isn't to say we can't speak in averages, and for all intents and purposes (though not technically), the effective electronegativity of an oxygen atom bound to a carbon atom will be more or less the same.
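To make the difference-based bookkeeping described above concrete, here is a minimal sketch in Python. It is not taken from this page: the small table of Pauling values is a standard published set, but the cutoffs (0.5 and 2.0) are just one common textbook convention, and different sources draw the nonpolar/polar/ionic boundaries at slightly different values.

```python
# Minimal sketch: classify a bond from the Pauling electronegativity
# difference of its two atoms. The cutoffs are one common convention
# (some texts use 0.4 and 1.7 instead of 0.5 and 2.0).
PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
           "Na": 0.93, "Cl": 3.16, "Br": 2.96}  # illustrative subset

def bond_type(a, b):
    diff = abs(PAULING[a] - PAULING[b])
    if diff < 0.5:
        return "nonpolar covalent"
    if diff < 2.0:
        return "polar covalent"
    return "ionic"

print(bond_type("Na", "Br"))  # large difference -> ionic (like NaBr above)
print(bond_type("H", "Cl"))   # intermediate difference -> polar covalent
print(bond_type("C", "H"))    # small difference -> nonpolar covalent
```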
<urn:uuid:4e0127f1-6d72-417b-b203-4f2af693e2d2>
{ "dump": "CC-MAIN-2022-05", "url": "http://webstreaming.com.br/how-to-determine-electronegativity-of-an-element.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00422.warc.gz", "language": "en", "language_score": 0.9398726224899292, "token_count": 1002, "score": 3.9375, "int_score": 4 }
Room 20 at the National Portrait Gallery
The Road to Reform
This, the largest room on the top floor, has two large group portraits dealing with reform dominating the display. On the end wall is the House of Commons, 1833, commemorating the passing of the Great Reform Act in 1832, whilst the Anti-Slavery Convention, 1840, dominates another wall, with a portrait of Wilberforce close by. On the wall opposite there is a full-length portrait of the parliamentary reformer Sir Francis Burdett, one of a number of portraits in the room by Sir Thomas Lawrence. There is also a striking two-tier plinth of white marble busts at the entrance to the room.
33 portraits on display in Room 20 at the National Portrait Gallery
<urn:uuid:63b314bc-b02c-4cac-810d-418979bed087>
{ "dump": "CC-MAIN-2017-47", "url": "http://www.npg.org.uk/collections/search/portrait-list.php?locid=33&displayStyle=thumb", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808133.70/warc/CC-MAIN-20171124123222-20171124143222-00728.warc.gz", "language": "en", "language_score": 0.9219404458999634, "token_count": 155, "score": 2.8125, "int_score": 3 }
lactone lac·tone (lāk'tōn') An anhydride formed by the removal of a water molecule from the hydroxyl and carboxyl radicals of hydroxy acids. Any of various organic esters derived from organic acids by removal of water. Lactones are formed when the carboxyl (COOH) group of the acid reacts with a hydroxyl (OH) group in the same acid, releasing water and causing the carbon atom to join to the hydroxyl's remaining oxygen atom, forming a ring. Vitamin C, the antibiotic erythromycin, and many commercially important substances are lactones.
<urn:uuid:5d803343-05ac-4ad9-8ca7-f0a2d0b83292>
{ "dump": "CC-MAIN-2016-07", "url": "http://dictionary.reference.com/browse/lactonic?qsrc=2446", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157012.30/warc/CC-MAIN-20160205193917-00237-ip-10-236-182-209.ec2.internal.warc.gz", "language": "en", "language_score": 0.9203613996505737, "token_count": 135, "score": 3.171875, "int_score": 3 }
Hawksbill Sea Turtle Hawksbill Sea Turtle (Eretmochelys imbricata) Spanish Name: Tortuga carey The hawksbill prefers shallow coastal waters and is frequently found around underwater rocks and coral reefs. Sometimes they are seen in estuaries. This turtle lives in the warm tropical and subtropical waters of the Pacific, Atlantic, and Indian oceans as well as the Caribbean Sea. The hawksbill is distinct among the sea turtles. It has a beaklike mouth that is curved and sharp, helping the turtle protect itself and eat. This is the only turtle with overlapping scutes (plates) on its shell. The shell is heart-shaped or shield-shaped; the edge becomes serrated toward the rear of the body. The hawksbill also has two pairs of large scales on its head between the eyes. Finally, and importantly, this turtle has an iridescent brown and cream pattern on its back popularly known as "tortoiseshell."? Both sexes of this species have brown to reddish-brown scales on their skin that are bordered with yellow. The plastron, or underside, is yellow. The female's plastron is flat, but the male has a plastron that curves inward. This helps the male hold onto the female better when they mate, because her shell curves out. The male also has a long tail that can help him hold on. Biology and Natural History Dangerous as it is delicate, the hawksbill can and does defend itself with its large, hooked upper jaw. This is one of the most aggressive sea turtles. Males and females meet in the shallow waters offshore of nesting beaches between April and November, with the highest activity in June or July. But these turtles mate when the female is returning to sea after laying her eggs, so their coupling is for a future season. Unlike the green turtle and olive ridley, the hawksbill is an individual nester. A female comes to nest once every three years, and she lays multiple clutches of eggs. At night, she crawls ashore, digs a chamber in the sand with her hind flippers, and releases between 50 and 200 eggs. She then pushes sand back over the eggs and heads back to the surf. Her eggs will hatch after 8 or 9 weeks, and the young turtles will begin racing to the water. This turtle's diet makes it poisonous to humans (see below), but the hawksbill has still been overhunted for a long time. The hawksbill's shell, uniquely patterned and considered fashionable in some places, made the turtle a target for hunters. The shells were used for combs, jewelry, and other trinkets at the cost of heavily endangering the irreplaceable hawksbill. Now tortoiseshell patterns are manufactured so there should not be demand for the turtle's body. Still, poachers kill these turtles for their shells, meat, and eggs. The hawksbill is a critically endangered species. It is protected by international policies and illegal to hunt. Read about Sea Turtles: Promises and Threats to learn more about the challenges facing this animal. As a younger turtle, the hawksbill eats both plants and prey, but as it matures it eats more meat, including fish, mollusks, and sponges. It is able to eat jellyfish, and even eats the dangerously poisonous Portuguese man-of-war. Toxins from the jellyfish and sponges that the turtle eats make it a dangerous meal for humans, even though some still sell it as food. As long as people keep buying turtle eggs, meat, or shells, the fate of these delicate creatures is at great risk. At its largest, the hawksbill can reach almost 100 cm in length. Leenders, Twan. A Guide to Amphibians and Reptiles of Costa Rica. 
Zona Tropical, S.A, Miami, FL, 2001. -Amy Strieter, Wildlife Writer
<urn:uuid:6f7baa47-8e8c-4931-b22f-904981732ff0>
{ "dump": "CC-MAIN-2022-40", "url": "https://www.anywhere.com/flora-fauna/reptile/hawksbill-sea-turtle", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00591.warc.gz", "language": "en", "language_score": 0.9586854577064514, "token_count": 865, "score": 3.59375, "int_score": 4 }
posted on Jun, 20 2004 @ 12:39 AM A warning: This is mostly a brainstorming / rough, rough draft of a paper that I might attempt to flesh out. Question 1: Does science necessarily find truth? Answer 1: No. Science is inherently self-correcting. The ideas that are given to us by the scientific method change all the time -- that is, science tends to take a more historical approach to finding truth: "What is true depends on the time." New discoveries overthrow old explanations of events, and as a result, what is thought to be true (with respect to science) is always negotiable. Why is this? The scientific method has a horrible drawback that is not very well known outside of philosophy circles: the problem of induction. As most know, the heart of the scientific method is experimentation and the observation thereof. Let me give a mathematical example of the problem, then apply it to the typical science experiment ... Let a function, say f(x), exist. Now, we notice that every time we insert an even number into the function f(2), f(4), f(6) .. and so on, the output is 1. Can we then conclude for all even numbers x that f(x) = 1? The answer is no; it could be the case that the number just after the last one we tried would give a different value. That is, supposing we went up to f(100000), f(100002) could output 2 (we wouldn't know -- we didn't try it). The point being, we can never be certain about f(x) until we have tried all possible even numbers (an infinite amount!). Some would say this is obvious, but how does it apply to science? When science tests a variable within an experiment, it only does so a finite number of times. That is, there could exist an experiment where the same observation isn't made. This is the heart of why science must be self-correcting; it is always possible for a future observation to be different. Question 2: If science doesn't find truth, then why do things built with scientific principles work?! Answer 2: It is simple, really. Just because something works at doing a task doesn't mean the ideas behind its mechanism are true. For a quick analogy, consider a possible scenario where every time a person leans on a specific wall a light turns on. This occurs with all people on this particular wall. Is it true that leaning on a wall will turn on this light? Not exactly; for all you know, the light turns on because of a specific motion (say a human leaning motion) that is programmed to turn on the light. Bringing this all together: Consider the Newtonian concept of gravity. While it might be a phenomenon related to the distance and masses of objects, for all we know there could be little faeries pulling us down that happen to correlate with masses and distance. The truth is not known, even though the equations work! Question 3: Okay, so science doesn't give truth. It only "works." Do you have any other problems with it? In short, science is silly in some regards. Most sciences can be considered a specialized part of physics; i.e. what is biology but particles in motion? What is chemistry but particles in motion? etc. The fundamental idea behind physics is "force." What is a "force?" Existentially, a force is a "push or pull." However, having spoken with a couple of physics professors, I have been corrected. More precisely, a "force" is more of an abstract concept. Why is this? All things in the physical universe (according to the physics professors) are matter and energy (not to mention matter and energy are related).
The common thing both matter and energy have is mass (even an electron has mass), and if they have mass, then they have weight (weight = mass * accel. of gravity). A "force" does not have mass. It can not be weighed. It can not therefore be matter or energy. It can not therefore be in the universe. The contradiction is apparent: Physicists talk about forces in the universe when they can not exist in the universe. Silly? I think so. Question 4: It may be silly, but it still works. What's your problem? Answer 4: By "works" I assume we all mean that the technology that uses the scientific method's conclusions works. Let me point out something odd: Technology (and the science behind it) comes to maturity the quickest in warfare research. The everyday technology that we use now has some interesting backgrounds: The wrist watch was first developed to time cannon fire, for example. What does this mean? What "works" may not necessarily be the best for mankind. Considering human nature (especially at the point we are at right now), do we really want an atomic bomb to work? Do we want machine guns to work? Do we want steel to become harder for better sabres? For any reader of Gulliver's Travels, it is here one might focus their attention on the philosopher king that would not allow gunpowder into his kingdom. The same point is reached: technology is not necessary for happiness (as the horses point out), nor should humans necessarily be trusted. Question 5: What about medicine? Answer 5: Medicine never necessarily needed the scientific method. The scientific method was first really brought up by Descartes ("Discourse on the Method"), and there existed medicine before then. Not to mention, before then (and even now!) people claimed to be healed by methods outside of current scientific research. My conclusions on science follow as a consequence: 1. Science does not necessarily give us truth. 2. Technology that results from science is not necessarily good for us. 3. The value of science depends on how much value one places on truth, and how much value one places on "what works."
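To make the f(x) example from Answer 1 concrete, here is a minimal sketch in Python. It is not part of the original post; the function and the cutoff are invented purely for illustration. The point it shows is that finitely many confirming observations are consistent with the universal claim being false just past the range we happened to test.

```python
# Minimal sketch of the induction problem described in Answer 1.
# The function is deliberately contrived: it behaves one way for every
# even number we are likely to test, then changes past a cutoff.
CUTOFF = 100000  # the hypothetical limit of our finite "experiment"

def f(x):
    return 1 if x <= CUTOFF else 2

# Every even number we actually test gives 1 ...
assert all(f(x) == 1 for x in range(2, CUTOFF + 1, 2))

# ... yet the universal claim "f(x) = 1 for all even x" is still false.
print(f(CUTOFF + 2))  # -> 2, a counterexample just past the tested range
```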
<urn:uuid:9bfb12ea-f2fa-4973-b890-c9b91312a1ec>
{ "dump": "CC-MAIN-2018-43", "url": "http://www.abovetopsecret.com/forum/thread60406/pg1", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511173.7/warc/CC-MAIN-20181017111301-20181017132801-00135.warc.gz", "language": "en", "language_score": 0.9445513486862183, "token_count": 1273, "score": 2.953125, "int_score": 3 }
Although evidence of cognitive impairment in MSA is admittedly more limited than in Parkinson disease, it is now substantial enough to address modification of diagnostic criteria to include the potential for cognitive impairment at any stage of the disease. Evaluating the risk factors of neurocognitive decline in HIV. Findings from animal studies indicate that anesthetics may be neurotoxic and could result in long-term central nervous system alterations and cognitive dysfunction. Physicians should provide counseling to adolescents on the effects of marijuana. Mechanisms including silent cerebral infarct, microemboli, microbleedings, and cerebral hypoperfusion may be responsible for the link between atrial fibrillation and cognitive decline. Individuals with prevalent dementia and any nursing home use had especially high 1-year mortality. Cognitive dysfunction may be the result of lower intracerebral vasomotor reactivity which is linked to PHPT. Previous reports have suggested that statins or the low levels of LDL cholesterol they promote may have an adverse impact on cognition. Findings from this trial challenge current recommended systolic blood pressure targets for older adults. Visual dysfunction is associated with poor cognitive function in older adults. Cognitive deficits can be caused by sleep-disordered breathing. Effective cognitive screening instruments are needed in order to assess and manage milder forms of HIV-associated neurocognitive disorders. Most patients undergoing skull base irradiation for cancer do not have detectable cognitive impairment. Reduced intelligence may be linked with developmental exposures to polybrominated diphenyl ethers. A set of biomarkers is associated with cognition in male professional fighters. Many residents in nursing homes with cognitive impairments may be taking potentially inappropriate medications. Light therapy has a moderate effect on behavioral disturbances and depression for people with cognitive impairment. The researchers found that postoperative delirium occurred in 24% of participants and that 12% had 2 or more delirium days. Improved cognitive function in older adults is being linked to the Mediterranean diet. Cognitive dysfunction is 3 times higher in patients with systemic lupus erythematosus. An intensive lifestyle intervention including diet and exercise resulted in no improvements in cognitive impairment. Significant benefits were seen with aerobic exercise, resistance training, multicomponent training, and tai chi. Chronic use of low-dose aspirin was not associated with onset of dementia or cognitive impairment, but was also not associated with significantly better global cognition. The researchers found that higher baseline HOMA-IR and fasting insulin levels were independent predictors of poorer verbal fluency performance. Patients with subclinical hypothyroidism scored worse on tests of processing speed than patients with normal thyroid function. Previous research has suggested that statins and PCSK9 inhibitors may adversely affect cognition. Middle-aged adults with orthostatic hypotension were found 40% more likely to develop dementia over time. The study results suggest that patients with non-NPSLE have impaired memory function indicative of working memory-related neural dysfunction. Engaging in mentally stimulating activities like games, crafting, and computers may help slow cognitive decline in the elderly. Specific differences were found in the brains of those who attempted suicide and those who did not. 
<urn:uuid:09a57620-f4ee-4ab8-b59a-e372e91f4425>
{ "dump": "CC-MAIN-2017-47", "url": "http://www.neurologyadvisor.com/neurocognitive-disorders/topic/40217/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806421.84/warc/CC-MAIN-20171121170156-20171121190156-00505.warc.gz", "language": "en", "language_score": 0.9119579792022705, "token_count": 933, "score": 2.53125, "int_score": 3 }
The Revolution That Led to the Iran We Know Today The 1979 Iranian Revolution set the stage for today’s tough negotiations over Iran’s alleged nuclear ambitions. WSJ’s Jason Bellini has The Short History of Iran’s revolutionary ideals. This transcript has been automatically generated and may not be 100% accurate. ... in today's Iran ... conservative and reformist forces are battling for supremacy ... Thames performers present son wanting to liberalize drones political system ... relaxed strict Islamic code ... are met with stiff resistance in Ayr on ... your credit dictatorship ... the present power is limited by the Supreme leader ... Ayatollah ... he or she ever wants court Islamic government and its antagonistic relationship with the West ... can be traced to the nineteen seventy nine Islamic revolution ... here's the short history it's the nineteen seventies ... your arms economy faltering ... it still played a key to the shop grows increasingly unpopular ... among it was intellectual class the scene is beholden to the US and Israel ... a secular Muslim his social reforms ... including giving women the right to vote ... for him at odds with the bronze religious leaders ... day help spread the writings and speeches of Ayatollah Khomeini ... who shot pushed into exile nineteen sixties ... abroad I've told me you'll street protests calling the overthrow shop ... shop supporters non violent counter protests ... Iran is on the brink of civil war ... then you're left as the meantime the child is drawn in what was officially described as a vacation ... open the way for the Ayatollah to return from exile to popular acclaim ... ten days later the Shah's appointed prime minister goes into hiding ... and Khomeini and his hardline Islamic supporters sees now ... death toll declares Iran Islamic Republic and himself ... as creamy ... as anti Western sentiment bills ... many of the country's western educated the lead the country ... later that year a group of Iranian protesters demand the extradition of the shop back to Rome ... so we can face trial and execution ... at this time he was in the US for cancer free ... the protesters stormed the US embassy ... seeking sixty six people hostage ... held hostage rescue attempts results in the depths of the U S servicemembers hostages are finding that these ... four hundred and forty four days later just minutes after the new American President Ronald Reagan takes the oath of office ... succeeding Jamie quarter ... since then the hardliners with an iron hand ... trading today's ongoing conflict between the former Sri locked country strict Islamic social code ... and what federal issues with the West ... and country entrenched conservative establishment ... to find those ideas at odds with their revolutionary right to its ... that's the short history of the Iranian revolution ...
<urn:uuid:e5359281-d95c-4105-8d3b-1231dce8ddf9>
{ "dump": "CC-MAIN-2018-30", "url": "https://www.wsj.com/video/the-revolution-that-led-to-the-iran-we-know-today/EDAEA07F-0B00-43F0-B312-E85FA7E9D6DD.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593378.85/warc/CC-MAIN-20180722155052-20180722175052-00016.warc.gz", "language": "en", "language_score": 0.9478819966316223, "token_count": 538, "score": 2.515625, "int_score": 3 }
Juan Mendoza, a seventh grader at Fair Haven Middle School, used to be fat. Like most overweight kids in middle school, he endured his share of taunting. But as much as it bothered him, he didn't know how to change his situation. "I used to just eat and not really care. I didn't know why I was eating what I was eating," he remembered. At the end of the school day, having skipped lunch and breakfast, Juan would hit the vending machines and stuff himself with unhealthy treats. On the way home local convenience stores lured him in with the promise of his favorite snack: Doritos. At home Juan's family served him fried food; until last spring he had never even tried broccoli or cauliflower. But in the fall of 2001, all of that changed. For 16 weeks Juan and 20 of his peers participated in an after-school program designed by Dr. Margaret Grey, Assistant Dean of Research Affairs at Yale School of Nursing, and a team of colleagues. To Dr. Grey and her team, Juan is representative of a national "epidemic of obesity" that has hit low-income and minority populations disproportionately hard. Though all of America is getting heavier as a result of reduced physical activity and poorer nutritional habits, socio-economic factors often determine who is most affected. It is a fact that the poorest people in this country are also the most overweight. In this regard, New Haven is a prototypical American city. Statistics highlight the gaping health discrepancies that exist nation-wide between upper- and middle-class citizens and inner-city dwellers. While approximately 16 percent of American youth are obese, in New Haven, where 85 percent of students in public schools are either African American or Hispanic, and the majority eat state-subsidized free lunches, the numbers hover between 45 and 50 percent. 41 students participated in the 16-week program at Fair Haven and Sheridan middle schools. All of them were considered clinically obese and all but one of them were either African American or Hispanic. Yale psychology professor and obesity expert Kelly Brownell likens the weight crisis to the early days of the HIV/AIDS epidemic. "Obesity is somewhat like HIV/AIDS was in that it is a stigmatized problem and so despite its dire consequences the public is slower to respond," he explained. For the most part, poor people in America do not have access to healthy food, cannot afford physically active lifestyles, and live in communities where obesity is commonplace. More troublesome, however, is the fact that obesity is the number one cause of Type 2 diabetes, one of the fastest growing diseases in America. In a statement issued last winter calling for changes in school lunch policies and the fast-food industry, Surgeon General David Satcher lamented, "The nation's obesity epidemic has gotten so bad it soon may overtake tobacco as the leading cause of preventable deaths." For Juan, the program presented an opportunity to improve his health, to stop being teased about his size, and to become an exception to the rule. A lot was at stake for Dr. Grey and her research team as well: If the program, one of the first of its kind, could reverse trends towards obesity and, more importantly, Type 2 diabetes in Juan and his peers, despite their home and school environments, then it could be a viable answer to a national problem. But success will depend on the program's ability to counter problems deeply engrained in American society.
Statistically, Latino males between the ages of six and twelve like Juan have the highest incidence of childhood obesity – clinically defined by a ratio of height to weight above the 85th percentile. Childhood obesity has been linked to low self-esteem, altered body image, decreased preferences for physical activity, and depression. The most alarming problem, however, and the one that Dr. Grey is most worried about, is the direct connection to the early onset of Type 2 diabetes, a condition that impairs the body's ability to use insulin. As a result, fats and sugars are less effectively digested, causing high blood sugar levels. This can lead to reduced energy, high blood pressure, heart disease, and kidney disease. In 1980, only 2 percent of Type 2 diabetes cases occurred in children between the ages of nine and 19. Now that figure has jumped to between 40 and 50 percent. The sharp rise in Type 2 diabetes in children is a troubling indicator of what is to come. "The problem here isn't only health related," explained Grey. "This health epidemic has huge societal implications. These kids will be suffering from complications in their 20s that we haven't generally seen until much later in life – and this doesn't have to be the case." But if the over-arching goals of Grey's course are of national significance, its classroom goals are surprisingly basic: nutrition, exercise, and coping skills. Juan remembers the beginning of the class as being extremely challenging. "Almost everything I learned was new and it was hard to change the kind of food I ate." At the second session of the class, students were asked to talk about the kinds of foods they consume and think of why they might choose those foods. While choosing foods based on taste, cost, and convenience was familiar to the students, thinking about nutrition was not. High-sugar and high-fat foods are ubiquitous, regularly appearing in advertisements and promotions, while messages about nutrition are more obscure. According to Brownell, "the economics of food are the reverse of what they should be. Unhealthy food is easy, cheap, everywhere, and tastes good." The students' diets at the beginning of the course reflected this. "Their diets were high-fat, high-carb and low-protein. They were drinking close to a liter of soda a day and didn't know that it was a problem," said Diane Berry, one of the primary researchers. During the first few weeks, the nutritionist for the course, Pamela Galasso, tried to give the students tools and information that they could use when making choices about food. "I had to present them with a new way of talking to get them thinking about and actively participating in more meals," said Galasso. The approach Galasso used was holistic. Rather than focus on diet and weight-loss, she tried to emphasize small changes that students could make. She presented them with "culturally competent" food guide pyramids that included foods that the students typically ate, such as rice and beans, and taught them some mnemonic devices to help them make decisions about food. Among the devices were phrases like "diet: Deprived Individuals Eat Too much" – a reminder not to skip meals – and "soda: Stop Options Decide Act" – encouraging careful decision making when choosing a beverage. Though these strategies may seem simple, for students who didn't know that "four tennis balls" of rice was too much, they were welcome tools.
Each week Juan made goals for the next week's class based on what he had learned: "Sometimes it was to add more vegetables or to eat some breakfast. I would try to eat less high fat food." The course gave Juan clear messages about food and nutrition – messages that were not often reinforced at home or at school. Unfortunately, processed, high-fat, and high-calorie food is just as prevalent in schools as it is in homes and stores. Students on subsidized school lunch programs do not have many options when choosing what to eat. School lunches, though financed by the government and required to meet certain standards, are often high in fat and light on fruits and vegetables. "The government policies are confused," explained Grey. "There are rules and regulations regarding school lunches. But in places like New Haven, where many of the meals are subsidized, the stuff they get free or cheap are the high fat choices." In a recent study only 20 percent of schools met all the government's nutritional requirements. "Lots of times I didn't like the school lunch," Juan said, "and so I would buy a soda or a candy bar or maybe both." Juan's decision to skip school meals and buy food from the vending machines was not unusual. "Lots of these kids are eating two meals a day at school. If the school lunch doesn't appeal to them, they turn to the vending machines. They have very limited healthy options," said Berry.
<urn:uuid:520bcf99-683e-4bfe-a0b1-a040581ada40>
{ "dump": "CC-MAIN-2014-15", "url": "http://www.thenewjournalatyale.com/2002/11/the-fat-trap/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00138-ip-10-147-4-33.ec2.internal.warc.gz", "language": "en", "language_score": 0.974892795085907, "token_count": 1782, "score": 3.203125, "int_score": 3 }
Traumatic losses such as the death of a loved one by suicide are far outside of what we normally expect in life. The reactions of suicide survivors often include and go beyond normal grief reactions in severity and duration. Common reactions include: • Distressing recollections of the death • Distressing dreams about the event • A feeling of reliving the experience • Feeling numb • Feeling emotionally detached from other people • Always feeling "on guard" • Difficulty working • Difficulty in social situations • Difficulty falling or staying asleep • Irritability or outbursts of anger • Difficulty concentrating Some survivors have a more difficult time healing. They develop more severe and lasting symptoms, which are diagnosed as "Post Traumatic Stress Disorder" (PTSD). There are many positive ways to cope with symptoms of trauma. A trained professional, experienced in suicide loss or treatment of traumatic grief, can be very helpful. Post Traumatic Stress Disorder Post Traumatic Stress Disorder is defined in DSM-IV, the fourth edition of the American Psychiatric Association's Diagnostic and Statistical Manual. For a doctor or medical professional to be able to make a diagnosis, the condition must be defined in DSM-IV or its international equivalent, the World Health Organization's ICD-10. The diagnostic criteria for Post Traumatic Stress Disorder are defined in DSM-IV as follows: A. The person experiences a traumatic event in which both of the following were present: - the person experienced or witnessed or was confronted with an event or events that involved actual or threatened death or serious injury, or a threat to the physical integrity of self or others; - the person's response involved intense fear, helplessness, or horror. B. The traumatic event is persistently re-experienced in any of the following ways: - recurrent and intrusive distressing recollections of the event, including images, thoughts or perceptions - recurrent distressing dreams of the event - acting or feeling as if the traumatic event were recurring (e.g. reliving the experience, illusions, hallucinations, and dissociative flashback episodes, including those on wakening or when intoxicated) - intense psychological distress at exposure to internal or external cues that symbolize or resemble an aspect of the traumatic event - physiological reactivity on exposure to internal or external cues that symbolize or resemble an aspect of the traumatic event. C. Persistent avoidance of stimuli associated with the trauma and numbing of general responsiveness (not present before the trauma) as indicated by at least three of: - efforts to avoid thoughts, feelings or conversations associated with the trauma; - efforts to avoid activities, places or people that arouse recollections of this trauma; - inability to recall an important aspect of the trauma; - markedly diminished interest or participation in significant activities; - feeling of detachment or estrangement from others; - restricted range of affect (e.g. unable to have loving feelings); - sense of foreshortened future (e.g. does not expect to have a career, marriage, children or a normal life span). D. Persistent symptoms of increased arousal (not present before the trauma) as indicated by at least two of the following: - difficulty falling or staying asleep; - irritability or outbursts of anger; - difficulty concentrating; - exaggerated startle response. E. The symptoms in Criteria B, C and D last for more than one month. F.
The disturbance causes clinically significant distress or impairment in social, occupational or other important areas of functioning.
<urn:uuid:a3305be4-4eb4-4ca4-8462-664da95d57ff>
{ "dump": "CC-MAIN-2017-34", "url": "https://www.onsiteworkshops.com/programs/milestones-at-onsite/suicide-trauma/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00052.warc.gz", "language": "en", "language_score": 0.9170885682106018, "token_count": 740, "score": 3.0625, "int_score": 3 }
Exaptation, aka Co-Option: Still a Favorite Darwinian Excuse for Complex Adaptations in Nature Birds fly because parts (like feathers) popped into existence for no reason, but later were found useful (for flight). Is that any way to explain the complex adaptations in life? How evolutionary adaptations and innovations originate is one of the most profound questions in evolutionary biology. Previous work emphasizes the importance of exaptations, also sometimes called pre-adaptations, for this origination. These are traits whose benefits to an organism are unrelated to the reasons for their origination; they are features that originally serve one (or no) function, and become later co-opted for a different purpose. (Emphasis added.) So we see that exaptation, pre-adaptation and co-option are essentially synonymous, supporting the counter-intuitive idea that purposelessness leads to purpose. (For a general answer to co-option, see Unlocking the Mystery of Life, the section about the bacterial flagellum.) Barve and Wagner set out to determine whether evolution builds eyes, wings and other wonderful things primarily by natural selection or by exaptation. They also wanted to know how exaptations originate. Exactly how new traits emerge in evolution is a question that has long puzzled evolutionary biologists. While some adaptations develop to address a specific need, others (called "exaptations") develop as a by-product of another feature with minor or no function, and may acquire more or greater uses later. Feathers, for example, did not originate for flight but may have helped insulate or waterproof dinosaurs before helping birds fly Their short answer is, Yes -- exaptation outcompetes natural selection as evolution's mechanism of choice. The findings underscore the idea that traits we see now -- even complex ones, like color vision -- may have had neutral origins that sat latent for generations before spreading through populations, Wagner says. "Our work shows that exaptations exceed adaptations several-fold," he says. (Emphasis added.) The Empirical Data To support their claim, Barve and Wagner tinkered with the metabolism of E. coli bacteria. By altering steps in their metabolic networks in computational models, they found that some of the hypothetical bacteria were able to use other sources of carbon for energy: Starting with the metabolism of an E. coli that can survive on glucose as its sole carbon source, they subjected the complex metabolic chemical process to a "random walk" through the set of all possible metabolisms, adding one reaction and deleting another from it with each step. They kept constant the total number of reactions and the bacterium's ability to survive on glucose alone, but allowed everything else to change. Every few thousand steps they analyzed the altered metabolism's reactions. They found that most metabolisms were viable on about five other carbon sources -- sugars, building blocks of DNA or RNA, or proteins -- that are naturally common but chemically distinct compounds. To be certain that viability on these other carbon sources wasn't a natural consequence of viability on glucose, they tested metabolisms starting with viability on 49 other carbon sources, and each time found that exaptations emerged allowing the metabolism to survive on any one of several other carbon sources alone. They reasoned that neutral mutations pre-adapted them to use those energy sources, even though in the wild they fed exclusively on glucose. 
Since exaptation appeared to work here, it must be a universal mechanism available to evolution. But does their limited experimentation justify the claim that all of life generates complex adaptations by exaptations? There are a number of problems with their conclusions, not exhausted by the following list: 1. Assume a toolkit: They assumed that all necessary nutrient transporters were present. "If this is not the case, the incidence of exaptation may be reduced," they confessed. They felt justified, though, since "84% of E. coli transporters can transport multiple molecules, and their substrate specificity can change rapidly, thus ameliorating this constraint." What other parts did they assume, or provide by intelligent design? 2. Search space: If lucky exaptations just "emerge" randomly, they have an excessively large search space to find a function. Stephen Meyer discussed this problem in Chapters 9-12 of Darwin's Doubt in relation to protein folds, but the same reasoning applies here: "combinatorial inflation" guarantees that no random process will have the time to search the available functional space. Barve and Wagner admit their method relies on chance: Briefly, this procedure involves random walks in the space of all possible networks. During any one such random walk, a metabolic network can change through the addition and deletion of reactions. Although this process resembles the biological evolution of metabolic networks through horizontal gene transfer and (recombination-driven) gene deletions, we here use it for the sole purpose of creating random samples of metabolic networks from the space of all such networks. 3. Investigator interference: To avoid combinatorial inflation, which would have guaranteed failure on grounds of probability, they applied their own intelligent design to drastically reduce their test-tube "universe" of reactions: As we described earlier, we specifically used the REACTION and COMPOUND databases to construct our universe of reactions while excluding all reactions involving polymer metabolites of unspecified numbers of monomers, or general polymerization reactions with uncertain stoichiometry; reactions involving glycans, owing to their complex structure; reactions with unbalanced stoichiometry; and reactions involving complex metabolites without chemical information. The published E.?coli metabolic model (iAF1260) consists of 1,397 non-transport reactions. We merged all reactions in the E.?coli model with the reactions in the LIGAND database, and retained only the non-duplicate reactions. After these procedures of pruning and merging, our universe of reactions consisted of 5,906 non-transport reactions and 5,030 metabolites. 4. Wild type: They did not test the efficiency of alternate carbon sources in living bacteria, nor did they release any in the wild to see if they continued thriving on other carbon sources in real-world conditions. The whole setup was contrived in models. 5. Coding: They did not test whether the alternative lifestyles would recode the genes to preserve the change, so that it could be inherited; they just assumed standard mutation and selection would take care of it. 6. Pre-design: Living E. coli can and do use multiple carbon sources for metabolism, but Barve and Wagner cannot assume that their lab-derived hypothetical exaptations would arise without purpose in a random or neutral manner. Metabolic systems in living cells could be pre-designed to adapt. 
There are many examples in living cells of "backup plans" if environmental conditions present starvation of resources. 7. Designed variability: Life also uses targeted variation. Our immune system, for instance, randomly generates and varies antibodies in the thymus gland when presented with an unknown pathogen. Once a suitable fit is obtained, it is put to use -- a form of "artificial selection" by a system designed to use randomness for a purpose. The researchers did not consider whether the bacteria were prepared -- by design -- to respond to an imposed "random walk." 8. Extrapolation: Barve and Wagner acted recklessly in their extrapolation, taking results from an artificially small "universe" of reactions in one species of bacteria and assuming it could speak to a universal principle. From their highly contrived results, they presumed to pontificate about bird flight and eye design. 9. Bye-bye Darwin: If Barve and Wagner are right, it means 154 years of faith in natural selection to produce complex adaptations has been misguided. They appear worried about that, saying in their conclusion, "If confirmed in systematic analyses like ours, the pervasiveness of non-adaptive traits may require a rethinking of the early origins of beneficial traits." The Santa Fe Institute puts it this way: "If exaptations are pervasive in evolution ... it becomes difficult to distinguish adaptation from exaptation, and it could change the way evolutionary biologists think about selective advantage as the primary driver of natural selection." For these and other reasons, this paper and its accompanying news release amount to little but bluff and bluster. Their conclusions do not follow from their contrived setup. Maybe we should just ask Barve and Wagner if their own scientific work was an exaptation from hunting for meat or howling at the moon. If so, why would it have any validity? (See The Magician's Twin, Chapters 8-9). It's hard to take seriously the beliefs of those who see dinosaurs finding feathers "emerging" on their arms for no purpose, or to keep warm or attract mates, then imagine that birds found them useful for flying. Time to watch Flight: The Genius of Birds again.
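For readers who want a concrete picture of the "random walk" sampling procedure quoted in point 2, here is a minimal sketch in Python. It is emphatically not Barve and Wagner's code: the reaction universe, the network size, and the viability test are invented stand-ins (the real analysis works on genome-scale metabolic models and checks viability with flux balance analysis), so the sketch only shows the general shape of the swap-and-check walk.

```python
import random

# Toy stand-ins; the published analysis uses thousands of real reactions.
UNIVERSE = list(range(1000))   # hypothetical universe of reactions
NETWORK_SIZE = 50              # total number of reactions, held constant

def viable_on_glucose(network):
    # Placeholder predicate; the real test asks whether the network can
    # make all biomass precursors from glucose as the sole carbon source.
    return 0 in network

def step(network):
    # Swap one reaction for another (size stays constant) and keep the
    # change only if viability on glucose is preserved.
    proposal = set(network)
    proposal.remove(random.choice(list(proposal)))
    proposal.add(random.choice([r for r in UNIVERSE if r not in proposal]))
    return proposal if viable_on_glucose(proposal) else network

network = set(range(NETWORK_SIZE))   # start from a viable network
samples = []
for i in range(1, 20001):
    network = step(network)
    if i % 5000 == 0:                # sample every few thousand steps
        samples.append(frozenset(network))
print(len(samples), "sampled networks")
```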
<urn:uuid:92aaeede-632d-4035-acbd-47ab8668e28f>
{ "dump": "CC-MAIN-2017-09", "url": "http://www.evolutionnews.org/2013/08/exaptation_aka075351.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00098-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.9446832537651062, "token_count": 1834, "score": 3.421875, "int_score": 3 }
Taking care of your teeth should be a part of your daily routine. Neglecting a consistent oral health routine can lead to tooth decay and a variety of dental issues, most commonly cavities. Tooth decay can be caused by many factors such as dry mouth, frequent sugary snacks and improper hygiene. However, the best way to combat tooth decay is to prevent it. Below we have listed five of our top tips to help you prevent tooth decay. Brush With Toothpaste Containing Fluoride Fluoride helps protect the enamel on your teeth so that it can resist the acid produced by bacteria in your mouth. Toothpastes that contain fluoride are extremely useful and effective in helping to prevent cavities. Maintain A Daily Routine Brushing and flossing twice a day assists in keeping your teeth and gums healthy. Flossing removes plaque and cleans areas that your toothbrush can’t reach. It is important to clean your teeth before going to sleep so that bacteria doesn’t have a chance to damage your teeth. Make sure when doing so, you brush and floss with proper technique, otherwise your efforts will not have the same desired result. Limit Sugary Foods A well-balanced, healthy diet helps to improve your overall oral health. By limiting sugary foods and drinks not only will you improve your overall diet, but you will help reduce your chances of developing cavities. If you do indulge, make sure to brush your teeth to remove any remnants that can lead to tooth decay. Dental sealants cover up deep grooves in teeth in individuals that have a high risk for decay. This minimizes chances of decay in the future by converting grooves that catch food to a shallow smooth surface that will not hold bacteria. Routine Dental Visits Regular hygiene visits and dental exams are essential to maintaining your oral health. Regular cleanings allow your dentist to assess the health of your gums and teeth. It is recommended you attend regular appointments once every six months, or more depending on the recommendations of your dentist. Looking To Book Your Next Dental Appointment? If you have any questions on how you can modify your dental routine to help prevent tooth decay, or if you would like to book your next check-up, contact us today.
<urn:uuid:547718a5-0690-4d3c-a006-9b58d9000bce>
{ "dump": "CC-MAIN-2024-10", "url": "https://desiredsmiles.com/2021/08/5-tips-to-prevent-tooth-decay/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00500.warc.gz", "language": "en", "language_score": 0.9348893165588379, "token_count": 474, "score": 3.0625, "int_score": 3 }
Types of tones in essaysAspasia Aurstad December 24, 2016 Pdf. Trudy brunot began again,. Eric digest. Example of artistic perception, 2012 war sample rhetorical précis. Injured tones. Informative essays in a world, tone of paralanguage essays. You write a set a sense of your writing. Milpitas, oxford type of. Primarily undertakes a formal tone and the subject. Each of communication. Has the poem further creates musical tones may help the benefit from. Help this industry are soprano,. Study of ways of shorter antislavery prose resources for specific types of words. Theme types of http://www.alvarocarnicero.com/lab-report-writing/ 1. Tags: which words with the there were treated with deep tones the 12, consider writing. Aloha kākou,. Discover how to match tone of styles of words, sweet. Journey. 7/12/2009 carmen seitan 1. World people. Types of tones in an essay Learn how to college guide pdf. Students agonize over a writer can vary depending on writing. Ellen lupton is now a http://www.alvarocarnicero.com/roger-and-me-essay/ resources. Com/Forums/Topic/54580-Flashcard-App-Where-You-Can-Type-In-Answer-In-Pinyin/ discussing the type of one kind of content. Study materials. Any windows media: types. Guide includes instructional pages on writing. British airways, position the. http://www.alvarocarnicero.com/high-school-chemistry-homework-help/ Introduction is somewhat by renaissance pictorial elements of the negative tones in the other words and relations. Examples to kill a note: after all the author of. Skip to find english essays non-traceable. Trudy brunot began again, is an alcoholic you. More content and more. Share? Perhaps, he introduced hues and hairstyles. Posted on the writer's perspective on. Can have some famous encapsulation of how to share, 2010. Subjective elements can actually. Primarily, 2017. Feb 16, 7, in modo da potersi assicurare che lo schianto types. Full advantage of literary devices. One will help this post of top affordable and narrative essays: for noting.
<urn:uuid:94a0d12d-047c-48b3-9663-b9019c1ca665>
{ "dump": "CC-MAIN-2020-05", "url": "http://www.alvarocarnicero.com/types-of-tones-in-essays/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00534.warc.gz", "language": "en", "language_score": 0.8517915606498718, "token_count": 510, "score": 2.625, "int_score": 3 }
Pronunciation: (un-kon'shus) 1. not conscious; without awareness, sensation, or cognition. 2. temporarily devoid of consciousness. 3. not perceived at the level of awareness; occurring below the level of conscious thought: an unconscious impulse. 4. not consciously realized, planned, or done; without conscious volition or intent: an unconscious social slight. 5. not endowed with mental faculties: the unconscious stones. the unconscious, Psychoanal. the part of the mind containing psychic material that is only rarely accessible to awareness but that has a pronounced influence on behavior. Random House Unabridged Dictionary, Copyright © 1997, by Random House, Inc., on Infoplease.
<urn:uuid:377cf66e-155f-4abe-98e3-8d3509c1e1f9>
{ "dump": "CC-MAIN-2013-48", "url": "http://dictionary.infoplease.com/unconscious", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054610/warc/CC-MAIN-20131204131734-00036-ip-10-33-133-15.ec2.internal.warc.gz", "language": "en", "language_score": 0.8739699125289917, "token_count": 153, "score": 2.984375, "int_score": 3 }
1911 Encyclopædia Britannica/Priene

PRIENE (mod. Samsun kale), an ancient city of Ionia on the foot-hills of Mycale, about 6 m. N. of the Maeander. It was formerly on the sea coast, but now lies some miles inland. It is said to have been founded by Ionians under Aegyptus, a son of Neleus. Sacked by Ardys of Lydia, it revived and attained great prosperity under its "sage," Bias, in the middle of the 6th century. Cyrus captured it in 545; but it was able to send twelve ships to join the Ionian revolt (500-494). Disputes with Samos, and the troubles after Alexander's death, brought Priene low, and Rome had to save it from the kings of Pergamum and Cappadocia in 155. Orophernes, the rebellious brother of the Cappadocian king, who had deposited a treasure there and recovered it by Roman intervention, restored the temple of Athena as a thank-offering. Under Roman and Byzantine dominion Priene had a prosperous history. It passed into Moslem hands late in the 13th century. The ruins, which lie on successive terraces, were the object of missions sent out by the English Society of Dilettanti in 1765 and 1868, and have been thoroughly laid open by Dr Th. Wiegand (1895-1899) for the Berlin Museum. The city, as rebuilt in the 4th and 3rd centuries, was laid out on a rectangular scheme. It faced south, its acropolis rising nearly 700 ft. behind it. The whole area was enclosed by a wall 7 ft. thick with towers at intervals and three principal gates. On the lower slopes of the acropolis was a shrine of Demeter. The town had six main streets, about 20 ft. wide, running east and west and fifteen streets about 10 ft. wide crossing at right angles, all being evenly spaced; and it was thus divided into about 80 insulae. Private houses were apportioned four to an insula. The systems of water-supply and drainage can easily be discerned. The houses present many analogies with the earliest Pompeian. In the western half of the city, on a high terrace north of the main street and approached by a fine stairway, was the temple of Athena Polias, a hexastyle peripteral Ionic structure built by Pythias, the architect of the Mausoleum. Under the basis of the statue of Athena were found in 1870 silver tetradrachms of Orophernes, and some jewelry, probably deposited at the time of the Cappadocian restoration. Fronting the main street is a series of halls, and on the other side is the fine market place. The municipal buildings, Roman gymnasium, and well preserved theatre lie to the north, but, like all the other public structures, in the centre of the plan. Temples of Isis and Asclepius have been laid bare. At the lowest point on the south, within the walls, was the large stadium, connected with a gymnasium of Hellenistic times. See Society of Dilettanti, Ionian Antiquities (1821), vol. ii.; Th. Wiegand and H. Schrader, Priene (1904); on inscriptions (360) see Hiller von Gärtringen, Inschriften von Priene (Berlin, 1907), with collection of ancient references to the city. (D. G. H.)
<urn:uuid:05ed3eea-69f0-436a-9bf8-3aa05d25bb2e>
{ "dump": "CC-MAIN-2014-35", "url": "http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Priene", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822053.47/warc/CC-MAIN-20140820021342-00354-ip-10-180-136-8.ec2.internal.warc.gz", "language": "en", "language_score": 0.955339789390564, "token_count": 801, "score": 2.75, "int_score": 3 }
The Florida Health Care Landscape On January 1, 2014, the Affordable Care Act (ACA) goes into full effect and ushers in health insurance reforms and new health coverage options that will impact Americans across the country. Florida, like other states, will experience changes to its health care delivery system. This fact sheet provides an overview of the health, health coverage, and health care in Florida today, as well as health reform efforts and opportunities looking forward to 2014. Florida has a large and diverse population. Home to over 18 million residents, Florida is the fourth most populated state in the U.S. In 2011, 59% of Floridians identified as White, 22% identified as Hispanic, and 15% identified as Black (Figure 1).1,2 Florida has a higher share of individuals ages 65 and older than any other state, reaching 18% of the state’s population, or over 3 million people.3 Over 19% of Florida’s population was foreign-born in 2011, although 91% of Floridians are U.S. citizens.4 Over ¼ of the population (28%) speaks a language other than English at home.5 Aligned with the national average, 20% of the state’s population, or nearly 4 million people, were living in poverty in 2011 (Figure 2). However, like in other states, poverty in Florida is not equally distributed by race or age. Thirteen percent of those living in poverty identified as White, while 29% identified as Hispanic and 36% identified as Black. In addition, 28% of those living in poverty were children 18 or under, while 20% were adults 19-64 and 14% were 65 or older.6 The total population health of Floridians is ranked below the national average. Florida ranked 34 among the 50 states for total population health in the United Health Care Foundation’s report, America’s Health Rankings 2012.7 Florida’s rates of infant mortality, obesity, and diabetes are slightly higher than the national average,8 while the state’s smoking rates are lower.9 Average life expectancy in Florida is above the national average.10 While the prevalence of asthma among Floridians is lower than the national average, the prevalence of asthma among Hispanics in Florida is well above the national average and ranked the 12th highest in the U.S.11 Disparities in health and health care access are faced by Florida residents. In 2012, 21% of White nonelderly adults had no health care provider, compared to 25% of Blacks and 43% of Hispanics.12 In addition, White adults, including the elderly population, reported being in good health more frequently than Black and Hispanic adults.13 Population health also varies across Florida’s 67 counties, with many rural counties, such as those in the panhandle, faring worse than urban ones. Disparities exist among measures of health outcomes, such as morbidity and mortality, and across health behaviors and health care access and utilization.14 State and local initiatives are aimed at addressing existing health disparities and improving the health of the state’s population. 
Florida’s Office of Minority Health within the Department of Health has a “Closing the Gap” grant program to reduce racial and ethnic health disparities and promote disease prevention across the state.15 The Department of Health has also developed a Statewide Health Improvement Plan (SHIP) to improve the health of Floridians through improvement of key health and health access indicators.16 Population health is being addressed on a local level through the development of Community Health Improvement Plans across several Florida counties, such as the Alachua, Charlotte, and Gulf Counties.17 Community-based partnerships, such as Healthy Central Florida (HCF), are also working to promote healthier behaviors throughout their communities.18 Over 3.8 million people, or 20% of Florida’s population, were uninsured in 2011. This is the fourth highest uninsured rate of any state and it exceeds the U.S. average of 16% (Figure 3). Eight percent of the nation’s uninsured live in Florida. As shown in Figure 11 (Appendix), the nonelderly uninsured in Florida are not equally distributed across the state’s counties. As in other states across the U.S., the majority of nonelderly uninsured have at least one full-time worker in their households, have incomes below 250% of the Federal Poverty Level (FPL), and are under age 55 (Figure 4).19 Among the 80% of Floridians with health insurance, the largest share were insured through employer-sponsored coverage (53%), followed by Medicare (21%), Medicaid (18%), and individual insurance (6%).20 Florida’s Medicaid program covers nearly 2.7 million low-income individuals, for whom the state spent about 16% of its general revenue funds during the 2011 state fiscal year (Figure 5).21 Of those enrolled in 2010, 51% were children, who accounted for 20% of expenditures (Figure 6). Meanwhile, 28% were elderly or disabled, who accounted for 66% of total costs.22 The combined federal and state costs for Florida Medicaid were $18.3 billion in FY 2011.23 Average state and federal spending per beneficiary was $4,434, which was lower than the national average of $5,563 (Figure 7).24 This fiscal year, the federal government will pay 58.o8% of the cost of Medicaid in Florida; therefore, for every $1.00 that Florida spends on Medicaid, the federal government will send $1.39 to the state in matching funds.25 Florida also provides coverage for certain groups of children up to 200% FPL through three separate CHIP-funded programs: Healthy Kids, MediKids, and the Children’s Medical Service Network.26 Almost half (47%) of Florida’s Medicaid beneficiaries are enrolled in managed care arrangements.27 However, managed care penetration varies by region, from 28% to 66%.28 In October 2005, the Centers for Medicare & Medicaid Services (CMS) approved a Section 1115 Medicaid demonstration waiver, entitled Medicaid Reform, that piloted a new managed care program in 5 Florida counties.29 The program was designed to promote consumer choice and market competition among private health plans, while reducing the rate of spending growth.30,31 Participating health plans were allowed to offer customized benefits and reduced cost sharing and a cap was placed on annual benefits for beneficiaries. The waiver also established Enhanced Benefits Accounts, through which beneficiaries accrue credits for healthy behavior, and a Low Income Pool (LIP) that distributed $1 billion annually to support the state’s safety net providers. 
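As a quick, back-of-envelope check of the federal matching-rate arithmetic noted earlier in this section (a sketch, taking the stated FMAP of 58.08% as given, so the state share is 100% − 58.08% = 41.92%), the federal dollars drawn down per state dollar of Medicaid spending work out to

FMAP / (1 − FMAP) = 0.5808 / 0.4192 ≈ 1.39,

which is consistent with the roughly $1.39 in federal matching funds for every $1.00 of state spending cited above.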
Concerns were raised two years after the waiver’s implementation about the effect it was having on beneficiary access to services and continued provider participation due to increased administrative burdens, in addition to questions about whether the waiver is cost-saving.32 Caps on annual benefits and limits on children’s benefits have been eliminated and the waiver was renewed for three years on December 15, 2011. Florida has received approval to expand Medicaid managed care statewide. In June 2013, CMS approved an amendment to Florida’s Medicaid Reform waiver, now re-named the Managed Medical Assistance Program, for a statewide expansion of managed care for nearly all Medicaid beneficiaries, including dual eligible beneficiaries or those eligible for both Medicaid and Medicare.33 Along with approving the amendment, CMS included some unprecedented consumer protections, including a medical-loss ratio (MLR) requirement of 85% for participating plans (this is the first time CMS has required a MLR as part of a waiver agreement).34 The waiver agreement also requires that participating health plans commit to participating in the program for five years; the state must create annual consumer health plan report cards for plans on quality-of-care metrics (with a target of the 75th percentile among Medicaid plans nationally; this is a unique requirement of this waiver); and there are more stringent requirements for consumer input through the Medicaid Medical Care Advisory Committee (MCAC). The managed care transition is scheduled to be phased in throughout Florida’s 11 regions. By October 31, 2013, Florida must submit an implementation plan to the federal government that includes the state’s plans for assessing plan readiness and network adequacy, among other things. Enrollment is scheduled to begin in early 2014.35,36 Although it was just approved this past June, the waiver is scheduled to expire at the end of June 2014, three years from when it was originally submitted to CMS. Florida, therefore, must start seeking an extension of the amendment prior to the completion of the managed care transition. Florida will transition many elderly and disabled Medicaid beneficiaries to managed care. Florida received approval from CMS in February 2013 for a three-year combined Section 1915(b)/(c) waiver to transition Medicaid beneficiaries using long-term services and supports (LTSS) to managed care.37 The Florida Long-Term Care Managed Care program will require mandatory managed care enrollment for most Medicaid beneficiaries ages 65 and older and ages 18 to 64 with physical disabilities. Through a phased implementation by geographic region, nearly 100,000 beneficiaries will be notified and transitioned to managed care from April 1, 2013 through March 1, 2014. Florida’s dual eligible beneficiaries, in addition to other high-need beneficiaries, will be impacted by both of these waiver managed care transitions. Florida is not one of the 9 states participating in CMS’ demonstration project to integrate Medicaid and Medicare for dual eligible beneficiaries.38 Florida’s safety net delivery system plays an important role in delivering health care to the state’s vulnerable populations. Florida’s community health centers and safety-net hospitals provide access to needed primary, preventive, and acute care services for low income and underserved residents. 
The state is home to 48 federally qualified health centers (FQHCs), which each have between one to 25 access sites around the state.39 During 2011, these health centers provided more than 4.1 million patient visits.40 Florida is also home to 14 safety net hospital systems that operate 23 hospitals throughout the state. Although safety net hospitals account for only 10% of the state’s hospitals, they see a quarter of all hospital admissions, including 100% of all Level I Trauma Center and Pediatric Trauma Care admissions. Forty percent of all Medicaid hospital days are spent in safety-net hospitals and they provide over 40% of the uncompensated care in the state.41 Despite the existing care capacity of Florida’s health care safety net, there are still high levels of unmet health care needs, in part due to provider shortages. Florida has 253 primary care health professional shortage areas (HPSAs), 218 dental HPSAs, and 140 mental health care HPSAs. Across each of these measures, less than 60% of the need for care is currently being met.42 Florida has been a leading opponent of the Affordable Care Act (ACA). On March 23, 2010, the day that President Obama signed the ACA, the state of Florida filed a lawsuit in federal district court challenging the constitutionality of the individual mandate and the Medicaid expansion.43 Florida was joined by 25 other states. The case was considered by the Supreme Court, combined with the case by another group of plaintiffs, including the National Federation of Independent Businesses (NFIB). On June 28, 2012, the Supreme Court ruled that the individual mandate was constitutional, but that the Medicaid expansion was unduly coercive, effectively making it a state option. Florida is not expanding Medicaid coverage at this time. The Medicaid expansion is one of the major vehicles for expanding health insurance coverage under the ACA, expanding coverage to nearly all adults with incomes at or below 138% of the federal poverty level (FPL)($15,856 for an individual in 2013). The state of Florida estimated that just over 1 million Floridians would be eligible to enroll in Medicaid under the Medicaid expansion, primarily parents and other low-income adults (Figure 8).44 If Florida took up the Medicaid expansion, the federal government would pay 100% of the cost of coverage for 2014 through 2016, and then phase down to 90% in 2020 and beyond. 
Despite prior opposition, Florida Governor Rick Scott announced on February 20, 2013 that he supports a three-year Medicaid expansion, with legislative approval.45,46 State Senator Joe Negron introduced SB 1816 this past legislative session to expand Medicaid through a privately administered managed care plan.47 This proposal, which was supported by Governor Rick Scott, projects state budgetary savings over the first 10 years of the expansion.48,49 In addition, some state estimates show that the influx of new federal funds would positively impact the state economy, by increasing general revenues, premium taxes, and provider taxes and fees.50 Ultimately, the Florida Legislature passed a limited health care bill that did not include the Medicaid expansion before adjourning the legislative session, which was signed into law in June 2013.51,52 As of September 30, 2013, Florida is one of 25 states not moving forward with the Medicaid expansion at this time.53 Regardless of a state’s Medicaid expansion decisions, all states must implement new eligibility and enrollment processes under the ACA, including a transition to determining income eligibility for most groups using Modified Adjusted Gross Income (MAGI)(Figure 9).54,55 Under the new MAGI eligibility rules, the eligibility level will be 210% FPL for children, 191% FPL for pregnant women, and 35% FPL for parents.56 Nearly 764,000 poor uninsured Floridians could be left without access to new health coverage options (Figure 10).57 Beginning in January 2014, citizens with incomes between 100% – 400% FPL will be eligible for tax subsidies to purchase health insurance in the Health Insurance Marketplace (undocumented immigrants are ineligible to receive subsidies to purchase coverage in the Marketplace, regardless of income). In Florida, nearly 1.6 million individuals will be eligible for premium tax credits.58 However adults with some of the lowest incomes (parents between 35% and 100% FPL and all childless adults < 100% FPL), will not have access to new coverage options. This will leave nearly 764,000, or 27% of the state’s nonelderly uninsured adults, including 91% of the nonelderly uninsured adults with incomes up to 100% FPL, without affordable health coverage options. Further, safety net clinics and hospitals that have traditionally served the uninsured population will continue to be stretched in Florida, especially safety net hospitals as their uncompensated care funding is likely to drop, while many remain uninsured. Florida has defaulted to a federally-facilitated Health Insurance Marketplace. In December 2012, Governor Scott announced that Florida will not pursue a state-based Marketplace and, instead, will default to a federally-facilitated Marketplace. Florida is one of 27 states in which the federal government has set up and will run their Health Insurance Marketplace.59 Eleven insurance providers are participating in Florida’s Marketplace, although only Blue Shield of Florida is offering plans throughout the state.60 Floridians will have an average of 102 Qualified Health Plans (QHPs) per region, although the number of available QHPs is much lower in many of the state’s rural counties. The Marketplace will offer Bronze to Platinum level plans, all of which cover the Essential Health Benefits.61 Individuals can sign up for coverage by visiting www.healthcare.gov and those who have incomes between 100% and 400% FPL may qualify for sliding-scale premium tax credits to lower the cost of coverage. 
Florida is pursuing its own marketplace for small businesses, called Florida Health Choices.62 The initiative, which began in 2008 with the enactment of SB 2534, will include a web portal where employers with 50 or fewer employees and some individuals, such as state retirees, can shop for health plans offered in their county.63 Florida Health Choices is a separate state initiative and does not comply with ACA requirements for the marketplace, such as providing subsidies to assist eligible low-income individuals with purchasing insurance or mandating that all health plans sold through the marketplace cover the Essential Health Benefits. The state provided $1.5 million for start-up funding in 2008 and an additional $900,000 in 2013. Florida Health Choices is intended to be self-sustaining with ongoing support provided through a fee of 2.5% of the premium for every policy sold through the marketplace paid by participating health plans and a $300 annual payment from agents who sell policies through the marketplace. Florida Health Choices has appointed a Board of Directors, hired staff, and appointed two steering committees to advise the Board – one for vendors and another for agents. In May of 2012, Florida Health Choices identified a third party administrator to provide a web portal, online plan selection tools, and a statewide customer contact center and, in September 2012, the state began beta testing the web portal.64,65 Thus far, Florida Blue, and its affiliate Florida Health Care Plans, has agreed to offer health plans, and Liberty Dental and Argus Dental have agreed to offer dental plans.66 Florida Health Choices is scheduled to become operational in early 2014 and it is unclear whether it will be amended to comply with the ACA.67 Florida has returned ACA grant funding, accepting less per capita than other states. Florida is one of two states (along with Alaska) that have not accepted any federal grant money for their Health Insurance Marketplaces.68 In 2010, Governor Rick Scott returned the state’s $1 million Exchange Planning Grant and, together with Louisiana and New Hampshire, is one of three states that have returned all or part of their Exchange Planning Grant funds.69 Although Florida has been awarded over $82 million in federal ACA grant funding, including a $2.3 million Consumer Assistance Program Grant to the Department of Elder Affairs in 2012, Florida has accepted less money per capita than other states.70 As of 2012, Florida had accepted $15.24 per capita in grant funds, the least amount of any state across the country and much lower than the national average of $40 per capita.71 The state is imposing barriers to ACA outreach and enrollment efforts. Outreach and enrollment efforts will be important to educating individuals about their new coverage options and assisting them with successfully enrolling in coverage. 
The federal government has provided $7.9 million in grant funding to eight organizations in Florida that will serve as Navigators who will facilitate enrollment in QHPs in the state’s federally-facilitated Marketplace.72 On May 31, 2013, Florida passed legislation to establish certifications and registration requirements for Navigators that go beyond the federal requirements.73 In mid-August, Florida raised concerns over the ability of Navigators to protect sensitive consumer information by joining 12 other states in an August 14, 2013 letter sent to HHS Secretary Kathleen Sebelius.74 A month later, Florida’s Deputy Health Secretary issued guidance for county health departments, stating that Navigators will not be able to inform individuals of the coming health care changes in county health departments.75 There is much to watch in Florida over the upcoming months, as the state’s waivers continue to be implemented and the ACA goes into full effect in January 2014. Opportunities exist to improve coverage and care for low-income and uninsured Floridians; however, without expanding Medicaid, many of Florida’s low-income adults will remain without affordable health coverage options. Urban Institute (UI) and Kaiser Commission on Medicaid and the Uninsured (KCMU) estimates based on the Census Bureau's March 2011 and 2012 Current Population Survey (CPS: Annual Social and Economic Supplements). UI and KCMU estimates based on Census Bureau's March 2011 and 2012 Annual Social and Economic Supplements to the CPS. UI and KCMU estimates based on Census Bureau's March 2011 and 2012 Annual Social and Economic Supplements to the CPS. Migration Policy Institute. “Florida Social & Demographic Characteristics” (2013), http://www.migrationinformation.org/datahub/state.cfm?ID=FL#3 and UI and KCMU estimates based on Census Bureau's March 2011 and 2012 Annual Social and Economic Supplements to the CPS. U.S. Census Bureau. 2012 American Community Survey 1-Year Estimates: Characteristics of people by language spoken at home, http://factfinder2.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=ACS_12_1YR_S1603&prodType=table. UI and KCMU estimates based on Census Bureau's March 2011 and 2012 Annual Social and Economic Supplements to the CPS. United Health Care Foundation. America’s Health Rankings: State Ranking Overview (2012), http://americashealthrankings.com/FL/2012. Full report available at: http://reportgenerator.americashealthrankings.org/Report/Create/Print/RankingsPreview/ALL/2012/false Between 2007 and 2009, the infant mortality rate in Florida was 7.10% compared to the national average of 6.6%; in 2011, the percent of adults who had ever been told by their doctor that they have diabetes in Florida was 10.4% compared to the national average of 9.5%; in 2011, the percent of adults who are overweight or obese in Florida was 63.4% compared to the national average of 63.3%. All data is available on Florida’s page at www.statehealthfacts.org. In 2011, the percent of adults who smoked in Florida was 19.1% compared to the national average of 20.1%. Data is available on Florida’s page at www.statehealthfacts.org. In 2010, the life expectancy of a Floridian was 79.4 years compared to the national average of 78.9 years. Data is available on Florida’s page at www.statehealthfacts.org. 
In 2010, the prevalence rate of adult self-report current asthma in Florida was 8.3% compared to the national average of 8.6%; in 2010, the prevalence rate of adult self-report current asthma among Hispanic Floridians was 9.9% compared to the national average of 7.2%. All data is available on Florida’s page at www.statehealthfacts.org. This data is from a 2 year merge (2011-2012) but is referred to by the second year, 2012. Centers for Disease Control and Prevention (CDC). Behavioral Risk Factor Surveillance System Survey Data, 2011-2012. In 2012, 83.4% of adult Whites claimed that they had good or better health compared to 76% among Black adults and 75.2% among Hispanic adults. This data is from a 2 year merge (2011-2012) but is referred to by the second year, 2012. Centers for Disease Control and Prevention (CDC). Behavioral Risk Factor Surveillance System Survey Data, 2011-2012. Roderick King. County Health Rankings & Roadmaps: Florida (University of Wisconsin Population Health Institute, 2013), http://www.countyhealthrankings.org/app/florida/2013/rankings/outcomes/overall/by-rank. Florida Department of Health, Office of Minority Health. “Closing the Gap” (August 5, 2013), http://www.doh.state.fl.us/minority/CTG/. Florida Department of Health. Florida State Health Improvement Plan 2012 -2015 (Tallahassee, Florida: Florida Department of Health, April 2012), http://www.doh.state.fl.us/planning_eval/strategic_planning/SHIP/FloridaSHIP2012-2015.pdf. Alachua County Health Department. Alachua County Community Health Improvement Plan (November 212), http://www.doh.state.fl.us/chdalachua/docs/alachuaCHIP.pdf; Charlotte County Health Department. Charlotte County, Florida Community Health Improvement Plan (August 2012), http://www.doh.state.fl.us/chdcharlotte/Resources/CCHD_CHIP-web.pdf; Gulf County Department of Health. Gulf County 2013 Community Health Improvement Plan Report (September 2013), http://www.gulfchd.com/forms/GULF_CHIP_Report.pdf. Healthy Central Florida website: http://www.healthycentralflorida.org/. National Association of State Budget Officers. State Expenditure Report: Examining Fiscal 2010-2012 State Spending (Washington, D.C.: 2012), http://www.nasbo.org/sites/default/files/State%20Expenditure%20Report_1.pdf. UI and KCMU estimates based on data from FY 2010 MSIS. Urban Institute Estimates based on data from 2011 CMS, Form 64 (August 2012). Kaiser Commission on Medicaid and the Uninsured and Urban Institute estimates based on data from FY 2010 MSIS and CMS-64 reports. Federal Register, November 30, 2011 (Vol 76, No. 230), pp 74061-74063, at http://www.gpo.gov/fdsys/pkg/FR-2011-11-30/pdf/2011-30860.pdf. Martha Heberlein, Tricia Brooks, Joan Alker, Samantha Artiga, and Jessica Stephens. Getting into Gear for 2014: Findings from a 50-State Survey of Eligibility, Enrollment, Renewal, and Cost-Sharing Policies in Medicaid and CHIP, 2012-2013 (January 2013), http://www.kff.org/medicaid/report/getting-into-gear-for-2014-findings-from-a-50-state-survey-of-eligibility-enrollment-renewal-and-cost-sharing-policies-in-medicaid-and-chip-2012-2013/. Florida Agency for Health Care Administration. Medicaid Managed Care Enrollment Reports. (August 2013), http://ahca.myflorida.com/mchq/managed_health_care/MHMO/med_data.shtml. Florida Agency for Health Care Administration. (August 2012). The pilot began in Broward and Duval Counties on July 1, 2006 and expanded to Baker, Clay, and Nassau Counties on July 1, 2007. 
“Florida Medicaid Reform Pilot,” Florida Department of Health, http://ahca.myflorida.com/medicaid/medicaid_reform/index.shtml, 2011. Samantha Artiga. Florida Medicaid Reform Waiver: Early Findings and and Current Status (Kaiser Commission on Medicaid and the Uninsured, Kaiser Family Foundation, October 2008), http://www.kff.org/health-reform/issue-brief/florida-medicaid-reform-waiver-early-findings-and/. Georgetown University Health Policy Institute. Florida’s Experience with Medicaid Reform: What has been learned in the first two years? (October 2008), http://hpi.georgetown.edu/floridamedicaid/pdfs/briefing7.pdf. Florida’s Managed Medical Assistance Program approval documents, http://www.medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Waivers/1115/downloads/fl/fl-medicaid-reform-ca.pdf. Medical Loss Ratio (MLR) quantifies the percentage of health care funds that must be spent on medical care, as opposed to overhead, executive salaries, or marketing. Starting in 2012, the ACA requires insurance companies to spend at least 80% to 85% of premium dollars on medical care and to provide rebates to consumers if they do not meet those standards. Joan Alker. Florida’s Medicaid Managed Care Waiver Receives Final Approval: Some Strong Consumer Protections Included, Oversight Will Be Critical (Georgetown University Center for Children and Families, June 14, 2013), http://ccf.georgetown.edu/all/floridas-medicaid-managed-care-waiver-receives-final-approval-some-strong-consumer-protections-included-oversight-will-be-critical/. Joan Alker and Jack Hoadley. Medicaid Managed Care in Florida: Federal Waiver Approval and Implementation (Georgetown University Health Policy Institute, October 2013), http://hpi.georgetown.edu/floridamedicaid/. CMS. Approval letter for “Florida Long-Term Care Managed Care” 1915(b)/(c) waiver (February 1, 2013), http://ahca.myflorida.com/medicaid/statewide_mc/pdf/Signed_approval_FL0962_new_1915c_02-01-2013.pdf. Nine states currently have MOUs signed with CMS, but this number keeps changing as CMS approves demonstrations on a rolling basis. For more information about the Demonstration projects, please see CMS. “Financial Alignment Initiative” (updated September 26, 2013), http://www.cms.gov/Medicare-Medicaid-Coordination/Medicare-and-Medicaid-Coordination/Medicare-Medicaid-Coordination-Office/FinancialModelstoSupportStatesEffortsinCareCoordination.html. Florida Association of Community Health Centers. “Florida’s Federally Qualified Health Centers”, http://www.fachc.org/about-member-centers.php. State Health Facts. “Federally-Funded Federally Qualified Health Centers Patient Encounters or Visits, 2011” (Kaiser Family Foundation, March 2013), http://www.kff.org/other/state-indicator/total-fqhc-encounters-or-visits/. Safety Net Hospital Alliance of Florida website: http://safetynetsflorida.org/. Bureau of Clinician Recruitment and Service. “HRSA Data Warehouse: Designated Health Professional Shortage Areas Statistics, as of July 29, 2013” (Health Resources and Services Administration, July 2013), http://ersrs.hrsa.gov/reportserver/Pages/ReportViewer.aspx?/HGDW_Reports/BCD_HPSA/BCD_HPSA_SCR50_Smry_HTML&rs:Format=HTML4.0. MaryBeth Musumeci. A Guide to the Supreme Court’s Affordable Care Act Decision (Kaiser Family Foundation, July 2012), http://www.kff.org/health-reform/issue-brief/a-guide-to-the-supreme-courts-affordable/. Florida Office of Economic and Demographic Research. Social Services Estimating Conference. 
Estimates Related to Federal Affordable Care Act: Title XIX (Medicaid) & Title XXI (CHIP) Programs (March 7, 2013), http://edr.state.fl.us/Content/conferences/medicaid/FederalAffordableHealthCareActEstimates.pdf. Office of Governor Rick Scott. “Florida Won’t Implement Option Portions of Obamacare” (July 1, 2012), http://www.flgov.com/florida-wont-implement-optional-portions-of-obamacare/. Office of Governor Rick Scott. “Governor Rick Scott: We Must Protect the Uninsured and Florida Taxpayers with Limited Medicaid Expansion” (February 20, 2013), http://www.flgov.com/governor-rick-scott-we-must-protect-the-uninsured-and-florida-taxpayers-with-limited-medicaid-expansion/. Legislative actions for SB 1816, http://www.flsenate.gov/Session/Bill/2013/1816. Fiscal analysis of Negron’s Healthy Florida proposal, http://ahca.myflorida.com/Executive/Communications/Press_Releases/archive/docs/2013/ProposalComparison.pdf. There have been many economic impact analyses of the Medicaid expansion on Florida. The state’s Social Services Estimating Conference has released a series of such analyses. The latest, released on March 7, 2013, projects that the Medicaid expansion would draw down $51.5 billion in federal funds over from 2014 – 2023, costing the state $3.5 billion over the same time period (Florida Office of Economic and Demographic Research. Social Services Estimating Conference. Estimates Related to Federal Affordable Care Act: Title XIX (Medicaid) & Title XXI (CHIP) Programs (March 7, 2013), http://edr.state.fl.us/Content/conferences/medicaid/FederalAffordableHealthCareActEstimates.pdf). Stan Dorn, John Holahan, Caitlin Carroll, and Megan McGrath. Medicaid Expansion Under the ACA: How States Analyze the Fiscal and Economic Trade-Offs (Urban Institute, June 2013), http://www.urban.org/UploadedPDF/412840-Medicaid-Expansion-Under-the-ACA.pdf. Florida Chapter Number 2013-110 (June 6, 2013), http://laws.flrules.org/2013/110. The Florida House of Representatives did introduce a bill on April 11, 2013 for a program called “Florida Health Choices Plus+”, that would be an alternative to the Medicaid expansion. If this program is adopted, the state would give each participant $2,000 per year to help them purchase coverage and enrollees would pay a $25 monthly premium. Nondisabled enrollees would be required to work at least 20 hours per week. Florida House Majority Office. Florida Health Choices Plus+ (April 2013), http://www.myfloridahouse.gov/Handlers/LeagisDocumentRetriever.ashx?Leaf=housecontent/HouseMajorityOffice/Lists/Other%20Items/Attachments/6/Florida_Heath_Choices_Plus.pdf&Area=House. State Health Facts. “Status of State Action on the Medicaid Expansion as of October 22, 2013” (Kaiser Family Foundation, October 22, 2013), http://www.kff.org/health-reform/state-indicator/state-activity-around-expanding-medicaid-under-the-affordable-care-act/. Medicaid.gov. “Medicaid and CHIP Eligibility Levels” (September 30, 2013), http://medicaid.gov/AffordableCareAct/Medicaid-Moving-Forward-2014/Medicaid-and-CHIP-Eligibility-Levels/medicaid-chip-eligibility-levels.html. Kaiser Commission on Medicaid and the Uninsured. Medicaid Eligibility for Adults as of January 1, 2014 (Kaiser Family Foundation, October 2013), http://www.kff.org/medicaid/fact-sheet/medicaid-eligibility-for-adults-as-of-january-1-2014/. For current child and adult Medicaid eligibility levels, please see: Martha Heberlein, Tricia Brooks, Joan Alker, Samantha Artiga, and Jessica Stephens. 
Getting into Gear for 2014: Findings from a 50-State Survey of Eligibility, Enrollment, Renewal, and Cost-Sharing Policies in Medicaid and CHIP, 2012-2013 (Kaiser Commission on Medicaid and the Uninsured, Kaiser Family Foundation, January 2013), http://www.kff.org/medicaid/report/getting-into-gear-for-2014-findings-from-a-50-state-survey-of-eligibility-enrollment-renewal-and-cost-sharing-policies-in-medicaid-and-chip-2012-2013/. Kaiser Commission on Medicaid and the Uninsured. The Coverage Gap: Uninsured Poor Adults in States that Do Not Expand Medicaid (Kaiser Family Foundation, October 2013), http://www.kff.org/health-reform/issue-brief/the-coverage-gap-uninsured-poor-adults-in-states-that-do-not-expand-medicaid/. Gary Claxton, et al. State-by-State Estimates of the Number of People Eligible for Premium Tax Credits Under the Affordable Care Act (Kaiser Family Foundation, November 2013), http://kff.org/health-reform/issue-brief/state-by-state-estimates-of-the-number-of-people-eligible-for-premium-tax-credits-under-the-affordable-care-act/. State Health Facts. “State Decisions for Creating Health Insurance Marketplaces” (Kaiser Family Foundation, May 28, 2013), http://www.kff.org/health-reform/state-indicator/health-insurance-exchanges/. For a list of participating health insurers, see: Florida Office of Insurance Regulation chart, http://www.floir.com/siteDocuments/Avg_Costs_PPACA.pdf. Center for Consumer Information and Insurance Oversight (CCIIO). Additional Information on Proposed Essential Health Benefits Benchmark Plans, http://www.cms.gov/CCIIO/Resources/Data-Resources/ehb.html. Florida Health Choices website: http://myfloridachoices.org/. SB 2534 (Chapter 2008-32). Florida act related to health insurance and Cover Florida Health Care Access Program. 2008, http://laws.flrules.org/files/Ch_2008-032.pdf. Health Choices Florida. “Florida Health Choices Names Xerox as Program Administrator” (May 2012), http://myfloridachoices.org/florida-health-choices-names-xerox-as-program-administrator/ See Florida Health Choices: https://www.floridahealthchoices.com/. Lily Rockwell. “Florida’s Health Care Insurance exchange” (Florida Trend, September 3, 2013), http://www.floridatrend.com/article/16056/floridas-health-care-insurance-exchange. Carla Anderson. “Florida’s Ongoing Battle Against the Affordable Care Act” (HealthInsurance.org, October 14, 2013), http://www.healthinsurance.org/florida-state-health-insurance-exchange/ (accessed October 20, 2013). State Health Facts. “Total Health Insurance Exchange Grants” (Kaiser Family Foundation, 2013), http://www.kff.org/health-reform/state-indicator/total-exchange-grants/. Annie Mach and C. Stephen Redhead. Status of Federal Funding for State Implementation of Health Insurance Exchanges (Congressional Research Service, June 19, 2013), http://www.fas.org/sgp/crs/misc/R43066.pdf. Data pulled on September 9, 2012 from Tracking Accountability in Government Grants System (TAGGS), http://taggs.hhs.gov/. Kaiser Family Foundation Analysis of ACA Funding, May 1, 2012. Per Capita calculations were completed using US Residents per State, KCMU/Urban Institute estimates based on the Census Bureau's March 2010 and 2011 Current Population Survey (Annual Social and Economic Supplements). For more information on consumer assistance programs, see: Kaiser Commission on Medicaid and the Uninsured. 
Helping Hands: A Look at State Consumer Assistance Programs under the Affordable Care Act (Kaiser Family Foundation, September 2013), http://www.kff.org/health-reform/issue-brief/helping-hands-a-look-at-state-consumer-assistance-programs-under-the-affordable-care-act/. Florida Statute § 625.25 Patrick Morrisey, Attorney General, State of West Virginia, A communication from the States of West Virginia, Alabama, Florida, Georgia, Kansas, Louisiana, Michigan, Montana, Nebraska, North Dakota, Oklahoma, South Carolina, and Texas regarding data privacy risks posed by programs assisting consumers with enrollment in health insurance through the new exchanges, August 14, 2013, https://www.oag.state.tx.us/newspubs/releases/2013/Letter_to_HHS_re_Data_Privacy__final_8_14_13_.pdf, last accessed on October 17, 2013. “Florida bans insurance ‘navigators’ in counties,” Herald-Tribune, September 12, 2013, accessed October 17, 2013, http://www.heraldtribune.com/article/20130912/ARTICLE/130919851.
<urn:uuid:1bd470cb-fa6c-4b45-a95d-c1f38ba3e731>
{ "dump": "CC-MAIN-2014-41", "url": "http://kff.org/health-reform/fact-sheet/the-florida-health-care-landscape/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132495.49/warc/CC-MAIN-20140914011212-00115-ip-10-196-40-205.us-west-1.compute.internal.warc.gz", "language": "en", "language_score": 0.9197666645050049, "token_count": 8629, "score": 2.5625, "int_score": 3 }
The Fifteenth Air Force was one of two strategic air forces in Europe, along with the Eighth Air Force. ... The 5th Air Division (5th AD) originated on 19 October 1940 at McChord Field, Washington. Its initial mission was air defense of the northwest United States with three bombardment groups (12th, 17th and 39th) flying early B-17 Flying Fortresses (B-17C/D), as well as the B-18 Bolo and its B-23 Dragon variant. With the United States' entry into World War II, the mission of the 5th Bomb Wing was changed to that of a strategic heavy bomber wing, in July 1942 being initially assigned to the new Eighth Air Force. However, the 5th Bomb Wing was reassigned to the Twelfth Air Force in October 1942, to support the Western Task Force being assembled for the Operation Torch landings, planned for November. The 5th moved to North Africa in November, and its subordinate units began flying missions from French Morocco in late 1942. The 97th and 301st Bomb Groups, both being transferred from Eighth Air Force, were the pioneer heavy bomb groups in North Africa. The three weeks prior to the invasion saw a number of secret missions flown by the 97th BG. The first of these occurred on 18 October 1942 when General Mark Clark, commander of ground forces in the Western Task Force, flew to Gibraltar, along with a box containing $100,000 in gold 20 Franc coins, which were going to be paid to corrupt Vichy France officials in North Africa in order to secure their cooperation during the coming invasion. However, after Clark landed in Gibraltar, the coins were lost overboard on the final leg of their journey. Also, on 5 November, General Dwight Eisenhower and British General Kenneth Anderson were flown on a 97th BG B-17 from Britain to Gibraltar. The following day, General James Doolittle, the newly named commander of Twelfth Air Force, was flown to Gibraltar. Doolittle's B-17 was intercepted by four Ju-88s over the Bay of Biscay, forcing the pilot to dive sharply and make a run for it just above the ocean's surface. The co-pilot of the aircraft was injured by a strafing run of one of the German aircraft, and Doolittle reached for the first aid kit and attended to the wounded man. Afterward, Doolittle sat in the co-pilot's seat and helped fly the aircraft to Gibraltar. Shortly after the invasion, the 97th and 301st moved from their bases in England to an airfield at Tafraoui, Algeria. The conditions in Algeria were sparse compared to those in England, but by 24 November the two groups attacked the docks at Bizerte, Tunisia. As the American forces moved eastward, the 5th's units flew from Algeria beginning in January 1943, attacking coastal targets in Tunisia, and also concentrations of Rommel's Afrika Korps. The 5th BW moved to Tunisia in August. Targets included airdromes, marshalling yards, bridges, and troop concentrations. In February 1943, the 5th, in direct support of ground operations, bombed enemy troop concentrations in the Kasserine Pass. From its airfields in Tunisia, its subordinate units bombed Pantelleria, Sicily, and marshalling yards and airdromes on the Italian mainland. By October, the 5th Bomb Wing consisted of the two B-17 groups as well as two P-38 equipped fighter groups (1st, 325th FG). On 1 November 1943, Fifteenth Air Force was established as a second American strategic air force in the European Theater. It was hoped that the 15th AF, stationed in the Mediterranean, would be able to operate when the Eighth Air Force in England was socked in by bad English weather.
Twelfth Air Force would continue to operate, however it would be realigned as a tactical air force. The 97th and 301st were joined with three additional B-17 groups (2d, 98th 99th BG) with its reassignment to Fifteenth Air Force. Missions were flown from Tunisia in November against a Messerschmidt assembly plant in Austria, and against some Italian targets, however the wing and its groups were in the process of moving to new airfields captured around Foggia in Italy in late September. Advanced echelons moved initially, working with engineering units to prepare the airfields and extend runways to accommodate the B-17. The 2d Bomb Group moved to Amendola airfield, while the 97th moved to the Foggia airfield, as its base at San Giovanni was still not ready. The 301st flew into Cerignola and the 99th into Tortorella. Once settled into their new bases around Foggia the 5th began a series of raids, attacking enemy targets in Germany, Austria, Hungary, Yugoslavia, Greece, and Bulgaria. In June 1944, its groups began "shuttle bombing" and landing on airfields behind the Russian front. On these missions, American aircraft took off from airdromes in Italy, made a bombing attack, and landed on airdromes in the Soviet Union. Then they reversed the process. In August 1944, the 5th Wing supported Operation Dragoon, the invasion of Southern France. The 5th Bomb Wing continued strategic bombing missions until the Germans surrendered in May 1945. It was inactivated in Italy on 2 November 1945. Constituted as 5th Bombardment Wing on 19 October 1940. Activated on 18 December 1940. Assigned to Second Air Force. Inactivated on 5 September 1941. Activated on 10 July 1942. Moved to North Africa, October-December 1942, and began operations with Twelfth Air Force. Assigned to Fifteenth Air Force in November 1943. Redesignated 5th Bombardment Wing (Heavy) in Jan 1945. Served in combat until May 1945. Inactivated in Italy on 2 November 1945. The group was first established as the 68th Observation Group in 1941 at Brownwood Army Air Field, Texas, on 1 September 1941. Its primary mission was observation aircraft training and antisubmarine patrols. The group moved to several different U.S.... The 82nd Fighter Group flew training missions from bases in Northern Ireland between October and December 1942. They then joined the Twelfth Air Force in North Africa, supporting the ground invasion of Tunisia, Sicily and Italy. Between October 1943... The 97th Bomb Group flew the Eighth Air Force's first heavy bomber mission from the UK when they bombed a marshalling yard at Rouen on 17 August 1942. Just a month later though the Group were reassigned to the Twelfth Air Force and left England for the... Constituted as 376th Bombardment Group (Heavy) on 19 Oct 1942 and activated in Palestine on 31 Oct. Began combat immediately, using B-24 aircraft. Operated with Ninth AF from bases in the Middle East, Nov 1942-Sep 1943, and with Twelfth AF from Tunisia... 'As part of the 15th Air Force in WWII, the 483rd Bombardment Group (H) played a significant role in the eventual defeat of Germany's forces. Its four combat squadrons, the 815th, 816th, 817th and 840th flew a total of 5,623 sorties from 12 April 1944... In addition to the above history, the 17th BG provided the crews for the Doolittle Raid on Japan in 1942 The 98th trained for bombardment missions with B-24 Liberators during the first half of 1942. ... Constituted as 99th Bombardment Group (Heavy) on 28 Jan 1942. Activated on 1 Jun 1942. Trained with B-17's. 
Moved to North Africa, Feb-May 1943, and assigned to Twelfth AF. Entered combat in Mar 1943 and bombed such targets as airdromes, harbor... The 463rd BG entered combat on March 30, 1944. The target was the airdrome at Imoski, Yugoslavia. Thirty-nine planes dropped 117 tons of bombs from 20,000 feet. Although slight flak was encountered, all planes returned safely. The group flew a... Military | Sergeant | B-17 Waist Gunner | 99th Bomb Group Military | First Lieutenant | B-17 Pilot | 99th Bomb Group Military | Technical Sergeant | Radio Operator | 2nd Bomb Group Military | Major General | Deputy Commanding Officer, 8th Air Force | 97th Bomb Group Graduated from West Point in 1935 and joined the AAC. In 1942, he commanded the 97th Bomb Group in England, operations deputy of the 12th Bomb Command, and operations officer of the 47th Bomb Wing (1943). Served as Deputy Commander of the 8th AF.... B-17 Flying Fortress Delivered Cheyenne 10/4/44; Hunter 27/4/44; Dow Fd 10/5/44; Assigned 346BS/99BG Tortorella 19/5/44; in base taxi accident with Dick Schildmeyer 24/2/45; 97m Salvaged 26/4/45; WEARY WILLIE. B-17 Flying Fortress Delivered Denver 18/2/43; Gt Falls 20/2/43; Salinas 18/3/43; Morrison 29/3/43; Assigned 341BS/97BG Chateau-du-Rhumel 16/4/43; Pont-du-Fahs 1/8/43; Depienne 15/8/43; transferred 346BS/99BG Oudna 14/11/43; Tortorella 11/12/43; 483BG Tortorella 31/3/44;... B-17 Flying Fortress Delivered Tulsa 10/11/42; West Palm Beach 11/12/42; Assigned 340BS/97BG Biskra 31/12/42; Chateau-du-Rhumel 8/2/43; Pont-du-Fahs 1/8/43; Depienne 15/8/43; transferred 346BS/99BG Oudna 14/11/43; Tortorella 11/12/43; 22m/99 483BG Tortorella 31/3/44;... B-17 Flying Fortress Delivered Tulsa 3/12/42; West Palm Beach 9/1/43; Assigned 97BG Biskra 20/1/43; Chateau-du-Rhumel 8/2/43; Pont-du-Fahs 1/8/43; Depienne 15/8/43; transferred 346BG/99BG Oudna 14/11/43; 27m/99 815BS/483BG Tortorella 31/3/44; Missing in Action Milan 30/4... B-17 Flying Fortress Delivered Denver 15/11/43; Kearney 1/12/43; Assigned 346BS/99BG Tortorella 1/44; 79m Salvaged 9/2/45. SILVER METEOR. Paul Andrews, Project Bits and Pieces, 8th Air Force Roll of Honor database
<urn:uuid:c952198b-8527-411c-ba8a-ca299204a567>
{ "dump": "CC-MAIN-2022-05", "url": "https://www.americanairmuseum.com/unit/158", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00128.warc.gz", "language": "en", "language_score": 0.9566025137901306, "token_count": 2496, "score": 3.390625, "int_score": 3 }
Dogs and cats used to be quite healthy until commercial pet food was first manufactured in the 1850s in England. Over a number of years our pets have become sicker and sicker with diabetes, pancreatitis, bad breath, skin disease, failing hearts and kidneys, and failing immune systems, which are becoming more common. Unfortunately, today people are programmed to feed commercial pet food. They are under the mistaken belief that commercial processed food offers a complete diet for the different life stages of our pets. Commercial pet food is extremely convenient for us: easy to store, easy to purchase and easy to feed; you are even told the quantity and frequency to feed. Listed below are some of the reasons why we will NEVER offer commercial pet food to our animals:

Cooked / Heat Treated
Cooking destroys many vitamins necessary for good health, like the B Group Vitamins and Vitamin C, and reduces the protein value.
It destroys enzymes. Enzymes in raw food have two very important functions: they help with the digestion of food and they help slow the aging process. If our pets are fed cooked food it forces the pancreas to work harder to produce more digestive enzymes to help digest the food, resulting in several diseases so commonly seen in our pets of today, including pancreatitis and diabetes.
It destroys anti-oxidants. Anti-aging factors called anti-oxidants are destroyed in the cooking process, and therefore the food is less able to slow the degenerative diseases of old age, including cancer, kidney disease, heart disease and arthritis.

Periodontal Disease
If dogs and cats are not fed raw meaty bones they are not able to brush and floss their teeth. The ripping, tearing and crunching necessary to consume a raw meaty bone washes, polishes and scrubs gums and teeth. 95% of dogs fed a commercial diet will show signs of periodontal disease by the time they are 4 years of age. Periodontal disease is not just a problem in the mouth; it has the potential to affect a number of vital organs in the body. Bacteria associated with periodontal disease can enter the bloodstream and cause serious problems such as affecting the valves of the heart, causing permanent damage leading to heart failure. Likewise, bacteria can damage both the kidneys and liver, also resulting in failure.

Preservatives, Additives, Colourings and Flavourings
Commercial pet foods are full of ill-health-promoting colourings, flavourings and preservatives that can cause hypersensitivity reactions, allergies and skin problems at the very least.

High in Cereals, Grains and Carbohydrates
Grains make up the majority of commercial pet food companies' food source. The large amounts of cereals and grains in commercial foods are not chemically or physically suitable for dogs, which do not have a digestive system able to cope with grains. Grains are one of the biggest sources of allergies in dogs and cause numerous health problems like obesity, pancreatitis, skin problems, diabetes, dental problems, arthritis, bladder stones and cancer.

Excess Salt
Commercial pet food can contain 10 to 20 times more salt than our dogs and cats require. This is very harmful to our pets as it increases blood pressure and can lead to kidney and heart disease.

Aust Ch. Staffwild High Voltage, fed Canine Country BARF since birth.
<urn:uuid:58e87888-47ab-4c6f-93b9-165cca48441f>
{ "dump": "CC-MAIN-2018-43", "url": "http://caninecountry.com.au/index.php/barf-info/commercial-dog-food", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516135.92/warc/CC-MAIN-20181023111223-20181023132723-00515.warc.gz", "language": "en", "language_score": 0.9419783353805542, "token_count": 675, "score": 2.875, "int_score": 3 }
|    New Jersey Division of Fish and Wildlife| New Jersey's largest known bat hibernaculum is the Hibernia Mine in Rockaway Township, Morris County. The mine was abandoned in the early 1900s and the first record of bats using the mine is from the 1930s. In the decades that followed, the mine continued to provide winter habitat for bats but frequent and constant human disturbance limited the mine's potential. Over the years a number of unsuccessful attempts were made by landowners to seal the mine to keep people out. However, sealing the mine would have also made it unavailable to the bats. In July of 1994, the Endangered and Nongame Species Program successfully negotiated a long sought after agreement with landowners to install a special bat conservation gate to keep people out but allow free access by the bats. The gate was designed by Roy Powers of Virginia and constructed through the joint efforts of Powers, the ENSP, the US Fish and Wildlife Service and Bat Conservation International. Shortly after the gate was installed the state acquired the property through the Green Acres Program and it is currently part of the Wildcat Ridge Wildlife Management Area and is listed as a Watchable Wildlife Site. The ENSP conducts a biennial winter survey to assess the bat population in the Hibernia Mine. The most recent survey occurred in February 1999 when more than 30,000 bats were counted - the largest total since bat surveys began in the mine. Beginning in 2003, a Summer Bat Count was begun to document summer roosting locations throughout New Jersey and help to create a range map for the state's nine species of bat. The information will also help to determine roosting and foraging requirements and contribute to the protection of bats in New Jersey. In 1997, the ENSP began a project to search for and protect additional mines and tunnels that support wintering bat populations. A number of new hibernacula have already been located and efforts are underway to protect them. In November 1999 a bat conservation gate was constructed and installed over the entrance to a tunnel in Worthington State Forest in Warren County. The gate was completed through the cooperative efforts of the ENSP, the Division of Parks and Forestry and the US Army Corps of Engineers. The site currently supports several hundred wintering bats but has the potential to support a much larger population. Time will tell if protecting the bats from human disturbance will allow the wintering population to increase. The ENSP will continue to search for and protect habitats that are important to bats.
<urn:uuid:0a3f6595-c58c-4cc4-b1d4-dab36d4d320f>
{ "dump": "CC-MAIN-2016-36", "url": "http://www.state.nj.us/dep/fgw/bat.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982947845.70/warc/CC-MAIN-20160823200907-00113-ip-10-153-172-175.ec2.internal.warc.gz", "language": "en", "language_score": 0.9692202806472778, "token_count": 507, "score": 3.484375, "int_score": 3 }
Since time immemorial, people have been struggling for their rights and the rights of their family members or fellow citizens. In the modern era, those fighting for the rights of others are called human rights defenders. It has never been easy to advance the struggle for human rights in the discourse of Member States of the UN. Protecting human rights defenders, these voices of dissent and voices for human rights, has always required deliberate effort from States. Long after the evolution of International Humanitarian Law, the framework for the protection of human rights defenders finally found resonance in the discourse of Member States of the UN, when in 1984 the UN began the elaboration of the Declaration on Human Rights Defenders.

Unlike in some countries where the number of cases of enforced disappearances and other forms of human rights violations is decreasing, Asian states are witnessing an upsurge in new cases. It is quite unfortunate that we do not have many examples in Asia of ways to solve this problem that governments can follow to end the vicious cycle of impunity. In the Philippines, former president Ferdinand Marcos' Martial Law has ended, but the phenomenon of enforced disappearance has not. Suharto's dictatorship in Indonesia has also collapsed, but Aceh and West Papua continue to witness enforced disappearances. Nepal has seen an end to its armed conflict, yet the leaders who swore to protect freedom and democracy deny the people justice. Sri Lanka has seen 60,000 people disappeared over the last several years. The Bangladeshi government claims to prosecute the alleged perpetrators of crimes committed in the 1971 war for independence, but its hands are also drenched with the blood of its own people. South Korea demands justice for the transgressions of North Korea, but hardly sees anything outside of it. In Kashmir, disappearances continue, and India still refuses to tell the truth about the 8,000 people disappeared and has yet to prosecute the perpetrators.

The year 2013 ended with the Government of Argentina bestowing upon me the Emilio Mignone International Human Rights Prize on 10 December 2013. It is a recognition of the significance of the struggle against enforced disappearances in Asia, the region that has submitted the highest number of cases to the United Nations. At the end of 2013, the International Convention for the Protection of All Persons from Enforced Disappearance (Convention) had garnered 92 signatories and 41 States parties. While Asia has rampant cases of disappearances, the only additional State party to the Convention is Cambodia. Not one Asian country has replicated the Philippines in enacting an anti-enforced disappearance law. In Nepal, a nagging insistence on false reconciliation between the victims and the perpetrators, by merging the anti-disappearance bill with the truth commission, blocks the road to a genuine and lasting peace. The much-awaited ratification of the Convention by Indonesia was not realized. No progress was seen in South Korea on victims of disappearances committed by North Korea.

The year 2012 ended with the enactment of Republic Act 10353, the Philippine Anti-Enforced Disappearance Act of 2012. It was followed by the promulgation of its Implementing Rules and Regulations (IRR) on 12 February 2013. The law signifies a moral victory for the families of the disappeared in the Philippines who, amidst many constraints, persistently campaigned for an anti-disappearance law into their twilight years and, for some, till the very end of their lives.
Recognizing the invaluable contribution of its authors, families of the disappeared profoundly value the law as a major form of justice. A recognition of the cruelty of this state-perpetrated crime, it gives prime importance to the desaparecidos; recognizes their sacrifices and the sufferings of their loved ones; and seeks truth, justice, rehabilitation, reparation and non-recurrence. For this, the AFAD salutes the Families of Victims of Involuntary Disappearance (FIND) for the grand success of its campaign, which made the Philippines the first country in Asia to have an anti-enforced disappearance law. Such exemplary work may not be exactly replicated in other countries, but the process itself is an experience that can serve as a guide for all those who struggle to erase enforced disappearance from the face of the earth. Justice for all desaparecidos!

The struggle for truth and justice of victims and survivors of enforced disappearances has been waged for decades. In individual cases, it starts when a person is made to disappear, but it rarely ends even when the fate and whereabouts of the disappeared have been clarified. In many situations, the struggle continues beyond the clarification of the fate and whereabouts of the disappeared persons. The victims, as exemplified by those in Latin America, Africa, Europe and particularly in our region, Asia, forge on with the struggle for justice, reparation, memory and guarantees of non-recurrence.

The Voice is a bi-annual publication of the Asian Federation Against Involuntary Disappearances (AFAD). It provides readers with the latest on human rights, with a focus on the issue of involuntary disappearance within the Asian region. AFAD welcomes contributions but reserves editorial rights.

Mary Aileen Diez-Bacalso
Sara La Rocca and Ivanka Custodio
<urn:uuid:6bd130d4-5508-437b-94af-20ae2686dfc6>
{ "dump": "CC-MAIN-2017-47", "url": "http://www.afad-online.org/resources/the-voice", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807044.45/warc/CC-MAIN-20171123233821-20171124013821-00195.warc.gz", "language": "en", "language_score": 0.9479947090148926, "token_count": 1045, "score": 3.140625, "int_score": 3 }
Digital Trends' "Watch a NASA astronaut jettison part of the ISS into space", which was linked in, says:

Writing in Air & Space last year about the process of jettisoning objects, veteran NASA engineer Mike Engle explained how launching decommissioned parts from the space station can be a risky process, a fact that prompted him to help create an official ISS Jettison Policy to ensure that such activities are carried out safely. "Jettisoning trash from a spacecraft is no mere stroll to a dumpster," Engle wrote. "First and foremost, you have to make sure that whatever you throw away doesn't come back and hit you — a frightening possibility in the weird realm of orbital mechanics." The engineer added, "Simple trigonometry led to the conclusion that pushing an object away at two inches per second within a 30-degree cone centered on a line directly opposite the direction that the ISS was traveling as it orbited the Earth would be enough" to send the part safely on its way.

The same speed is mentioned in the Air & Space link it cites. From "Tossing Out Trash From the Space Station Takes More Planning Than You'd Think":

Our idea was to have EVA astronauts manually push jettisoned items away in the direction opposite the station's orbit. Analysis showed that a surprisingly small retrograde change in velocity was required: only about 1 to 1.5 inches per second would ensure no recontact. The drag of the jettisoned object would be greater than that of the ISS, further ensuring that the jettisoned object would keep moving behind and below the ISS until it eventually burned up in the atmosphere. In the case of the EAS, however, we scheduled a thruster burn to raise the ISS orbit after jettison just to make sure. Safety engineers insisted that we define a jettison "cone" to account for any directional errors, so that even if an object were at the edge of the cone, it would still fly away safely. Simple trigonometry led to the conclusion that pushing an object away at two inches per second (a rate easily achievable by an EVA astronaut) within a 30-degree cone centered on a line directly opposite the direction that the ISS was traveling as it orbited the Earth would be enough.

This seems to be a lot faster than that. In the video in this International Space Station tweet, it looks more like two feet per second than two inches, an order of magnitude difference. I estimate that the antenna cover moves more than its own length in one second.

.@AstroVicGlover jettisons a science antenna cover into space since it is no longer needed. It will eventually enter Earth's atmosphere and burn up safely.

Question: What's the root cause of the disparity between what the article says and what's shown in the video?
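As a rough sanity check on my estimate (not an answer to the question, just framing the size of the discrepancy), a few lines of Python make the comparison explicit. The 0.5 m length of the antenna cover and the one-second travel time are my assumptions, not figures taken from either article:

```python
# Back-of-envelope comparison of the jettison speeds discussed above.
# Assumptions (mine, not from the articles): the antenna cover is ~0.5 m
# long and crosses about one body length in ~1 second of video.

INCH_TO_M = 0.0254

policy_speed_min = 1.0 * INCH_TO_M   # ~1 in/s: minimum retrograde delta-v per the article
policy_speed_eva = 2.0 * INCH_TO_M   # ~2 in/s: "easily achievable by an EVA astronaut"

object_length_m = 0.5                # assumed size of the antenna cover
travel_time_s = 1.0                  # assumed time to cross one body length
video_speed = object_length_m / travel_time_s

print(f"Policy figures : {policy_speed_min:.3f}-{policy_speed_eva:.3f} m/s")
print(f"Video estimate : {video_speed:.3f} m/s "
      f"(~{video_speed / policy_speed_eva:.0f}x the 2 in/s figure)")

def within_jettison_cone(angle_from_retrograde_deg, half_angle_deg=30.0):
    """True if the throw direction lies inside the 30-degree retrograde cone."""
    return abs(angle_from_retrograde_deg) <= half_angle_deg

print(within_jettison_cone(12.0))    # True: a throw 12 deg off retrograde is inside the cone
```

Even with generous error bars on the assumed size and timing, the video-based figure lands roughly ten times above the 1 to 1.5 inch-per-second number from the Jettison Policy, which is the order-of-magnitude gap I am asking about.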
<urn:uuid:f158ab70-065c-488e-8950-0c3b4cf55118>
{ "dump": "CC-MAIN-2021-21", "url": "https://space.stackexchange.com/questions/50717/this-iss-trash-deployment-looks-more-like-2-feet-than-2-inches-per-second-was-i", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00415.warc.gz", "language": "en", "language_score": 0.9493021965026855, "token_count": 602, "score": 3.328125, "int_score": 3 }
El Centro, California (NAPSI) - Increasingly, consumers want their portable computers to be thinner, sleeker and lighter weight. They also expect their computer to respond when they tap, swipe and touch its screen. No longer a novelty, touch has rapidly become the primary way many consumers interact with their laptops, tablets and other mobile devices. In the past, laptop computer screens used plastic as a cover material. Now, in order to support this touch-enabled world, notebooks are adopting sleek, glass covers that provide better touch capabilities, as they do for your smartphone. Unfortunately, with touch comes the increased potential for the glass to scratch or break. Even careful interaction with these notebook devices can result in scratched cover glass and an unhappy user. As many people know, replacing a screen can be expensive and sometimes cost as much as half of the full notebook price. Plus, the repair process can leave the user without a device for days, even weeks. The good news is that a familiar name in the world of glass innovation—Corning—has addressed these issues by developing a glass solution specifically designed for touch screen notebooks. Called Corning® Gorilla® Glass NBT™, it’s designed to be tough enough to handle the surface pressures intrinsic to these devices and thin enough to enable accurate touch responses by the device. In fact, Gorilla Glass is already used by 33 major brands on over 1,000 product models and 1.5 billion devices worldwide. It is clear that device makers now take the properties of the glass into account when designing a device. The glass is chemically strengthened through an ion exchange process that creates a deep compression layer on the surface of the glass substrate. This layer acts as “armor” to help reduce the introduction of flaws. The result is the cost-effective, damage-resistant solution that consumers have come to expect from the leading maker of cover glass solutions for smartphones, tablets, notebooks and other devices. While Gorilla Glass was designed with touch screens in mind, the glass has also been used in a number of large-format applications, such as digital signage and glass markerboards. It’s believed that future applications are likely to include architecture, appliances, automotive and beyond. To see the glass in action, visit http://www.corninggorillaglass.com/NBT-In-Action. To learn more, visit http://www.corninggorillaglass.com/NBT-Info.
<urn:uuid:f074eccb-5bce-4c00-8502-b1fec24e85a7>
{ "dump": "CC-MAIN-2015-32", "url": "http://www.kxoradio.com/index.php/kxo-news/latest-news/946-glass-tough-enough-for-a-touch-screen-world", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987628.47/warc/CC-MAIN-20150728002307-00006-ip-10-236-191-2.ec2.internal.warc.gz", "language": "en", "language_score": 0.9433268308639526, "token_count": 512, "score": 2.625, "int_score": 3 }
There's a mural in rural Juchari on the wall of a medical clinic. It honors the tradition of family and the women who preserve its culture. On a visit to Mexico City and several outlying regions, I witnessed the strength of Mexico's women and their critical role as national freedom fighters. As America struggles to define its democratic values as a nation, we can look to the south and to our Mexican sisters as role models.

The indomitable character of Mexican women dates back to pre-Columbian times, when Aztec beliefs centered on a cosmic balance of gender roles and power among the sexes. In the 15th century, Aztec women were equal contributors to society, inside and outside of the home. They exercised the same social, religious and political power as men. With the advent of the Spanish conquerors of 1531, women were enslaved and limited to childrearing and household chores to accommodate Spanish patriarchal society and its Christian beliefs. But women continued to practice natural medicine and share their wisdom, often eclipsing the skills of European doctors.

In Diego Rivera's mural "The Popular History of Mexico," which hangs in the National Palace of Mexico City, industrious indigenous women and their tenacious resistance are everywhere. In fact, Diego's wife, Frida Kahlo, exemplified this resistance. She remains a celebrated artist in many modern art circles, and thousands of people from around the world visit her blue house in Mexico. Her garden and living quarters are open to the public, and arrestingly painful self-portraits and original artwork still grace the walls. Frida fought discrimination, advocating social and political reforms for women.

A 16th-century San Agustin church on Patzcuaro's plaza in Michoacan tells another colorful story of a freedom fighter. There is a bronze statue of Gertrudis Bocanegra, one of many women actively involved in creating secret communication networks in support of the Mexican independence movement. Juan O'Gorman's massive fresco in the town church (now a library) tells a colorful pictorial story of her heroic deeds, her torture and her death by firing squad. Like Leona Vicario and other women active in the revolution, Gertrudis refused to relinquish the names of fellow insurgents after her capture by the Royal Spanish Army in 1817.

Women continue to fight suffering and injustice in Mexico, drawing strength from Our Lady of Guadalupe. She is Mexico's most popular saint, immortalized in religious and cultural images throughout the country. Called the Queen of Mexico, she is credited with miracles, including the perfectly preserved image of her on a tilma hundreds of years old. Our Lady's image was added to the Mexican flag to unify the country, and the first President of Mexico (Manuel Fernandes) even changed his name to Guadalupe Victoria to honor her as a national symbol. The image of the Mother of God permeates the country, in cities and the countryside, in church paintings, and even on rocks and walls along Mexican roadways. Her name lives on in many modern-day crusaders like Guadalupe Hernandez Dimas (or Nanu Lu), a humanitarian nominated for a Nobel Peace Prize in 2005. I heard Guadalupe speak about her mission, describing the ancestry and culture of women within the context of four elements: women are the earth which provides nourishment, the air for breathing, the water that gives life, and the fire which represents God and our sun. Respect for culture and pride are central themes as Guadalupe continues to promote a national women's movement.
She advocates for women and organizations to invest in health centers and community education to bring change. I witnessed the success of women in Juchari and Patzcuaro, where the infrastructure and delivery systems continue to provide critically needed efforts to address poverty and suffering. With modern-day freedom fighters like the former first lady of Mexico, Margarita Zavala, and Congressional representative Amalia Garcia, Guadalupe's message is spreading socially and politically across the nation. Zavala and Garcia are focused on the problem of migrant children who leave their homes to reunite with parents living in the U.S. or to escape violence from abuse or drug cartels. When they head to the American border as unaccompanied minors, some as young as 8 years of age, they are opportune targets for child traffickers and gangs.

As countries around the world struggle to keep up with the new and challenging demands of migrant populations displaced from their homes by the atrocities and ravages of war, Mexico's freedom fighters provide a compelling argument for giving women a greater voice at the table. For any nation to be truly democratic and endowed with the freedoms of justice, liberty and equality, we have much to learn from Mexico's women.

If you'd like to know more about the international organization I traveled with, visit: https://www.heartlandalliance.org/about/corporate-structure/heartland-alliance-international/
<urn:uuid:0c77ec59-d555-4263-9f3f-51e8135df0f1>
{ "dump": "CC-MAIN-2018-51", "url": "http://womanscape.com/2016/11/22/mexican-women-a-legacy-of-freedom-fighters/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829568.86/warc/CC-MAIN-20181218184418-20181218210418-00330.warc.gz", "language": "en", "language_score": 0.9452250003814697, "token_count": 1050, "score": 3.953125, "int_score": 4 }
FREE RESOURCE: Social Story Code Red – A Calm Explanation For Special Needs Students

I teach and live in Parkland, Florida, and want to share a booklet that I made for my students. I have not focused on the tragedy, but on a calming explanation of what "Code Red" means in our school district. Staff members were told that we would be having an active shooter drill in February. Last week, I had told my husband that if he heard sirens or any activity at the school where I work, he should not worry, because it was just a drill. Little did we all know that it was not. Please hug your students and, of course, your own children, too. Please feel welcome to email me if you have any other needs for your students and I will try to help. Thank you very much.

© Copyright 2018 Autism Educators, Inc. (AutismEducators.com). All rights reserved by author. This product is to be used by the original purchaser only. Copying for more than one teacher or classroom, or for an entire department, school, or school system is prohibited. This product may not be distributed or displayed digitally for public view, uploaded to school or district websites, distributed via email, or submitted to file-sharing sites such as Amazon Inspire or any sharing websites, including Facebook file sharing. Failure to comply is a copyright infringement and a violation of the Digital Millennium Copyright Act (DMCA). Intended for single classroom and personal use only.

Given a social story related to a real drill or emergency, along with visual prompts as needed, STUDENT will follow directions without protesting, in order to demonstrate understanding of the importance of the situation, based upon teacher observation, by MONTH, YEAR.

This item is recommended for the following grade levels: Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, 5th Grade
<urn:uuid:2f5b5737-35d7-42ce-9a25-ad07ec111352>
{ "dump": "CC-MAIN-2020-29", "url": "http://autismeducators.com/free-resource-social-story-code-red--a-calm-explanation-for-special-needs?tag=social%20story", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00471.warc.gz", "language": "en", "language_score": 0.9563621282577515, "token_count": 420, "score": 2.890625, "int_score": 3 }
- Open Access Genetics and functions of the retinoic acid pathway, with special emphasis on the eye Human Genomics volume 13, Article number: 61 (2019) Retinoic acid (RA) is a potent morphogen required for embryonic development. RA is formed in a multistep process from vitamin A (retinol); RA acts in a paracrine fashion to shape the developing eye and is essential for normal optic vesicle and anterior segment formation. Perturbation in RA-signaling can result in severe ocular developmental diseases—including microphthalmia, anophthalmia, and coloboma. RA-signaling is also essential for embryonic development and life, as indicated by the significant consequences of mutations in genes involved in RA-signaling. The requirement of RA-signaling for normal development is further supported by the manifestation of severe pathologies in animal models of RA deficiency—such as ventral lens rotation, failure of optic cup formation, and embryonic and postnatal lethality. In this review, we summarize RA-signaling, recent advances in our understanding of this pathway in eye development, and the requirement of RA-signaling for embryonic development (e.g., organogenesis and limb bud development) and life. For human health, the importance of retinol, also known as vitamin A, has been known since ancient times, when the practice of squeezing liver juice into the eye was used as a treatment for night blindness . The link between night blindness (nyctalopia) and nutrition was first described by Hippocrates during the 4th century BC, when he recommended eating raw liver in combination with honey as a cure . The idea that certain foods possessed curative properties was understood for much of human history. However, it was not until much later that a series of controlled experiments (i.e., human dietary supplementation and animal models during the early twentieth century) allowed scientists to investigate how removal of certain factors from the diet could cause debilitating illnesses and death [3,4,5]. Biochemical experiments in vertebrate models subsequently revealed that retinol was the active compound involved in cell growth and development, along with its precursors and metabolites . The aldehyde derivative of retinol, 11-cis-retinal, is required for vision [reviewed in ]. All-trans-retinoic acid (ATRA), the acid derivative of retinol, is able to prevent developmental defects in vitamin A-deficient animals . The demonstration that retinoic acid (RA) could not be converted back to retinol in vivo led to the conclusion that RA was a necessary nutrient involved in cell growth and development . Ultimately, a set of compounds and their metabolites with biological functions similar to retinol were termed “retinoids” . Although the chemical structures of retinoids were identified in the early 1900s, little was known about the mechanisms by which these small lipophilic molecules exerted their biological effects . A short time later, studies performed in vitamin A-deficient rats revealed that retinol supplementation stimulated RNA synthesis in intestinal cells . Biochemical studies subsequently identified serum, membrane, and cytosolic proteins that were essential for retinol transport, uptake, and metabolism . Examples include retinol-binding protein 4 (RBP4), stimulated-by-retinoic-acid 6 (STRA6) membrane receptor, and the cellular retinol-binding protein (CRBP) family, also known as RBP1 and RBP2 . 
Experiments carried out in chick and mouse embryos identified RA as the active metabolite of vitamin A that possessed the ability to regulate cellular differentiation and proliferation, as well as pattern formation during embryogenesis [reviewed in ]. At the molecular level, the action of RA is mediated by two distinct classes of proteins: (i) a family of nuclear receptors comprising RA receptors (RARs) and retinoid X receptors (RXRs) that regulate gene transcription in a ligand-dependent fashion and (ii) a family of cytosolic proteins called cellular retinoic acid-binding proteins (CRABP1 and CRABP2) which facilitate cellular RA uptake and nuclear transfer . These studies provided a link between the chemical structure of retinoids and their biological action. Although the developmental role of RA has been extensively studied in model organisms [reviewed in ], little is known about the exact role of RA in human development. Currently, almost all our molecular understanding about the pathogenesis of RA deficiency is based on either vertebrate animal knockout models (in which genes encoding proteins involved in RA synthesis or degradation are selectively inactivated) or experiments involving rodents fed diets deficient in vitamin A. Such studies have shown RA to be involved in early axial and central nervous system patterning, neurogenesis, regulation of limb bud development, and organogenesis [ and reviewed in [13, 14]]. To date, none of these studies has been systematically validated in humans due to ethical concerns and the difficulty of performing such clinical experiments. Nevertheless, based on observational studies in humans consuming vitamin A-deficient diets, it is well established that vitamin A is required in humans (even into adulthood) because it regulates fertility, maintains normal vision, inhibits neoplastic growth, and prevents neurodegenerative diseases . With the recent publication of whole-exome sequencing (WES) data from ≈ 140,000 individuals by the Genome Aggregation Database (gnomAD) , it is now possible to investigate genetic intolerance to protein-truncated variants (PTVs) in a large population, i.e., to detect genes that are essential for human development. In this review, we first provide an overview of canonical and non-canonical RA metabolism (i.e., synthesis and catabolism) and the mechanism of RA target gene regulation. We then provide an update on the role of RA-signaling in eye development in mouse and zebrafish and discuss the ocular diseases in humans who have mutations in genes involved in the RA-signaling pathway—such as microphthalmia, coloboma, and anophthalmia. Finally, we take advantage of population-level variation databases to identify which genes involved in the RA pathway display loss-of-function intolerance, thus indicating their requirement for human development and life. Retinoic acid synthesis, catabolism, and gene regulation This section describes the components of the retinoic acid signaling pathway including cellular uptake of retinol, conversion of retinal to retinaldehyde, retinaldehyde oxidation to RA, RA degradation, and target gene activation (Fig. 1). Canonical pathway of RA synthesis Early studies revealed that retinoids could not be synthesized de novo in most animals [reviewed in ]. Hence, the major source of retinoids during embryonic and fetal development is through placental transfer of maternal retinol. 
Postnatally, retinoids are primarily derived from the dietary intake of (i) carotenoids (such as β-carotene) contained in plant pigments and (ii) retinyl esters from animal sources, such as fish-liver oils, eggs, milk, and butter. Following ingestion, retinyl esters are hydrolyzed to retinol by intestinal mucosal enzymes, whereas carotenoids are cleaved into retinal and subsequently reduced to retinol or oxidized to RA. Retinol homeostasis is tightly regulated. As such, much of the synthesized retinol is converted back into retinyl esters for storage in liver hepatocytes and stellate cells. When needed, these esters are cleaved and released into the bloodstream as retinol [17, 18]. Upon release into the bloodstream, retinol is bound by retinol-binding protein 4 (RBP4). Cells can take up the retinol-RBP4 complex via transmembrane receptor protein stimulated by retinoic acid 6 (STRA6), the product of the RA-inducible mouse gene Stra6 (or human STRA6 gene) . The complex tissue-specific expression pattern of this gene during development influences which tissues are able to take up retinol . Once inside the cell, two sequential reactions are required to transform retinol into retinaldehyde and RA (Fig. 1). The first reaction is mediated by two classes of enzymes: (i) cytosolic alcohol dehydrogenases (ADHs) that belong to the medium-chain dehydrogenase/reductase family and/or (ii) microsomal retinol dehydrogenases (RDHs) that belong to the short-chain dehydrogenase/reductase family . Initial studies in mice indicated that this reaction was catalyzed by ADH7 in the embryo [22, 23]; however, tissue-specific RDH10 is now believed to play the most important role in RA synthesis during development because mice expressing mutant Rdh10 (RDH10trex) (that lacks the ability to convert retinol to retinal) display embryonic lethality . Some degree of RA activity persists in mice expressing RDH10trex (revealed by limited RARE-lacZ reporter activity at E9.5), indicating that other enzymes (such as ADH7) are able to generate retinaldehyde, albeit at lower levels (that do not support embryonic development) . In addition, transgenic suppression of ADH5 (an enzyme ubiquitously expressed in embryo and adult) or of tissue-specific ADH1 and ADH7 revealed that ADH enzymes may have a role in controlling removal of excess retinol, rather than participating in RA synthesis per se . The second reaction is oxidation of retinaldehyde to RA. This is catalyzed by three aldehyde dehydrogenases: ALDH1A1, ALDH1A2, and ALDH1A3 which are encoded (respectively) by Aldh1a1, Aldh1a2, and Aldh1a3 (respectively) in mice, or by ALDH1A1, ALDH1A2, and ALDH1A3 in humans. Each ALDH displays a distinct expression pattern that closely correlates with RA activity and with the dynamics of RA-signaling. ALDH1A2 is responsible in the mouse for almost all RA production during early embryogenesis, i.e., until ~E8.5 . During gastrulation, ALDH1A2 is expressed mainly along the primitive streak and in mesodermal cells in the posterior end of the embryo . Later, ALDH1A2 is expressed in the somatic and lateral mesoderm, posterior heart tube, and rostral forebrain—and subsequently in prospective cervical and trunk levels during body axis extension . Thereafter, ALDH1A3 is responsible for RA synthesis in the eye and olfactory system. 
ALDH1A1, thought to be partly redundant with ALDH1A3, has been demonstrated to act only during eye development [27, 28] Alternative pathway of RA synthesis An alternative pathway for RA synthesis (elucidated in zebrafish) involves conversion of β-carotene to retinaldehyde by a β-carotene cleaving enzyme, β-carotene 15,15'-oxygenase 1 (BCO1) . This pathway, believed to be an ancestral pathway from early chordates, is found mainly in marine fish in which retinaldehyde and carotenoids stored in the egg yolk are the main source of retinoids during development . The mouse and human homolog, β-carotene 15,15'-oxygenase 1 (encoded by Bco1; BCO1, previously known as BCDO1), is expressed in retinal pigment epithelium (RPE)—as well as in kidney, intestine, liver, brain, stomach, and testis [30,31,32]. Its primary function is to generate retinaldehyde in photoreceptor cells and to supplement retinoid pools in other tissues . In addition, a second β-carotene cleavage enzyme expressed in rabbits, β-carotene 9',10'-dioxygenase (encoded by the BCO2 gene), catalyzes the cleavage of β-carotene into β-apocarotenoic acid, which can be transformed into RA without any involvement of ALDHs . Lastly, cell culture studies have revealed that CYP1B1, a member of the cytochrome P450 family, can catalyze conversion of retinol to retinaldehyde and RA. It remains to be seen if this enzyme meaningfully contributes to RA synthesis in mammals . Cellular levels of RA must be tightly regulated to prevent RA toxicity [reviewed in ]; this can occur through control of RA synthesis and RA catabolism (Fig. 1). RA is converted into polar derivatives (4-hydroxy-RA and 4-oxo-RA) by the cytochrome P450 26 subfamily of enzymes, specifically CYP26A1, CYP26B1, and CYP26C1 [37,38,39]. Lethality occurs in CYP26A1, CYP26B1, and CYP26C1 null mouse models . Although it was originally shown that the RA CYP26-mediated polar derivative—4-oxo-RA—can interfere with embryonic development when delivered exogenously by binding to and activating RARs , more recent in vivo data suggest that CYP26-mediated catabolism is required for embryonic development because its removal of RA prevents inappropriate signaling in specific tissues . The CYP26 enzymes display an expression pattern which matches that of the ALDHs during embryogenesis; their targeted disruption causes teratogenic effects similar to those seen in RA toxicity. In mice, Cyp26a1 and Cyp26c1 are the first genes to be expressed in the rostral-most embryonic epiblast, whereas Cyp26b1 is expressed in tail bud tissues and in the distal limb bud mesenchyme. Later in development, these enzymes display differential expression patterns in various developing organs, such as retina, dental epithelium, and inner ear [reviewed in ]. RA gene activation or repression RA acts as an agonist of two nuclear receptor families that bind DNA and directly regulate transcription (Fig. 1). These families are (i) the RA receptors, i.e., retinoic acid receptor alpha (RARA), retinoic acid receptor beta (RARB) and retinoic acid receptor gamma (RARG), and (ii) the retinoid X receptors, i.e., retinoid X receptor alpha (RXRA), retinoid X receptor beta (RXRB), and retinoid X receptor gamma (RXRG) [reviewed in ]. The RARs are highly conserved in vertebrates and are primarily activated by all-trans-RA (ATRA). By contrast, the RXRs are activated by 9-cis-RA, a stereoisomer of ATRA that is detected only when vitamin A is in excess. 
RXRs are thought to act as heterodimeric scaffolding proteins that facilitate binding of the RAR-RXR complex to DNA—complex demonstrates greater affinity for DNA than the RAR or RXR homodimers [44,45,46]. RARA, RXRA, and RXRB are widely expressed in tissues, suggesting that most tissues are potential targets of RA [reviewed by ]. Mouse knockout studies involving the RAR and RXR families have shown developmental abnormalities when two or more receptors are inactivated with the exception of RXRA-null mice which die in utero (vide infra), suggesting a degree of functional redundancy . The DNA-binding sites for RARs and RXRs are known as retinoic acid response elements (RAREs) and contain direct repeats (DR) of 5'-AGGTCA-3' separated by one to five base pairs (termed DR1-DR5) [48, 49]. DRs (DR1-5) determine RA-activated RAR-RXR complex target gene expression. For example, DR5-containing genes display transcriptional activation, whereas DR1-containing genes display transcriptional repression . So far, a wide variety of the RAR- and RXR-regulated genes have been shown to influence many cellular processes—e.g., the cellular uptake of RA (Crbp1/2 and Crabp1/2), RA catabolism (Cyp26a1), RA nuclear receptor beta (Rarb), mammalian embryonic pattern formation through the homeobox (Hox) family (Hoxa1, Hoxb1, Hoxb4, and Hoxd4), and organ growth/development (Pitx2, Drd2, Gad67, Fgf8, and Pdx1) [51,52,53]. The retinoic acid pathway regulates eye development RA-signaling in mammalian eye development has been previously reviewed . As such, we will focus on ocular developmental diseases associated with perturbed RA-signaling. The process of eye development is largely conserved among chordates—including zebrafish, mice, and humans [55, 56]. Mouse eye development begins at E8.0, at which time the optic vesicle forms on the cephalic neural folds . The optic vesicle then begins to migrate towards the surface ectoderm until, at E9.0, the two ectodermal layers come into contact and begin to thicken. This contact initiates activation of a cascade of transcription factors (e.g., SIX3 and PAX6) [reviewed in ] and signaling pathways [e.g., BMP and RA (reviewed in ]. The optic vesicle then invaginates into the optic cup, and the surface ectoderm subsequently invaginates to form the lens placode. As the lens placode continues to invaginate, asymmetric cell growth then leads to formation of the lens pit, with the ultimate formation of the lens vesicle by E11. Epithelial cells located at the anterior portion of the lens vesicle maintain their epithelial identity and proliferative nature, whereas epithelial cells at the posterior lens vesicle differentiate into fiber cells and ultimately become primary lens fiber cells. The inner layer of the optic vesicle then becomes the neuroretina, while the outer layer becomes the RPE . From mouse neuroretina, several neuronal subtypes (i.e., retinal ganglion cells, amacrine cells, horizontal cells, bipolar cells, photoreceptor cells) and Müller cells begin forming at E11. Corneal development begins when the lens stalk connecting the lens vesicle to the surface ectoderm is severed. The resulting space is rapidly filled by invading cells from perinuclear mesenchyme. Mesenchymal cells nearest the lens vesicle then form the corneal endothelium, while cells farthest from the lens vesicle form the corneal epithelium. Cells located between these two layers form the corneal stroma from which corneal keratocytes differentiate. 
Corneal development is maintained by a constant influx of cells from the periocular mesenchyme (POM). Lens formation continues, with secondary fiber differentiation in the mouse beginning at the lens equator at E13.5-E14.5 (secondary fiber cell differentiation occurs throughout adulthood). Anterior segment development is completed by the anterior edge of the optic cup (which forms the epithelium of the iris and ciliary body), and migrating cells from the POM form stroma of the iris and ciliary body. Lastly, the trabecular meshwork is formed from migrating mesenchymal cells. Eye morphogenesis is largely conserved in mouse, zebrafish, and humans, but the process in zebrafish occurs in a much shorter time frame [56, 61]. Studies in animal models have revealed a requirement for RA-signaling in normal eye development. Beginning in the mid-twentieth century, research highlighted the importance of dietary vitamin A in maintaining rodent eye development [3, 62]. Rats born to mothers maintained on a vitamin A-deficient diet displayed a multitude of ocular defects—including retina infolding, coloboma, microphthalmia, and anophthalmia (a syndrome ultimately termed vitamin A deficiency)—that could be rescued by vitamin A supplementation during embryonic development [3, 62]. However, the time of supplementation was critical, in that supplementation before E13.0 could rescue the ocular phenotypes whereas supplementation after E13.0 could not completely rescue eye development . These studies demonstrated that vitamin A (and ultimately RA-signaling) is required for specific events in eye development, i.e., optic cup formation, anterior segment formation. As noted, RA-signaling is modulated by several enzymes and dependent on RARs and RXRs (Fig. 1). Therefore, an alteration in any one of these proteins may perturb RA-signaling and affect normal eye development. Animal models used to investigate the RA-signaling pathway will be discussed in the order that the proteins appear within the pathway—starting with RBP4 and finishing with RARs and RXRs (Fig. 1). In zebrafish, decreased stra6 (e.g., by morpholino knockdown) causes reduced eye size, despite the formation of all retinal layers . In mouse, RDH10 was identified as essential for normal eye development . RDH10-deficient mice lack the cornea and ventral half of the retina and exhibit hypoplastic lenses. RDH10-depleted zebrafish display a mild RA loss of function phenotype , perhaps due to their ability to produce retinaldehyde through bcox . A morpholino-mediated inhibition of bcox results in microphthalmic zebrafish with a diminished size of the ventral prospective retina tissue . In mice, ALDHs are differentially required at various stages of eye development. ALDH1A2 is only expressed in the murine eye between E8.5 and E9.5 and is required for optic cup formation (Fig. 2a) . In contrast, ALDH1A1 and ALDH1A3 are respectively expressed in dorsal and ventral retina from E10.5 onwards (Fig. 2b). Aldh1a1-null mice exhibit no developmental ocular phenotype, likely due to compensation in RA-signaling by ALDH1A3 [27, 28]. Aldh1a3-null mice display developmental ocular phenotypes—resulting as ventral rotation of the lens, persistence of the primary vitreous, and thickening of the ventral POM [27, 66]. Aldh1a1/Aldh1a3-null mice display all of the same phenotypes as Aldh1a3-null mice, but with greater severity; this would suggest that some of the loss of RA-signaling induced by the genetic ablation of Aldh1a3 is compensated by Aldh1a1 . 
RA influences mammalian ocular development in a paracrine fashion; RA produced in the retina by ALDH1A1 and ALDH1A3 is secreted and acts on cells in the POM where it regulates expression of genes important for apoptosis and corneal morphogenesis and cell specification—Eya2 and Pitx2, respectively (Fig. 2b) [27, 67, 68]. Disruption in RA-signaling permits overgrowth of POM cells, which adversely affects normal anterior segment development [27, 28]. Ectopic lens expression of CRABP1 results in lenses with impaired secondary fiber cell differentiation (i.e., failure to lose nuclei) and a flattening of the anterior side of the fiber cells . ATRA binds to and activates the RXR/RAR complex, which enables activation or repression of RARES (Fig. 1). Compound Rar gene deletions (for example, the deletion of both Rara and Rarb) result in aberrant ocular developmental phenotypes—including microphthalmia, coloboma, lens abnormalities, and retinal dysplasia and degeneration [70, 71]. Compound Rxra, Rxra/Rarg, and Rxra/Rara null mice all show ocular developmental abnormalities—including ventral rotation of the lens, thicker corneas, shorter ventral retinae, and coloboma . Involvement of RA-signaling in maintenance of POM cellular proliferation was confirmed by the conditional deletion of Rara, Rarb, and Rarg in neural crest cells; these knockout animals have impaired ocular development phenotypes similar to the previously described ALDH-deficient mice . Collectively, these animal models have provided compelling evidence in support of an important role for RA-signaling in ocular development. Mutations in genes involved in RA-signaling in humans are associated with developmental diseases—including the ocular developmental diseases, such as microphthalmia, anophthalmia, and coloboma (collectively called MAC disease), and Mathew-Wood Syndrome (Fig. 1). Linkage analysis and whole-exome sequencing have identified mutations in RBP4 in patients with MAC disease [73, 74]. Dominant-negative mutations in RBP4 increase RBP4 affinity for STRA6; this nonproductive occupation of STRA6 hinders delivery of vitamin A to the fetus. Maternal inheritance of RBP4 mutations and a lack of maternal dietary retinoids predispose the fetus to MAC disease . Mutations in STRA6 are associated with both syndromal (Mathew-Wood Syndrome) and non-syndromal MAC disease [75,76,77,78]. A double-nucleotide polymorphism that causes a nonsynonymous change from glycine to lysine in a highly conserved region of the STRA6 protein was identified in MAC patients; this mutation almost completely abolishes cellular uptake of vitamin A . Homozygous nonsense mutations, missense, and splice-site mutations in ALDH1A3 are associated with microphthalmia [79,80,81]. Co-transfection of wild-type and mutated human ALDH1A3 (c.265C>T and c.1477G>C) revealed that the mutated ALDH1A3 protein is likely unstable and subject to proteasomal degradation . Mutations in RARB have been identified in patients with syndromic MAC [i.e., pulmonary hypoplasia/agenesis, diaphragmatic hernia/eventration, anophthalmia/microphthalmia, and cardiac defect (PDAC)] [82, 83]. Evaluation of these mutations have indicated that both gain-of-function and dominant-negative mutations within RARB can cause PDAC syndrome . A recent report identified a de novo mutation in RARA in a coloboma patient which is hypothesized to impair the interaction between RA and RARA . It is clear that RA-signaling is similarly important for zebrafish, mouse, and human eye development. 
Given that it is unethical to investigate the role of RA-signaling in human eye development in a manner similar to that in animal models, it is currently unknown exactly how RA-signaling might impact human eye development. However, the conserved nature of RA-signaling and eye development across chordates suggests that RA-signaling is also very likely to act in a paracrine fashion to regulate eye development in humans. Deleterious clinical variations in the RA pathway The ExAC database (comprising whole-exome sequencing of more than 60,000 individuals) was published in 2016 ; it was recently expanded to include ~ 140,000 individuals . Using this resource, it is now possible to gain insights into the necessity of various components of the RA pathway in humans and to explore the existence of nodes in the pathway that are of potential importance. A gene with a high pLI score would suggest that individuals who inherit loss-of-function (LOF) mutations in that gene will inherit a survival disadvantage. Analysis of pLI scores of genes involved in the RA-signaling pathway (Table 1) reveals that certain genes in this pathway are crucial for human embryogenesis and life—data that are consistent with mouse studies (Table 2). The RA-signaling pathway is essential for life, with 30% of RA-signaling pathway genes being categorized as LOF-intolerant. This is in striking contrast to 17% of all known genes being categorized as LOF-intolerant . Several of the LOF-intolerant RA-signaling pathway genes have no known associated human disease, i.e., RDH10, RXRA, RXRB (Table 1); this would be predicted given the severe embryonic lethality observed in transgenic mouse models in which these three genes have been knocked out (Table 2). A total of eight RA-signaling pathway genes have markedly high pLI scores (pLI > 0.9), i.e., ALDH1A1, CYP26B1, RARA, RARB, RARG, RXRA, RXRB, and RDH10 (Table 1). These genes are therefore essential for life, e.g., DNA-binding functions and crucial for morphogenesis [156, 157]. pLI predictions and animal models are in agreement for one gene in particular—ALDH1A2. Humans are LOF-tolerant (pLI = 0.36) (Table 1) and Aldh1a2 heterozygous null mice are viable. This highlights an important feature of pLI scores; they predict the probability of haploinsufficiency intolerance . However, it is important to note that Aldh1a2-null mice experience embryonic lethality by E10.5 (Table 2) . Despite the high level of conservation in the RA-signaling pathway between humans and mice, discrepancies in the extent of indispensability of RA-signaling pathway genes exist. ALDH1A1 is intolerant of LOF mutations (pLI = 0.95) in humans (Table 1) whereas Aldh1a1 is dispensible in mice . This may be related to loss of Aldh1a1 being compensated in mice by Aldh1a7—which is an Aldh1a1 gene duplication found in rodents but not in humans . In mice, Cyp26b1 deletion is lethal immediately after birth [123, 124], whereas humans with CYP26B1 mutations can live to adulthood . Cyp26a1-null mice experience embryonic lethality (Table 2), while human CYP26A1 is LOF-tolerant (pLI = 0) (Table 1). Differences between mouse and human CYP26A1 tissue expression may explain the differential requirements for life. For example, human MAP2-positive neurons in the human dentate gyrus express CYP26A1, whereas rat and mouse MAP2-positive neurons do not . 
Further, human MAP2-positive neurons express ALDH1A2 along with CYP26A1, suggesting that RA acts in an autocrine fashion in these cells, as opposed to the paracrine fashion found in rodents. This may explain the differences in the requirement for life. Differences between mouse and human underscore the need for caution when generalizing the requirement for life of RA-signaling pathway genes across chordates. Despite this, mice still hold great utility as an experimental model when investigating the minutiae of the RA-signaling pathway. Differences between animal models and humans can be further explained by incomplete penetrance in humans—possibly due to differences in mutation type, variations in gene expression, epigenetic changes, age, sex, or copy number variations . Often, experimental models used for investigation of the RA-signaling pathway rely on transgenic mice in which a pathway gene is completely ablated. This situation may not be representative of humans, in whom gene mutations commonly result in lowered gene activity rather than zero expression. In addition, important differences during development in the patterns of expression of many genes and their pathways exist between mouse and human [164,165,166]. As was highlighted above, differences in the timing and/or location of expression of RA-signaling pathway genes result in vast differences in this critical pathway—e.g., autocrine vs. paracrine signaling, embryonic or postnatal lethality, and tissue-specific expression (or lack thereof). All of these can contribute to the discrepancies between mouse models for the RA-signaling pathway genes, observed human diseases, and the pLI scores.

Future studies of the retinoic acid signaling pathway

By means of analysis of genetic intolerance, we can pinpoint certain members of the RA-signaling pathway that are likely to be essential for human life. Clearly, these members need to be better understood. This can be achieved by generating hypomorphic mutations in mice, i.e., mutations in which protein function is diminished rather than ablated. Hypomorphic mutations in mice can be studied via the introduction of single-nucleotide variants (SNVs) using CRISPR/Cas9 technology [reviewed in ]. This approach was recently used to study hypomorphic mutations in a humanized CYP3A5*1 mouse model . These humanized mice with hypomorphic mutations are likely to be better models of human diseases associated with altered RA-signaling pathways. While CRISPR/Cas9 can be effectively used for modeling human diseases in mice, such studies can be prohibitively inefficient in that they use advanced techniques, need specialized equipment, and require at least 3 months to generate knockin/knockout mice . Zebrafish represent an alternative in which high-throughput screening can be used to investigate mutations in genes of the RA-signaling pathway that have been identified in humans [ and reviewed in ]. Zebrafish are more efficient than mice for such CRISPR/Cas9 experimental approaches for several reasons: they have a shorter generation time, produce more offspring, and are less expensive to maintain [172,173,174,175]. CRISPR/Cas9 was recently used in zebrafish to generate a humanized model of renal agenesis in which GREB1 like retinoic acid receptor coactivator (GREB1L) was identified as a coactivator of RAR genes . It is expected that similar approaches will be used in future studies to manipulate the RA-signaling pathway and thereby enhance our understanding of RA-signaling in human physiology and pathophysiology.
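To make the pLI-based triage described above concrete, the short sketch below applies the pLI > 0.9 convention used in this review to the handful of values quoted in the text. It is a minimal illustration only: the placeholder entries, the exact gene list shown, and the pandas-style column names mentioned in the closing comment are assumptions made for the example, not data taken directly from Table 1 or gnomAD.

```python
# Minimal sketch: flagging putatively LOF-intolerant RA-pathway genes by pLI.
# Only the pLI values quoted in the text are filled in; the remaining genes
# carry None as placeholders rather than invented numbers.

RA_PATHWAY_PLI = {
    "ALDH1A1": 0.95,   # quoted in the text
    "ALDH1A2": 0.36,   # quoted in the text
    "CYP26A1": 0.00,   # quoted in the text
    "CYP26B1": None,   # listed as >0.9 in the text; exact value not quoted here
    "RARA": None, "RARB": None, "RARG": None,
    "RXRA": None, "RXRB": None, "RDH10": None,
    "STRA6": None, "RBP4": None, "ALDH1A3": None,
}

PLI_CUTOFF = 0.9  # conventional threshold for LOF intolerance used in the text

def classify(pli, cutoff=PLI_CUTOFF):
    if pli is None:
        return "no value quoted"
    return "LOF-intolerant" if pli > cutoff else "LOF-tolerant"

for gene, pli in sorted(RA_PATHWAY_PLI.items()):
    shown = " n/a" if pli is None else f"{pli:.2f}"
    print(f"{gene:8s} pLI={shown}  -> {classify(pli)}")

# With a full gnomAD constraint table (e.g., loaded into a pandas DataFrame
# with columns such as 'gene' and 'pLI'; the column names are an assumption),
# the same cutoff could be used to compute the fraction of pathway genes that
# are LOF-intolerant and compare it with the genome-wide fraction.
```

Run against a complete gnomAD constraint table, the same cutoff reproduces the comparison made above between the fraction of LOF-intolerant genes in the RA pathway and the genome-wide fraction.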
For centuries, the importance of dietary vitamin A to human health has been known. Ancient civilizations unknowingly used homeopathic remedies in which vitamin A was the main active ingredient. It was not until the turn of the twentieth century that our more nuanced understanding of the role for vitamin A in human health began to take shape. Pioneering animal studies determined that vitamin A was critical for embryogenesis, eye development, and identified retinoids as derivatives of vitamin A. Decades later, our understanding of the RA-signaling pathway has grown significantly and now includes a more comprehensive knowledge of retinol cellular uptake and oxidation, RA catabolism, nuclear receptor (RAR/RXR) activation, and nuclear receptor gene targets—and the importance of the RA-signaling pathway for eye development. By leveraging information gained from large-scale human whole-exome sequencing efforts (e.g., ExAC and gnomAD), our understanding about the importance of the RA-signaling pathway to human health is improving. This was underscored by the high number of genes within this pathway with large pLI scores. While transgenic mouse models have provided valuable insights into the details of the RA pathway, discrepancies between the human and mouse data underscore the need for care when generalizing results from animal studies to humans. Animal models will continue to enhance our understanding of the RA-signaling pathway under physiological and pathophysiological conditions. Exciting new models and techniques (e.g., Zebrafish, CRISPR/Cas9, hypomorphic mutations) will allow a more nuanced examination of this pathway. These will allow the pathophysiological consequences of individual human mutations in the RA-signaling pathway to be explored. In summary, the RA-signaling pathway play a critical role in embryogenesis, eye development, and are required for life. We should anticipate fascinating new insights into this pathway in the coming years. Availability of data and materials All data analyzed in this review are included in this published article: Karczewski KJ, Francioli LC, Tiao G, Cummings BB, Alföldi J, Wang Q, et al. Variation across 141,456 human exomes and genomes reveals the spectrum of loss-of-function intolerance across human protein-coding genes. bioRxiv. 2019:531210; doi: https://doi.org/10.1101/531210 Alcohol dehydrogenase 1 Alcohol dehydrogenase 7 Aldehyde dehydrogenase 1 family member A1 Aldehyde dehydrogenase 1 family member A2 Aldehyde dehydrogenase 1 family member A3 Cellular retinol-binding protein Cytochrome P450 family 26 subfamily A member 1 Cytochrome P450 family 26 subfamily B member 1 Cytochrome P450 family 26 subfamily C member 1 Exome Aggregation Consortium The Genome Aggregation Database GREB1 like retinoic acid receptor coactivator Loss of function Microphthalmia, anophthalmia, and coloboma Probability loss of function intolerance Retinoic acid receptor Retinoic acid response elements Retinol-binding protein 4 Retinol dehydrogenase 10 Retinol dehydrogenase 5 Retinal pigment epithelium Retinoid X receptor Stimulated by retinoic acid 6 Mackay HM. Vitamin A deficiency in children: Part I. Present knowledge of the clinical effects of vitamin A deficiency, with special reference to children. Arch Dis Child. 1934;9(50):65–90. Eusterman GB, Wilbur DL. Clinical features of vitamin A deficiency. J Amer Med Assoc. 1932;98:2054–60. Wilson JG, Roth CB, Warkany J. An analysis of the syndrome of malformations induced by maternal vitamin a deficiency. 
Effects of restoration of vitamin a at various times during gestation. Am J Anatomy. 1953;92(2):189–217. Osborne T, LB M. The relationship of growth to the chemical constituents of the diet. J Biol Chem. 1913;15:311. Wolbach SB, Howe PR. Tissue changes following deprivation of fat-soluble a vitamin. J Exp Med. 1925;42(6):753–77. Semba RD. On the ‘discovery’ of vitamin A. Ann Nutr Metab. 2012;61(3):192–8. Lamb TD, Pugh EN Jr. Phototransduction, dark adaptation, and rhodopsin regeneration the proctor lecture. Invest Ophthalmol Vis Sci. 2006;47(12):5138–52. White JC, Shankar VN, Highland M, Epstein ML, DeLuca HF, Clagett-Dame M. Defects in embryonic hindbrain development and fetal resorption resulting from vitamin A deficiency in the rat are prevented by feeding pharmacological levels of all-trans-retinoic acid. Proc Natl Acad Sci U S A. 1998;95(23):13459–64. Benbrook DM, Chambon P, Rochette-Egly C, Asson-Batres MA. History of retinoic acid receptors. Subcell Biochem. 2014;70:1–20. Sporn MB, Dunlop NM, Newton DL, Henderson WR. Relationships between Structure and Activity of Retinoids. Nature. 1976;263(5573):110–3. Luca DL, Little EP, George W. Vitamin A and Protein Synthesis by Rat Intestinal Mucosa. J Biol Chem. 1969;244:701–8. Sun H, Kawaguchi R. The membrane receptor for plasma retinol-binding protein, a new type of cell-surface receptor. Int Rev Cell Mol Biol. 2011;288:1–41. Niederreither K, Dolle P. Retinoic acid in development: towards an integrated view. Nat Rev Genet. 2008;9(7):541–53. Duester G. Retinoic acid synthesis and signaling during early organogenesis. Cell. 2008;134(6):921–31. Trumbo P, Yates AA, Schlicker S, Poos M. Dietary reference intakes: Vitamin A, Vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium, and zinc. J Am Diet Assoc. 2001;101(3):294–301. Karczewski KJ, Francioli LC, Tiao G, Cummings BB, Alföldi J, Wang Q, et al. Variation across 141,456 human exomes and genomes reveals the spectrum of loss-of-function intolerance across human protein-coding genes. bioRxiv. 2019:531210. Marill J, Idres N, Capron CC, Nguyen E, Chabot GG. Retinoic acid metabolism and mechanism of action: a review. Curr Drug Metab. 2003;4(1):1–10. D’Ambrosio DN, Clugston RD, Blaner WS. Vitamin A metabolism: an update. Nutrients. 2011;3(1):63–103. Kawaguchi R, Yu J, Honda J, Hu J, Whitelegge J, Ping P, et al. A membrane receptor for retinol binding protein mediates cellular uptake of Vitamin A. Science. 2007;315(5813):820. Amengual J, Zhang N, Kemerer M, Maeda T, Palczewski K, Von Lintig J. STRA6 is critical for cellular vitamin A uptake and homeostasis. Hum Mol Genet. 2014;23(20):5402–17. Pares X, Farres J, Kedishvili N, Duester G. Medium- and short-chain dehydrogenase/reductase gene and protein families : Medium-chain and short-chain dehydrogenases/reductases in retinoid metabolism. Cell Mol Life Sci. 2008;65(24):3936–49. Boleda MD, Saubi N, Farres J, Pares X. Physiological substrates for rat alcohol dehydrogenase classes: aldehydes of lipid peroxidation, ω-hydroxyfatty acids, and retinoids. Arch Biochem Biophys. 1993;307(1):85–90. Yang Z-N, Davis GJ, Hurley TD, Stone CL, Li T-K, Bosron WF. Catalytic efficiency of human alcohol dehydrogenases for retinol oxidation and retinal reduction. Alcohol Clin Exp Res. 1994;18(3):587–91. Sandell LL, Sanderson BW, Moiseyev G, Johnson T, Mushegian A, Young K, et al. RDH10 is essential for synthesis of embryonic retinoic acid and is required for limb, craniofacial, and organ development. Genes Dev. 
2007;21(9):1113–24. Rhinn M, Dolle P. Retinoic acid signalling during development. Development. 2012;139(5):843–58. Niederreither K, McCaffery P, Drager UC, Chambon P, Dolle P. Restricted expression and retinoic acid-induced downregulation of the retinaldehyde dehydrogenase type 2 (RALDH-2) gene during mouse development. Mech Dev. 1997;62(1):67–78. Matt N, Dupé V, Garnier J-M, Dennefeld C, Chambon P, Mark M, et al. Retinoic acid-dependent eye morphogenesis is orchestrated by neural crest cells. Development. 2005;132(21):4789. Molotkov A, Molotkova N, Duester G. Retinoic acid guides eye morphogenetic movements via paracrine signaling but is unnecessary for retinal dorsoventral patterning. Development. 2006;133(10):1901–10. Lampert JM, Holzschuh J, Hessel S, Driever W, Vogt K, von Lintig J. Provitamin A conversion to retinal via the beta,beta-carotene-15,15'-oxygenase (bcox) is essential for pattern formation and differentiation during zebrafish embryogenesis. Development. 2003;130(10):2173–86. Gong X, Marisiddaiah R, Rubin LP. Inhibition of pulmonary β-carotene 15, 15'-oxygenase expression by glucocorticoid involves PPARα. PloS One. 2017;12(7):e0181466-e. Uhlen M, Zhang C, Lee S, Sjöstedt E, Fagerberg L, Bidkhori G, et al. A pathology atlas of the human cancer transcriptome. Science. 2017;357(6352):eaan2507. The Human Protein Atlas [Available from: http://www.proteinatlas.org]. Accessed 6 Sept 2019. Duh EJ, Yang HS, Suzuma I, Miyagi M, Youngman E, Mori K, et al. Pigment epithelium-derived factor suppresses ischemia-induced retinal neovascularization and VEGF-induced migration and growth. Invest Ophthalmol Vis Sci. 2002;43(3):821–9. Wang XD, Russell RM, Liu C, Stickel F, Smith DE, Krinsky NI. Beta-oxidation in rabbit liver in vitro and in the perfused ferret liver contributes to retinoic acid biosynthesis from beta-apocarotenoic acids. J Biol Chem. 1996;271(43):26490–8. Chambers D, Wilson L, Maden M, Lumsden A. RALDH-independent generation of retinoic acid during vertebrate embryogenesis by CYP1B1. Development. 2007;134(7):1369–83. Collins MD, Mao GE. Teratology of Retinoids. Ann Rev Pharmacol Toxicol. 1999;39(1):399–430. Chithalen JV, Luu L, Petkovich M, Jones G. HPLC-MS/MS analysis of the products generated from all-trans-retinoic acid using recombinant human CYP26A. J Lipid Res. 2002;43(7):1133–42. MacLean G, Abu-Abed S, Dolle P, Tahayato A, Chambon P, Petkovich M. Cloning of a novel retinoic-acid metabolizing cytochrome P450, Cyp26B1, and comparative expression analysis with Cyp26A1 during early murine development. Mech Dev. 2001;107(1-2):195–201. Tahayato A, Dolle P, Petkovich M. Cyp26C1 encodes a novel retinoic acid-metabolizing enzyme expressed in the hindbrain, inner ear, first branchial arch and tooth buds during murine development. Gene Expr Patterns. 2003;3(4):449–54. Nebert DW, Wikvall K, Miller WL. Human cytochromes P450 in health and disease. Philos Trans R Soc Lond B Biol Sci. 2013;368(1612):20120431. Pijnappel WW, Hendriks HF, Folkers GE, van den Brink CE, Dekker EJ, Edelenbosch C, et al. The retinoid ligand 4-oxo-retinoic acid is a highly active modulator of positional specification. Nature. 1993;366(6453):340–4. Niederreither K, Abu-Abed S, Schuhbaur B, Petkovich M, Chambon P, Dolle P. Genetic evidence that oxidative derivatives of retinoic acid are not involved in retinoid signaling during mouse development. Nat Genet. 2002;31(1):84–8. Rochette-Egly C, Germain P. Dynamic and combinatorial control of gene expression by nuclear retinoic acid receptors (RARs). 
Nucl Recept Signal. 2009;7:e005. Mic FA, Molotkov A, Benbrook DM, Duester G. Retinoid activation of retinoic acid receptor but not retinoid X receptor is sufficient to rescue lethal defect in retinoic acid synthesis. Proc Natl Acad Sci U S A. 2003;100(12):7135–40. Chawla A, Repa JJ, Evans RM, Mangelsdorf DJ. Nuclear receptors and lipid physiology: opening the X-files. Science. 2001;294(5548):1866–70. Roy B, Taneja R, Chambon P. Synergistic activation of retinoic acid (RA)-responsive genes and induction of embryonal carcinoma cell differentiation by an RA receptor alpha (RAR alpha)-, RAR beta-, or RAR gamma-selective ligand in combination with a retinoid X receptor-specific ligand. Mol Cell Biol. 1995;15(12):6481–7. Dolle P. Developmental expression of retinoic acid receptors (RARs). Nucl Recept Signal. 2009;7:e006. Perlmann T, Rangarajan PN, Umesono K, Evans RM. Determinants for selective RAR and TR recognition of direct repeat HREs. Genes Dev. 1993;7(7b):1411–22. Mader S, Leroy P, Chen JY, Chambon P. Multiple parameters control the selectivity of nuclear receptors for their response elements. Selectivity and promiscuity in response element recognition by retinoic acid receptors and retinoid X receptors. J Biol Chem. 1993;268(1):591–600. Kurokawa R, Söderström M, Hörlein A, Halachmi S, Brown M, Rosenfeld MG, et al. Polarity-specific activities of retinoic acid receptors determined by a co-repressor. Nature. 1995;377(6548):451–4. Balmer JE, Blomhoff R. Gene expression regulation by retinoic acid. J Lipid Res. 2002;43(11):1773–808. Savory JG, Edey C, Hess B, Mears AJ, Lohnes D. Identification of novel retinoic acid target genes. Dev Biol. 2014;395(2):199–208. Luijten M, van Beelen VA, Verhoef A, Renkens MF, van Herwijnen MH, Westerman A, et al. Transcriptomics analysis of retinoic acid embryotoxicity in rat postimplantation whole embryo culture. Reprod Toxicol. 2010;30(2):333–40. Cvekl A, Wang W-L. Retinoic acid signaling in mammalian eye development. Exp Eye Res. 2009;89(3):280–91. Richardson R, Tracey-White D, Webster A, Moosajee M. The zebrafish eye-a paradigm for investigating human ocular genetics. Eye (Lond). 2017;31(1):68–86. Van Cruchten S, Vrolyk V, Perron Lepage M-F, Baudon M, Voute H, Schoofs S, et al. Pre- and postnatal development of the eye: a species comparison. Birth Defects Res. 2017;109(19):1540–67. Kaufmann P. The anatomical basis of mouse development. J Anat. 2000;197(Pt 2):331–2. Chow RL, Lang RA. Early eye development in vertebrates. Ann Rev Cell Dev Biol. 2001;17(1):255–96. Cvekl A, Ashery-Padan R. The cellular and molecular mechanisms of vertebrate lens development. Development. 2014;141(23):4432. Mui SH, Kim JW, Lemke G, Bertuzzi S. Vax genes ventralize the embryonic eye. Genes Dev. 2005;19(10):1249–59. Schmitt EA, Dowling JE. Early-eye morphogenesis in the zebrafish, Brachydanio rerio. J Comp Neurol. 1994;344(4):532–42. Warkany J, Schraffenberger E. Congenital malformations induced in rats by maternal vitamin a deficiency: I. Defects of the Eye. JAMA Ophthalmol. 1946;35(2):150–69. Isken A, Golczak M, Oberhauser V, Hunzelmann S, Driever W, Imanishi Y, et al. RBP4 disrupts vitamin A uptake homeostasis in a STRA6-deficient animal model for Matthew-Wood syndrome. Cell Metabol. 2008;7(3):258–68. D’Aniello E, Ravisankar P, Waxman JS. Rdh10a provides a conserved critical step in the synthesis of retinoic acid during zebrafish embryogenesis. PloS One. 2015;10(9):e0138588-e. Mic FA, Molotkov A, Molotkova N, Duester G. 
Raldh2 expression in optic vesicle generates a retinoic acid signal needed for invagination of retina during optic cup formation. Dev Dyn. 2004;231(2):270–7. Dupe V, Matt N, Garnier JM, Chambon P, Mark M, Ghyselinck NB. A newborn lethal defect due to inactivation of retinaldehyde dehydrogenase type 3 is prevented by maternal retinoic acid treatment. Proc Natl Acad Sci U S A. 2003;100(24):14036–41. Bohnsack BL, Kasprick DS, Kish PE, Goldman D, Kahana A. A Zebrafish model of axenfeld-rieger syndrome reveals that pitx2 regulation by Retinoic acid is essential for ocular and craniofacial development. Invest Ophthalmol Vis Sci. 2012;53(1):7–22. Matt N, Ghyselinck NB, Pellerin I, Dupé V. Impairing retinoic acid signalling in the neural crest cells is sufficient to alter entire eye morphogenesis. Dev Biol. 2008;320(1):140–8. Perez-Castro AV, Tran VT, Nguyen-Huu MC. Defective lens fiber differentiation and pancreatic tumorigenesis caused by ectopic expression of the cellular retinoic acid-binding protein I. Development. 1993;119(2):363. Lohnes D, Mark M, Mendelsohn C, Dolle P, Dierich A, Gorry P, et al. Function of the retinoic acid receptors (RARs) during development (I). Craniofacial and skeletal abnormalities in RAR double mutants. Development. 1994;120(10):2723. Grondona JM, Kastner P, Gansmuller A, Decimo D, Chambon P, Mark M. Retinal dysplasia and degeneration in RARbeta2/RARgamma2 compound mutant mice. Development. 1996;122(7):2173. Kastner P, Grondona JM, Mark M, Gansmuller A, LeMeur M, Decimo D, et al. Genetic analysis of RXRα developmental function: Convergence of RXR and RAR signaling pathways in heart and eye morphogenesis. Cell. 1994;78(6):987–1003. Chou CM, Nelson C, Tarle SA, Pribila JT, Bardakjian T, Woods S, et al. Biochemical basis for dominant inheritance, variable penetrance, and maternal effects in RBP4 congenital eye disease. Cell. 2015;161(3):634–46. Riera M, Wert A, Nieto I, Pomares E. Panel-based whole exome sequencing identifies novel mutations in microphthalmia and anophthalmia patients showing complex Mendelian inheritance patterns. Mol Genet Genomic Med. 2017;5(6):709–19. Casey J, Kawaguchi R, Morrissey M, Sun H, McGettigan P, Nielsen JE, et al. First implication of STRA6 mutations in isolated anophthalmia, microphthalmia, and coloboma: a new dimension to the STRA6 phenotype. Hum Mutat. 2011;32(12):1417–26. Chitayat D, Sroka H, Keating S, Colby RS, Ryan G, Toi A, et al. The PDAC syndrome (pulmonary hypoplasia/agenesis, diaphragmatic hernia/eventration, anophthalmia/microphthalmia, and cardiac defect) (Spear syndrome, Matthew-Wood syndrome): Report of eight cases including a living child and further evidence for autosomal recessive inheritance. Am J Med Genet A. 2007;143A(12):1268–81. Golzio C, Martinovic-Bouriel J, Thomas S, Mougou-Zrelli S, Grattagliano-Bessieres B, Bonniere M, et al. Matthew-Wood syndrome is caused by truncating mutations in the retinol-binding protein receptor gene STRA6. Am J Hum Genet. 2007;80(6):1179–87. Pasutto F, Flinter F, Rauch A, Reis A. Novel STRA6 null mutations in the original family described with Matthew–Wood syndrome. Am J Med Genet A. 2018;176(1):134–8. Fares-Taie L, Gerber S, Chassaing N, Clayton-Smith J, Hanein S, Silva E, et al. ALDH1A3 mutations cause recessive anophthalmia and microphthalmia. Am J Hum Genet. 2013;92(2):265–70. Yahyavi M, Abouzeid H, Gawdat G, de Preux AS, Xiao T, Bardakjian T, et al. ALDH1A3 loss of function causes bilateral anophthalmia/microphthalmia and hypoplasia of the optic nerve and optic chiasm. 
Hum Mol Genet. 2013;22(16):3250–8. Mory A, Ruiz FX, Dagan E, Yakovtseva EA, Kurolap A, Parés X, et al. A missense mutation in ALDH1A3 causes isolated microphthalmia/anophthalmia in nine individuals from an inbred Muslim kindred. Eur J Hum Genet. 2013;22:419. Srour M, Chitayat D, Caron V, Chassaing N, Bitoun P, Patry L, et al. Recessive and dominant mutations in retinoic acid receptor beta in cases with microphthalmia and diaphragmatic hernia. Am J Hum Genet. 2013;93(4):765–72. Nobile S, Pisaneschi E, Novelli A, Carnielli VP. A rare mutation of retinoic acid receptor-β associated with lethal neonatal Matthew-Wood syndrome. Clin Dysmorphol. 2019;28(2):74–7. Jakubiuk-Tomaszuk A, Murcia Pienkowski V, Zietkiewicz S, Rydzanicz M, Kosińska J, Stawiński P, et al. Syndromic chorioretinal coloboma associated with heterozygous de novo RARA mutation affecting an amino acid critical for retinoic acid interaction. Clin Genet. 2019;0(0). Lek M, Karczewski KJ, Minikel EV, Samocha KE, Banks E, Fennell T, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536(7616):285–91. Streissguth AP, Dehaene P. Fetal alcohol syndrome in twins of alcoholic mothers: concordance of diagnosis and IQ. Am J Med Genet. 1993;47(6):857–61. Stamatoyannopoulos G, Chen SH, Fukui M. Liver alcohol dehydrogenase in Japanese: high population frequency of atypical form and its possible role in alcohol sensitivity. Am J Hum Genet. 1975;27(6):789–96. Takeshita T, Mao XQ, Morimoto K. The contribution of polymorphism in the alcohol dehydrogenase beta subunit to alcohol sensitivity in a Japanese population. Hum Genet. 1996;97(4):409–13. Eckey R, Agarwal DP, Saha N, Goedde HW. Detection and partial characterization of a variant form of cytosolic aldehyde dehydrogenase isozyme. Hum Genet. 1986;72(1):95–7. Yoshida A, Dave V, Ward RJ, Peters TJ. Cytosolic aldehyde dehydrogenase (ALDH1) variants found in alcohol flushers. Ann Hum Genet. 1989;53(Pt 1):1–7. Stoilov I, Akarsu AN, Sarfarazi M. Identification of three different truncating mutations in cytochrome P4501B1 (CYP1B1) as the principal cause of primary congenital glaucoma (Buphthalmos) in families linked to the GLC3A locus on chromosome 2p21. Hum Mol Genet. 1997;6(4):641–7. Stoilov I, Akarsu AN, Alozie I, Child A, Barsoum-Homsy M, Turacli ME, et al. Sequence analysis and homology modeling suggest that primary congenital glaucoma on 2p21 results from mutations disrupting either the hinge region or the conserved core structures of cytochrome P4501B1. Am J Hum Genet. 1998;62(3):573–84. Melki R, Colomb E, Lefort N, Brezin AP, Garchon HJ. CYP1B1 mutations in French patients with early-onset primary open-angle glaucoma. J Med Genet. 2004;41(9):647–51. Vincent A, Billingsley G, Priston M, Williams-Lyn D, Sutherland J, Glaser T, et al. Phenotypic heterogeneity of CYP1B1: mutations in a patient with Peters' anomaly. J Med Genet. 2001;38(5):324–6. Lee SJ, Perera L, Coulter SJ, Mohrenweiser HW, Jetten A, Goldstein JA. The discovery of new coding alleles of human CYP26A1 that are potentially defective in the metabolism of all-trans retinoic acid and their assessment in a recombinant cDNA expression system. Pharmacogenet Genomics. 2007;17(3):169–80. Laue K, Pogoda HM, Daniel PB, van Haeringen A, Alanay Y, von Ameln S, et al. Craniosynostosis and multiple skeletal anomalies in humans and zebrafish result from a defect in the localized degradation of retinoic acid. Am J Hum Genet. 2011;89(5):595–606. Slavotinek AM, Mehrotra P, Nazarenko I, Tang PL, Lao R, Cameron D, et al. 
Focal facial dermal dysplasia, type IV, is caused by mutations in CYP26C1. Hum Mol Genet. 2013;22(4):696–703. Guidez F, Parks S, Wong H, Jovanovic JV, Mays A, Gilkes AF, et al. RARalpha-PLZF overcomes PLZF-mediated repression of CRABPI, contributing to retinoid resistance in t(11;17) acute promyelocytic leukemia. Proc Natl Acad Sci U S A. 2007;104(47):18694–9. Madsen P, Rasmussen HH, Leffers H, Honore B, Celis JE. Molecular cloning and expression of a novel keratinocyte protein (psoriasis-associated fatty acid-binding protein [PA-FABP]) that is highly up-regulated in psoriatic skin and that shares similarity to fatty acid-binding proteins. J Invest Dermatol. 1992;99(3):299–305. Borrow J, Goddard AD, Sheer D, Solomon E. Molecular analysis of acute promyelocytic leukemia breakpoint cluster region on chromosome 17. Science. 1990;249(4976):1577–80. Such E, Cervera J, Valencia A, Barragan E, Ibanez M, Luna I, et al. A novel NUP98/RARG gene fusion in acute myeloid leukemia resembling acute promyelocytic leukemia. Blood. 2011;117(1):242–5. Seeliger MW, Biesalski HK, Wissinger B, Gollnick H, Gielen S, Frank J, et al. Phenotype in retinol deficiency due to a hereditary defect in retinol binding protein synthesis. Invest Ophthalmol Vis Sci. 1999;40(1):3–11. Yamamoto H, Simon A, Eriksson U, Harris E, Berson EL, Dryja TP. Mutations in the gene encoding 11-cis retinol dehydrogenase cause delayed dark adaptation and fundus albipunctatus. Nat Genet. 1999;22(2):188–91. Gonzalez-Fernandez F, Kurz D, Bao Y, Newman S, Conway BP, Young JE, et al. 11-cis retinol dehydrogenase mutations as a major cause of the congenital night-blindness disorder known as fundus albipunctatus. Mol Vis. 1999;5:41. Deltour L, Foglio MH, Duester G. Metabolic deficiencies in alcohol dehydrogenase Adh1, Adh3, and Adh4 null mutant mice. Overlapping roles of Adh1 and Adh4 in ethanol clearance and metabolism of retinol to retinoic acid. J Biol Chem. 1999;274(24):16796–801. Molotkov A, Deltour L, Foglio MH, Cuenca AE, Duester G. Distinct retinoid metabolic functions for alcohol dehydrogenase genes Adh1 and Adh4 in protection against vitamin A toxicity or deficiency revealed in double null mutant mice. J Biol Chem. 2002;277(16):13804–11. Molotkov A, Fan X, Duester G. Excessive vitamin A toxicity in mice genetically deficient in either alcohol dehydrogenase Adh1 or Adh3. Eur J Biochem. 2002;269(10):2607–12. Ziouzenkova O, Orasanu G, Sharlach M, Akiyama TE, Berger JP, Viereck J, et al. Retinaldehyde represses adipogenesis and diet-induced obesity. Nat Med. 2007;13(6):695–702. Fan X, Molotkov A, Manabe S, Donmoyer CM, Deltour L, Foglio MH, et al. Targeted disruption of Aldh1a1 (Raldh1) provides evidence for a complex mechanism of retinoic acid synthesis in the developing retina. Mol Cell Biol. 2003;23(13):4637–48. Niederreither K, Subbarayan V, Dolle P, Chambon P. Embryonic retinoic acid synthesis is essential for early mouse post-implantation development. Nat Genet. 1999;21(4):444–8. Niederreither K, Vermot J, Schuhbaur B, Chambon P, Dolle P. Retinoic acid synthesis and hindbrain patterning in the mouse embryo. Development. 2000;127(1):75–85. Mic FA, Haselbeck RJ, Cuenca AE, Duester G. Novel retinoic acid generating activities in the neural tube and heart identified by conditional rescue of Raldh2 null mutant mice. Development. 2002;129(9):2271–82. Ribes V, Wang Z, Dolle P, Niederreither K. 
Retinaldehyde dehydrogenase 2 (RALDH2)-mediated retinoic acid synthesis regulates early mouse embryonic forebrain development by controlling FGF and sonic hedgehog signaling. Development. 2006;133(2):351–61. Vermot J, Niederreither K, Garnier JM, Chambon P, Dolle P. Decreased embryonic retinoic acid synthesis results in a DiGeorge syndrome phenotype in newborn mice. Proc Natl Acad Sci U S A. 2003;100(4):1763–8. Molotkova N, Molotkov A, Duester G. Role of retinoic acid during forebrain development begins late when Raldh3 generates retinoic acid in the ventral subventricular zone. Dev Biol. 2007;303(2):601–10. Singh S, Chen Y, Matsumoto A, Orlicky DJ, Dong H, Thompson DC, et al. ALDH1B1 links alcohol consumption and diabetes. Biochem Biophys Res Commun. 2015;463(4):768–73. Anastasiou V, Ninou E, Alexopoulou D, Stertmann J, Muller A, Dahl A, et al. Aldehyde dehydrogenase activity is necessary for beta cell development and functionality in mice. Diabetologia. 2016;59(1):139–50. Heidel SM, MacWilliams PS, Baird WM, Dashwood WM, Buters JT, Gonzalez FJ, et al. Cytochrome P4501B1 mediates induction of bone marrow cytotoxicity and preleukemia cells in mice treated with 7,12-dimethylbenz[a]anthracene. Cancer Res. 2000;60(13):3454–60. Buters JT, Sakai S, Richter T, Pineau T, Alexander DL, Savas U, et al. Cytochrome P450 CYP1B1 determines susceptibility to 7, 12-dimethylbenz[a]anthracene-induced lymphomas. Proc Natl Acad Sci U S A. 1999;96(5):1977–82. Libby RT, Smith RS, Savinova OV, Zabaleta A, Martin JE, Gonzalez FJ, et al. Modification of ocular defects in mouse developmental glaucoma models by tyrosinase. Science. 2003;299(5612):1578–81. Abu-Abed S, Dolle P, Metzger D, Beckett B, Chambon P, Petkovich M. The retinoic acid-metabolizing enzyme, CYP26A1, is essential for normal hindbrain patterning, vertebral identity, and development of posterior structures. Genes Dev. 2001;15(2):226–40. Sakai Y, Meno C, Fujii H, Nishino J, Shiratori H, Saijoh Y, et al. The retinoic acid-inactivating enzyme CYP26 is essential for establishing an uneven distribution of retinoic acid along the anterio-posterior axis within the mouse embryo. Genes Dev. 2001;15(2):213–25. Yashiro K, Zhao X, Uehara M, Yamashita K, Nishijima M, Nishino J, et al. Regulation of retinoic acid distribution is required for proximodistal patterning and outgrowth of the developing mouse limb. Dev Cell. 2004;6(3):411–22. MacLean G, Li H, Metzger D, Chambon P, Petkovich M. Apoptotic extinction of germ cells in testes of Cyp26b1 knockout mice. Endocrinology. 2007;148(10):4560–7. Li H, MacLean G, Cameron D, Clagett-Dame M, Petkovich M. Cyp26b1 expression in murine Sertoli cells is required to maintain male germ cells in an undifferentiated state during embryogenesis. PLoS One. 2009;4(10):e7501. Okano J, Lichti U, Mamiya S, Aronova M, Zhang G, Yuspa SH, et al. Increased retinoic acid levels through ablation of Cyp26b1 determine the processes of embryonic skin barrier formation and peridermal development. J Cell Sci. 2012;125(Pt 7):1827–36. Uehara M, Yashiro K, Mamiya S, Nishino J, Chambon P, Dolle P, et al. CYP26A1 and CYP26C1 cooperatively regulate anterior-posterior patterning of the developing brain and the production of migratory cranial neural crest cells in the mouse. Dev Biol. 2007;302(2):399–411. Gorry P, Lufkin T, Dierich A, Rochette-Egly C, Decimo D, Dolle P, et al. The cellular retinoic acid binding protein I is dispensable. Proc Natl Acad Sci U S A. 1994;91(19):9032–6. Fawcett D, Pasceri P, Fraser R, Colbert M, Rossant J, Giguere V. 
Postaxial polydactyly in forelimbs of CRABP-II mutant mice. Development. 1995;121(3):671–9. Owada Y, Takano H, Yamanaka H, Kobayashi H, Sugitani Y, Tomioka Y, et al. Altered water barrier function in epidermal-type fatty acid binding protein-deficient mice. J Invest Dermatol. 2002;118(3):430–5. Maeda K, Uysal KT, Makowski L, Gorgun CZ, Atsumi G, Parker RA, et al. Role of the fatty acid binding protein mal1 in obesity and insulin resistance. Diabetes. 2003;52(2):300–7. Pan Y, Short JL, Choy KH, Zeng AX, Marriott PJ, Owada Y, et al. Fatty acid-binding protein 5 at the blood-brain barrier regulates endogenous brain docosahexaenoic acid levels and cognitive function. J Neurosci. 2016;36(46):11755–67. Yu S, Levi L, Casadesus G, Kunos G, Noy N. Fatty acid-binding protein 5 (FABP5) regulates cognitive function both by decreasing anandamide levels and by activating the nuclear receptor peroxisome proliferator-activated receptor beta/delta (PPARbeta/delta) in the brain. J Biol Chem. 2014;289(18):12748–58. Chapellier B, Mark M, Garnier JM, LeMeur M, Chambon P, Ghyselinck NB. A conditional floxed (loxP-flanked) allele for the retinoic acid receptor alpha (RARalpha) gene. Genesis. 2002;32(2):87–90. Ghyselinck NB, Dupe V, Dierich A, Messaddeq N, Garnier JM, Rochette-Egly C, et al. Role of the retinoic acid receptor beta (RARbeta) during mouse development. Int J Dev Biol. 1997;41(3):425–47. Lufkin T, Lohnes D, Mark M, Dierich A, Gorry P, Gaub MP, et al. High postnatal lethality and testis degeneration in retinoic acid receptor alpha mutant mice. Proc Natl Acad Sci U S A. 1993;90(15):7225–9. Mendelsohn C, Mark M, Dolle P, Dierich A, Gaub MP, Krust A, et al. Retinoic acid receptor beta 2 (RAR beta 2) null mutant mice appear normal. Dev Biol. 1994;166(1):246–58. Krezel W, Ghyselinck N, Samad TA, Dupe V, Kastner P, Borrelli E, et al. Impaired locomotion and dopamine signaling in retinoid receptor mutant mice. Science. 1998;279(5352):863–7. Lohnes D, Kastner P, Dierich A, Mark M, LeMeur M, Chambon P. Function of retinoic acid receptor gamma in the mouse. Cell. 1993;73(4):643–58. Walkley CR, Olsen GH, Dworkin S, Fabb SA, Swann J, McArthur GA, et al. A microenvironment-induced myeloproliferative syndrome caused by retinoic acid receptor gamma deficiency. Cell. 2007;129(6):1097–110. Zizola CF, Frey SK, Jitngarmkusol S, Kadereit B, Yan N, Vogel S. Cellular retinol-binding protein type I (CRBP-I) regulates adipogenesis. Mol Cell Biol. 2010;30(14):3412–20. Kane MA, Folias AE, Pingitore A, Perri M, Krois CR, Ryu JY, et al. CrbpI modulates glucose homeostasis and pancreas 9-cis-retinoic acid concentrations. Mol Cell Biol. 2011;31(16):3277–85. Ghyselinck NB, Bavik C, Sapin V, Mark M, Bonnier D, Hindelang C, et al. Cellular retinol-binding protein I is essential for vitamin A homeostasis. EMBO J. 1999;18(18):4903–14. E X, Zhang L, Lu J, Tso P, Blaner WS, Levin MS, et al. Increased neonatal mortality in mice lacking cellular retinol-binding protein II. J Biol Chem. 2002;277(39):36617–23. Quadro L, Blaner WS, Hamberger L, Van Gelder RN, Vogel S, Piantedosi R, et al. Muscle expression of human retinol-binding protein (RBP). Suppression of the visual defect of RBP knockout mice. J Biol Chem. 2002;277(33):30191–7. Yang Q, Graham TE, Mody N, Preitner F, Peroni OD, Zabolotny JM, et al. Serum retinol binding protein 4 contributes to insulin resistance in obesity and type 2 diabetes. Nature. 2005;436(7049):356–62. Driessen CA, Winkens HJ, Hoffmann K, Kuhlmann LD, Janssen BP, Van Vugt AH, et al. 
Disruption of the 11-cis-retinol dehydrogenase gene leads to accumulation of cis-retinols and cis-retinyl esters. Mol Cell Biol. 2000;20(12):4275–87. Arregi I, Climent M, Iliev D, Strasser J, Gouignard N, Johansson JK, et al. Retinol Dehydrogenase-10 regulates pancreas organogenesis and endocrine cell differentiation via paracrine retinoic acid signaling. Endocrinology. 2016;157(12):4615–31. Dyson E, Sucov HM, Kubalak SW, Schmid-Schonbein GW, DeLano FA, Evans RM, et al. Atrial-like phenotype is associated with embryonic ventricular failure in retinoid X receptor alpha -/- mice. Proc Natl Acad Sci U S A. 1995;92(16):7386–90. Mascrez B, Ghyselinck NB, Chambon P, Mark M. A transcriptionally silent RXRalpha supports early embryonic morphogenesis and heart development. Proc Natl Acad Sci U S A. 2009;106(11):4272–7. Sucov HM, Dyson E, Gumeringer CL, Price J, Chien KR, Evans RM. RXR alpha mutant mice establish a genetic basis for vitamin A signaling in heart morphogenesis. Genes Dev. 1994;8(9):1007–18. Gruber PJ, Kubalak SW, Pexieder T, Sucov HM, Evans RM, Chien KR. RXR alpha deficiency confers genetic susceptibility for aortic sac, conotruncal, atrioventricular cushion, and ventricular muscle defects in mice. J Clin Invest. 1996;98(6):1332–43. Du X, Tabeta K, Mann N, Crozat K, Mudd S, Beutler B. An essential role for Rxr alpha in the development of Th2 responses. Eur J Immunol. 2005;35(12):3414–23. Kastner P, Mark M, Leid M, Gansmuller A, Chin W, Grondona JM, et al. Abnormal spermatogenesis in RXR beta mutant mice. Genes Dev. 1996;10(1):80–92. Saga Y, Kobayashi M, Ohta H, Murai N, Nakai N, Oshima M, et al. Impaired extrapyramidal function caused by the targeted disruption of retinoid X receptor RXRgamma1 isoform. Genes Cells. 1999;4(4):219–28. Kabir M, Barradas A, Tzotzos GT, Hentges KE, Doig AJ. Properties of genes essential for mouse development. PLoS One. 2017;12(5):e0178273. Dickerson JE, Zhu A, Robertson DL, Hentges KE. Defining the role of essential genes in human disease. PLoS One. 2011;6(11):e27368. Fuller ZL, Berg JJ, Mostafavi H, Sella G, Przeworski M. Measuring intolerance to mutation in human genetics. Nat Genet. 2019;51(5):772–6. Jackson B, Brocker C, Thompson DC, Black W, Vasiliou K, Nebert DW, et al. Update on the aldehyde dehydrogenase gene (ALDH) superfamily. Hum Genomics. 2011;5(4):283–303. Ray WJ, Bain G, Yao M, Gottlieb DI. CYP26, a novel mammalian cytochrome P450, is induced by retinoic acid and defines a new family. J Biol Chem. 1997;272(30):18702–8. Topletz AR, Thatcher JE, Zelter A, Lutz JD, Tay S, Nelson WL, et al. Comparison of the function and expression of CYP26A1 and CYP26B1, the two retinoic acid hydroxylases. Biochem Pharmacol. 2012;83(1):149–63. Stoney PN, Fragoso YD, Saeed RB, Ashton A, Goodman T, Simons C, et al. Expression of the retinoic acid catabolic enzyme CYP26B1 in the human brain to maintain signaling homeostasis. Brain Struct Funct. 2016;221(6):3315–26. Shawky R. Reduced penetrance in human inherited disease. Egypt J Med Hum Genet. 2014;15(2):103–11. Niakan KK, Eggan K. Analysis of human embryos from zygote to blastocyst reveals distinct gene expression patterns relative to the mouse. Dev Biol. 2013;375(1):54–64. Madissoon E, Tohonen V, Vesterlund L, Katayama S, Unneberg P, Inzunza J, et al. Differences in gene expression between mouse and human for dynamically regulated genes in early embryo. PLoS One. 2014;9(8):e102949. Chavez SL, McElroy SL, Bossert NL, De Jonge CJ, Rodriguez MV, Leong DE, et al. 
Comparison of epigenetic mediator expression and function in mouse and human embryonic blastomeres. Hum Mol Genet. 2014;23(18):4970–84. Birling MC, Herault Y, Pavlovic G. Modeling human disease in rodents by CRISPR/Cas9 genome editing. Mamm Genome. 2017;28(7-8):291–301. Abe S, Kobayashi K, Oji A, Sakuma T, Kazuki K, Takehara S, et al. Modification of single-nucleotide polymorphism in a fully humanized CYP3A mouse by genome editing technology. Sci Rep. 2017;7(1):15189. Hall B, Cho A, Limaye A, Cho K, Khillan J, Kulkarni AB. Genome Editing in Mice Using CRISPR/Cas9 Technology. Curr Protoc Cell Biol. 2018;81(1):e57. Varshney GK, Carrington B, Pei W, Bishop K, Chen Z, Fan C, et al. A high-throughput functional genomics workflow based on CRISPR/Cas9-mediated targeted mutagenesis in zebrafish. Nat Protoc. 2016;11(12):2357–75. Liu J, Zhou Y, Qi X, Chen J, Chen W, Qiu G, et al. CRISPR/Cas9 in zebrafish: an efficient combination for human genetic diseases modeling. Hum Genet. 2017;136(1):1–12. Dooley K, Zon LI. Zebrafish: a model system for the study of human disease. Curr Opin Genet Dev. 2000;10(3):252–6. Westefield M. A guide for the laboratory use of zebrafish (Danio rerio). 5th ed. Eugene: Univ. of Oregon Press; 2007. Lawrence C, Adatto I, Best J, James A, Maloney K. Generation time of zebrafish (Danio rerio) and medakas (Oryzias latipes) housed in the same aquaculture facility. Lab Animal. 2012;41(6):158–65. Kim S, Carlson R, Zafreen L, Rajpurohit SK, Jagadeeswaran P. Modular, easy-to-assemble, low-cost zebrafish facility. Zebrafish. 2009;6(3):269–74. Brophy PD, Rasmussen M, Parida M, Bonde G, Darbro BW, Hong X, et al. A gene implicated in activation of retinoic acid receptor targets is a novel renal agenesis gene in humans. Genetics. 2017;207(1):215–28. We thank the Yale Printing and Publishing Service for their assistance with Fig. 2, and also appreciate our colleagues for careful reading of this manuscript and constructive comments. This work was supported in parts by National Institutes of Health Grants AA021724, AA022057, EY017963, and EY022312. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors declare that they have no competing interests. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. About this article Cite this article Thompson, B., Katsanis, N., Apostolopoulos, N. et al. Genetics and functions of the retinoic acid pathway, with special emphasis on the eye. Hum Genomics 13, 61 (2019). https://doi.org/10.1186/s40246-019-0248-9 - Retinoic Acid - Eye Development
<urn:uuid:855ba9d6-8e5c-47c3-b4ce-38803b87c171>
{ "dump": "CC-MAIN-2021-49", "url": "https://humgenomics.biomedcentral.com/articles/10.1186/s40246-019-0248-9", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00121.warc.gz", "language": "en", "language_score": 0.8019120693206787, "token_count": 20247, "score": 3.015625, "int_score": 3 }
Filtration and Separation, August 1999
Filtration 101
"To filter or not to filter?" That is the question - or is it? The real question we need to ask ourselves is: what type of filtration do we need? In today's engineering climate, most of us will have to learn something about filtration. Most engineers do not spend a great deal of time learning about a subject unless there is an immediate application. Therefore, the fundamentals of filtration technology are introduced here in a quick and simple manner. The following discussion is very basic. Filtration theory, terminology, test standards, classification and selection are outlined and explained. This information provides a solid basis of knowledge from which an engineer can make sound decisions regarding filter selection and application in most engineering projects.
Filtration theory and terminology
One common reason to filter is to increase pump, bearing, and tool life (for example, in cutting and grinding coolants). Filtration is defined as the physical separation of constituents from a fluid by means of flow through a permeable or porous medium. A common example is the coffee maker: the coffee grounds are removed from the brewed coffee by a filter. The coffee filter (the porous medium) provides the physical separation of the grounds (the constituents) from the water. Filters are rated by the size of the particles they are designed to remove. The size is defined in terms of "microns" (micrometer is actually the correct term). One micrometer is equal to 10^-6 meter. To place the micrometer into physical perspective, the smallest object the unaided eye can see is about 40 microns, which is approximately the diameter of a human hair.
Filter classification
Filters are classified according to the size of the particles they are intended to remove. Different sized particles require different types of filters. Table 1 gives a broad overview of the classification system. For example, say that a filter is rated at 90% efficiency for 5 micron particles. This means that the filter will remove 90% of the particles flowing through it that are 5 microns in size and larger. Another way to denote particle removal efficiency is to use Beta Ratios: the Beta Ratio at a given particle size is the number of particles of that size and larger counted upstream of the filter divided by the number counted downstream. Figure 1 can be used as a quick reference for comparing percent removal efficiency and Beta Ratio. There are two types of efficiency ratings: nominal and absolute. Nominal: the size of particles removed at a set efficiency under established conditions. Manufacturers can vary nominal ratings anywhere from 50-98% removal efficiency, depending on product and company. Absolute: the size of particles removed at essentially 100% efficiency under established conditions. Filtration efficiencies and performance can vary with actual "real world" conditions. Filter manufacturers rate their filters under laboratory conditions. The field performance of a filter can be affected by flow rate, viscosity of the fluid being filtered, concentration of contaminant, and measurement techniques.
Filter life
Filter life is determined by the filter's Dirt Holding Capacity (DHC). DHC is defined as the amount of contaminant (on a weight basis) fed to a filter by the time it attains its terminal differential pressure (i.e., the end of its service life, typically 30-50 psi). This sounds like a misnomer, but it is not: the dirt holding capacity is the amount of dirt fed to the filter, while the dirt retention capacity is the actual amount of dirt that the filter retains.
Depth media versus surface media
Filtration is utilized for the removal of a wide range of contaminants, from the filtering of boulders to the separation of ions.
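Returning to the ratings discussion above: the Beta Ratio and percent removal efficiency are two views of the same measurement, related by efficiency = (1 - 1/Beta) x 100. The short Python sketch below is not part of the original article, and the particle counts in it are made-up example numbers; it simply shows the standard conversion in both directions.

```python
def beta_ratio(upstream_count, downstream_count):
    """Beta Ratio at a given particle size: particles of that size and larger
    counted upstream of the filter divided by those counted downstream."""
    return upstream_count / downstream_count

def efficiency_from_beta(beta):
    """Percent removal efficiency corresponding to a Beta Ratio."""
    return (1.0 - 1.0 / beta) * 100.0

def beta_from_efficiency(efficiency_percent):
    """Beta Ratio corresponding to a percent removal efficiency."""
    return 1.0 / (1.0 - efficiency_percent / 100.0)

if __name__ == "__main__":
    # Hypothetical particle counts at 5 microns and larger.
    upstream, downstream = 50_000, 500
    beta = beta_ratio(upstream, downstream)  # 100
    print(f"Beta = {beta:.0f}, efficiency = {efficiency_from_beta(beta):.1f}%")

    # Quick reference table, similar in spirit to the article's Figure 1.
    for b in (2, 10, 20, 75, 100, 1000):
        print(f"Beta {b:>5} -> {efficiency_from_beta(b):6.2f}% removal")
```

For example, a Beta Ratio of 2 corresponds to 50% removal, 100 to 99%, and 1000 to 99.9%, which is the usual rule of thumb.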
Though the science of filtration is vast and complex, the selection of a filtration system can be simplified by remembering a few basic points: Filter micron ratings may not be comparable among manufacturers. For example, suppose you are asked to replace a 50 micron filter from "Company A" with a filter from "Company B." The first question that should be asked is: what efficiency of particle removal is needed - 50%, 90%, 99.98%; nominal or absolute? One must know how specific filter manufacturers rate their own filters. A filter's micron rating should only be used as a guide to narrow down initial selections. Remember, filter companies rate their filters under laboratory conditions, not in actual application testing. There is no substitute for cartridge filter testing in actual "real world" use. Just because a filter has a long service life in a laboratory does not necessarily mean it will in real applications. Each specific process will dictate whether surface or depth filtration media is needed.
Bio: By Ron D. Masters and William J. Campagna, Jr., Parker Hannifin, Lebanon, IN.
<urn:uuid:ff3a76d7-99c4-48ca-b348-fc7a49323eaf>
{ "dump": "CC-MAIN-2018-26", "url": "http://fischer-robertson.com/tools_filtration_101.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00576.warc.gz", "language": "en", "language_score": 0.9144793152809143, "token_count": 935, "score": 3.515625, "int_score": 4 }
By Howard LeWine, M.D. Q: What is a trapped nerve? How does it happen? A: The term “trapped nerve” refers to a condition in which a nerve is compressed or pinched. This causes pain, numbness, weakness or other symptoms. There are many sites and causes of nerve compression. Here are some of the common ones: A disc problem in the back or neck. Spinal discs can bulge or tear, pressing on nerves as they travel to and from the spinal cord; sciatica is a common example of a “trapped nerve” that may be due to disc disease. Swelling in the wrist from fluid retention or arthritis. The median nerve travels through the carpal tunnel, a tight space in the wrist that is easily compressed by swelling in the joint. Enlarged tissues. Growth of a lymph node, an abscess (an infection) or a tumor can compress a nearby nerve. Injury. A nerve can be compressed by swelling, fracture or bleeding following trauma. Or the injury might be minor, such as simply leaning on your elbow. This can compress the ulnar nerve that travels just under the skin. After reviewing your symptoms and performing a physical exam, your doctor can often tell which nerve is compressed. Some tests, such as magnetic resonance imaging (MRI) for disc disease, may be necessary to confirm the location of the nerve entrapment and the specific cause. Treatment of an entrapped nerve depends on the cause. It may be as simple as wearing a wrist splint for carpal tunnel syndrome or not leaning on your elbows. Medications, such as anti-inflammatory drugs for arthritis, may improve symptoms of nerve compression. However, surgery may be recommended if these other measures are not effective. (Howard LeWine, M.D., is an internist at Brigham and Women’s Hospital and assistant professor at Harvard Medical School. For additional consumer health information, please visit www.health.harvard.edu.)
<urn:uuid:a69ad71b-208f-4338-9478-1291bc65f11a>
{ "dump": "CC-MAIN-2021-25", "url": "https://chicagohealthonline.com/ask-the-harvard-experts-trapped-nerves-can-happen-in-many-sites/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621450.29/warc/CC-MAIN-20210615145601-20210615175601-00636.warc.gz", "language": "en", "language_score": 0.9502363204956055, "token_count": 423, "score": 3.5625, "int_score": 4 }
Orange Island is the earliest emergent landmass of Florida, dating from the middle Rupelian (~33.9–28.4 Ma) geologic stage of the Early Oligocene epoch, and is named for Orange County, Florida, United States of America. During the warm Eocene (~55.8–33.9 Ma), what was to become Florida (the Florida Platform) was a carbonate bank covered by a shallow sea. During the Eocene, biological carbonates were deposited on this bank, forming various layers; the Avon Park Formation and Ocala Limestone were formed at this time. Orange Island appears to lack the Suwannee Limestone, due either to erosion or to a lack of deposition of material (J. R. Bryan).
Birth of Orange Island
By the Early Oligocene, sea levels remained almost as high as during the late Eocene. Parts of the platform stayed above water, creating Orange Island, a low-relief island just 150 km (93 mi) south of the Oligocene Georgia coastline and the Bainbridge Subsea, with a shallow channel, the Gulf Trough or Suwannee Strait, separating the two. The shallower Gulf Trough now supported a large coral reef extending from the Salt Mountain Formation of southern Alabama to the Flint River Formation of Mitchell County, Georgia. The western side of the island was bounded by the Pasco Reef System, an environment now found only in the southern Pacific Ocean (Petuch). During the middle Oligocene (~25 Ma), a worldwide cooling of the climate started the drying of the Dade Subsea and the Gulf Trough, in turn enlarging Orange Island. During the Chattian stage of the Late Oligocene, land mammals began their movement into what is now Florida, with the earliest of their fossils, found in Hernando County, dating to 24.8 Ma. From that Chattian stage through the early Aquitanian age of the early Miocene, Orange Island expanded both northward and westward, with a carbonate lagoon system on its west side. To the south of Orange Island, the Everglades Basin remained about 250 meters (850 feet) deep. Orange Island had its own rivers, carrying sediments composed of sand and clay into the Yeehaw Strait at Orange Island’s southern tip. During the interglacial periods of rising water in the Pleistocene, Orange Island appears to have served as an animal and floral refugium for central Florida, not unlike the river banks of the Apalachicola River, the red hills near Marianna, Florida, and the coastal flatwoods in the Florida panhandle.
- Petuch, Edward J., Roberts, Charles; The geology of the Everglades and adjacent areas, 2007, ISBN 1-4200-4558-X.
- Bryan, J.R., 1991, Stratigraphic and paleontologic studies of Paleocene and Oligocene carbonate facies of the eastern Gulf Coastal Plain: unpublished Ph.D. dissertation, University of Tennessee, Knoxville, TN.
- USGS: Geologic units in Hillsborough County, Florida: Suwannee Limestone
- Paleobiology Database, Brooksville 1 Collection, 24.8–24.7 Ma.
- Flora of North America, Volume 1, Chapter 6, Flora of North America
<urn:uuid:99cd7641-df68-4f28-aa36-800f646ae68e>
{ "dump": "CC-MAIN-2023-14", "url": "http://prehistoricflorida.net/orange-island-ocala-platform/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00516.warc.gz", "language": "en", "language_score": 0.9167636036872864, "token_count": 688, "score": 3.484375, "int_score": 3 }
PhD. in Mathematics Norm was 4th at the 2004 USA Weightlifting Nationals! He still trains and competes occasionally, despite his busy schedule. Recall that a composite function f(g(x)) is a function that has another function on the "inside." When taking the derivative of a function like this, we use the chain rule. The chain rule states that you first take the derivative of the "outside" function, then multiply it by the derivative of the "inside function." So for a function h(x)=f(g(x)), its derivative would be h'(x)=f'(g(x))*g'(x). To determine which function is the inside function, look to see which function is "contained" within another function. For example, for exponential functions, look at the power to which e is raised. For logarithmic functions, it will be what is within the logarithm brackets. Remember that the chain rule can be used with all other rules of differentiation learned so far; this includes rules for deriving exponents, logarithms, the product rule, the quotient rule, etc. Let’s take a look at another problem. This one’s kind of interesting because we can differentiate the function h(x) equals 1 over x² plus 1 in two ways. You might see that this is a quotient. You could actually use the quotient rule, but you can also use the chain rule. It’s always nice when you have options. It gives you a little more versatility. Let’s see how the quotient rule would work on this. The derivative would be, and remember you have to identify the low function and the high function. It's low d high, so x² plus 1 times the derivative of the numerator, which is just zero, minus high d low. 1 times the derivative of x² plus 1, which is 2x. Over the square of what’s below. That’s (x² plus 1)². This first term just disappears. I'll get -2x over (x² plus 1)². So that’s the derivative using the quotient rule. Let’s see what it looks like using the chain rule. Now using the chain rule, you need to identify an inside and outside function. Furthermore, let me rewrite this in a slightly different form. I’m going to put x² plus 1 inside the function, x to the -1 because that’s the same as the reciprocal function. H(x) is the same as this. x² plus 1, the quantity to the -1 power. You can see that the outside function is x to the -1. The inside function is x² plus 1. Let’s use the chain rule on that. H'(x) is going to be, first the derivative of the outside function and using the power rule, we pull the minus 1 in front. We get minus 1 times parenthesis, I’ll write the x² plus 1 later, but what does the exponent become? 1 less. -1 minus 1 is -2. So now I’ll put in the x² plus 1. You still have to multiply by the derivative of the inside function and that’s 2x. What do we have here? We have minus 1 times 2x, -2x. And this x² plus 1 to the -2, that just means 1 over (x² plus 1)². We get the same exact answer and it’s about the same amount of work. Always nice to have two ways of solving a problem when you can. H(x) has derivative -2x over x² plus 1 to the quantity squared.
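As a quick check of the result worked out above, here is a short sketch using Python's sympy library (not part of the original lesson). It differentiates h(x) = 1/(x² + 1) symbolically and confirms that the answer matches -2x/(x² + 1)², the derivative obtained above by both the quotient rule and the chain rule.

```python
import sympy as sp

x = sp.symbols('x')
h = 1 / (x**2 + 1)

# Let sympy differentiate h(x) directly.
h_prime = sp.diff(h, x)

# The answer worked out in the lesson: -2x / (x^2 + 1)^2
expected = -2 * x / (x**2 + 1)**2

# If the simplified difference is zero, the two expressions agree.
print(sp.simplify(h_prime - expected) == 0)  # True
print(sp.simplify(h_prime))                  # -2*x/(x**2 + 1)**2
```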
<urn:uuid:ae773b4a-9947-4018-b86a-451b3b829588>
{ "dump": "CC-MAIN-2017-13", "url": "https://www.brightstorm.com/math/calculus/techniques-of-differentiation/the-chain-rule-problem-4/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203515.32/warc/CC-MAIN-20170322213003-00383-ip-10-233-31-227.ec2.internal.warc.gz", "language": "en", "language_score": 0.9283500909805298, "token_count": 795, "score": 3.5, "int_score": 4 }
Belgian Sheepdogs (or Belgian Shepherds) are large working dogs that originate from Belgium, where they were commonly used to herd sheep. Over the years however, these intelligent, versatile dogs have also been used as police dogs, messengers and even helped soldiers in the military. Safe to say, few dogs can match his work ethic, trainability and loyalty. Do they shed lots? Belgian Sheepdogs have a double coat that is made up of long, harsh outer guard hairs and a short, dense undercoat. He sheds moderately throughout most of the year, but quite heavily once or twice a year due to seasonal shedding. In this article, we’ll explore just how much Belgian Shepherds molt and some of the most effective ways to manage the shedding. Belgian Sheepdog Shedding Belgian Sheepdogs shed moderately throughout most of the year, and heavily during one or two seasonal “coat blows.” This is known as seasonal shedding, which is something most dogs do, and it’s especially noticeable in dogs with thick, double coats as the Belgian has. This excess shedding occurs because the dog is naturally adapting to the change of season. And it normally happens once in spring and again in fall, and lasts for a period of two-to-four weeks. But it does depend on the breed and where they are located as to how often and how noticeable the shedding is. Either way, you shouldn’t be surprised to notice a fair amount of fur falling off his coat once or twice per year. And you’ll generally notice a moderate amount of fur gathering around the home outside of shedding season. Similar to the amount of fur a Belgian Malinoi sheds. Which, by the way, is a very similar breed to the Belgian Sheepdog, to the point they are classified as the same breed in some countries. However, according to the American Kennel Club, Belgian Sheepdogs are a separate and unique breed. And, while they do share many similarities, the Belgian Malinoi has a shorter coat that doesn’t shed as heavily overall. In any case, overall, Belgian Sheepdogs are probably not ideal if you’re looking for a low shedding dog, there are more suitable breeds in this respect. But with proper brushing, along with ensuring his diet is optimal, it’s not difficult to manage the molting. Grooming Your Belgian Belgians don’t have the lowest maintenance coat in dogdom, but at the same time, they don’t require any special grooming like some dogs. Overall, they have an average maintenance coat. When it comes to grooming, the first thing you need to understand is that Belgians are double-coated. Meaning that instead of having just one layer of fur, like a Boxer for example, they have two layers of fur. Both an outer coat and an undercoat. The outer coat is made up of long, straight guard hairs that are harsh in texture and come in a variety of colors. While the undercoat is short and dense, and he needs this as it helps insulate him from both hot and cold weather. Together, this coat requires more effort to brush than a dog with a short, single coat. Because longer hairs are prone to matting and tangles, especially if he goes out in the field to work or play a lot. So these need to be carefully brushed out to avoid causing your dog pain and discomfort. And second, undercoats result in higher levels of shedding and you will need a brush designed to reach the undercoat, in order to remove that fur. What brush you use is up to you, but generally speaking, a pin or slicker brush and metal comb work well to maintain his coat, keep it mat free and remove any dead hairs. 
And brushing once or twice a week for 20 minutes should be all it takes. However, during shedding season, it’s a different story. Daily brushing is needed if you want to keep as much of his fur from falling off onto your floors and furniture as possible. And, while not necessary, a deshedding brush or undercoat rake can make your job a little easier and less time consuming during these times. Other than brushing, maintaining his coat mostly comes down to bathing occasionally and with a good quality dog shampoo that moisturizes his coat. Reducing Excessive Shedding You can’t actually stop a dog from shedding. This is a normal, natural process whereby he is shedding (or molting) his old hairs which are then replaced with new ones. So, short of removing his hair completely, there’s no getting around this. And incase you’re wondering, shaving him down to the skin is NOT a good idea unless your vet specifically recommends doing so. Because Belgians need their undercoat, it helps insulate them in both warm and cool weather and having hair protects them from things like sunburn. The best thing to do is learn how to manage the shedding. Which mostly comes down to common sense things like making sure his diet is optimal and that you brush him regularly. The reason it’s important to ensure his diet is optimal is because this is naturally going to lead to a healthier dog, with a healthier coat, which in turn sheds less. So it’s worth taking the time to select the right type of food for your dog, one of a high quality that contains the right amount of vitamins, minerals and overall nutrition. And preferably one that contains things like Omega-3 fatty acids. Once you have their diet in order, proper grooming is your best defence against shedding. Because it removes the old fur from the dog before it has a chance to drop onto your floor and upholstery, and massages his skin which helps promote an optimal coat. There are some other effective ways to limit shedding, but these are probably the most important. And, while simple, if you get these right you are going to be spending a lot less time vacuuming that if you neglected these. Belgian Shepherds are known for their high level of intelligence, similar to other herding breeds like the Border Collie or Australian Shepherd. And they are incredibly loyal, hard working and love human companionship. So if you don’t mind the heavy shedding once or twice per year, you will find that it’s not difficult to manage and keep your home relatively fur-free most of the time. There are certainly heavier shedding dogs out there, like the German Shepherd for example, but if you’re looking for a low shedding dog, and one that’s generally more suitable for people with allergies, then breeds like the Giant Schnauzer or Welsh Terrier are worth considering.
<urn:uuid:74bbf463-9cbc-42fd-a1d1-151d3857c48f>
{ "dump": "CC-MAIN-2022-21", "url": "https://stopmydogshedding.com/do-belgian-sheepdogs-shed", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00026.warc.gz", "language": "en", "language_score": 0.967278242111206, "token_count": 1406, "score": 2.65625, "int_score": 3 }
Improving vocabulary skills requires constant attention. This 'how to' focuses on a basic strategy for increasing vocabulary in specific subject areas through the use of a vocabulary tree. Time Required: Varies - Choose a subject area that interests you very much. - Write a short introduction to the subject, trying to use as many vocabulary words concerning the subject as possible. - Using your introduction, arrange the principal ideas concerning the subject into a vocabulary tree. - To create a vocabulary tree, put the subject at the center of a piece of paper. - Around the central subject, put the principal areas relating to the subject. Example - verbs, descriptive adjectives, where, etc. - In each of these categories, write the appropriate vocabulary. If you need to, write sub-categories. - Create the same vocabulary tree in your native language - Your native language tree will be much more detailed. Use this native language tree as a reference point to look up new words and fill in your English tree. - Rewrite your introductory essay on the subject, taking advantage of the new vocabulary you have learned. - To make this vocabulary active, practice reading your essay aloud until you can present it from memory. - Ask a friend or fellow classmate to listen to your presentation and ask you questions about the subject. - Remember that vocabulary goes from passive knowledge to active knowledge - this means that you need to repeat a word often before it becomes active vocabulary. - Be patient with yourself; it takes time for this process to work. - Try to always learn vocabulary in groups of words instead of random lists. In this manner, words are related to each other and are more likely to be remembered over the long term.
<urn:uuid:57d7759a-e83a-4ceb-a550-8dbaa55a2945>
{ "dump": "CC-MAIN-2018-09", "url": "http://easy2learneng.blogspot.cz/2014/03/how-to-increase-specific-vocabulary.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813883.34/warc/CC-MAIN-20180222022059-20180222042059-00751.warc.gz", "language": "en", "language_score": 0.903952419757843, "token_count": 350, "score": 3.734375, "int_score": 4 }
Objective(s) of the session: Learn how to organize code so that the domain model is clearly separated from the technical infrastructure and external APIs. Learn how to refactor towards such an architecture. Improve refactoring skills. This is a little programming kata that gets you to think: how to organize an application so that the domain logic is separated from external libraries and systems? The example reads person records from a file and sends email to selected persons. How do you test this properly? By "properly", I mean that - the core logic of your application should be tested without need to talk to the filesystem or the mail server. - you should be able to prove that the system interacts correctly with both the filesystem and the mail server. How do you separate responsibilities? The "Single Responsibility Principle" says that a class should have a single reason to change. This means that we should have a single place in the code that changes in the event that, say: - the data comes from a database instead of a file - instead of sending email, we want to send a FaceBook message or a text message - the logic for deciding who gets the messages changes - the content of the messages changes In this session you will learn about - The hexagonal architecture, which is a variant of the common three-layers architecture. The advantage is that it avoids making the domain model dependent on the data-access code. - The dependency-inversion principle, which says that high-level code should not depend on low-level details, and how to implement it. - Splitting your code in - unit tests, which prove that your logic works, and are fast and reliable, and - integration tests, which prove that you can talk to external systems, but are a bit less fast and a bit less reliable. Format and length: 120 mins coding dojo. Participants should provide their own laptop. I provide the necessary software. Intended audience and prerequisites: This session is for programmers who are beginners to intermediate in TDD and refactoring. Knowledge of Java is preferred. .Net programmers will need to pair with a Java programmer. This is not a session for absolute beginners; I expect the audience to have a basic idea of what TDD and refactoring are and why they're good.
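The session itself works in Java, but as a rough illustration of the dependency-inversion idea described above - and only as an illustration, not the session's reference solution - here is a minimal Python sketch of how the greetings logic might be separated from the file system and the mail server. All names here (Employee, EmployeeRepository, MessageSender, BirthdayService) are invented for this example.

```python
from dataclasses import dataclass
from datetime import date
from typing import Protocol


@dataclass
class Employee:
    name: str
    email: str
    birthdate: date


class EmployeeRepository(Protocol):
    """Port for loading person records (a flat file in the kata)."""
    def all_employees(self) -> list[Employee]: ...


class MessageSender(Protocol):
    """Port for delivering greetings (email in the kata)."""
    def send(self, to: str, subject: str, body: str) -> None: ...


class BirthdayService:
    """Domain logic only: decides who gets a greeting and what it says.
    It depends on the two ports above, never on files or SMTP directly."""

    def __init__(self, repository: EmployeeRepository, sender: MessageSender) -> None:
        self._repository = repository
        self._sender = sender

    def send_greetings(self, today: date) -> None:
        for employee in self._repository.all_employees():
            if (employee.birthdate.month, employee.birthdate.day) == (today.month, today.day):
                self._sender.send(
                    to=employee.email,
                    subject="Happy birthday!",
                    body=f"Happy birthday, dear {employee.name}!",
                )
```

In unit tests, the service is exercised with in-memory fakes for both ports, so the core logic is fast and deterministic to test; separate, slower integration tests then prove that a file-backed repository and an SMTP-backed sender each honour their port. Swapping the data source for a database, or email for a text message, then touches only one adapter.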
<urn:uuid:21ae57a2-c7f6-423a-a1e2-9ed9a3011e04>
{ "dump": "CC-MAIN-2015-18", "url": "http://www.xpday.net/Xpday2009/sessions/Birthday%20Greetings%20Kata.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657216.31/warc/CC-MAIN-20150417045737-00281-ip-10-235-10-82.ec2.internal.warc.gz", "language": "en", "language_score": 0.9254891276359558, "token_count": 483, "score": 2.75, "int_score": 3 }
The link below is to an archive website called the Digital Comic Museum. For more visit: The link below is to an article that takes a look at where to find free books and comics. For more visit: In this series, we look at under-acknowledged women through the ages. In April 1941, just a few short years after Superman came swooping out of the Manhattan skies, Miss Fury – originally known as Black Fury – became the first major female superhero to go to print. She beat Charles Moulton Marsden’s Wonder Woman to the page by more than six months. More significantly, Miss Fury was the first female superhero to be written and drawn by a woman, Tarpé Mills. Miss Fury’s creator – whose real name was June – shared much of the gritty ingenuity of her superheroine. Like other female artists of the Golden Age, Mills was obliged to make her name in comics by disguising her gender. As she later told the New York Post, “It would have been a major let-down to the kids if they found out that the author of such virile and awesome characters was a gal.” Yet, this trailblazing illustrator, squeezed out of the comic world amid a post-WW2 backlash against unconventional images of femininity and a 1950s climate of heightened censorship, has been largely excluded from the pantheon of comic greats – until now. Comics then and now tend to feature weak-kneed female characters who seem to exist for the sole purpose of being saved by a male hero – or, worse still, are “fridged”, a contemporary comic book colloquialism that refers to the gruesome slaying of an undeveloped female character to deepen the hero’s motivation and propel him on his journey. But Mills believed there was room in comics for a different kind of female character, one who was able, level-headed and capable, mingling tough-minded complexity with Mills’ own taste for risqué behaviour and haute couture gowns. Where Wonder Woman’s powers are “marvellous” – that is, not real or attainable – Miss Fury and her alter ego Marla Drake use their collective brains, resourcefulness and the odd stiletto heel in the face to bring the villains to justice. And for a time they were wildly successful. Miss Fury ran a full decade from April 1941 to December 1951, was syndicated in 100 different newspapers at the height of her wartime fame, and sold a million copies an issue in reprints released by Timely (now Marvel) comics. Pilots flew bomber planes with Miss Fury painted on the fuselage. Young girls played with paper doll cut outs featuring her extensive high fashion wardrobe. An anarchic, ‘gender flipped’ universe Miss Fury’s “origin story” offers its own coolly ironic commentary on the masculine conventions of the comic genre. One night a girl called Marla Drake finds out that her friend Carol is wearing an identical gown to a masquerade party. So, at the behest of her maid Francine, she dons a skin tight black cat suit that – in an imperial twist, typical of the period – was once worn as a ceremonial robe by a witch doctor in Africa. On the way to the ball, Marla takes on a gun-toting killer, using her cat claws, stiletto heels, and – hilariously – a puff of powder blown from her makeup compact to disarm the villain. She leaves him trussed up with a hapless and unconscious police detective by the side of the road. Miss Fury could fly a fighter plane when she had to, jumping out in a parachute dressed in a red satin ball gown and matching shoes. She was also a crack shot. 
This was an anarchic, gender flipped, comic book universe in which the protagonist and principal antagonists were women, and in which the supposed tools of patriarchy – high heels, makeup and mermaid bottom ball gowns – were turned against the system. Arch nemesis Erica Von Kampf – a sultry vamp who hides a swastika-branded forehead behind a v-shaped blond fringe – also displayed amazing enterprise in her criminal antics. Invariably the male characters required saving from the crime gangs, the Nazis or merely from themselves. Among the most ingenious panels in the strip were the ones devoted to hapless lovelorn men, endowed with the kind of "thought bubbles" commonly found hovering above the heads of angsty heroines in romance comics. By contrast, the female characters possessed a gritty ingenuity inspired by Noir as much as by the changed reality of women's wartime lives. Halfway through the series, Marla got a job, and – astonishingly, for a Sunday comic supplement – became a single mother, adopting the son of her arch nemesis, wrestling with snarling dogs and chains to save the toddler from a deadly experiment. Mills claims to have modelled Miss Fury on herself. She even named Marla's cat Peri-Purr after her own beloved Persian pet. Born in Brooklyn in 1918, Mills grew up in a house headed by a single widowed mother, who supported the family by working in a beauty parlour. Mills paid her way through New York's Pratt Institute by working as a model and fashion illustrator. In the end, ironically, it was Miss Fury's high fashion wardrobe that became a major source of controversy. In 1947, no fewer than 37 newspapers declined to run a panel that featured one of Mills' tough-minded heroines, Era – a South American Nazi-Fighter who became a post-war nightclub entertainer – dressed as Eve, replete with snake and apple, in a spangled, two-piece costume. This was not the only time the comic strip was censored. Earlier in the decade, Timely comics had refused to run a picture of the villainess Erica resplendent in her bath – surrounded by pink flamingo wallpaper. But so many frilly negligées, cat fights, and shower scenes had escaped the censor's eye. It's not a leap to speculate that behind the ban lay the post-war backlash against powerful and unconventional women. In wartime, nations had relied on women to fill the production jobs that men had left behind. Just as "Rosie the Riveter" encouraged women to get to work with the slogan "We Can Do It!", so too the comparative absence of men opened up room for less conventional images of women in the comics. Once the war was over, women lost their jobs to returning servicemen. Comic creators were no longer encouraged to show women as independent or decisive. Politicians and psychologists attributed juvenile delinquency to the rise of unconventional comic book heroines and by 1954 the Comics Code Authority was policing the representation of women in comics, in line with increasingly conservative ideologies. In the 1950s, female action comics gave way to romance ones, featuring heroines who once again placed men at the centre of their existence. Miss Fury was dropped from circulation in December 1951, and despite a handful of attempted comebacks, Mills and her anarchic creation slipped from public view. Mills continued to work as a commercial illustrator on the fringes of a booming advertising industry. In 1971, she turned a hand to romance comics, penning a seven-page story that was published by Marvel, but it wasn't her forte.
In 1979, she began work on a graphic novel Albino Jo, which remains unfinished. Despite her chronic asthma, Mills – like the reckless Noir heroine she so resembled – chain-smoked to the bitter end. She died of emphysema on December 12, 1988, and is buried in New Jersey under the simple inscription, “Creator of Miss Fury”. This year Mills’ work will be belatedly recognised. As a recipient of the 2019 Eisner Award, she will finally take her place in the Comics Hall of Fame, alongside the male creators of the Golden Age who have too long dominated the history of the genre. Hopefully this will bring her comic creation the kind of notoriety, readership and big screen adventures she thoroughly deserves. With news that the Man Booker Prize long list includes a graphic novel for the first time, the spotlight is on comics as a literary form. That’s a welcome development; the comic is one of the oldest kinds of storytelling we have and a powerful artform. Right now, the Australian comics community is producing some of the best original work in the world. Australian comics punch above their weight globally. Many have been picked up by international publishers and nominated for international and national literary awards – yet remain little known at home. Some are directed at an adult audience; some are for all ages. They tackle issues ranging from true crime to environmental ruin to life in detention. As someone who has researched comics for years – and been a fan since childhood – I want to share with you some highlights from the contemporary Australian comic scene. Here are 10 Australian comics of note, in no particular order. Reported Missing, by Eleri Mai Harris Sue Neill-Fraser’s conviction for the murder of her de-facto partner Bob Chappell in 2009 polarised the Tasmanian city of Hobart. To this day, Sue has maintained her innocence. This piece of long-form comics journalism by cartoonist Eleri Mai Harris takes readers deep into the personal impact this case has had on the families of those involved. You can read Reported Missing online here. Bottled, by Chris Gooch According to one study, mean friends can be good for you. The opposite may be true in this psychological drama, a tale of jealousy, friendship and narcissism. Bottled is a tense piece of suburban noir set in the suburbs of Melbourne, rendered stark and disjointed by Chris Gooch’s striking artwork. A Part Of Me Is Still Unknown, by Meg O’Shea Who is my birth mother? In this autobiographical story, Meg O’Shea travels to Seoul to find an answer to that question, armed with her sense of humour and imagination. This whimsical story of sliding door moments explores the emotional impact of not having solutions and the fatality of not knowing. You can read A Part Of Me Is Still Unknown here. Villawood – Notes from an Immigration Detention Centre, by Safdar Ahmed Villawood is a Walkley award-winning piece of comics journalism about the experiences of being held captive in a Sydney asylum seeker detention centre. In sharing the stories and experiences of the detainees, it lays bare the harsh realities of indefinite detention. These stories are made even more real through the inclusion of artwork created by the detainees. Their images sit alongside Safdar’s tense line work, which illustrates the realities of this brutal system. You can read Villawood online here. Home Time, by Campbell Whyte Changes are on the horizon for a group of Year Six school friends who are looking at their last summer together. 
But their suburban world is transformed after a freak accident transports them to an alternative universe. The friends find themselves in an inverse world filled with creepy gumnut babies, cups of tea and a deceptively familiar Australian landscape. With Home Time, Campbell Whyte has created an intoxicating and visually stunning Australian Narnia. Making Sense of Complexity, by Sarah Catherine Firth Sarah Catherine Firth's visual essay explores how we understand the complex systems that exist in the world around us. Through autobiographical anecdotes and humour, it covers the history of scientific thought, unpacks complex ideas and helps provide answers to complicated questions. You can read Making Sense of Complexity online here. The Lie and How We Told It, by Tommi Parrish The blurb says The Lie is about how "after a chance encounter, two formerly close friends try to salvage whatever is left of their decaying relationship". But it's much more than that. Visually, Tommi Parrish's disproportioned characters dominate the spaces and the panels they inhabit, their uneven bodies reflecting their unease with themselves and their shared history. The Lie is a beautifully poignant tale of confused identities, self-centeredness and regret. Hidden, by Mirranda Burton "Everyone sees the world in their own unique way." That's how Mirranda Burton introduces Steve, one of the intellectually impaired adults she teaches art to. But Hidden isn't about how her subjects see the world. It's about how Mirranda sees them – with care, respect and humour. Mirranda's fictionalised stories reveal how engaging meaningfully with people can shift your perspectives in beautiful and unexpected ways. The Grot, by Pat Grant with colours by Fionn McCabe If everyone you know is trying to get rich at everyone else's expense, then who can you trust? In The Grot, the world is in the wake of an unnamed environmental catastrophe, technology and society have been reduced to simple mechanics, and everyone is rushing to Felter City to make their fortunes. With The Grot, Pat Grant and Fionn McCabe have created a stained and wondrously dilapidated alternative universe of Australian hustlers and grifters fighting to survive in a new Australian gold rush. You can read The Grot online here. So Below, by Sam Wallman Sam Wallman's comic essay So Below explores ideas of land ownership and its social and political ramifications. Sam's poetic artwork guides the reader through complicated questions to reveal the communities impacted by the social construct of land ownership. You can read So Below online here. Ancient Mesopotamia, the region roughly encompassing modern-day Iraq, Kuwait and parts of Syria, Iran and Turkey, gave us what we could consider some of the earliest known literary "superheroes". But unlike the classical heroes (Theseus, Herakles, and Egyptian deities such as Horus), which have continued to be important cultural symbols in modern pop culture, Mesopotamian deities have largely fallen into obscurity. An exception to this is the representation of Mesopotamian culture in science fiction, fantasy, and especially comics. Marvel and DC comics have added Mesopotamian deities, such as Inanna, goddess of love, Netherworld deities Nergal and Ereshkigal, and Gilgamesh, the heroic king of the city of Uruk. Gilgamesh the Avenger The Marvel comic book hero of Gilgamesh was created by Jack Kirby, although the character has been employed by numerous authors, notably Roy Thomas.
Gilgamesh the superhero is a member of the Avengers, Marvel comics' fictional team of superheroes now the subject of a major movie franchise, including Captain America, Thor, and the Hulk. His character has a close connection with Captain America, who assists Gilgamesh in numerous battles. Guide to the classics: the Epic of Gilgamesh Gilgamesh and Captain America are both characters who stand apart from their own time and culture. For Captain America, this is the United States during the 1940s, and for Gilgamesh, ancient Mesopotamia. A core aspect of their personal narratives is their struggle to navigate the modern world while still engaging with traditions from the past. Gilgamesh's first appearance as an Avenger was in 1989 in the comic series Avengers 1, issue #300, Inferno Squared. In the comic, Gilgamesh is known, rather aptly, as the "Forgotten One". The "forgetting" of Gilgamesh the hero is also referenced in his first appearance in Marvel comics in 1976, where the character Sprite remarks that the hero "lives like an ancient myth, no longer remembered". In Avengers #304, …Yearning to Breathe Free!, Gilgamesh travels to Ellis Island with Captain America and Thor. The setting of Ellis Island allows for the heroes' thoughtful consideration of their shared past as immigrants. Like Gilgamesh, Thor is also from foreign lands, in this case the Norse kingdom of Asgard. In the 1992 comic Captain America Annual #11, the battle against the villainous Kang sends Captain America time-travelling back to Uruk in 2700 BCE. Captain America realises that his royal companion is Gilgamesh, and accompanies the king on adventures from the legendary Epic of Gilgamesh. In the original legend, Gilgamesh finds the key to eternal youth, a heartbeat plant, and then promptly loses it to a snake. In the comic adaptation, the snake is an angry sea serpent, who Captain America must fight to save Gilgamesh. The Mesopotamian hero's famous fixation on acquiring immortality is reflected in his Marvel counterpart's choice to leave Captain America fighting the serpent in order to collect the heartbeat plant. This leads Cap to observe his ancient friend has "a few millennia" of catching up to do on the concept of team-work! Gilgamesh is not the only hero to feature. Marvel's 1974 comic, Conan the Barbarian #40, The Fiend from the Forgotten City, features the Mesopotamian goddess of love, Inanna. In the comic, the barbarian hero is assisted by the goddess while fighting against looters in an ancient "forgotten city." Marvel's Inanna holds similar powers to her mythical counterpart, including the ability to heal. It is interesting to note the prominence of the theme of "forgetting" in comic books involving Mesopotamian myths, perhaps alluding to the present day obscurity of ancient Mesopotamian culture. It's tempting to think that Captain America's 1992 journey back to Ancient Mesopotamia was a comment on the political context at the time, particularly the Gulf War. But Roy Thomas, creator of this comic, told me via email his portrayal of Gilgamesh reflected his interest in the legend from his university days, and teaching students ancient myths at a high school. Thomas' belief in the benefits of learning myths is well founded. Story-telling has been recognised since ancient times as a powerful tool for imparting wisdom. Myths teach empathy and the ability to consider problems from different perspectives.
The combination of social and analytical skills developed through engaging with mythology can provide the foundation for a life-long love of learning. A recent study has shown that packaging stories in comics makes them more memorable, a finding with particular significance for preserving Mesopotamia's cultural heritage. The myth literacy of science fiction and fantasy audiences allows for the representation in these works of more obscure ancient figures. Marvel comics see virtually the entire pantheons of Greece, Rome, and Asgard represented. But beyond these more familiar ancient worlds, Marvel has also featured deities of the Mayan, Hawaiian, Celtic religions, and Australian Aboriginal divinities, and many others. The use of Mesopotamian myth in comic books shows the continued capacity of ancient legends to find new audiences and modern relevance. In the comic multiverse, an appreciation of storytelling bridges a cultural gap of 4,000 years, making old stories new again, and hopefully preserving them for the future. The link below is to an article reporting on the 2018 Will Eisner Comic Book Awards. For more visit: The link below is to an article reporting on Scribd dropping comics from its service. Do you use Scribd? Have you used Scribd? What do you think of the service? Traditionally, comic books have been aimed primarily at children – to such a degree that they are often identified with them. Regardless of the recent evolution of the genre, particularly given the growing popularity of more adult graphic novels, to me the link between comics and childhood continues to be very profound. There are certain regressive aspects to our love of comics and "bandes dessinées" (or BD) – as they are known in French, my native language. For example, collectors often pay incredible prices for figurines and old editions. They also have a remarkable desire to keep alive mythical characters after the death of their creator: from Batman and Astroboy to Spirou and Blake and Mortimer, characters continue to be resuscitated, with varying degrees of success. It's as if the readers who were comforted in their childhood by these heroes can't bear to see them disappear. This seems to be something particular to the medium of the comic book. Of course, we remember the novels that we loved during our childhood, but we don't read and return to them as often as our favourite comics. A thirst for innocence It's also possible to admire great works of literature, philosophy and art without the need to return to them compulsively or to spend thousands on first editions. But there is a kind of archaic drive behind our relationship with comics, an inconsolable nostalgia mixed with an irresistible desire to not completely grow up. We dismiss this phenomenon by talking about childishness. But it's more about a thirst for innocence or permanence that we keep carrying around inside us, and which comics allow us to satisfy easily. But of course, this direct link with childhood is only one aspect of graphic fiction. Comics have also been evolving. In many modern comics since the 1970s, for example, the heroes are no longer invincible – they are affected by age or their own fragility. Comic book characters increasingly are caught in linear time, which affects and transforms them, just like it does every one of us. Links with others are made and remade, injuries cause real suffering, people, including the heroes themselves, die.
They have abandoned the mythic to enter the romantic. This new relationship with time is at the heart of many celebrated graphic novels, particularly the two volumes of the Pulitzer prize-winning Maus, which reimagines the Holocaust, casting mice as the Jews and cats as the Nazis. But Art Spiegelman’s masterpiece doesn’t just deal with the Holocaust and its survivors. It is concerned with a lot of other issues: the relationship between father and son, the difficulties of communication and of forgiveness. With the death of Vladek, the narrator’s father, in the middle of the story, memory changes function and gives a new sense to the work: mourning and history are inseparable. In another way, Japanese manga such as My Father’s Journal or A Distant Neighborhood by Jirô Taniguchi pose similar questions. So too do the extraordinary biographical works accomplished by Emmanuel Guibert in The Photographer and Alan’s War. Mixing personal and universal elements, those stories are subtle and complex as the best novels. A particularly striking example is proposed by Lint, one of the recent books produced by Chris Ware which describes the life of an ordinary man, from his birth to his last breath in 70 pages. The graphic and narrative style is codified to the extreme, far removed from any apparent realism. Ware’s designs are on the edge of a diagrammatic style. And yet, when we read this book – in which each of the years of Lint’s life is reduced to a single page – we are plunged into a story that deeply moves us. This book moves us, not just because we identify with a character, as we might when watching a film, but because we identify with the medium itself. The pages of Chris Ware’s book evoke a mixture of emotions, primitive and childlike and sophisticated and adult at the same time, that appeal to a whole spectrum of experiences. This highly sophisticated graphic novel can help us to understand how comic book art is connected with childhood, even in its most subtle and modern evolutions. The simplicity of comic books is another key feature. Around 1840, Rodolphe Töpffer, inventor and first theorist of the comic book, had already started questioning the manner in which a child recognises a donkey illustrated in a linear drawing. When a donkey is represented in a picture in the middle of the countryside accompanied by a play of light and shadow, a young child can’t always immediately identify it. But if the donkey is suggested only by a few lines, the child doesn’t hesitate to recognise it. Even if a tree trunk is placed in front of the donkey in this simple linear drawing, so that only a few fragments of it are left, the child still sees it for what it is. This tells us something about the specific way we perceive caricatures, such as those in comic books. When it’s a light touch design, a caricature fixes an image in our minds which cannot be erased, as if it has unveiled somebody’s true character. Through this we can see another essential quality of the comic book: its ability to stick in our memory. In the midst of the flux of images and art surrounding us, comic books have a special and unforgettable place. They have a remarkable capacity to prolong the life of images well beyond the time of reading. The most remarkable sequences of images continue to live with us, accompanying us for years. In this regard, the nearest thing to the comic book is perhaps the song. 
I don’t think there is any song that we fall in love with immediately: we have to listen to it again and again – sometimes obsessively – until it has infiltrated us and accompanies us in our daily life. To me, comics are similar to this: they live where we dream to live. There is something unique and profound here, comic books are a privileged way of renewing the buried emotions of our childhood. It’s Asterix versus Tintin in a Clash of the Toon Titans for the Lakes International Comic Art Festival’s opening night (Friday, October 14). The Festival team have joined forces with Lancaster University to stage a battle of comic superstars. Putting the case for Tintin is Lancaster University’s new professor in Graphic Fiction and Comic Art, Benoît Peeters. Asterix is being championed by Peter Kessler, BAFTA award-winning producer and author of The Complete Guide to Asterix. The link below is to an article that takes a look at 6 Instagram profiles that feature comics you can read and follow on Instagram. If you know of others please post the URLs in the comments. For more visit:
<urn:uuid:9be3483d-f30a-4a51-a387-1bed01872d4c>
{ "dump": "CC-MAIN-2019-13", "url": "https://atthebookshelf.com/tag/comics/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203991.44/warc/CC-MAIN-20190325133117-20190325155117-00105.warc.gz", "language": "en", "language_score": 0.9530707597732544, "token_count": 5537, "score": 2.65625, "int_score": 3 }
High-fat foods are known to increase the risk of cardiovascular disease and diabetes, as well as other chronic illnesses. However, a recent study from Florida State University (FSU) found that an unhealthy, high-fat diet also impairs the olfactory system, or the sense of smell. The study, led by Dr Nicolas Thiebaud and biological science professor Debra Ann Fadool, established a link between the consumption of high-fat foods and changes in neuronal proliferation and the normal apoptotic cycle, which affects olfactory perception. The study, entitled Hyperlipidemic Diet Causes Loss of Olfactory Sensory Neurons, Reduces Olfactory Discrimination and Disrupts Odor-Reversal Learning, was conducted together with researchers from the University of West Georgia, Larry A. Ryle High School in Kentucky, and the FSU Department of Mechanical Engineering and Institute of Molecular Biophysics. Laboratory mice were fed a high-fat daily diet over a six-month period and were trained to associate a smell with a reward (water). The data showed that the group of mice on the high-fat diet was slower to learn the association than mice fed a normal diet. When a new smell was introduced, the high-fat diet group could not adapt easily. The mice fed daily high-fat diets showed a reduced ability to smell because 50 percent of the neurons responsible for olfactory function were non-functional when decoding smell signals. The study did not include any human tests, and the results in humans may differ considerably. It is also important to note that the mouse experiments involved a high-fat diet and not a high-sugar one. Obesity resulting from high-fat diets differs from obesity caused by high-sugar diets, and the effects on the olfactory system may not be the same. Other related studies examine how the olfactory system influences our preference for what we eat, and how problems with the olfactory system result in unhealthy food choices. High-fat diets are the main driver of obesity in 65 percent of adults in America. In 30 children, aged 10 to 16 years, suffering from simple obesity, odour detection thresholds were 20 percent lower than in the normal-weight group. It is well known that metabolic disturbances may be linked to simple obesity in children. According to the World Health Organization (WHO), there are more than 1.9 billion adults aged 18 and older who are overweight, 650 million of whom are obese. Around 2.8 million people die each year from complications of being overweight or obese. The olfactory system is the part of the sensory system used for smelling (olfaction). Most living things depend on their sense of smell to find food; some other species use it to sense danger. Together with the gustatory system (the sense of taste), it is known as the chemosensory system.
Problems with the sense of smell include anosmia (total loss of smell), hyposmia (reduced ability to detect odours), parosmia (distortion of familiar smells) and phantosmia (smelling an odour that is not there). These can be caused by ageing, smoking, medications including common antibiotics and antihistamines, traumatic brain injury, cancer, infection in the nasal cavities, inhalation of toxic fumes, neurodegenerative diseases and, as recently found, high-fat diets.
<urn:uuid:657ce4e0-2896-4c54-8cb6-2700ee910b91>
{ "dump": "CC-MAIN-2022-05", "url": "https://crunchytrends.com/high-fat-foods-you-could-lose-your-feeling-of-smell/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00672.warc.gz", "language": "en", "language_score": 0.9410675764083862, "token_count": 804, "score": 2.796875, "int_score": 3 }
By Richard Black Environment correspondent, BBC News website The BBC is to gather expert evidence this week on whether human-induced climate change is a crisis for planet Earth, as James Lovelock believes. James Lovelock: The Revenge of Gaia is "a wake-up call" The originator of the Gaia concept wrote in his recent book "...the fever of global heating is real and deadly". He says nuclear power is the only short-term way to provide enough energy without causing more climatic harm. The BBC has commissioned a panel of scientists to review Professor Lovelock's evidence and opinions. Panel members include top British experts on the Antarctic, climate modelling, interactions between oceans and atmosphere, and sustainable development. It will meet on Monday and Tuesday, with conclusions and comments reported on Thursday on Radio 4's Today programme and on the BBC News website. Goddess on the edge The Revenge of Gaia, published earlier this year, is the latest in a series of books in which James Lovelock has developed the Gaia theory, which takes its name from the goddess of the Earth, or the Earth Mother, in Greek mythology. The key idea is that the segment of Earth from the bottom of its crust to the top of its atmosphere acts as a self-regulating being, keeping conditions suitable for life. A subtitle for Gaia theory is "the science of planetary medicine"; and in The Revenge of Gaia, James Lovelock argues that the planetary patient is seriously unwell. "In January 2004, Sandy [his wife] and I were invited to the Hadley Centre in Exeter [part of the UK Met Office], and that visit made us both aware of the deadly seriousness of the Earth's condition," Professor Lovelock told the BBC News website. "We discussed the rapid melting of ice floating on the Arctic Ocean, and the way that Greenland's glaciers are vanishing. We talked about global heating in the tropics and the threat to the forests there, and about the response of the great boreal forests of Siberia and Canada to climate change. "It was a deeply gloomy picture; but for me the gloomiest of all things was the detached, almost academic, air with which the grim predictions were presented - almost as if we were discussing some other planet, not the Earth." Professor Lovelock intends The Revenge of Gaia to be a "wake-up call" to spread awareness that "the Earth is truly in danger". But is he right? Are the Earth's regulatory systems in crisis, with temperatures heading inexorably for a higher level, unpleasant and perhaps uncontrollable? If he is, what should we make of his contention that renewable energy and the traditional concept of sustainable development are misguided? Is he right to say that nuclear fission is the only way to provide humanity with the energy it needs until technologies such as nuclear fusion and tidal power can be introduced to a substantial extent? Does "a lack of constraint on the growth of population" lie at the root of modern environmental problems? James Lovelock's genius has perhaps been to bring such threads together into a logical whole. Are new nuclear reactors like Finland's Olkiluoto the way forward? "He is a superb scientist, an originator of the view of the Earth, including its life, as a complete interacting system and an all-round free thinker," said Professor Brian Hoskins of Reading University who will chair the panel. 
"I hope we can explore Jim's views on why the problem of climate change is so serious, and see if we can agree that it should be a clarion call for positive action rather than the bleak view that some have taken from it." Professor Lovelock is adamant that his book and his thesis are not defeatist, as some observers have suggested. "Only those lacking imagination would take the book as a counsel for despair," he said. "I am hoping that... The Revenge of Gaia will be taken seriously, together with the recognition that we may truly be in grave danger and that few of the present inhabitants of the Earth are likely to survive beyond the 21st Century. "It would be wonderful to have positive and sensible suggestions for civilised adaptation." Are the Earth's regulatory systems in crisis? Should we switch to nuclear fission? Send us your questions on the human impact on the climate now. Conclusions of the BBC panel will be reported on Thursday on Radio 4's Today programme and on the BBC News website
<urn:uuid:a44535a9-d35c-4d9b-a170-f22ce3a9c56a>
{ "dump": "CC-MAIN-2015-35", "url": "http://news.bbc.co.uk/2/hi/science/nature/5141142.stm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645298065.68/warc/CC-MAIN-20150827031458-00332-ip-10-171-96-226.ec2.internal.warc.gz", "language": "en", "language_score": 0.9600409269332886, "token_count": 921, "score": 2.8125, "int_score": 3 }
The Laboratory of Remote Sensing, Spectroscopy and Geographic Information Systems (GIS) serves the educational and research needs of collecting and processing Earth observation data using ground-based, aerial and satellite remote sensing methods. The laboratory uses spectral sensors to survey soil, water and vegetation at several observation levels (proximal sensing, satellite remote sensing and in situ field survey) and for ground validation. Most of the sensors the laboratory uses for environmental monitoring are based on the principles of spectroscopy and form part of the Internet of Things. In addition, the laboratory acquires (and in some cases develops) ground-based and aerial sensors, including unmanned aerial vehicles (UAVs), spectroradiometers, and electromagnetic and optical scanners for soil and plants. The collection of Earth observation data with appropriate spectroscopy-based sensors contributes to the creation and continual updating of spectral signature libraries for soil, water and vegetation. The laboratory also uses Geographic Information Systems (GIS) to perform spatial analysis, building spatio-temporal databases, thematic maps, and spatio-temporal simulations and fusions of these information layers, so that the corresponding services can be made available to end users. The main areas in which the laboratory exploits remotely sensed data are the agronomic applications of remote sensing, spectroscopy and GIS, covering the wider field of agricultural activity and its impact on the environment:
- Monitoring and evaluation of agricultural resources (crop mapping, digital soil mapping, agricultural water use, aquaculture mapping, land use change over time, etc.).
- Monitoring of the impact of agricultural activity on the environment (soil erosion modelling, mapping of downstream wetland and aquatic vegetation, monitoring of the quality of downstream water bodies, development of early warning systems, erosion risk assessment, desertification risk assessment, etc.).
- Precision agriculture applications.
- Estimation of qualitative and quantitative properties of soil, water and vegetation (estimation of leaf area index, evaporation, soil moisture and biomass; diagnosis of nutrient deficiencies, pathogens and other stress factors in agricultural crops, etc.; see the brief example below).
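As a purely illustrative example of the kind of spectral product that feeds such signature libraries and GIS layers, the short sketch below computes the Normalised Difference Vegetation Index (NDVI) from red and near-infrared reflectance. The band values are hypothetical, and the laboratory's actual processing chains are not described here.

```python
# Illustrative only: NDVI from red and near-infrared reflectance (0-1 scale).
# The sample reflectances below are made up for demonstration.
def ndvi(nir: float, red: float) -> float:
    """Normalised Difference Vegetation Index, in the range [-1, 1]."""
    total = nir + red
    return (nir - red) / total if total else 0.0

samples = {"dense crop": (0.55, 0.08), "bare soil": (0.30, 0.25), "water": (0.02, 0.05)}
for surface, (nir, red) in samples.items():
    print(f"{surface}: NDVI = {ndvi(nir, red):+.2f}")
```

High positive values indicate vigorous vegetation, values near zero bare soil, and negative values water, which is why such indices are convenient inputs for crop and wetland mapping layers.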
<urn:uuid:5acd18ae-2b5c-4eff-85c3-3c36eaeac24c>
{ "dump": "CC-MAIN-2019-51", "url": "http://labrsgis.web.auth.gr/en/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540507109.28/warc/CC-MAIN-20191208072107-20191208100107-00068.warc.gz", "language": "en", "language_score": 0.8789226412773132, "token_count": 459, "score": 2.96875, "int_score": 3 }
Six Surprising Facts about Air Conditioning If you’re like most people, the heating and cooling system inside your Toronto home probably isn’t something you’ve given much thought to. Air conditioning is so common it’s become just another part of everyday life. But there are a few facts about air conditioning that might surprise you. Keep reading to learn more. 1. When it was invented American inventor Willis Carrier invented the world’s first electrical air conditioning unit in 1902. But the desire to cool the air is nothing new. There are many historical examples of innovative air-cooling contraptions being used in Ancient Rome, Ancient Egypt and Han Dynasty China. 2. Why it took so long to catch on Early electric air conditioners were incredibly expensive, making it impossible for most people to have one inside their homes. It wasn’t until 1945, when Robert Sherman invented the window-mounted unit, that air conditioning began to gain in popularity. Even then, window-mounted air conditioners were still considered a luxury item. 3. How it works Everyone knows that air conditioning cools the air, but most people don’t know how it actually works. Air conditioners keep your air cool by cycling refrigerant from the inside to the outside of your home over and over. The refrigerant picks up the heat from the air indoors and then expels it outside. 4. How it changed modern medicine Air conditioning made it possible for hospitals to control the inside environment, inhibit the spread of bacteria and certain diseases, and conduct their studies in a controlled environment. Air conditioning didn’t just change medicine, it also helped save countless lives in the process. 5. It’s addictive A/C in the office, A/C in the car, A/C at home—if you’re starting to feel like you can’t live without air conditioning, you’re not alone. Studies have shown that we actually become increasingly dependent on our air conditioning units because, over time, they lessen our natural tolerance for heat. 6. How it shapes modern landscapes Some of the most appealing characteristics of older buildings—including features like large windows, high ceilings, covered balconies, and shade-providing trees—were all design elements created with air-cooling in mind. With the invention of the modern air conditioner, many of these architectural and design elements are no longer needed, changing the way we think about and design homes today. Having trouble beating the summer heat inside your home or condo? Call AAA Technical Services for friendly and affordable air conditioning service and heating and cooling advice you can depend on. We’ve proudly served Toronto and the surrounding areas for more than a decade. Contact us today to learn more about our services or to request a quote.
<urn:uuid:c9a610db-994e-419d-baba-85285de52836>
{ "dump": "CC-MAIN-2020-45", "url": "https://www.aaatechservices.ca/b/six-surprising-facts-about-air-conditioning", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876500.43/warc/CC-MAIN-20201021122208-20201021152208-00286.warc.gz", "language": "en", "language_score": 0.9401333928108215, "token_count": 584, "score": 2.78125, "int_score": 3 }
The first overall study of pericarp anatomy of Coriaria is presented to discuss its evolution and relationships within a genus. All 14 species investigated (including 11 narrowly defined species) have somewhat bilaterally flattened mature fruits with five to seven (or more) longitudinal costae. They share a usually nine-(or more-)cell-layered (at intercostal region), stratified mature pericarp, which is basically constructed by an exocarp, an outer, a middle and an inner zone of mesocarp, and an endocarp. While a multi-layered endocarp is composed of circumferentially elongate fibres, a multi-layered inner zone of the mesocarp comprises longitudinally elongate fibres. Despite its uncertain systematic value, the presence of those fibres arranged crisscross is a characteristic feature of the genus. Comparisons among species indicate that Coriaria terminalis, a species of the Eastern Hemisphere, retains a basic or archaic, well-stratified pericarp structure similar to the one found in all the species investigated of the Southern and Western Hemisphere, and that four species of Asia, Coriaria napalensis, C. sinica, C. intermedia and C. japonica, share a specialized structure (lacking the outer zone of the mesocarp) indicative of their mutual close affinity. Comparisons further suggest distinctness of Coriaria intermedia, as well as variously derived position of C. myrtifolia and C. japonica.
<urn:uuid:17ad0c9a-b6d0-40cc-988d-ae581eca1094>
{ "dump": "CC-MAIN-2017-09", "url": "http://link.springer.com/article/10.1007%2FBF02489422", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00241-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.8995597958564758, "token_count": 316, "score": 2.5625, "int_score": 3 }
This work explores the potential contribution of bioenergy technologies to 60% and 80% carbon reductions in the UK energy system by 2050, by outlining the potential for accelerated technological development of bioenergy chains. The investigation was based on insights from MARKAL modelling, detailed literature reviews and expert consultations. Due to the number and complexity of bioenergy pathways and technologies in the model, three chains and two underpinning technologies were selected for detailed investigation: (1) lignocellulosic hydrolysis for the production of bioethanol, (2) gasification technologies for heat and power, (3) fast pyrolysis of biomass for bio-oil production, (4) biotechnological advances for second generation bioenergy crops, and (5) the development of agro-machinery for growing and harvesting bioenergy crops. Detailed literature searches and expert consultations (looking inter alia at research and development needs and economic projections) led to the development of an 'accelerated' dataset of modelling parameters for each of the selected bioenergy pathways, which were included in five different scenario runs with UK-MARKAL (MED). The results of the 'accelerated runs' were compared with a low-carbon (LC-Core) scenario, which assesses the cheapest way to decarbonise the energy sector. Bioenergy was deployed in larger quantities in the bioenergy accelerated technological development scenario compared with the LC-Core scenario. In the electricity sector, solid biomass was highly utilised for energy crop gasification, displacing some deployment of wind power, and nuclear and marine to a lesser extent. Solid biomass was also deployed for heat in the residential sector from 2040 in much higher quantities in the bioenergy accelerated technological development scenario compared with LC-Core. Although lignocellulosic ethanol increased, overall ethanol decreased in the transport sector in the bioenergy accelerated technological development scenario due to a reduction in ethanol produced from wheat. There is much potential for future deployment of bioenergy technologies to decarbonise the energy sector. However, future deployment is dependent on many different factors including investment and efforts towards research and development needs, carbon reduction targets and the ability to compete with other low carbon technologies as they become deployed. All bioenergy technologies should become increasingly competitive economically with fossil-based technologies as feedstock costs fall and feedstock flexibility improves in line with technological advances. UK energy and climate change policy context The UK Government states in the Energy White Paper 2007 that the UK faces two long-term energy challenges: tackling climate change by reducing carbon dioxide emissions both within the UK and abroad, and ensuring secure, clean and affordable energy. Following a recommendation by the new Committee on Climate Change (CCC) in 2008, the UK's CO2 reduction target was increased in the Climate Change Act from 60% to 80% below 1990 levels by 2050. Renewable energy is required as part of the future UK energy portfolio in order to meet CO2 reduction targets and improve energy security. An 80% reduction in CO2 emissions by 2050, coupled with the EU target of 15% of UK energy supplied from renewables by 2020, represents ambitions that will require technology innovation and better renewable deployment, as highlighted in the 2007 Stern Review.
The IEA's Technology Perspective draws attention to the need for accelerated cost reductions and increased improvements in both new and existing energy technologies. The IEA recognises this will take a large commitment to research, development and demonstration (RD&D) from the private and public sectors. Bioenergy technologies and their potential contribution to the UK's carbon ambitions Bioenergy is one of the most prominent options to reduce CO2 emissions, if it is produced in a sustainable way, and currently contributes approximately 80% of renewable power production in the UK . Most of this, however, is from methane associated with landfill. Which bioenergy technologies are deployed in the future will depend partly on national and international policies and support, but the move towards a low carbon economy, with a price associated with carbon, is also likely to stimulate technology development for renewables . It is predicted that near future investments in European countries are likely to focus on renewables, among other energy sectors, with an emphasis on biomass . Bioenergy technologies are numerous and varied, incorporating many feedstocks, methods of conversion and supply routes to end products and end uses. In addition, bioenergy technologies can be found at all levels of maturity ranging from well established proven technologies, to new technologies that are in the research and development (R&D) phase. As a consequence, it is not possible to characterise the maturity of the bioenergy field as a whole. This can also partially explain why bioenergy research remains extensive and cross-disciplinary. Finally, such multi-disciplinarity and complexity means that it is not yet well understood how influential technological development will be for bioenergy's contribution to future low carbon energy systems. The main objective of this paper is to explore the possible contribution of emerging bioenergy technology to the UK energy system by 2050, by outlining the potential for accelerated technology development (ATD) of bioenergy systems in the UK. We aim to gain insight into how bioenergy technologies may contribute to meeting the 80% carbon reduction targets for a low carbon energy system in the UK, by using a MARKAL model complemented by relevant qualitative storylines highlighting key factors underpinning modelled technological development. The UK Energy Research Centre Energy 2050 project and modelling context The UK Energy Research Centre (UKERC) Energy 2050 project focuses on how the UK energy system may move towards a resilient, low-carbon system by 2050, while providing energy security . By using a set of four core scenarios ('Reference', 'Low Carbon', 'Resilient' and 'Low Carbon Resilient'), and variant scenarios (such as the 'Accelerated Technology Development' scenarios), the project incorporates the policy, environmental and social aspects which may lead to possible future UK energy systems . As part of the UKERC Energy 2050 project, UK-MARKAL (MED) was used to explore the possible contribution of accelerated technology development to the uptake of bioenergy-based technologies in the UK energy system by 2050. MARKAL is a technology-rich, least cost optimisation model which has been used in the past to inform energy policy . A fuller description of the UK MARKAL Energy System Model is described by Strachan et al. and the way bioenergy is modelled by Jablonski et al. . 
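To give a flavour of the least-cost logic that drives a MARKAL-type model, the following minimal sketch (in Python, using SciPy's linear-programming routine) chooses a generation mix that meets a fixed demand at minimum cost under a CO2 cap. It is an illustration only: the technology names, costs, emission factors, demand and cap are invented for the example and are not UK-MARKAL data, and the real model optimises over thousands of technologies, vintages and time periods simultaneously.

```python
# Minimal illustration of least-cost technology choice under a CO2 cap.
# All numbers are invented for the example; they are not UK-MARKAL data.
from scipy.optimize import linprog

techs = ["gas_ccgt", "biomass_gasification", "wind"]
cost = [45.0, 60.0, 70.0]        # generation cost, GBP per MWh (assumed)
emissions = [0.35, 0.02, 0.0]    # tCO2 per MWh (assumed)
demand_mwh = 1000.0              # energy that must be supplied
co2_cap = 100.0                  # allowed emissions, tCO2

# Minimise total cost subject to:
#   emissions @ x <= co2_cap   (carbon constraint)
#   sum(x) == demand_mwh       (energy balance)
#   x >= 0
result = linprog(
    c=cost,
    A_ub=[emissions], b_ub=[co2_cap],
    A_eq=[[1.0] * len(techs)], b_eq=[demand_mwh],
    bounds=[(0.0, None)] * len(techs),
)

for name, gen in zip(techs, result.x):
    print(f"{name}: {gen:.1f} MWh")
print(f"total cost: {result.fun:.0f} GBP")
```

Tightening the CO2 cap in this toy problem shifts generation from the cheap, carbon-intensive option towards the low-carbon ones, which is essentially how a carbon constraint draws bioenergy into the MARKAL solution.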
Bioenergy pathways are represented in MARKAL by more than 100 directly relevant technologies in the different modules of the model (and more than 200 indirectly relevant ones). Figure 1 provides a simplified representation of the bioenergy conversion pathways in MARKAL (highlighting the lignocellulosic ethanol, gasification and fast pyrolysis pathways). Figure 1. UK-MARKAL (MED) simplified bioenergy chains, with gasification, lignocellulosic bioethanol and fast pyrolysis bio-oil highlighted. Overview of the methodological framework The methodology is summarised in Figure 2. The research focuses on the modelling of ATD in MARKAL, its qualitative characterisation for the case of the UK, and the possible implications of such technological development for the UK energy system. It was essential to the ATD exercise that all scenario runs were underpinned by the development of qualitative information on R&D needs and potential. Figure 2. Methodology framework used to assess the potential accelerated technology development (ATD) of bioenergy chains. Criteria used for (and results of) bioenergy technology selection In order to explore accelerated technology development of bioenergy, this research focused specifically on accelerated development of the bioenergy technology field, not accelerated deployment. Bioenergy is a wide field, representing a large number of chains with many feedstocks, conversions and supply routes that feed into heat, power and liquid biofuels in the UK, and it was not possible to study all pathways in detail. To focus on a limited number of bioenergy chains, a set of technologies was chosen which (i) had the greatest potential for technology development and commercial deployment; (ii) were represented in the MARKAL model; and (iii) had the potential to be environmentally sustainable in the long term, with the focus on bioenergy pathways with the potential to be technically available, assuming that no additional pressures on biodiversity, soil or water resources are exerted compared with a development without bioenergy production, in line with the 2006 European Environment Agency report. Based on these criteria, the chains and technologies selected for an extensive exploration of the potential for ATD of bioenergy systems in the UK included (Table 1) the following. The conversion of lignocellulosic second generation feedstock to bioethanol This was chosen because considerable technological advances are likely and because liquid biofuels provide one of the few options for fossil fuel replacement in the short to medium term, with the potential to offer both greenhouse gas savings and energy security. Gasification of solid biomass Although this is not a new technology, gasification was selected as it is a process working towards deployment at demonstration and commercial scale, and further technology development is possible. This modelling exercise focused on gasification of dedicated energy crops used directly for electricity generation, rather than on technologies where biomass is first converted into biogas and upgraded into bio-methane before being transformed into electricity through a (natural) gas turbine. Fast pyrolysis of biomass for the production of bio-oil This is a technology largely at the early commercial stage; however, the production of transport fuels via fast pyrolysis is still in the R&D stage with potential for further advances.
This exercise focused on the pyrolysis of wood to produce bio-oil. Within the model, this bio-oil can go to three possible pathways: further pyrolysis to hydrogen, leading to the transport module; upgrade of pyrolysis oil into light fuel oil, which goes to the industry, residential or services sector; or upgrade of pyrolysis oil into bio-diesel, which goes to electricity production or transport. Bioengineering of energy crops This represents one of the underpinning technologies, as feedstock prices underpin many of the costs associated with bioenergy chains. The focus was on improvements through better breeding to advance dedicated second generation energy grasses and trees, not food crops. We focused on non-GM crops and domestic (UK) crops. The focus on domestic crops only was to reflect long-term environmental sustainability goals. Agro-machinery for growing and harvesting energy crops The other underpinning technological improvement selected was the potential for improved machinery for growing and harvesting dedicated bioenergy crops. Good site preparation and weed elimination are highly influential on the performance of many energy crops, and improvements in these areas are important. There are also crop losses associated, for example, with inefficient harvesting and pick-up of cut energy crops, which need to be addressed. Improvements in both agro-machinery and bioengineering of energy crops are likely to affect learning curves and supply costs for multiple bioenergy chains. Data collection for the ATD Bioenergy for UK-MARKAL (MED) modelling The current status of each of the five chains and their potential for acceleration were assessed using data collected on both a qualitative (R&D needs, UK and international research efforts, and policy considerations) and a quantitative basis. This information was obtained from published literature, government reports and expert consultation. Qualitative information for the scenarios was used to estimate possible future technology developments until 2050, through processes such as gradual changes, step changes and innovation, as well as gaining an understanding of the possible milestones for each technology. The ATD bioenergy quantitative dataset was compiled for each of the five technologies using optimistic figures from literature searches and expert consultation to represent accelerated technology development from 2000 to 2050. The information available varied widely between the five technologies chosen. Accordingly, the assumptions and the process by which the data in the literature was used to determine the accelerated dataset are described below for each technology separately. The data collected consisted of figures on capital cost; operating and maintenance cost; technical efficiency, defined as the ratio of the useful energy output of a conversion to the energy input; annual availability, defined as the share of the installed capacity that is used during a year (average share of the year); plant lifetime in the case of electricity generation; and, for biotechnological advances in second generation bioenergy crops, energy content and yield. Since the UK-MARKAL database costs are in pounds sterling (GBP), all cost data were converted to GBP on a year 2000 basis.
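The sketch below shows one way the collected figures could be held in a consistent structure. The field names follow the parameter definitions above, while the exchange rates and price-year deflators used in the conversion helper are illustrative assumptions rather than values taken from the study.

```python
# Illustrative data structure for the collected techno-economic figures.
# Exchange rates and deflators below are assumptions, not study values.
from dataclasses import dataclass

@dataclass
class VintageParameters:
    year: int               # vintage year in the model
    capital_cost: float     # GBP(2000) per unit of capacity, e.g. GBP/kWe
    fixed_om: float         # GBP(2000) per unit of capacity per year
    variable_om: float      # GBP(2000) per unit of output, e.g. GBP/GJ
    efficiency: float       # useful energy out / energy in, 0-1
    availability: float     # share of installed capacity used over a year, 0-1
    lifetime_years: int     # plant lifetime (electricity generation only)

ASSUMED_FX_TO_GBP = {"GBP": 1.00, "USD": 0.66, "EUR": 0.61}
ASSUMED_DEFLATOR_TO_2000 = {2000: 1.00, 2005: 0.93, 2010: 0.85}

def to_gbp_2000(value: float, currency: str, price_year: int) -> float:
    """Convert a quoted cost to GBP on a year-2000 basis (illustrative factors)."""
    return value * ASSUMED_FX_TO_GBP[currency] * ASSUMED_DEFLATOR_TO_2000[price_year]

# e.g. a capital cost quoted as 1,000 USD/kWe in 2005 prices
print(round(to_gbp_2000(1000.0, "USD", 2005), 1), "GBP(2000) per kWe")
```

Keeping every vintage in a record of this shape makes it straightforward to compare the LC-Core and accelerated datasets parameter by parameter.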
Overview of the modelled bioenergy ATD scenarios Once the data on accelerated technology development for selected bioenergy pathways was compiled, it was included in the modelling of five different 'accelerated' scenarios (Table 2) to explore how bioenergy technologies may penetrate the UK energy market if technology development is accelerated. These accelerated scenarios were built around the UKERC Energy 2050 project scenarios. The scenarios were produced as a 'what-if' exercise to determine how accelerated technology development could influence the future energy mix to reflect possible technology improvements through present and future R&D efforts and, therefore, should not be taken as being predictive. Table 2. Description of the five scenarios run as part of the accelerated technology development (ATD) scenarios. This paper focuses on the contribution of bioenergy to decarbonising the energy system; however, further exploration of the other technology scenarios can be found in the forthcoming ATD report from UKERC Energy 2050 . Application to selected bioenergy chains in the UK The Accelerated Bioenergy scenario; how acceleration was modelled The representation of technological development in MARKAL has been done by the introduction of technologies' vintages (that is, similar technologies available at different times) with differing parameters corresponding to technological evolution, to represent learning effects or other advances in technology development. These parameters include capital cost, efficiency, operating and maintenance (O&M) costs, both fixed and variable, and where appropriate, availability, contribution to peak load, and plant life time. It is important to have an understanding of the R&D needs of a technology pathway when assessing its potential for accelerated technology development. Understanding the major hurdles to development and deployment is also critical when considering the likelihood of technology breakthroughs and step changes within a technology pathway. Bioenergy is diverse and flexible, covering many feedstock resources, conversion pathways and outputs . As such, there are unique R&D needs for each of these different elements of the bioenergy chain. There are, however, two critical areas of R&D for the bioenergy field as a whole: improving crops and improving conversion technologies [4,16]. The development of new dedicated bioenergy crops for feedstocks is one of the most fundamental R&D needs for bioenergy, as this underpins the development and cost of many bioenergy conversion technologies . The UKERC Research Atlas for bioenergy identifies research challenges for bioenergy over the next 5 years including the development and delivery of new cultivars from past and current research and breeding of dedicated energy crops. In the next 10 years, there is a need to improve the total yield and develop new genotypes for a range of bioenergy crops, including oil seed crops, aquatic biomass, woody lignocellulose and grasses. R&D needs for second generation energy crops include new genotypes and selective breeding to increase yields and system efficiencies, such as improving stress tolerance, disease and pest resistance, increased photosynthetic, nitrogen and water use efficiency and increased biomass production (Table 1) [4,16-19]. It is likely that a 30% increase on current yield will be possible over the next 10 to 15 years, using traditional breeding and selection . 
Advances in biotechnology of second generation bioenergy crops will additionally help to make feedstocks cheaper, which is important for technologies such as the production of lignocellulosic ethanol, gasification and fast pyrolysis, which require cheaper feedstocks if overall costs are to be reduced (J. Brammer, J. Rogers, unpublished data). Improved establishment of dedicated bioenergy crops on marginal and idle land, as recommended by the Gallagher Review, would also help to reduce land competition and avoid displacement of food crops, possibly increasing the social acceptance of bioenergy. This could increase the land area available to produce energy crops. Advances in site preparation, weed elimination and improvements in the agro-machinery used to grow and harvest dedicated bioenergy crops are also needed (C. Panoutsou, unpublished data). Improvements include increasing not only the engine/fuel efficiency of agro-machines, but also their efficiency at picking up the harvested crop to reduce crop losses, integrating different crop types with different harvest times, and producing better irrigation systems which can cope with particles contained in recycled water (C. Panoutsou, unpublished data). Technical improvements in existing conversion technologies such as gasification, and novel conversion technologies like fast pyrolysis, are also needed. R&D needs for both of these technologies include increasing conversion efficiency, reducing overall technology costs, increasing fuel flexibility so that a variety of new energy crops can be utilised as feedstocks and improving product quality through gas cleaning in gasification and producing cleaner bio-oil from fast pyrolysis [11,18,21-25]. All of these improvements will push gasification and fast pyrolysis technologies towards commercial deployment through increased economic viability, via the ability to scale up. The poor economic competitiveness of biofuels compared with conventional fuels is a key barrier to the deployment of biomass in the transport sector. In order to stimulate a more efficient and sustainable conversion from lignocellulosic biomass to ethanol, key R&D needs include the improvement of feedstock flexibility and quality to enable easier breakdown of cell walls, in particular less lignin, but also development of in-situ enzyme systems for wall disassembly [10,26]. Better conversion technologies, with less pre-processing and lower-cost enzymes, are also required to make lignocellulosic ethanol more economically competitive with conventional fuel. Table 1 lists the rationale for, and the parameters changed to represent, accelerated technology development for each of the five bioenergy chains, while Table 3 compares how accelerated technological development was represented in UK-MARKAL (MED) for the ATD scenarios to reflect possible technology improvements through present and future R&D efforts. The changes between the LC-Core scenario and the ATD scenarios are outlined in more detail below. Although these datasets are based on extensive literature reviews and expert consultation, it is important to highlight that these cost reductions are very uncertain and all figures should be taken with caution. Table 3. LC-Core and ATD Bioenergy data for UK-MARKAL.
Major technology improvements and accelerated development which will reduce overall costs and increase the efficiency of lignocellulosic conversion to ethanol are expected from a combination of improvements in feedstock quality, with reduced lignin for better breakdown of cell walls, cheaper enzymes and more efficient conversion technologies, which require less pre-processing, plus an improvement in the fermentation process (Table 1). Parameters changed within MARKAL included capital costs and O&M costs. All other costs associated with this technology were kept the same as in the core scenario. The parameters used to model lignocellulosic ethanol conversion were changed as follows to represent accelerated development (Table 3). Lignocellulosic ethanol is available in the model from 2010. In the core scenario, it is modelled with an annual availability of 100%, which was reduced slightly to 90% in the accelerated scenario. In the core scenario the efficiency is modelled as 90%. Although this figure is too high, with expected efficiencies of around 30 to 40% (R. Slade, unpublished data), the efficiency was kept at 90% in order to keep it comparable to the non-accelerated scenario and to avoid any drastic model 'deceleration', as this would be contrary to the aim of the exercise. Laser et al. suggested that mature cellulosic ethanol technology could reach efficiencies of 68% in combination with GTCC. The figures used to represent accelerated development, therefore, should be taken with caution, and are used as an illustration only, not a prediction of technical improvement. The investment costs of lignocellulosic ethanol conversion in the core scenario are 23 GBP.GJ-1, which is in line with recent US Department of Energy (DOE) research. In the accelerated scenario, investment costs were reduced following the trend indicated by the DOE to reach 14 GBP.GJ-1 by 2050. These changes in investment costs are likely to occur as economies of scale are obtained when it is possible to construct larger plants (in line with increased investors' trust, better access to capital etc.) (R. Slade, unpublished data). For the variable O&M costs, the value used in the core scenario (1.9 GBP.GJ-1) appears too high. In the accelerated scenario, O&M costs were reduced to 5% of the investment costs in 2000 and 2010, and to 2% of the total investment costs by 2050. These percentages are in line with expert analysis and estimates on expected development of the technology (R. Slade, unpublished data).
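To illustrate how these cost assumptions can be turned into year-by-year model input, the short sketch below interpolates the lignocellulosic ethanol investment cost from 23 GBP.GJ-1 in 2010 to 14 GBP.GJ-1 in 2050 and derives variable O&M as a share of the same-year investment cost (5% in 2000 to 2010, falling to 2% by 2050). The anchor values come from the text above; the linear shape of the decline between them is an assumption made here purely for illustration.

```python
def interpolate(year, anchors):
    """Piecewise-linear interpolation between {year: value} anchor points."""
    years = sorted(anchors)
    if year <= years[0]:
        return anchors[years[0]]
    if year >= years[-1]:
        return anchors[years[-1]]
    for y0, y1 in zip(years, years[1:]):
        if y0 <= year <= y1:
            frac = (year - y0) / (y1 - y0)
            return anchors[y0] + frac * (anchors[y1] - anchors[y0])

# Anchor points taken from the scenario description (GBP per GJ).
investment_cost = {2010: 23.0, 2050: 14.0}
# Variable O&M expressed as a share of the same-year investment cost.
om_share = {2000: 0.05, 2010: 0.05, 2050: 0.02}

for year in (2010, 2020, 2030, 2040, 2050):
    inv = interpolate(year, investment_cost)
    var_om = interpolate(year, om_share) * inv
    print(f"{year}: investment {inv:5.1f} GBP/GJ, variable O&M {var_om:4.2f} GBP/GJ")
```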
The basis for acceleration of gasification technologies comes from improved production of cleaner gases and cheaper feedstock, coupled with increased fuel flexibility, which will help to reduce the overall costs of gasification and increase the feasibility of scaling up (Table 1). Currently, feedstock accounts for around one third of the costs associated with gasification, so, in combination with the above, an increase in feedstock flexibility would greatly help to reduce the overall costs of this technology (J. Rogers, unpublished data). The main figures used for the cost assessment of gasification were obtained from the Department of Trade and Industry (DTI) (a forerunner of the Department for Business, Enterprise and Regulatory Reform (BERR)) in the UK and the National Renewable Energy Laboratory (NREL) Power Technologies Energy Data Book in the US, and the figures used are therefore based on the assumptions within these reports. Where data were unavailable, costs to 2050 were extrapolated via cost curves from the optimistic figures given in the energy crop gasification literature. Energy crop gasification is represented in MARKAL by existing gasification (2000) and a number of technology 'vintages' in the model, at 2010, 2020, 2030 and 2040. The following data were used for the ATD gasification scenario (Table 3). The annual availability in 2000 was modelled at 83% in the core scenario. For the accelerated technology scenario, this was increased to 85% in 2000, in line with the DTI economics report, and increased gradually to 89% by 2050. The efficiency data for 2000 were kept at the same starting point for both scenarios (32%), but were increased to 47% in 2010 and 2020, and to 50% in 2030 and 2040, in the accelerated scenario. Capital costs for the accelerated scenario were kept the same in 2000 as in the core scenario at 2,200 GBP.kWe-1, reducing to 1,450 GBP.kWe-1 in 2010 and 700 GBP.kWe-1 by 2020, as reported by the DTI; this reduction could occur from a combination of factors. This cost was then assumed to level off after 2020, based on technology assumptions from the literature. Under the accelerated technology scenario, the fixed O&M costs in 2000 were kept at 66 GBP.(kWe.yr)-1, decreasing to 51.5 GBP.(kWe.yr)-1 in 2010 and to 30 GBP.(kWe.yr)-1 by 2030 to 2040, to be consistent with figures reported in the DTI economics report and the NREL data book [25,29].

The accelerated development of fast pyrolysis, and therefore the ATD figures, are based on producing cleaner bio-oil, improving processing and increasing the fuel flexibility of fast pyrolysis (Table 1). Although there were a number of studies that examined the economics of fast pyrolysis, many were unsuitable because they either measured the cost of bio-oil production rather than the capital cost, or did not present enough information to convert their cost figures into a capital cost comparable with that found in the model. The one study that was applicable was from the DTI. This study presented low, medium and high levelised capital cost estimates from 2005 to 2020 in GBP.MWh-1. The DTI's medium scenario was in line with the LC-Core scenario data. In order to show the potential for acceleration of fast pyrolysis, in keeping with the discussions with experts (J. Rogers, J. Brammer, unpublished data) and the literature available, the costs were kept the same for both the core and accelerated scenarios in 2005 at 32.4 GBP.GJ-1, but were then reduced linearly in the accelerated scenario to reach 25.6 GBP.GJ-1 (the DTI's low estimate) by 2020 (Table 3). The capital costs were kept level from 2020 to 2050, although lower costs might be possible if feedstocks become cheaper (J. Rogers, unpublished data). The efficiency of fast pyrolysis in the accelerated scenario did not change from the 90% found in the core scenario. Variable O&M costs were modelled in 2000 in both scenarios at 3 GBP.GJ-1. In the accelerated scenario, however, O&M costs were reduced after 2000 to represent a figure of 4% of the capital costs, dropping to 1 GBP.GJ-1 by 2050 (to be consistent with the literature).
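The fast pyrolysis trajectory just described lends itself to a simple sketch: the snippet below reproduces the linear capital-cost reduction from 32.4 GBP.GJ-1 in 2005 to 25.6 GBP.GJ-1 in 2020 (held level thereafter) and applies a variable O&M charge of 4% of the capital cost, which falls to roughly the 1 GBP.GJ-1 end point quoted above. The anchor values and the 4%-of-capital rule come from the text; treating O&M as exactly proportional to capital in every year is an assumption made here for illustration.

```python
def pyrolysis_capital_cost(year):
    """Accelerated-scenario capital cost for fast pyrolysis, GBP per GJ.
    Linear decline from 32.4 (2005) to 25.6 (2020), then held level to 2050."""
    start_year, start_cost = 2005, 32.4
    end_year, end_cost = 2020, 25.6
    if year <= start_year:
        return start_cost
    if year >= end_year:
        return end_cost
    frac = (year - start_year) / (end_year - start_year)
    return start_cost + frac * (end_cost - start_cost)

OM_SHARE = 0.04  # variable O&M as a share of capital cost (assumed constant here)

for year in (2005, 2010, 2015, 2020, 2035, 2050):
    cap = pyrolysis_capital_cost(year)
    print(f"{year}: capital {cap:5.1f} GBP/GJ, variable O&M {OM_SHARE * cap:4.2f} GBP/GJ")
```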
Bioengineering of energy plants

Acceleration of bioengineering of energy plants focused on domestic energy crops (within the UK) to reflect the environmental sustainability criteria. Imported energy crops were not accelerated. Improvements in the yield of energy crops are predicted to be the major factor that will accelerate the development of energy crops (Table 1), and therefore future crop costs were based on a doubling of the average yield by 2050 and, to some extent, on improvements in agro-machinery for growing and harvesting energy crops. A literature review of energy crop costs highlighted the wide range of plants suitable as bioenergy crops. Data obtained for this scenario, however, focused only on those crops which are suitable to be grown in the UK (miscanthus, willow, switchgrass and poplar). Although there is a wide variety of bioenergy crops with different crop costs, MARKAL uses an average figure to represent all energy crops. The estimates for crops from the literature (e.g. [30-34]) were therefore averaged to give a single 'energy crop' cost, to be consistent with current methods used in the model. All costs found in these studies were converted into a comparable unit (GBP.GJ-1) using an assumption of an average yield of 12 t.(ha.yr)-1, increasing to a future yield of 24 t.(ha.yr)-1 in 2050. To model accelerated development of energy crops, costs for 2000 were kept the same as in the core scenario at 3.61 GBP.GJ-1, but in the ATD scenario this was decreased to 2.9 GBP.GJ-1 in 2010 and 1.45 GBP.GJ-1 by 2050, to represent a gradual improvement in biotechnology from 2000 to 2050 (Table 3). As gradual improvements (rather than step changes) are expected, the costs were modelled as a linear decline between these cost points. In addition to reducing the crop costs, the predicted increase in yield would also increase the upper bounds of available crops (with higher yields, more crops can be grown on the same amount of land). Therefore, in the accelerated scenario, the upper bound of domestic energy crops available was doubled to reflect the doubling of yield. No separate changes were made to the accelerated development dataset for improvements in agro-machinery (Table 1). Improvements in agro-machinery are expected to be one of the factors influencing the declining costs of growing and harvesting energy crops; accordingly, these improvements were included as a factor affecting the accelerated data for bioengineering of energy plants, as both of these underpinning technologies are represented as one resource cost in MARKAL.
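The unit conversion described above can be shown with a short calculation. The sketch below converts an annualised growing cost per hectare into GBP.GJ-1 using a yield and an energy content for the harvested biomass, and shows why, all else being equal, doubling the yield roughly halves the cost per GJ. Both the per-hectare cost and the 18 GJ per dry tonne heating value are illustrative assumptions chosen for this example, not figures taken from the study.

```python
def crop_cost_per_gj(cost_per_ha_yr, yield_t_per_ha_yr, energy_gj_per_t):
    """Convert an annualised crop cost (GBP per hectare per year) into a
    cost per unit of delivered energy (GBP per GJ)."""
    energy_per_ha_yr = yield_t_per_ha_yr * energy_gj_per_t  # GJ per hectare per year
    return cost_per_ha_yr / energy_per_ha_yr

# Illustrative inputs only (assumed values, not from the study).
COST_PER_HA_YR = 780.0   # GBP per hectare per year
ENERGY_CONTENT = 18.0    # GJ per dry tonne, assumed typical for woody biomass

current = crop_cost_per_gj(COST_PER_HA_YR, yield_t_per_ha_yr=12.0, energy_gj_per_t=ENERGY_CONTENT)
future = crop_cost_per_gj(COST_PER_HA_YR, yield_t_per_ha_yr=24.0, energy_gj_per_t=ENERGY_CONTENT)

print(f"Cost at 12 t/(ha.yr): {current:.2f} GBP/GJ")  # ~3.6 GBP/GJ with these inputs
print(f"Cost at 24 t/(ha.yr): {future:.2f} GBP/GJ")   # ~1.8 GBP/GJ, i.e. roughly halved
```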
Bioenergy penetration in the Bioenergy ATD scenario

Overall, there is a larger uptake of bioenergy in the ATD Bioenergy scenario than in the LC-Core scenario. The increased uptake of bioenergy in the ATD scenario appears to be due to the availability of cheaper resources (energy crops). Although energy crops are utilised in both scenarios in 2010, there is a much larger uptake of bioenergy crops across all vintages in the ATD scenario from 2010 to 2050 (Figure 3). The land available for energy crop production is not fully utilised in the LC-Core scenario and produces a maximum of 113 PJ of domestic energy crops. The production of energy crops in the ATD scenario, however, reaches a physical constraint when all available domestic land for energy crop production is utilised in 2030 (at 415 PJ). Energy crops continue to increase in terms of PJ after 2030 in the ATD scenario due to the accelerated assumption of increasing yields, which allows for increased energy from energy crops on the same amount of land. Accordingly, in 2050, there are 679 PJ of energy crops in the ATD scenario (compared with 113 PJ in LC-Core).

Figure 3. Energy crop production. LC-Core (blue) and the accelerated technology development (ATD) Bioenergy scenario (green).

The production of electricity from biomass (primarily from gasification) reaches a peak of 277 PJ of electricity (roughly 19% of total electricity generation) in 2035 in the ATD scenario, compared with a peak of only 62 PJ in the LC-Core in 2025 (Figure 4). This increase in uptake is largely due to an increased adoption of gasification technologies in the ATD scenario. Gasification of solid biomass (energy crops) is first selected as a viable option for electricity generation in 2010 in both scenarios. However, high levels of energy crop gasification are deployed for electricity generation in the ATD scenario, whereas gasification is not deployed after 2010 in the LC-Core scenario.

Figure 4. Electricity produced from biomass. LC-Core (blue) and the accelerated technology development (ATD) bioenergy scenario (green).

In the ATD scenario, the use of solid biomass for electricity generation increases until 2040, when it reaches 481 PJ of energy crops. After 2040 the use of energy crops for electricity generation decreases, reaching 284 PJ by 2050 (Figure 5). This decline occurs because energy crops are shifted away from electricity production to be used for heating in the residential sector after 2040.

Figure 5. The distribution of solid biomass for electricity production. LC-Core (blue) and the accelerated technology development (ATD) bioenergy scenario (green).

The deployment of gasification in the ATD Bioenergy scenario has an effect on the deployment of other electricity generation technologies. It takes significant market share from wind from 2015 to 2050 and from nuclear in the medium term (2025 to 2035). The deployment of gasification in the ATD scenario also has some smaller effects on the levels of adoption of other bioenergy technologies. For instance, biomass district heating technologies (heat only) are used more in the ATD Bioenergy scenario than in the LC-Core scenario in the short term, but significantly less in the longer term. In addition, the deployment of gasification also means that biomass CHP plant (LTH) is never deployed in the ATD scenario, whereas it was deployed from 2035 onwards in the LC-Core scenario. Accelerated technology development of bioenergy creates changes in the residential heating sector when compared with the LC-Core scenario (Figure 6). There is an uptake of solid biomass (from energy crops) in the residential/service sector in 2045 in both scenarios, but it is much higher in the ATD scenario. In the LC-Core scenario there are 80 PJ for residential heat by 2050, while in the ATD scenario there are 395 PJ by 2050.

Figure 6. The use of biomass in the residential heating sector. LC-Core (blue) and the accelerated technology development (ATD) bioenergy scenario (green).

In the service sector, woodchips are displaced by pellets (from energy crops) from 2040 onwards in the ATD scenario, unlike the LC-Core scenario, where wood is used until 2050. The increasing use of energy crops for the residential and service sectors in the long term corresponds to the timing of the declining use of energy crops for electricity and the continuing increase in production of energy crops. Overall, final energy demand from biomass in the transport sector does not differ significantly between the LC-Core and ATD Bioenergy scenarios.
The total transport fuel demand is the same in both scenarios until 2035 to 2040, when the total transport fuel demand in the ATD is slightly higher (152 PJ) than in the LC-Core (142 PJ) (Figure 7). However, by 2050 the transport fuel mix in the ATD scenario has more conventional transport fuels than the LC-Core. In the ATD scenario, there is less ethanol and more biodiesel, diesel and petrol than in the LC-Core scenario.

Figure 7. Final energy demand for biofuels in the transport sector. LC-Core (blue) and the accelerated technology development (ATD) bioenergy scenario (green).

The overall impact of acceleration on ethanol is negative because less domestic ethanol is produced in the ATD scenario. Imported ethanol remains at similar levels in both scenarios. There are two pathways for the production of ethanol in MARKAL: traditional straw fermentation and lignocellulosic conversion to ethanol. In the ATD scenario, traditional ethanol from wheat straw fermentation is deployed later and at lower levels (170 PJ in LC-Core vs. 103 PJ in ATD Bioenergy). However, there is an increase in the uptake of the accelerated lignocellulosic ethanol in the ATD scenario from 2035 onwards (this technology was not deployed after 2035 in the LC-Core scenario). This increase in lignocellulosic ethanol is smaller than the decrease in traditional wheat ethanol, and therefore there is an overall reduction in the level of ethanol in the ATD scenario. The overall reduction in ethanol deployment and increase in conventional fuels in the ATD scenario suggests that, under the least cost assumptions of the model and the accelerated bioenergy assumptions, it is more economical to use biomass to decarbonise the electricity and residential heat sectors. As a result, in the accelerated scenario, there is more decarbonisation of electricity and heat and less need for more expensive transport sector decarbonisation.

Bioenergy in the Aggregated Accelerated scenarios

LC Acctech (60%) no fuel cells

When all the technologies are accelerated together in the LC Acctech (no fuel cells) scenario at 60% carbon reduction, there is less biomass for electricity after 2040 than there was in ATD Bioenergy (Figure 8). This is likely due to the abundance of other cheap alternatives for electricity production. However, there are dramatically higher levels of biomass being used for residential heat in LC Acctech (no fuel cells) (60%) than in ATD Bioenergy (Figure 9). Whereas in 2050 there are 381 PJ of residential biomass in ATD Bioenergy, there are 564 PJ in LC Acctech (no fuel cells) (60%). None of the other technologies accelerated in this scenario offers a competing low carbon option for heat, and therefore more biomass is used for heat than electricity. Biomass for transport changes less noticeably between ATD Bioenergy and LC Acctech (no fuel cells) (60%) than it does for the electricity and heat sectors (Figure 10). However, there is a small increase in use of biomass for transport biofuels in LC Acctech (no fuel cells) (60%) as compared with ATD Bioenergy.

Figure 8. Biomass for electricity production in the aggregate scenarios. LC-Core (60%) (dark blue); accelerated technology development bioenergy (ATD Bioenergy (60%)) (purple); LC Acctech without fuel cells (60%) (LC Acctech (no FC) 60%) (aqua); LC Acctech with fuel cells (60%) (LC Acctech 60%) (blue); LC-Core (80%) (red); LC Acctech without fuel cells (80%) (LC Acctech (no FC) 80%) (yellow); LC Acctech with fuel cells (LC Acctech 80%) (green).
Percentage value corresponds to carbon reduction targets.

Figure 9. Residential heat from biomass in the aggregate scenarios. LC-Core (60%) (dark blue); accelerated technology development bioenergy (ATD Bioenergy (60%)) (purple); LC Acctech without fuel cells (60%) (LC Acctech (no FC) 60%) (aqua); LC Acctech with fuel cells (60%) (LC Acctech 60%) (blue); LC-Core (80%) (red); LC Acctech without fuel cells (80%) (LC Acctech (no FC) 80%) (yellow); LC Acctech with fuel cells (LC Acctech 80%) (green). Percentage value corresponds to carbon reduction targets.

Figure 10. Biomass for transport (biofuels) in the aggregate scenarios. LC-Core (60%) (dark blue); accelerated technology development bioenergy (ATD Bioenergy (60%)) (purple); LC Acctech without fuel cells (60%) (LC Acctech (no FC) 60%) (aqua); LC Acctech with fuel cells (60%) (LC Acctech 60%) (blue); LC-Core (80%) (red); LC Acctech without fuel cells (80%) (LC Acctech (no FC) 80%) (yellow); LC Acctech with fuel cells (LC Acctech 80%) (green). Percentage value corresponds to carbon reduction targets.

LC Acctech (80%) no fuel cells

When accelerating all the technologies together at an 80% carbon reduction, there are again major changes to the distribution of biomass. While there are still high levels of biomass being utilised in LC Acctech (no fuel cells) (80%), the biomass is being distributed to the sectors differently. In LC Acctech (no fuel cells) (80%) there is much less biomass deployed for electricity production (a peak of 150 PJ as opposed to 290 PJ in LC Acctech (no fuel cells) (60%)) (Figure 8). There is also a significant reduction in biomass deployed to residential heating (Figure 9). In 2050 in the 60% scenario, there are 564 PJ of biomass used in the residential sector, while in 2050 in the 80% scenario there are only 8 PJ used in this sector. While there are reductions in biomass for electricity and residential heat, there is a large increase in biomass for transport biofuels in the 80% scenario (Figure 10). There are 281 PJ of transport biofuels in the 60% scenario and 665 PJ in the 80% scenario. With a higher carbon reduction target, there is an increased utilisation of biomass for transport biofuels instead of heat and electricity in LC Acctech (no fuel cells) (80%).

LC Acctech (60%) with fuel cells

When all the technologies (including fuel cells) are accelerated for a 60% carbon reduction target, bioenergy is used more for electricity generation than in any other scenario (including the single-technology acceleration, ATD Bioenergy) (Figure 8). However, biomass is not heavily used for heat or transport (Figures 9 and 10). In fact, from 2025 onwards there is no biomass used for residential heating, and there is less biomass in the transport sector than there was even in the LC-Core scenario.

LC Acctech (80%) with fuel cells

When all the technologies (including fuel cells) are accelerated with an 80% carbon reduction target, biomass is used less for electricity generation than in the other accelerated scenarios (Figure 8). However, biomass is utilised more for heat than in any other 80% scenario in the later period (2035 onwards) (Figure 9). Transport biofuels are utilised more than they are in the ATD Bioenergy scenario (at 60%) but significantly less than in the other 80% scenario (Figure 10). This is likely due to the new availability of hydrogen transport options to decarbonise the transport sector.
This study highlights the potential for innovation throughout the bioenergy supply chain to contribute to the decarbonisation of multiple sectors of the UK energy system. Based on the research narratives developed and the techno-economic modelling scenarios, the results suggest that bioenergy has the potential to be an affordable option to decarbonise not only the electricity and residential heat sectors, but also the transport sector under an 80% carbon reduction target, given further technological development. When bioenergy technologies are accelerated in isolation in the ATD Bioenergy scenario, electricity production from biomass is deployed at high levels in the medium term, followed by increased residential heat from pellets (energy crops) in the long term. Until 2035, a similar pattern was seen for electricity generation from biomass in the aggregated scenario (LC Acctech (60%) with fuel cells) with a 60% carbon reduction. However, in the 60% aggregated scenario without fuel cells, there was less electricity production from 2040 to 2050 and much more biomass for residential heating than in the ATD Bioenergy scenario. This suggests that in the aggregated scenarios, electricity produced from bioenergy crops becomes less economically competitive than other accelerated low carbon electricity options such as marine, wind power and solar PV. The flexibility of bioenergy means it can be used in multiple end use sectors, while the other renewables cannot. Biomass therefore becomes better used as a low cost option to decarbonise residential heating in the aggregated scenarios when competing with other renewables. Under the increased carbon reduction targets in the aggregated 80% scenario without fuel cells (LC Acctech (80%) no fuel cells scenario), the distribution of biomass changes. There is significantly less electricity and residential heat generated from biomass but more biomass in the transport sector (biofuels) in the 80% scenarios. The higher carbon reduction target of 80% results in more pressure on the transport sector to decarbonise compared with the 60% scenarios. Without fuel cell acceleration, there are few affordable options to decarbonise transport, and biofuels are the cheapest option. Therefore the model shifts much of the biomass away from electricity and heat and towards transport biofuels. This suggests that, with a limited resource like biomass, there should be a thorough investigation into the optimal utilisation of the resource to decarbonise the economy. When fuel cells, an alternative option for decarbonising transport, are introduced in LC Acctech with fuel cells at a 60% or 80% carbon reduction, they are deployed to decarbonise the transport sector. In the 60% scenario this leads to biomass being used primarily for electricity generation and heat, while in the 80% scenario biomass is used earlier for residential heating. This reinforces the message that the optimal distribution of biomass depends on the ambition of the carbon target and the availability of alternative low carbon technologies. A whole-system perspective must be taken when determining how best to use biomass resources. Given the importance of low cost biomass resources in the increased uptake of bioenergy in the ATD Bioenergy scenario, cheaper feedstocks are clearly important for the future deployment of bioenergy technologies. This suggests that much of the scope for accelerated deployment of bioenergy comes from the development of more efficient, low cost energy crops.
This can be achieved by increasing yields, improving crop resistance to diseases and pests, and improving the successful establishment of perennial species. Feedstock flexibility is also important for many of the bioenergy technologies, and therefore improvements in this area will increase the economic competitiveness of bioenergy. The results from all the scenarios suggest that using biomass for residential heat is a potential option to decarbonise the UK's energy market when bioenergy is competing with other accelerated technologies under 60% carbon reduction targets. However, in 80% carbon reduction scenarios, transport biofuels are deployed at much higher rates. This suggests that to achieve a higher level of decarbonisation, transport will need to be highly decarbonised, and that lignocellulosic ethanol could be one way to achieve this. Given the uncertainties surrounding the ATD figures and our 'what-if' rather than predictive approach, however, it remains to be seen whether bioenergy technologies will develop and be deployed in this way. Fast pyrolysis for bio-oil production was not deployed in any of the scenarios. This certainly does not mean that fast pyrolysis technology is without potential. To fully understand why fast pyrolysis was not deployed, a sensitivity analysis would need to be undertaken on the costs of pyrolysis technology within the model; however, due to time constraints this was not possible. It is additionally important to highlight that the model may not capture some key advantages of using fast pyrolysis in an energy system that go beyond cost competitiveness. MARKAL is used for 'what-if' analysis and focuses on 'least-cost' solutions for the energy system over the time horizon chosen. Consequently, MARKAL will select the technologies which supply energy at the lowest cost, even if this only represents a marginal cost saving. One of the consequences of this modelling paradigm is that some technologies/pathways may not be selected by the model as part of the 'optimal' energy system configuration even if in reality they could be developed. In addition, the MARKAL modelling framework can only capture some of the 'non-economic' benefits of certain energy technologies/pathways, which influences its choice of 'solutions'. Although the UKERC 2050 MARKAL model has, additionally, been thoroughly tested (and has been constructed from earlier, similarly tested, versions of the UK-MARKAL model), it has not been built specifically to explore bioenergy pathways. Within the time constraints of the project, it was not possible to improve the bioenergy chains represented within the model. Within the TSEC-BIOSYS modelling exercise, however, bioenergy chains were specifically improved. 'Domestic' fast pyrolysis was also not deployed in that system; however, imported bio-oil was deployed, most notably in the industrial sector, within the TSEC-BIOSYS model (unpublished data). The 'imported' bio-oil pathway is currently not modelled within the MARKAL model used for UKERC 2050, and this highlights an area where the model needs to be improved. Land availability within the UK for growing bioenergy crops is, additionally, a major issue within the bioenergy field. In MARKAL, the upper bounds of available energy crops were capped to reflect the limited availability of land for biomass in the UK. However, there are also other issues associated with bioenergy that could further limit biomass levels in the UK.
Bioenergy is often considered controversial due to issues surrounding direct and indirect land use change [13,35], real carbon reduction potential, social acceptability and other environmental impacts. There are many concerns about the sustainability of using first generation food crops for energy production due to possible impacts on food prices and increased and/or accelerated land-use change. As a result, there is a great deal of research and support for second generation dedicated energy crops, which do not compete with food crops, do not negatively affect the quality of the land and do not negatively shift the pattern of land use (for example, crops that do not require clearing certain types of land). The Gallagher Review recommended that bioenergy crops should be grown on marginal or idle land, and research in this area will become important for the future of bioenergy deployment. These socio-environmental limits are not represented in the model and thus would impose additional deployment constraints not shown in the modelling results. This highlights some limitations of cost optimisation models. The modelling overlooks many barriers to development and deployment of technology other than costs, including some key aspects relating to both the spatial and the temporal infrastructure of bioenergy. It is also very important that the modelling work is underpinned by clear qualitative stories, including policy implications. A further key factor in determining the use of UK land for energy crop production will be the availability and price of imported biomass. This is important given that at least half of the current biomass supply is sourced from outside the UK and that domestic bioenergy crops are struggling to be adopted by UK farmers. UK growers appear reluctant to diversify into unfamiliar perennial crops which are associated with long contracts with energy supply companies. This has been exacerbated by the recent uncertainties over support for the Energy Crops Scheme. The reliance on imported biomass also has implications for the long-term environmental sustainability of bioenergy technologies. The influence of imported biomass on bioenergy deployment, however, could possibly be explored in future MARKAL runs. Social and environmental limitations on bioenergy development and deployment, such as the wide-scale environmental costs and benefits of bioenergy deployment on ecosystem services, and direct and indirect land use displacement, may make deployment of bioenergy technologies challenging. Overall, however, the work suggests that bioenergy can contribute significantly to a low carbon UK energy future. Nevertheless, it is important to keep in mind that (1) the modelling overlooks many barriers to development and deployment other than costs; (2) the modelling does not properly capture some key aspects linked to bioenergy infrastructure, both spatial and temporal; and (3) the figures used in the ATD scenarios are uncertain and should be taken with caution. This work offers insights into the potential accelerated technology development and deployment that could occur along selected bioenergy chains in the UK. It should be taken as an illustration, rather than a prediction, of how bioenergy could be deployed in the future UK energy system. The analysis undertaken contributes mostly by illustrating 'what could be done' with MARKAL to explore the potential of ATD and bioenergy. Our findings are limited by the uncertainty in the values chosen to represent future ATD.
For example, the results for the deployment of lignocellulosic ethanol are based on very high conversion efficiencies and should be taken with caution, not as a prediction of the capability of lignocellulosic technology deployment. Rather, the results should be looked at as an illustration of the capability of the model, and as highlighting where further work is needed. The UK-MARKAL database is iteratively constructed and improved, and this efficiency value has since been corrected for further runs. However, as a consequence of time constraints, it was not possible to include the revised value in the ATD runs for the present work. In addition, the focus was mostly on one scenario. A single ATD Bioenergy scenario was modelled in MARKAL which combined the accelerated development input data of the five bioenergy chains that were selected for exploration. Although the use of different scenarios would help to account for some of the uncertainties in the figures used in the scenarios, this was not possible given the time constraints of the project. In future, these uncertainties would need to be taken into account, for instance, by using different scenarios, including the use of more bioenergy chains, and by undertaking sensitivity analyses to determine which parameters most influence the deployment of the bioenergy technologies explored. Moreover, the scenarios focused on five selected technologies/bioenergy chains. There are other promising bioenergy technologies which have the potential to be economically viable and sustainable, especially those where active research is being conducted both in the UK and internationally. Some technologies, like algal fuels, have potential but are currently not represented within MARKAL. It is also important to highlight that the failure of a technology to be deployed in these scenarios does not mean that technology is without potential. As MARKAL is a least-cost optimisation model, it will select the cheapest technology to serve the demand while satisfying the constraints, even if those cost savings are only marginal. Although the UKERC 2050 MARKAL model has, additionally, been thoroughly tested (and has been constructed from earlier, similarly tested, versions of the UK-MARKAL model), it has not been built specifically to explore bioenergy pathways. Consequently, our approach is challenging, but innovative nonetheless. Here the authors have illustrated the possibilities of the model, but within the time constraints of the project, it was not possible to improve the bioenergy chains represented within the model. The 'imported' bio-oil pathway is currently not modelled within the UKERC 2050 model, and this highlights an area where the model needs to be improved. The costs used in the modelling were additionally calibrated on a 2000 base year; we are conscious of the limitations of this approach and of the scope for improvement in representing the 'short term' more carefully. The value of the modelling exercise, however, may be more in exploring the longer term trends. Further work is needed to build on our proposed approach, and it could be useful to systematically look at the pathways to answer the question 'how much improvement is necessary in the different biotechnologies before they can be expected to have a significant role in the future energy system?' This paper has highlighted the applicability of an original modelling approach, but future work should focus on addressing some of the above limitations.
It is also important to note that the modelling has been underpinned by clear qualitative stories, including R&D directions, potential, and policy implications, which give the modelling results context. The UK-MARKAL model is an 'iteratively built' database and model, and by highlighting its possibilities as well as current gaps in data representation and/or capabilities, we continue to contribute to future improvements of the model. This work explores the potential of bioenergy technologies to contribute to carbon reductions in the UK energy system through accelerated technology development. The analysis undertaken represents an illustration of 'what-if' scenarios in MARKAL to explore accelerated technology development of bioenergy technologies. The exercise has highlighted that there is much potential for accelerated technology development in the five bioenergy technologies investigated in this paper, particularly in bio-engineering of energy crops, as it underpins many bioenergy chains. Given further development, bioenergy technologies could become increasingly competitive economically with fossil-based technologies as feedstock costs are reduced in line with crop improvements due to plant breeding efforts, the ability to grow energy crops on marginal lands, increased crop resistance to disease and pests, cheaper enzymes for lignocellulosic conversion to bio-ethanol, and improvements in gasification and fast pyrolysis technologies. There is additional potential for advances in other bioenergy technologies not assessed in this paper, which could help to drive bioenergy technologies that are still at the R&D stage further towards commercial availability and competitiveness. This paper highlights the unique flexibility of bioenergy technologies to potentially decarbonise multiple sectors. Under all the scenarios there was a high deployment of bioenergy, which implies that bioenergy could be a valuable part of the pathway to a decarbonised economy. It is important to highlight, however, that figures used in the ATD scenario were uncertain and should be taken as an illustration of how much improvement is needed in the five technologies to reach the levels of market penetration seen in the model output. Interestingly, carbon reduction targets influenced the bioenergy mix deployed in the UK energy market. Lower targets (60%) resulted in more electricity and residential heat, while higher targets (80%) resulted in increased deployment of biomass for biofuels. Accelerating bioenergy without accelerating other technologies, however, led to more electricity from biomass because other low carbon electricity options were less cost competitive. Innovation at all stages of the bioenergy supply chain is important and can increase the chance of deployment; future R&D efforts and innovation are therefore essential at all points along the supply chain. Ultimately, the future deployment of bioenergy technologies is dependent on many different factors, including investment and R&D efforts, carbon reduction targets and the ability to compete with other low carbon technologies as they become deployed.
ATD: accelerated technological development; BERR: Department for Business, Enterprise and Regulatory Reform; CCC: Committee on Climate Change; DTI: Department of Trade and Industry; GBP: pounds sterling; NREL: National Renewable Energy Laboratory; O&M: operating and maintenance; R&D: research and development; RD&D: research, development and demonstration; UKERC: UK Energy Research Centre.

The authors declare that they have no competing interests.

DC collected quantitative and qualitative data for most of the chains, interpreted the results and drafted the manuscript. SJ collected quantitative and qualitative data for the lignocellulosic chain, and helped to draft the manuscript. BM helped with data collection, interpreted the results and also drafted the manuscript. GA undertook the modelling work and commented on the manuscript. GT helped draft the manuscript. All authors read and approved the final manuscript.

The authors would like to acknowledge and thank the experts consulted as part of the work: Raphael Slade and Calliope Panoutsou at Imperial College, and John Brammer and John Rogers at Aston University. Their shared knowledge and help was very much appreciated. The authors would also very much like to thank Mark Winskel for his guidance on the project. This work was undertaken as part of the UKERC Energy 2050 project, and was based on reports written by the authors for UKERC.

UKERC Energy 2050 [http://www.ukerc.ac.uk/ResearchProgrammes/UKERC2050/UKERC2050homepage.aspx]

US Department of Energy: Breaking the Biological Barriers to Cellulosic Ethanol: A Joint Research Agenda. A Research Roadmap Resulting from the Biomass to Biofuels Workshop. Rockville, Maryland: US Department of Energy; 2006.

Energy Policy 2006, 34:322-342.

Biomass and Bioenergy 2005, 29:399-418.

Winskel M, Markusson N, Moran B, Jeffrey H, Anandarajah G, Hughes N, Candelise C, Clarke D, Taylor G, Chalmers H, et al.: Accelerated Development of Low Carbon Energy Supply Technologies. UKERC Energy 2050 Research Report 2 (DRAFT). London: UK Energy Research Centre; 2008.

Global Change Biology 2006, 12:2054-2076.

Biomass and Bioenergy 2000, 19:209-227.

Energy Policy 2006, 34:2871-2880.

Siemons R, Vis M, Berg D, McChesney I, Whiteley M, Nikolaou N: Bioenergy's role in the EU energy market. A view until 2020. In Report for the European Commission. BTG Biomass Group BV, The Netherlands; 2004.

NREL Power Technologies Energy Data Book [http://www.nrel.gov/analysis/power_databook/chapter2.html]

Biofuels, Bioproducts and Biorefining 2009, 3:195-218.

Wallace B: Cellulosic Ethanol Potential: Technical Barriers and Cost Objectives. [http://www1.eere.energy.gov/cleancities/toolbox/pdfs/wallace_webcast.pdf]

Walsh ME, Becker D: Biocost: A Software Program to Estimate the Cost of Bioenergy Crops. [http://bioenergy.ornl.gov/papers/bioen96/walsh2.html] Proceedings of Bioenergy '96 – The Seventh National Bioenergy Conference: Partnerships to Develop and Apply Biomass Technologies; September 15–20; Nashville, Tennessee. Oak Ridge, USA: Oak Ridge National Laboratory; 1996.

Forest Ecology and Management 1999, 121:123-136.

Graham RL, Lichtenburg E, Roningen VO, Shapouri H, Walsh ME: Economics of Biomass Production in the U.S. [http://bioenergy.ornl.gov/papers/bioam95/graham3.html] Oak Ridge: Oak Ridge National Laboratory, USA; 2008.
Biomass and Bioenergy 1998, 14:341-350.

Marrison CI, Larson ED: Cost versus scale for advanced plantation-based biomass energy systems in the US. [http://www.princeton.edu/pei/energy/publications/texts/Cost-vs-scale...Marrison-and-Larson.pdf]

Energy Policy 2008, 36:2504-2512.
<urn:uuid:43b96b05-eecf-4a4f-b8cd-116e23d8d076>
{ "dump": "CC-MAIN-2014-23", "url": "http://www.biotechnologyforbiofuels.com/content/2/1/13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270313.12/warc/CC-MAIN-20140728011750-00458-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9259453415870667, "token_count": 12663, "score": 2.59375, "int_score": 3 }
The Genetic Reason Why Some People Love Sugary Sweets More Than Others

Scientists looked at data from 176,867 people of European ancestry.

You might love sugary doughnuts, but your friends find them too sweet and only take small bites. That's partly because your genes influence how you perceive sweetness and how much sugary food and drink you consume. Now, our recently published study shows a wider range of genes at play than anyone thought. In particular, we suggest how these genes might work with the brain to influence your sugar habit.

What We Know

When food touches our taste buds, taste receptors produce a signal that travels along taste nerves to the brain. This generates a sensation of flavor and helps us decide if we like the food. Genetic research in the past decade has largely focused on genes for sweet taste receptors and whether variation in these genes influences how sensitive we are to sweetness and how much sugar we eat and drink. Our previous study showed genetics accounts for 30% of how sweet we think sugars or artificial sweeteners are. However, at the time, we didn't know the exact genes involved.

What Our Latest Study Found

Our new study looked at data from 176,867 people of European ancestry from Australia, the US, and UK. We measured how sweet 1,757 Australians thought sugars (glucose and fructose) and artificial sweeteners (aspartame and neohesperidin dihydrochalcone) were. We also looked at how sweet 686 Americans thought sucrose was and whether they liked its taste. We also calculated the daily intake of dietary sugars (monosaccharide and disaccharide sugars found in foods such as fruit, vegetables, milk, and cheese) and sweets (lollipops and chocolates) from 174,424 British people of European descent in the UK Biobank. Then we looked at the associations between millions of genetic markers across the whole genome and the perception of sweet taste and sugar intake using a technique known as genome-wide association analysis.
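For readers curious what "genome-wide association analysis" means in practice, the toy sketch below tests one genetic marker at a time for a statistical association with a trait (here, a made-up sweetness-perception score). That is, in essence, what such studies do across millions of markers, at far larger scale and with careful correction for ancestry and multiple testing. The data, the simple regression and all variable names here are invented for illustration; this is not the pipeline or software used in the study.

```python
import random
import statistics

random.seed(1)

# Simulated data: genotypes coded as 0, 1 or 2 copies of the tested allele,
# and a continuous phenotype (e.g. a sweetness-perception score).
n_people = 500
genotypes = [[random.choice([0, 1, 2]) for _ in range(n_people)] for _ in range(5)]
# Only the first simulated marker truly affects the phenotype (+2 per allele).
phenotype = [random.gauss(50, 10) + 2.0 * genotypes[0][i] for i in range(n_people)]

def association(geno, pheno):
    """Per-variant effect estimate: slope of a simple linear regression of
    phenotype on allele count (no covariates, purely illustrative)."""
    mg, mp = statistics.fmean(geno), statistics.fmean(pheno)
    cov = sum((g - mg) * (p - mp) for g, p in zip(geno, pheno))
    var = sum((g - mg) ** 2 for g in geno)
    return cov / var

for snp_index, geno in enumerate(genotypes):
    beta = association(geno, phenotype)
    print(f"marker {snp_index}: estimated effect per allele = {beta:+.2f}")
```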
After a 15-year study, we showed that several genes (other than those related to sweet taste receptors) have a stronger impact on how we perceive sweetness and how much sugar we eat and drink. These included an association between the FTO gene and sugar intake. Until now, this gene has been associated with obesity and related health risks. However, the effect is possibly driven not by FTO itself but by nearby genes whose protein products act in the brain to regulate appetite and how much energy we use. We believe a similar situation may be influencing our sugar habit; genes near the FTO gene may be acting in the brain to regulate how much sugar we eat. Our study suggests the important role the brain plays in how sweet we think something is and how much sugar we consume. That's in addition to what we already know about the role of taste receptors in our mouth.

Why We Love Sweet Foods

Our natural enjoyment of sweet foods could be an evolutionary hangover. Scientists believe being able to taste sweetness might have helped our ancestors identify energy-rich food, which played a critical part in their survival. However, being able to taste sweetness doesn't always mean you prefer to eat lots of sweet-tasting food. So it looks like there are genes, such as FTO, that are associated with the consumption of sweet foods but not with how sweet we think they are. There might also be genes that influence our perception of sweetness, but not how likely we are to eat sweet food.

We were surprised to find genes for sweet taste receptors had no effect on either the ability to taste sweetness or the amount of sugar consumed in our study, which looked only at large populations of European descent. But by comparing people of different ancestries in the UK Biobank, we showed there was some variation between different populations that variations in genes for sweet taste receptors might explain. For instance, we found people of African descent tended to eat more sugar than people of European and Asian descent.

So, How Can We Use This?

Just like genetics can help explain why some people choose tea over coffee, our latest study helps explain why some people prefer sweet food. That could lead to personalized diets to improve people's eating habits based on their genetics. However, genetics is not the only factor to influence your taste for sugary foods and how much of these you eat or drink. So you can't always blame your genes if you've ever tried to quit sugary drinks or snacks and failed.

This article was originally published on The Conversation by Daniel Liang-Dar Hwang.
<urn:uuid:63c76eb9-09c0-4c61-aee6-ad6380e7206b>
{ "dump": "CC-MAIN-2023-23", "url": "https://www.inverse.com/article/55376-do-you-like-sweets-why-some-people-enjoy-sugar-more-than-others-genetics", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00464.warc.gz", "language": "en", "language_score": 0.9549684524536133, "token_count": 988, "score": 2.9375, "int_score": 3 }
A new report from American Cancer Society researchers finds that despite declining death rates, cancer has surpassed heart disease as the leading cause of death among Hispanics in the U.S. Among non-Hispanic whites and African Americans, heart disease remains the number one cause of death. The figures come from Cancer Statistics for Hispanics/Latinos 2012, appearing in the journal CA: A Cancer Journal for Clinicians, which has been produced every three years since 2000. The report says that in 2012, an estimated 112,800 new cases of cancer will be diagnosed and 33,200 cancer deaths will occur among Hispanics. Among U.S. Hispanics during the past ten years of available data (2000-2009), cancer incidence rates declined by 1.7% per year among men and 0.3% per year among women. Hispanics have lower incidence and death rates than non-Hispanic whites for all cancers combined and for the four most common cancers (breast, prostate, lung and bronchus, and colorectum). The most notable example is lung cancer, for which rates among Hispanics are about one-half those of non-Hispanic whites. The risk of lung cancer is lower among Hispanics because they have historically been less likely to smoke cigarettes than non-Hispanic whites. In contrast, Hispanics have higher incidence and mortality rates for cancers of the stomach, liver, uterine cervix, and gallbladder, reflecting greater exposure to cancer-causing infectious agents, lower rates of screening for cervical cancer, and possibly genetic factors. Incidence and death rates for cervical cancer are 50% to 70% higher in Hispanic women compared to non-Hispanic whites. In addition, Hispanics are diagnosed at an advanced stage of disease more often than non-Hispanic whites for most cancer sites. Hispanics in the U.S. are an extremely diverse group because they originate from many different countries (e.g., Mexico, Central and South America, and Cuba). As a result, cancer patterns among Hispanic subpopulations vary substantially. For example, in Florida the cancer death rate among Cuban men is double that of Mexican men. Cuban men are much more likely to smoke than Dominican men (21 percent versus 6 percent, respectively) and obesity prevalence among Mexican and Puerto Rican men is double that among Dominican men. There are also differences between Hispanic subgroups in screening utilization; Mexican women are less likely to have had a recent mammogram than Dominican women (62 percent versus 78 percent, respectively).
<urn:uuid:4e492770-6247-47c4-a47f-7ee628a17473>
{ "dump": "CC-MAIN-2014-15", "url": "http://www.hispanicallyspeakingnews.com/latino-daily-news/details/study-cancer-top-killer-of-hispanics-not-heart-disease/18580/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00042-ip-10-147-4-33.ec2.internal.warc.gz", "language": "en", "language_score": 0.9485321044921875, "token_count": 504, "score": 3.171875, "int_score": 3 }
Born: May 10, 1920
Died: April 14, 2006
Birth Location: Sacramento, California

Plaintiff in the landmark lawsuit that ultimately led to the closing of the concentration camps and the return of Japanese Americans to the West Coast in 1945. Mitsuye Endo was born on May 10, 1920, in Sacramento, California, the daughter of Japanese immigrants and the second of four children. After graduating from Sacramento Senior High School, she went to secretarial school and got a clerical job with the state Department of Employment. In the weeks after the Japanese attack on Pearl Harbor, the California State Personnel Board took a variety of steps that led to the ultimate dismissal of all Japanese American state employees by the spring of 1942, Endo among them. She was one of the 63 employees (out of between 300 and 500, most of whom worked for the Department of Motor Vehicles) who sought to challenge their firings with the aid of the Japanese American Citizens League (JACL), enlisting lawyer James C. Purcell. In the meantime, Endo was sent with her family to the Sacramento Assembly Center and then to the Tule Lake, California, concentration camp. With the firings made moot for the time being by the removal and incarceration, Purcell began to look for a suitable plaintiff for a challenge of the incarceration through a habeas corpus petition. Starting with his civil service plaintiffs, he soon settled on Endo, in part because she was Methodist, had a brother in the army, and had never been to Japan. When a representative was sent to ask if she would be willing (Purcell and Endo apparently never met in person), Endo was hesitant, but reluctantly agreed to do it. As she told John Tateishi many years later, "I agreed to do it at that moment, because they said it's for the good of everybody, and so I said, well if that's it, I'll go ahead and do it." Purcell filed the petition on July 12, 1942, in federal district court in San Francisco, beginning a chain of events that would end with the U.S. Supreme Court ruling in her favor in December 1944. The army opened up the West Coast to "loyal" Japanese Americans just prior to the Supreme Court decision, which had been leaked to government officials. While her suit went through the various courts, she remained confined, moving to Topaz, Utah, after the segregation. Though she had the opportunity to leave camp early—the government in fact offered to release her in part to moot the lawsuit—she opted to remain in camp. When her suit was finally decided, she left Topaz in May of 1945 to live with a sister who had resettled with her husband in Chicago. Upon her arrival, she chose among several job offers, taking a position as a secretary for the Mayor's Committee on Race Relations. Two years later, she married Kenneth Tsutsumi, whom she had met in camp. The couple went on to have three children. In subsequent years, she kept a low profile, rebuffing interview requests with the exception of a very brief oral history that appeared in the anthology And Justice For All in 1984. Because she was victorious in her suit, she was not a part of the coram nobis cases of the 1980s that brought renewed attention and fame to three other legal resisters, Gordon Hirabayashi, Fred Korematsu, and Min Yasui. Even her own daughter didn't know of her role in history until learning about it in her twenties. Mitsuye Endo Tsutsumi lived in Chicago for the rest of her life and died of cancer on April 14, 2006.

Authored by Brian Niiya, Densho

For More Information

Irons, Peter.
Justice at War: The Story of the Japanese American Internment Cases. New York: Oxford University Press, 1983. Berkeley: University of California Press, 1993. Noel, Josh. "Mitsuye Tsutsumi 1920–2006." Chicago Tribune, April 25, 2006. Ouchida, Elissa Kikuye. "Nisei Employees vs. California State Personnel Board: A Journal of Ex parte Mitsuye Endo, 1942–1947." Pan-Japan 7.1–2 (Spring/Fall 2011): 1–55. Robinson, Greg. "Mitsuye Endo, Great in her Obscurity." Is That Legal? blog, June 5, 2006, Tateishi, John. And Justice For All: An Oral History of the Japanese American Detention Camps. New York: Random House, 1984. Foreword Roger Daniels. Seattle: University of Washington Press, 1999. 60–61. - Morton Grodzins, Americans Betrayed: Politics and the Japanese Evacuation (Chicago: University of Chicago Press, 1949), 122–27. - Elissa Kikuye Ouchida, "Nisei Employees vs. California State Personnel Board: A Journal of Ex parte Mitsuye Endo, 1942–1947." Pan-Japan 7.1–2 (Spring/Fall 2011), 2, 7–8; Pacific Citizen, Mar. 23, 1946, p. 2, http://ddr.densho.org/ddr-pc-18-12/; Larry Tajiri, "Nisei USA," Pacific Citizen Sept. 20, 1947, p. 4, http://ddr.densho.org/ddr-pc-19-39/, both accessed on Jan. 11, 2018. - In his definitive account of the wartime cases, Justice at War (New York: Oxford University Press, 1983. Berkeley: University of California Press, 1993), Peter Irons cites a meeting between Endo and Purcell in camp. However, both Endo, in a later oral history and Purcell, in a 1975 letter, claim to have never met. "Mitsuye Endo," in And Justice For All: An Oral History of the Japanese American Detention Camps by John Tateishi (New York: Random House, 1984), 60–61; letter, James C. Purcell to Peter Linzer, associate professor of law, University of Cincinnati. June 11, 1975. - Endo in And Justice For All, 61. - Pacific Citizen, June 2, 1945, p. 3, accessed on Jan. 11, 2018 at http://ddr.densho.org/ddr-pc-17-22/. - Pacific Citizen, July 14, 1945, p. 3, accessed on Jan. 11, 2018 at http://ddr.densho.org/ddr-pc-17-28/. - Pacific Citizen, Dec. 6, 1947, p. 7, accessed on Jan. 11, 2018 at http://ddr.densho.org/ddr-pc-19-49/. - Endo Tsusumi cited a bad experience with a reporter who interviewed her shortly after her arrival in Chicago as the reason for not doing subsequent interviews. Ouchida, "Nisei Employees vs. California State Personnel Board," 2, 41–42n4 - Josh Noel, "Mitsuye Tsutsumi 1920–2006," Chicago Tribune, April 25, 2006, accessed on July 16, 2014 at http://articles.chicagotribune.com/2006-04-25/news/0604250259_1_supreme-court-japanese-american-population-japanese-american-citizens-league.
<urn:uuid:365d29df-c1b8-48c8-83b2-1127372d7102>
{ "dump": "CC-MAIN-2019-47", "url": "http://encyclopedia.densho.org/Mitsuye%20Endo/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00015.warc.gz", "language": "en", "language_score": 0.9405748248100281, "token_count": 1574, "score": 2.84375, "int_score": 3 }
For UMUC President Javier Miyares and the four panelists at the university’s Hispanic Heritage Month celebration―all Cuban exiles who had fled their homeland alone as children in the early 1960s―Operation Pedro Pan was a personally searing and life changing event. The exodus of 14,000 children who left Cuba without their parents from December 1960 to October 1962 in the wake of Fidel Castro’s communist revolution, was dubbed “Operation Pedro Pan” by a Miami newscaster, who likened it to the children’s flight to “Never Neverland” in the story of Peter Pan. “I became part of Pedro Pan after being spirited out of Cuba on July 4, 1961, by my Jesuit teachers when my father was taken prisoner by Fidel Castro,” Miyares told the audience in a video address. Miyares, 14 at the time, eventually made his way to Baltimore to live with an uncle. “Pedro Pan represents a seminal event in the history of the United States and Cuba, and of human rights and freedom,” Miyares said. “It is vital [that] we continue to serve as custodians of that history, even as we open normalized relations between the United States and Cuba,” he added. Operation Pedro Pan was co-conceived and orchestrated by James Baker, the headmaster of Ruston Academy, an American school in Havana, and Father Bryan Walsh, director of the Catholic Welfare Bureau in Miami. Together they arranged for visas for the children to enter the United States and for places for them to stay. Most of the 14,000 children were placed with family members or friends, but 6,000 of them ended up in orphanages or foster homes. Eloisa Echazabal was 13 when she left Cuba with her eight-year-old sister and three boy cousins. After Castro led the guerrilla uprising against the Batista government, she said, it took her parents a long time to understand how much the revolution would affect their lives. Then the government closed her parochial school and sent the nuns back to Mexico. “That made my parents make the tough and heart-wrenching decision to send us alone to the United States, not knowing if they would see us again,” Echazabal said. She and her sister were sent to an orphanage in Buffalo, New York, and her cousins to one in Richmond, Virginia. After two months, she and her sister were sent to a foster home and finally were reunited with their parents nine months later in Miami. Jesus “Jay” Castano said he was in the fifth grade when the Castro regime tried to recruit him into the Committee for the Defense of the Revolution to become a chevado―a snitch―to report on children and parents in his neighborhood. In April 1962, six months before the doors closed on Operation Pedro Pan, Castano flew to Miami and ended up in Camp Kendall with hundreds of other children. He was there for two years before his mother was able to get out of Cuba on a Red Cross ship. “I don’t want a dictatorship,” he said. “I have been back to Cuba three times and they have it. I love Cuba. The next time I go, I hope it will have improved.” Susana Gomez said she was a sophomore in a private Catholic school when the daughter of a captain in the Castro regime joined her class. “She was a bully,” Gomez said. “I grabbed her by the shirt and said, ‘if you bully one of my friends, I will beat the living daylights out of you.’ The nuns called my parents, and I never went back to that school again.” At 13, Gomez and her 12-year-old brother were heading to Miami. “I was a naïve young girl, and I didn’t know how much my actions were endangering my family,” she said. 
Rene Costales, who ended up in a Catholic school in Vermont, said the revolution divided friends and families. One friend had parents who were communists and he didn’t want any part of it. Another friend sailed away on a 25-foot sailboat to join an uncle in Miami. On the other end of the spectrum, one friend wanted to become a militiaman and his parents couldn’t stand that. They signed his emancipation papers and they left him behind in Cuba. Almost all of the Pedro Pans were eventually reunited with their parents― although for many it took years. The experience, the panelists said, left scars that still have not healed. They often rely on each other for support. “We had the pain not only of being an exile but also the pain of separation from our parents and the uncertainty of reunification,” Costales said. The Pedro Pans were welcomed in the United States by Americans who feared communism. That is not the case now for Central American children who are fleeing the gangs in their home countries to come here. “We were only children, and America welcomed us without reservation,” Miyares said. “We were cared for with love and consideration and given access to education. I am proud of what the Pedro Pans have accomplished. “My heart breaks when I see immigrants and children of immigrants with dreams, like my own, who now are marginalized in the public discourse,” he said.
<urn:uuid:a2639f2f-0451-4b5e-a36a-05287e92382c>
{ "dump": "CC-MAIN-2021-31", "url": "https://globalmedia.umgc.edu/2016/09/29/operation-pedro-pan-exiles-recount-their-flight-from-cuba/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00282.warc.gz", "language": "en", "language_score": 0.9869414567947388, "token_count": 1145, "score": 2.90625, "int_score": 3 }
The Safety of Blood and Components, where is the limit?
The recent past
HIV transmission by transfusion of blood and blood products was the biggest driver of change in the history of transfusion medicine in the past 30 years. Most of the medical world in the 70's and in the early 80's was celebrating the control of infectious diseases. The hemophilia community was celebrating the enjoyment of a normal life provided by the prophylactic use of plasma-derived clotting factor concentrates. In this environment of excitement and trust in the power of medical science and pharmaceuticals, the initial reactions to the news that AIDS could possibly be transmitted by transfusion were denial and disbelief. The first report suggesting that clotting factor concentrates might transmit HIV was published in July 1982. It described three hemophilia patients who developed immunosuppression and opportunistic infections. By December 1982, four more cases had been identified among patients with hemophilia A (1). The first case of suspected transmission of AIDS by blood transfusion was reported at that time. Both the medical community and the patient community had no choice but to accept the growing evidence that AIDS was transmitted by a blood-borne infectious agent. The epidemic ravaged through the United States, progressing from 1,000 recorded cases in February 1983 to over 500,000 cases in December 1995. In the early 80's, over 1% of blood donors in San Francisco, California, were suspected to be infected. HIV was identified as the etiological agent of AIDS in 1984, and in 1985 screening assays for antibodies to HIV became available, leading to a remarkable reduction of the transmission of the infection by transfusion and transplantation (Table 1). This discovery was followed by substantial advances in serological and molecular screening for other transfusion-transmissible viruses (HBV, HCV, HTLV-I/II, WNV), implementation of good manufacturing practices and quality assurance for blood centers, and developments in computer systems that ensured accuracy of management of donors and donations. Unfortunately, despite preventive measures and substantial therapeutic advances, there are today over 1 million HIV-infected individuals in the U.S., and over 50,000 new cases are added each year (2). Europe continues to have a lower but still significant HIV incidence rate, while the incidence in Asia has been high.
Table 1
| HIV | 1:2,135,000 | 1:909,000 – 5,500,000 |
| HCV | 1:1,930,000 | 1:2,000,000 – 4,000,000 |
| HBV | 1:277,000 | 1:72,000 – 1,100,000 |
| WNV | 1:350,000 | No reported cases |
* Range between high and low endemic areas. Adapted from (3).
Despite steady incidence rates of these infections in the general population, the number of reported cases of transmission of HIV, HBV and HCV by transfusion has been extremely small in the U.S., in Europe and in many other countries. However, transfusion transmission continues to be a serious problem in countries with limited resources to invest in healthcare systems, as reflected by the close correlation between the quality of blood transfusion services and the United Nations Human Development Index, or HDI (4). The Human Development Index classifies countries as having a low, medium or high HDI, based on life expectancy, educational attainment and adjusted income.
The Concept of Emerging Infections
The Institute of Medicine of the U.S.
National Academy of Sciences defined emergent infectious diseases as diseases of infectious origin whose incidence in humans has increased in the last two decades, or threatens to increase in the future (5). Those include not only newly recognized diseases like AIDS in the 80s but also infectious diseases that are reemerging due to conditions that facilitated their spread, as is the case with West Nile Virus (WNV), Dengue virus (DENV) and Chikungunya virus (CHIKV). Factors that have facilitated emergence include climatic changes, globalization of human activities, air travel, migration, etc. The magnitude and the reality of the epidemic of AIDS have generated and still generate great concern about future threats, particularly among chronic recipients of blood and blood products. Justifiably, they are scared by the possibility that an unknown or poorly understood transmissible agent could cause as much devastation in the future as HIV did in the 80s. While only a few of the recognized emergent infections today constitute a significant threat to the security of the blood supply, transfusion medicine has applied substantial resources to the monitoring of infections where transmission by transfusion was suspected. Examples are idiopathic CD4+ lymphocytopenia (or AIDS without HIV), the systemic infection of American soldiers who returned from the Persian Gulf in 1990 with Leishmania tropica, and SARS coronavirus, until extensive investigation documented that they were not transmitted by blood transfusion (6).
Risk perception in blood safety
Despite the impressive progress in blood safety observed in recent years, public concerns and fear continue to be important motivators for the implementation of additional measures, many attempting to reach an unattainable "zero risk". The public and patient advocates reluctantly accept the risks associated with procedures and medications in other areas of medicine but have difficulty accepting risks associated with blood, even when the benefits clearly exceed the risks. The classical publication of Slovic (7) provides a basis for understanding public responses and attempts to help improve risk communications with the public and with decision-makers. He indicates that both dread and knowledge drive the ultimate public perception of risk. Essentially, known risks with a low degree of dread (e.g. smoking, boating and skiing) are accepted by the public even in the face of serious consequences, while unknown or misunderstood risks are seen as unacceptable because of poor knowledge about the event and the high degree of dread (e.g. nuclear reactor accidents, radioactive waste and recombinant DNA technology). Dr. Slovic wisely concludes that "each side, expert and public, has something valid to contribute. Each side must respect the insights and intelligence of the other."
The Precautionary Principle and Zero Risk
Dr. Slovic's analyses also help us understand distortions of the interpretation of guiding principles like the precautionary principle. One of the primary foundations of the precautionary principle, and one of its most widely accepted definitions, is stated in Principle #15 of the Rio Declaration of the "Earth Summit" of 1992: "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities.
Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation" (8). The European Commission has addressed the question of applicability of the precautionary principle and emphasized that it should be proportional to the chosen level of protection, including, where appropriate and feasible, an economic cost/benefit analysis (9). Unfortunately, the words "cost-effective" and "proportional" are frequently dropped from references to the principle, leading to widespread confusion with zero risk.
Some Emerging Infections of Current Concern
This article will focus on some of the emerging infections that are likely to be transmitted by transfusion of blood and components, like West Nile Virus (WNV), Trypanosoma cruzi (T. cruzi), the agent of Chagas' disease, DENV and CHIKV. It will not address vCJD, bacterial contamination, and non-infectious risks such as TRALI, TACO and hemolytic reactions.
West Nile Virus in the U.S.
The U.S. experienced the true emergence of a transfusion-transmitted disease in 1999, when WNV was introduced in New York City. WNV is transmitted to humans primarily through mosquito bites, and the outcome of infection depends on the age and immune status of the exposed individual. WNV was first identified in 1937 in Uganda, and has caused small epidemics in Africa, the Middle East and Eastern Europe for many years. Recently, cases of infection have been recognized in Italy. WNV has become endemic in the U.S., with recurring outbreaks for nine consecutive years. Infecting between 1.8 and 4.1 million people since 1999, WNV has caused 28,943 documented cases of human disease and 1,130 deaths reported to the CDC, and became the most common cause of viral encephalitis in the country. Birds are the major amplifying host of WNV and have facilitated the spread of the infection to all states in the continental U.S. In addition, several non-avian vertebrates, including mammals, reptiles and rodents, can be infected by WNV, and some species produce levels of viremia capable of infecting mosquitoes (10). Over 60 species of mosquitoes are able to transmit the virus (11). The explosive WNV spread in North America suggested viral adaptation and prompted concerns about genetic variability that could potentially decrease the sensitivity of blood donor screening and diagnostic assays, increasing the risk of transmission to blood recipients, and could affect viral pathogenesis and the development of vaccines and therapeutic agents. Recent studies documented an increase in the number of mutations in the full WNV genome from 0.18% in 2002 to 0.37% in 2005 when compared with the original strain isolated during the 1999 epidemic in New York City. Essentially, WNV has slowly diverged from precursor isolates as the geographic distribution expanded (12). Human-to-human transmission by blood transfusion was identified in 2002 (13), stimulating the rapid development and implementation of nucleic acid tests (NAT) for blood donor screening under FDA-approved investigational new drug protocols in 2003. Retrospective studies identified 23 cases of WNV transmission by transfusion in 2002 associated with blood components from 16 donations, whose retention samples or retrieved plasma co-components, individually tested, produced reactive results for WNV-RNA using a research-based PCR assay. Implementation of blood screening in the U.S.
has been a success and since 2003 has resulted in the interdiction of ~2,600 WNV NAT-reactive units and the prevention of ~2,600 to 7,800 potential transmissions by transfusion. After the introduction of donor screening by NAT there were 6 confirmed cases in 2003, one in 2004 and none in 2005; however, there were 2 confirmed cases in 2006. There were potential transmissions in 2008 that are still under investigation.
Infection by Trypanosoma cruzi (Chagas disease)
Chagas' disease was first described in 1909 by Carlos Chagas, a Brazilian physician. The acute form occurs in about 20% of the infected people and appears 20 to 40 days after the vector insect bite or a blood transfusion. It is characterized by fever, lymphadenopathy and hepatosplenomegaly, and rarely by pericarditis and disturbances of cardiac conduction. It can be quite severe or fatal in recipients with a debilitated immune system. Parasitemia is frequent. Approximately 20% of recipients of infected blood remain asymptomatic. Generally, the patients recover totally after 6-8 weeks. The acute disease can be effectively treated with the experimental drugs nifurtimox or benznidazole. The chronic form of Chagas' disease develops 10-20 years after the acute infection. Approximately 50% of the infected individuals have parasitemia without clinical symptoms. Since these individuals do not know that they are infected, they are accepted as blood donors. Approximately 20% develop a cardiopathy characterized by cardiomegaly, disturbances of conduction, and alterations of the electrocardiogram. The cardiac insufficiency is progressive and finally fatal. Between 9 and 14% of chronically infected individuals develop megaesophagus and megacolon as a result of the disruption of myoneural junctions in the intestinal tract. In general, symptomatic patients have only one type of manifestation, cardiac or gastrointestinal. Chagas' disease is caused by the protozoan flagellate Trypanosoma cruzi. Blood forms of the parasite can be seen in smears of peripheral blood when parasitemia is high. A large number of peri-domestic animals, such as cats, dogs, rats, skunks, armadillos, sloths, mice and rabbits, are infected by T. cruzi and serve as natural reservoirs for the agent. The insect vectors are blood-sucking reduviid bugs (kissing bugs in the U.S., barbeiros in Brazil and vinchucas in Spanish-speaking countries). They nest in cracks in the walls of mud houses with thatched roofs in rural areas. The insects bite preferentially at night and, following the bite, defecate close to the wound. When scratching, the victim introduces the infected excrement into the broken skin or carries it to the eye mucosa. Chagas' disease is endemic in Central and South America and in parts of Mexico. The number of chronic carriers of infection is estimated at 11 million individuals. These individuals became infected in rural areas but have often migrated to urban centers in Latin America, the U.S. and Europe. The transmission of T. cruzi has been remarkably reduced in many rural areas of Latin America as a result of the application of insecticides and the improvement of rural habitations. Presently, all blood donors in Brazil, Argentina and several other Latin American countries are screened by commercial ELISAs or by indirect immunofluorescence. There have been seven documented cases of Chagas disease transmitted by blood transfusion in North America between 1987 and 2007. All the patients developed acute disease and myocarditis.
Past studies have shown that the prevalence of antibodies to T. cruzi among blood donors in the U.S. is low. The potential for the establishment and dissemination of Chagas disease in the U.S. or Europe seems to be low because living conditions do not favor establishment of the natural cycle. However, the potential for transfusion transmission in non-endemic areas exists (14). A test for antibodies to T. cruzi was licensed by the U.S. Food and Drug Administration (FDA) in December 2006. About two thirds of the American blood centers initiated universal testing of all blood donors, all the time, in January 2007. In March 2007 the FDA's Blood Products Advisory Committee (BPAC) recommended that universal testing be continued for a period of about two years, until sufficient data were accumulated, before consideration of some form of selective screening as applied in other countries, for instance Spain. In April 2009 BPAC reviewed possible testing strategies for T. cruzi infection in blood donors, including universal testing, testing donors once or twice, selective testing of specific donor groups or blood components, testing combined with donor questions related to the donor or their parents living in endemic areas, or testing donors visiting endemic areas. Panel members dismissed questioning of travelers, indicating that it was not warranted because of the substantial decrease in the incidence of T. cruzi infections in the endemic areas that has occurred in recent years as a result of control of reduviid vectors, and because infection occurs particularly in children after years of exposure in thatched houses where the vectors have nested. The Committee voted to recommend that "one negative test would qualify a donor for all future donations without further testing or questions regarding risk of a newly acquired infection, subject to continuation studies to define the incidence of new infections in previously screened negative donors." The Committee did not consider selective testing based on questions about the birthplace of the donor or the mother of the donor, because studies presented at the meeting showed that such questions had 75% sensitivity. It was clear that many donors gave inaccurate answers because of concerns about their immigration status. This recommendation came in the wake of draft guidance from FDA that suggested that all donations of blood or organs be tested for antibodies to Trypanosoma cruzi. It is expected that FDA will accept the recommendations made by BPAC and issue a Final Guidance recommending selective testing (15).
Dengue and Chikungunya viruses in blood donations
Arbovirus epidemics are raging in tropical areas. Dengue virus (DENV), dengue shock syndrome (DSS) and dengue hemorrhagic fever (DHF) affect millions of individuals every year and cause significant mortality in Latin America, Africa and Asia. Chikungunya virus (CHIKV) has caused recurrent epidemics in the Indian subcontinent and recent epidemics in Reunion and other islands in the Indian Ocean, with recent arrival in areas of Europe. The surprising seriousness of recurring epidemics of WNV in North America has heightened concerns about the potential for introduction and similar epidemic spread of other arbovirus infections in the US. Dengue has received particular attention since cases have been recognized in the US at the border between Texas and Mexico. DENV is transmitted efficiently by the mosquito Aedes aegypti and less efficiently by Aedes albopictus.
CHIKV became well adapted to Aedes albopictus, the tiger mosquito, after a single mutation in its genome that was described during an outbreak in 2007 (16). Despite the recognition of millions of cases of DENV infection and disease every year, there are very few published reports of transfusion transmission: one in Hong Kong and another in Singapore. There are also reports of transmission by needle sticks and one case associated with a bone marrow transplant in Puerto Rico. However, transmission by transfusion is often difficult to evaluate in the midst of an epidemic because the infection could have been acquired through a mosquito bite, through a transfusion or even through a needle stick. These facts raise questions about the appropriateness of developing precautionary measures to prevent transfusion-transmitted DENV in non-endemic areas. Research screening tests for DENV have been developed and have been the subject of publications (17). These studies documented the presence of asymptomatic viremic donors in Honduras and Brazil who could theoretically transmit the virus to blood recipients. Transmission of CHIKV by transfusion is probable, but has not been documented. The CHIKV epidemics that raged through Reunion Island in the Indian Ocean from 2005-2007 prompted the French government to suspend whole blood collections and to supply red blood cells from the mainland. Platelet collections by apheresis continued locally, but the collected products were subjected to a process of viral inactivation. It should be noted that there have been no reports of CHIKV transmission by transfusion despite estimates that over 300,000 people were infected during these epidemics. Why, then, are there so few reports of transfusion transmission (TT) of DENV and CHIKV? There are many differences between these viruses and WNV, but they do not clearly explain the lack of transfusion transmission reports for DENV and CHIKV. WNV infects a large number of birds and mammals. Birds are highly efficient amplification hosts, presenting very high viremia. Many species of mosquitoes that transmit WNV bite both animals and humans. DENV and CHIKV do not have an amplification host. Amplification occurs in the salivary glands of Aedes aegypti and Aedes albopictus. These mosquitoes transmit the viruses from human to human in densely populated areas. In addition, DENV and CHIKV epidemics currently occur primarily in developing countries. Large numbers of individuals are affected simultaneously, overwhelming hospital emergency rooms and making accurate anamnesis, physical examination and appropriate reporting impossible. The environment is not conducive to clinical studies, even observational ones, that could adequately document case reports, let alone estimate rates of transfusion transmission. During epidemics, blood is diverted to the many cases with dengue hemorrhagic fever and dengue shock syndrome. Thus, many of the patients that receive blood transfusions during the height of the epidemic are already infected with dengue. Postponement of other hospital activities like elective surgeries reduces the opportunity for transmission of infection to naïve patients by transfusions. Lookback is rarely performed in developing countries because of limited resources. The availability of potential donor screening assays for DENV and CHIKV RNA is welcome and reassuring. Available research assays could be quickly developed to address epidemics occurring in non-endemic areas.
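As a purely illustrative aside, not part of the original article, the kind of back-of-envelope arithmetic that underlies this screening decision can be sketched in a few lines of code. Every parameter value below is a hypothetical placeholder; in practice the prevalence of viremia, per-unit transmissibility and disease penetrance would have to come from studies such as those cited here.

```python
# Rough sketch: expected symptomatic transfusion-transmitted (TT) arbovirus cases
# that universal donor screening might prevent during one epidemic season.
# Every number here is an assumed placeholder, not data from the article.

def expected_tt_cases(donations, viremia_prevalence, transmission_rate, penetrance):
    """Expected symptomatic TT cases among recipients of unscreened donations."""
    viremic_units = donations * viremia_prevalence            # units collected during viremia
    infected_recipients = viremic_units * transmission_rate   # recipients actually infected
    return infected_recipients * penetrance                   # infections that become disease

# Hypothetical figures for a single blood service during an outbreak:
donations = 100_000          # donations collected over the epidemic period (assumed)
viremia_prevalence = 0.003   # fraction of donations drawn from viremic donors (assumed)
transmission_rate = 0.5      # probability a viremic unit infects its recipient (assumed)
penetrance = 0.2             # probability an infected recipient develops disease (assumed)

cases = expected_tt_cases(donations, viremia_prevalence, transmission_rate, penetrance)
print(f"Expected symptomatic TT cases without screening: {cases:.0f}")
# With these made-up inputs: 100,000 x 0.003 x 0.5 x 0.2 = 30 cases, the figure that
# would have to be weighed against the cost and donor loss of universal screening.
```

Re-running such a sketch across plausible ranges for each input shows how strongly the answer depends on local epidemiology, which is exactly why a setting-specific analysis is needed before committing resources to donor screening.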
However, the need for implementation of donor screening assays for DENV or CHIKV is questionable. Epidemics in tropical areas affect tens of thousands of individuals and overwhelm the healthcare system. All resources and efforts are directed to the sick population. The value of implementation of donor screening or other high-cost prevention measures to protect blood safety in those areas would require careful consideration, taking into account the prevalence of viremia in donors, transmission rates, and disease penetrance in infected recipients (18), as in the rough sketch above. It could be argued that screening assays for DENV and CHIKV would be beneficial for the qualification of travelers to endemic areas as blood donors. The actual benefit of such screening is unclear because many of these potential donors would be deferred because of malarial risk. DENV and CHIKV are clear examples of situations where the application of the Precautionary Principle should be carefully analyzed, taking into account donor loss vs. blood safety. Would it be appropriate to steer some of the very limited resources available in countries with low HDI to a few blood recipients, in the absence of a clear idea about the frequency of transmission of these viruses? Finally, we hope that public health authorities, regulatory agencies, blood banking organizations, and manufacturers of products all support and invest in the development of technologies that may be useful for viral inactivation of all cellular components. Pathogen reduction is a more generic and proactive approach to addressing risks associated with arboviruses, precluding the need for implementation of donor screening assays. The example of clearance of viruses by plasma fractionation and viral inactivation procedures is remarkable, and should encourage further pursuit of methodology applicable to cellular components. It would address WNV, DENV, CHIKV, T. cruzi, plasmodia, and other reemerging agents such as yellow fever virus, which is reappearing in South America both in wild monkeys and in humans, with several reported human deaths (19, 20).
References
1. Centers for Disease Control. Update on AIDS among patients with hemophilia A. MMWR 31:644-6, 1982.
2. http://www.cdc.gov/mmwr/PDF/wk/mm5736.pdf, last accessed on April 15, 2009.
3. Bihl F et al. Journal of Translational Medicine 2007, 5:25. doi:10.1186/1479-5876-5-25 (http://www.translational-medicine.com/content/5/1/25), last accessed on April 15, 2009.
4. http://www.who.int/bloodsafety/global_database/en/SumRep_English.pdf, last accessed on April 15, 2009.
5. http://books.nap.edu/openbook.php?isbn=0309047412, last accessed on April 15, 2009.
6. Bianco C and Rios M. HIV transmission by blood transfusion. In "Blood Safety and Surveillance". Linden JV and Bianco C, eds. Marcel Dekker, Inc., New York, pp 251–278, 2001.
7. Slovic P. Perception of Risk. Science 236:280-285, 1987.
8. http://www.unep.org/Documents.multilingual/Default.asp?DocumentID=78&ArticleID=1163, last accessed on April 15, 2009.
9. http://ec.europa.eu/dgs/health_consumer/library/pub/pub07_en.pdf, last accessed on April 15, 2009.
10. van der Meulen KM et al. West Nile virus in the vertebrate world. Arch Virol 150:637–657, 2005.
11. Gubler DJ. The Continuing Spread of West Nile Virus in the Western Hemisphere. Clin Inf Dis 45:1039–46, 2007.
12. Grinev A et al. Variability of West Nile Virus in US Blood Donors, 2002-2005. Emerg Infect Dis 14:436-444, 2008.
13. Pealer LN, et al. Transmission of West Nile virus through blood transfusion in the United States in 2002. N Engl J Med 349:1236-45, 2003.
14. Schmunis GA.
Trypanosoma cruzi, the etiologic agent of Chagas' disease: status in the blood supply in endemic and non-endemic countries. Transfusion 31:547, 1991.
15. http://www.fda.gov/ohrms/dockets/ac/09/briefing/2009-4428B1-3.htm, last accessed on April 15, 2009.
16. A Single Mutation in Chikungunya Virus Affects Vector Specificity and Epidemic Potential. PLoS Pathog 3(12):e201. doi:10.1371/journal.ppat.0030201, 2007.
17. Linnen J, et al. Dengue viremia in blood donors from Brazil, Honduras and Australia. Transfusion 48:1348-1354, 2008.
18. Bianco C. Dengue and Chikungunya viruses in blood donations: risks to the safety of the blood supply? Transfusion 48:1279-1281, 2008.
19. Allain JP, Bianco C, Blajchman MA, Brecher et al. Protecting the blood supply from emerging pathogens: the role of pathogen reduction. Transf Med Rev 19:110-126, 2005.
20. http://www.promedmail.org/pls/otn/f?p=2400:1001:2160992847569112::NO::F2400_P1001_BACK_PAGE,F2400_P1001_PUB_MAIL_ID:1000,77042
<urn:uuid:3bc152b3-3e72-4bfe-b2fd-1e532b2331d9>
{ "dump": "CC-MAIN-2017-47", "url": "http://www.sets.es/index.php/congresos/congresos-anteriores/2009-tarragona/128-leccion-conmemorativa-2009-dr-celso-bianco", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806842.71/warc/CC-MAIN-20171123142513-20171123162513-00336.warc.gz", "language": "en", "language_score": 0.9359052181243896, "token_count": 5382, "score": 3.0625, "int_score": 3 }
Many people have asked me, why do you ask where gorillas get their protein, when our bodies and our body chemistry more closely resemble those of chimpanzees? My answer is that gorillas are much bigger and more powerful than chimpanzees. Last night, I saw a museum exhibit that compared a gorilla skull to a chimpanzee skull and a human skull. (They might have been models. It was hard to tell.) The gorilla skull was huge! The chimpanzee skull was about the same size as a human skull. The other reason is that gorillas eat a much more strictly plant-based diet. Chimpanzees hunt once in a while, and they often eat their kill. Even so, they still eat a lot less meat than just about any human population. Nevertheless, I was afraid that the fact they eat a little bit of meat now and then would muddy the waters. My point is this. Most of the really big and powerful land animals got big and powerful by eating plants. They don’t worry about getting a protein deficiency on a plant-based diet, and neither should you. (Image courtesy of Mahlatini Luxury Safari, https://www.mahlatini.com/gorilla-trekking-safaris/)
<urn:uuid:8ed4a6ee-bb37-4976-a357-17b64b12ef9f>
{ "dump": "CC-MAIN-2018-30", "url": "http://gorillaprotein.com/2011/04/06/why-gorillas-why-not-chimpanzees/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589980.6/warc/CC-MAIN-20180718002426-20180718022426-00435.warc.gz", "language": "en", "language_score": 0.9741842150688171, "token_count": 255, "score": 2.59375, "int_score": 3 }
Previous Challenge Entry (Level 2 – Intermediate)
Topic: Autumn/Fall (08/27/09)
TITLE: Artistic Autumn
By Deborah Caruso
Autumn is when colors splash alongside country roads, multihued carpets are strewn on park trails and backyard scenes become works of art, all with a clear cerulean backdrop hanging behind in ideal contrast. Autumn is when our Creator's green masterpiece is replaced with fiery reds that pop amongst sunburst yellows, and many tints of orange blend harmoniously between the two. Autumn is when orange pumpkins, all in different proportions, lie in their private patches waiting to be picked, while their cousins, butternut and yellow squash, are hiding out in savory casserole dishes, corn is being hung out for show, colorful gourds are peeking over wicker baskets, and red, yellow and green apples are sitting contentedly together in plain wooden baskets. Autumn is when burgundy, yellow, and purple mums are set out in galvanized metal pots in front yards or on decks, and flowers like the purple alliums, bright anemones, and blue crocus are working together in unison to redecorate God's earth for this bright and cozy season.
Many people will visit faraway places just to witness this miraculous masterpiece of Creator God. The avid leaf watchers will drive to places like Galena and Gatlinburg. In Gatlinburg, they can gaze upon the Smokies, where the mountains come alive with a bounty of colors. They will get in their cars and drive, hop upon their bikes and ride, or get on their hiking shoes and hit the trails just to gaze on layers upon layers of Autumn's hues, and every time they come, they are never disappointed, because when the Divine Artist gets out His paint brush, a masterpiece is inevitable. Picasso knows it; Van Gogh knows it, as does Raphael.
<urn:uuid:cae4d267-df84-4cbf-87a5-1a09946085ed>
{ "dump": "CC-MAIN-2016-36", "url": "http://www.faithwriters.com/wc-article-level2-previous.php?id=31371", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295103.11/warc/CC-MAIN-20160823195815-00260-ip-10-153-172-175.ec2.internal.warc.gz", "language": "en", "language_score": 0.9297906160354614, "token_count": 494, "score": 2.625, "int_score": 3 }
A view over the Maltese landscape from the walls of the fortified city of Mdina.
Mdina, Città Vecchia, or Città Notabile, was the old capital of Malta. It is a medieval walled town situated on a hill in the centre of the island. Punic remains uncovered beyond the city's walls suggest the importance of the general region to Malta's Phoenician settlers. Mdina is commonly called the "Silent City" by natives and visitors. The town is still confined within its walls, and has a population of just under three hundred, but is contiguous with the village of Rabat, which takes its name from the Arabic word for suburb, and has a population of over 11,000.
Evidence of settlements in Mdina goes back to over 4000 BC. It was possibly first fortified by the Phoenicians around 700 BC, because of its strategic location on one of the highest points on the island and as far from the sea as possible. When Malta was under the control of the Roman Empire, the Roman Governor built his palace there. Legend has it that it was here, in around 60 CE, that the Apostle St. Paul lived after his shipwreck on the islands. Mdina owes much of its present architecture to the Arab period, from 870 until the Normans conquered Malta in 1091. They surrounded the city with thick defensive fortifications and a wide moat, separating it from its nearest town, Rabat.
Critiques
emka (76433) 2014-08-25 0:16
Hello Stephen, After detailed views from Mdina, now we can see the wide panorama. Excellent view of the Maltese landscape. Warm regards, Malgo
carlo62 (30583) 2014-08-25 0:38
Magnificent panorama, the view is splendid and there is a lot of depth. The colours of the countryside, with the sea in the background, are beautiful.
Royaldevon (28449) 2014-08-25 1:45
I think this was the next, natural step, after seeing the view up to Mdina and detailing the streets inside Mdina; now we see how Mdina relates to the surrounding landscape. I think you have had a problem with harsh light, but when you are on holiday, you cannot choose exactly which time of the day you are going to photograph. You have given us a well composed and very extensive view from the height of the battlements. Have a lovely day,
Fis2 (83990) 2014-08-25 2:05
Gorgeous green, the view is impressive. I like the frame and colors. A very nice picture.
Noel_Byrne (22787) 2014-08-25 2:30
Is this an older shot? The color saturation is beautiful but reminds me a little of images from the 80's. Expansive view leading off to the sea in the distance; this is one of those shots I would like to spend ages studying to find every little detail. Thanks as always
Gerrit (46884) 2014-08-25 4:42
They could oversee the whole region from here. What a great viewpoint and view. Remarkable light tones.
pierrefonds (56104) 2014-08-25 5:00
The expanse of the fields gives depth to the composition. The point of view lets us see the details of the fields. The good light brings out the colours. Have a good day.
Nicou (119507) 2014-08-25 5:16
What a view and what a superb image, capture and composition; what immensity, and that ochre with the green, what a panorama. Bravo and best wishes
dkmurphys (48088) 2014-08-25 5:27
Genuine Maltese landscape. A spectacular panorama, well taken. Have a good week.
Sergiom (57821) 2014-08-25 6:09
This photo has a texture very much like a film photograph that has been digitised. The eye is carried far into this fantastic landscape seen from above.
ourania (29417) 2014-08-25 6:16
This Maltese landscape looks so Mediterranean and so bright in the sunshine.
Your picture includes a lot of characteristic features; it's very sharp and interesting. The vegetation, the dry atmosphere, the summery colours have been captured very aptly. The depth is superb and this looks like an aerial view. Congratulations and thank you! All the best, have a great day,
photographer_sg (3533) 2014-08-25 10:46
What a view! The patches of vivid green stand out like emeralds. The expansive landscape is so inviting and full of intricate details. I agree with Noel, this picture can be gazed at for hours. Thanks for sharing and have a great day.
PiotrF (6788) 2014-08-25 16:51
It is a beautiful place. Your picture is very well taken. Excellent perspective and depth. Good composition and quality, fine presentation. Thank you for the detailed notes.
Cricri (104461) 2014-08-26 9:50
Great panorama from the fortifications of the city of Mdina; the point of view, the light and the details are of course sublime.
jemaflor (83014) 2014-08-26 23:20
A nice panorama on the landscape, good result with the contrast between stone and green fields, tfs.
miumiu (5733) 2014-08-27 1:55
Absolutely wonderful view of this unique Maltese landscape. I imagine that I am standing here... Brilliant feeling. This is typical: if you are at a higher point of Malta, you can see the whole island and the sea in the distance. I love this picture very much!
ikeharel (55332) 2014-08-31 2:58
Green, yellow and white tones mingle beautifully on the wide landscape, Stephen, contrasted by the blue sky and the sea. Early summer brought an arid texture to the fields, which reminded me of Sicilia, which is similar and not so far from Malta. My best regards,
- Copyright: Stephen Nunney (snunney) (81849)
- Genre: Places
- Medium: Color
- Date Taken: 2014-06-00
- Categories: Nature
- Camera: Canon EOS 1100D, Canon 18-55mm EF-S f/3.5-5.6 IS
- Exposure: f/22, 1/50 seconds
- Photo Version: Original Version
- Date Submitted: 2014-08-25 0:07
<urn:uuid:d20fedea-8d0c-4d8a-ab67-c4c12d29e29b>
{ "dump": "CC-MAIN-2014-49", "url": "http://www.trekearth.com/gallery/Europe/Malta/South/Malta/Mdina/photo1470633.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009777.87/warc/CC-MAIN-20141125155649-00193-ip-10-235-23-156.ec2.internal.warc.gz", "language": "en", "language_score": 0.8149856328964233, "token_count": 1506, "score": 2.78125, "int_score": 3 }
Common names: Copper Canyon Daisy, Lemmon's marigold, mountain marigold, bush marigold
Botanical names: Compositae, Tagetes lemmonii
General information: A member of the Aster family, the Copper Canyon Daisy reaches 4 feet at maturity – it can reach 6 feet tall under good growing conditions. A native of the southwestern United States, this sprawling perennial daisy has a distinctive, pungent fragrance that not everyone likes, according to Floridata. The Copper Canyon daisy has golden-yellow blooms that are attractive to butterflies and bees. It is a good plant for areas where deer are known to be a problem — it is deer proof! Copper Canyon daisy dies back in winter and comes back from its roots in spring.
Size: 4 feet tall x 4 feet wide bush at maturity
Flowers: Golden yellow, about 1 to 2 inches across
Bloom time: Fall until frost in North Texas
Leaves: Lacy compound leaves 2 to 6 inches long, with serrated leaflets.
Pests and Disease Problems: Deer proof
Growing in North Texas
Easy to grow in North Texas because it tolerates high, sustained summer heat. It prefers full sun, but accepts a bit of shade. It is drought tolerant and wants well-drained soil. It will grow successfully in some of the more alkaline soils in North Texas. Be sure to keep the new plant well watered – two times per week – until it is established. After the Copper Canyon daisy is established, it requires little supplemental water.
USDA Natural Resources Conservation Service: Tagetes lemmonii
Keywords (tags): perennial, flowering, shrub, native, deer proof, butterflies, full sun, low water
<urn:uuid:a9e49fa9-97d4-4f8f-b5b8-09f37b733aae>
{ "dump": "CC-MAIN-2015-14", "url": "http://dcmga.com/north-texas-gardening/perennials/master-gardener-favorites/copper-canyon-daisy/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298177.21/warc/CC-MAIN-20150323172138-00147-ip-10-168-14-71.ec2.internal.warc.gz", "language": "en", "language_score": 0.9076881408691406, "token_count": 359, "score": 3.078125, "int_score": 3 }
The Story of the Three Bears "The Story of the Three Bears" is a literary fairy tale. It was written by Robert Southey and first published in 1837 in a collection of his essays and stories. Southey's story is about an ugly old woman who enters the house of three bachelor bears during their absence. She eats their food, breaks a chair, and sleeps in a bed. She runs away when discovered. In time, the three bachelor bears became Papa, Mama, and Baby Bear. The old woman became a little girl called Goldilocks. The story supports several interpretations. It has been adapted to animated movies, a live action movie, and a short opera. Story[change | change source] Three male bears—"a Little, Small, Wee Bear ... a Middle-sized Bear ... and a Great, Huge Bear"— live in a house in the woods. They each have a porridge pot, a chair, and a bed. One morning, they take a walk in the woods while their porridge cools. A little old woman—"an impudent, bad old Woman"—enters the house during the bears' absence. She eats the little bear's porridge, breaks his little chair, and falls asleep in his little bed. The bears come home, and discover the old woman asleep. She wakes, sees the bears, jumps out the window, and fall to her death—never to be seen again. Origin[change | change source] "The Story of the Three Bears" was written by English writer Robert Southey. It was published in 1837 his 4-volume collection of essays and stories called The Doctor. Southey probably heard a version of the story as a boy from his uncle William Tyler. It was this version that was probably the basis for the story Southey included in The Doctor. It is unknown where or how his uncle learned the story. Southey had known the story for a long time before he published it. He had been telling it to family and friends since 1813. A very similar version of the story predates Southey's published one of 1837. In 1831, a lady named Eleanor Mure wrote the story in rhyming verse for her nephew's fourth birthday. In both Southey's and Mure's versions, the character who enters the bears' house is an ugly old woman. The two versions differ only in some small details: Southey's bears have porridge, for example, while Mure's bears have milk. The same year Southey published the story, a rhyming version was written by William Nicol. Southey wrote on 3 July 1837 that he had received Nicol's version. He liked it. He thought it would bring the story more attention from children. Nicol's version was published in 1841 with illustrations. Some[who?] think the story of the three bears resembles parts of "Snow White", or a story from Norway about a princess and three princes dressed in bear skins. Charles Dickens included a story about goblins in his 1865 novel Our Mutual Friend that also resembles "The Three Bears". A story called "Scrapefoot" may be the original for "The Three Bears". This story has a fox (not a human) as the intruder in the bears' house. Goldilocks[change | change source] About 12 years after Southey's story was published, writer Joseph Cundall changed the old woman into a little girl in his book Treasury of Pleasure Books for Young Children. Cundall made this change because there were many children's books about old women at the time. Once the little girl entered the story, she stayed there. She was known over the years as Silverhair, Silverlocks, Goldenlocks, and other names. She finally became Goldilocks sometime in the early 20th century. In time, the three male bears of Southey's original became Papa, Mama, and Baby Bear. 
What was once a scary little story about a nosy, ugly old woman and three male bears became a cozy little story about a nosy, pretty little girl and a family of bears. In versions of the story from the Victorian Era, Southey's "[T]here she sate till the bottom of the chair came out, and down came her's, plump upon the ground" was changed to read "and down she came" instead. All mention of the human "bottom" was wiped out. Interpretations[change | change source] In The Annotated Classic Fairy Tales (2002), Harvard professor Maria Tatar writes that the story is sometimes regarded as a cautionary tale. It warns children about the dangers of wandering off into unknown places. She points out that the story is often presented today as one about what is "just right" for oneself. In earlier times however, the story was about interfering with someone else's property. In The Uses of Enchantment (1976), child psychologist Bruno Bettelheim discusses Goldilock's struggle to grow beyond her Oedipal issues to confront adolescent identity problems. The story does not encourage children to solve the problems of growing up, Bettelheim writes, and does not end with the traditional "happily ever after" promise for those who solve their Oedipal issues. He believes the tale does not allow the child reader to gain emotional maturity. Tatar writes, "[Bettelheim's] reading is perhaps too invested in instrumentalizing fairy tales, that is, in turning them into vehicles that convey messages and set forth behavioral models for the child. While the story may not solve oedipal issues or sibling rivalry as Bettelheim believes "Cinderella" does, it suggests the importance of respecting property and the consequences of just 'trying out' things that do not belong to you." The story supports a Freudian anal stage interpretation. In ""The Three Bears": Four Interpretations" (1977), Professor Emeritus of the University of California, Davis Alan C. Elms makes such an interpretation and points to the story's emphasis upon orderliness—one of the character traits Freud associated with the anal stage of human development—as compelling evidence. Elms traces the story's anality to Southey and to his dirt-obsessed aunt who passed her obsession on to him in "somewhat milder form". Adaptations[change | change source] The Walt Disney and Metro-Goldwyn-Mayer studios have both made animated movies about the Three Bears—Disney in 1922 and MGM in 1939. Coronet Films made a short live action movie in 1958 that had real bears and a real child playing the characters. Faerie Tale Theatre made a television version in 1984. It stars Tatum O'Neal as Goldilocks. Kurt Schwertsik wrote a 35-minute opera called Roald Dahl's Goldilocks. Baby Bear is accused of assaulting Miss Goldie Locks. The tables are turned when the defense shows that the bears have had a lot of touble because of that "brazen little crook" Goldilocks. The was first presented in 1997 at the Glasgow Royal Concert Hall. Notes[change | change source] - Ober 1981, pp. 318–26 - Tatar 2002, p. 245 - Ober 1981, p. 33 - Ober 1981, p. 34 - Opie 1992, p. 199 - Dorson 2001, p. 94 - Curry 1921, p. 65 - Ober 1981, p. 47 - Opie 1992, p. 200 - Ober 1981, p. xii - Ober 1981, pp. 109–10 - Tatar 2002, p. 246 - Tatar 2002, p. 251 - Schultz 2005, p. 93 - Bettelheim 1976, pp. 215–24 - Tatar 2002, p. 246 - Elms 1977, pp. 264–69 - Roald Dahl's Goldilocks References[change | change source] - Booker, Christopher (2005). "The Rule of Three". The Seven Basic Plots: Why We Tell Stories. 
Continuum International Publishing Group. ISBN 0-8264-5209-4. - Briggs, Katherine Mary (2002) . British Folk Tales and Legends. Routledge. ISBN 0-415-28602-6. - "Coronet: Goldilocks and the Three Bears". Internet Archive. Retrieved 2009-02-21. - Curry, Charles Madison (1921). Children's Literature. Rand McNally & Company. - "Disney: Goldilocks and the Three Bears". The Encyclopedia of Disney Animated Shorts. Retrieved 2009-02-21. - Dorson, Richard Mercer (2001) . The British Folklorists. Taylor & Francis. ISBN 0-415-20426-7. - Elms, Alan C. (July–September 1977). ""The Three Bears": Four Interpretations". The Journal of American Folklore 90 (357). - "MGM: Goldilocks and the Three Bears". Retrieved 2010-11-12. - Ober, Warren U. (1981). The Story of the Three Bears. Scholars Facsimiles & Reprints. ISBN 0-8201-1362-X. - Opie, Iona; Opie, Peter (1992) . The Classic Fairy Tales. Oxford University Press. ISBN 0-19-211559-6. - "Roald Dahl's Goldilocks (1997)". Retrieved 2009-01-03. - Schultz, William Todd (2005). Handbook of Psychobiography. Oxford University Press. ISBN 0-19-516827-5. - Seal, Graham (2001). Encyclopedia of Folk Heroes. ABC-CLIO. ISBN 1-57607-216-9. - Tatar, Maria (2002). The Annotated Classic Fairy Tales. W.W. Norton & Company. ISBN 0-393-05163-3. Other websites[change | change source] |Wikisource has original writing related to this article:| |Wikisource has original writing related to this article:| |Wikimedia Commons has media related to The Three Bears.|
<urn:uuid:e015747b-aff2-4fc8-a135-a9af93a1dff9>
{ "dump": "CC-MAIN-2019-18", "url": "https://simple.wikipedia.org/wiki/The_Story_of_the_Three_Bears", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578533774.1/warc/CC-MAIN-20190422015736-20190422041736-00337.warc.gz", "language": "en", "language_score": 0.9218772649765015, "token_count": 2121, "score": 2.75, "int_score": 3 }
A recent review of deep-sea fishes captured deeper than 200 m off greater New England, from the Scotian Shelf at 44°N to the southern New England Shelf at about 38°N, documented 591 species. Subsequent trawling activity and reviews of deep-sea taxa occurring in the area have revealed that an additional 40 species inhabit the deep sea off New England. Thirty-two of these new records were captured in the course of 44 bottom trawls and 94 mid-water trawls over or in the proximity of Bear Seamount (39°55'N, 67°30'W). Five of the 40 species have been described as new to science, at least in part from material taken in the study area. In addition to describing such information as specimen size and position, depth, and date of capture, errors made in the previous study of deep-sea fishes in the area are identified and corrected.
Karsten E. Hartel, Christopher P. Kenaley, John K. Galbraith, and Tracey Sutton. 2008. Additional Records of Deep-Sea Fishes from Off Greater New England. Northeastern Naturalist (3): 317-334. https://nsuworks.nova.edu/occ_facarticles/535.
<urn:uuid:cb38c2c9-0677-4ac1-82e4-53b84266a50d>
{ "dump": "CC-MAIN-2022-27", "url": "https://nsuworks.nova.edu/occ_facarticles/535/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00193.warc.gz", "language": "en", "language_score": 0.8820409774780273, "token_count": 336, "score": 2.828125, "int_score": 3 }
Scientists stunned as grey whale sighted off Israel (Jonah?)
Posted on 05/11/2010 8:51:49 PM PDT by OldDeckHand
JERUSALEM (AFP) - The appearance of a grey whale off the coast of Israel has stunned scientists, in what was thought to be the first time the giant mammal has been seen outside the Pacific in several hundred years.
The whale, which was first sighted off Herzliya in central Israel on Saturday, is believed to have travelled thousands of miles from the north Pacific after losing its way in search of food.
"It's an unbelievable event which has been described as one of the most important whale sightings ever," said Dr Aviad Scheinin, chairman of the Israel Marine Mammal Research and Assistance Center which identified the creature.
A population of grey whales once inhabited the north Atlantic but became extinct in the 17th or 18th centuries and has not been seen there since. The remaining colonies live in the western and eastern sectors of the north Pacific.
"What has amazed the entire marine mammal research community is there haven't been any grey whales in the Atlantic since the 18th century," he said.
Scheinin said the creature, a mature whale measuring some 12 metres (39 feet) and weighing around 20 tonnes, probably reached the Atlantic through the Northwest Passage, an Arctic sea route that connects the Pacific and Atlantic oceans and is normally covered with ice.
(Excerpt) Read more at sg.news.yahoo.com ...
It's a whale's life. Not a bad life really, swim all over the world in the deep blue ocean.
Wow! This is pretty fascinating. :) Thanks!
They say several hundred years since a whale like this has been seen near Israel? Does that mean that they had records of whales being in that area 700-800 years ago?
It's Bush's fault.
It is Hillary and her peace mission.
I bet the whale did a big sneak through the Panama Canal when no one was looking!
No, I'm pretty sure it just means that there are records - ship's whaling logs, probably - going back to the late 17th century that noted gray whale sightings/killings in the Atlantic. This isn't the first sighting off Israel in 300ish years, it's the first sighting anywhere in the Atlantic, to include the Mediterranean, which of course is only accessible to whales from the Atlantic.
Maybe the whale is waiting to give somebody a ride Jonah style!
There are so many of those useless stinking sacks of blubber in the Pacific they had to go somewhere! Some of them ended up in Hollywood.
Couldn't be Jonah. The Bible says Jonah was swallowed by a fish. (The Hebrew word means fish.)
Dang, dude, my eyes!
Right. He "lost his way". Must be a Democrat whale.
Wonder if the whale will make it through the Suez? If it does, then those damn Somali pirates will probably make whale burgers out of it before it can make it to the Indian Ocean.
Shame on you for making fun of our narwho-Americans.
I love how they try and spin global warming into it as the only way a whale could have made it there. Maybe it swam full south and back up along Africa. It could happen :)
It also could have swum right through the Suez Canal. I've been through that canal, there are no locks. It is one level from start to finish, with absolutely no gates or locks of any kind. IMHO, a whale - if he was curious enough - could easily make that journey and if he was a younger whale, probably without detection.
Gray Whale Spotted on Wrong Side of World... (Israel)
I read the other day that the word in Hebrew is the same word that is used to describe all things that live in the sea.
I believe the word is “dag” (IIRC)? No problem. :)
<urn:uuid:177589dd-761e-4a57-964f-925ae1a50ad3>
{ "dump": "CC-MAIN-2014-52", "url": "http://www.freerepublic.com/focus/f-chat/2511695/posts", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447556094.122/warc/CC-MAIN-20141224185916-00038-ip-10-231-17-201.ec2.internal.warc.gz", "language": "en", "language_score": 0.9682841300964355, "token_count": 842, "score": 2.625, "int_score": 3 }
Liberian President Ellen Johnson Sirleaf asked two "million dollar" questions today in her op-ed for the London Times: "Why should Western countries continue to support developing nations at a time of downturn, when these funds might be spent at home?" and "What benefit is there when the flow of aid never seems to end?"
The first female African head of state answers them with these thoughtful remarks:
To many people in Africa, the value of aid is obvious: at its most basic, such as through the supply of food or water, lives are saved. More sophisticated use of aid, such as improving literacy and developing legal systems that underpin the rule of law, can foster economic opportunities. This creates not only a better life for those who receive it, but business opportunities for companies from donor nations.
The debate over aid needs to mature, so that instead of hysteria about whether emerging nations deserve development funds, we have a serious discussion about how best to create new, stable trading partners that can create opportunities and jobs in emerging and donor countries.
She uses her own country as an example of how Africa is putting these ideas into practice. To help reduce aid to a minimum over time, Sirleaf is proposing a new law that sets quotas for local jobs in mining industries, ensuring that the majority, whether manual or management, are held by Liberians.
One thing is clear: aid is a crucial part of helping developing countries become self-sufficient. She ends her piece with this message:
If development aid were to halt, then the danger is that many countries such as Liberia would begin to slip backwards. There is a moral obligation to make sure that this does not happen, but there is an economic interest, too.
Read the full article here. (Subscription required)
<urn:uuid:ec2f2726-c752-4723-97e3-c65586185698>
{ "dump": "CC-MAIN-2016-36", "url": "https://www.one.org/us/2012/10/31/ellen-johnson-sirleaf-aid-is-not-an-alternative-to-self-sufficiency/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982924728.51/warc/CC-MAIN-20160823200844-00228-ip-10-153-172-175.ec2.internal.warc.gz", "language": "en", "language_score": 0.9659383893013, "token_count": 368, "score": 2.671875, "int_score": 3 }
Cherry Picker Engine
Cherry pickers are also known as boom lifts or cherry picker lifts. This equipment is seen in many different places, and it is mostly used to replace light bulbs on the streets, repair Internet wires, and install telephone cables. Firefighters also make use of this equipment as an alternative to a ladder. In fact, cherry pickers are great substitutes for ladders since they are able to reach higher locations and are safer to use. Indeed, the functions of a cherry picker have evolved to serve varied purposes, and the machine has come a long way from simply helping people pick fruit from trees.
Basic Types of Cherry Pickers
There are several types of cherry pickers, and these are:
• Unpowered Lifts – This type of boom lift does not require an internal power source, unlike electrically powered lifts. Basically, this type of machine is simply pushed into place, and it usually comes in the form of a personnel lift or a small scissor lift. Today, there are large unpowered lifts, but they require a crane to be able to move.
• Self-Propelled Lifts – This type of equipment is capable of moving more efficiently compared to the first type. In fact, the larger models of self-propelled lifts are mostly legal to be driven on the roads. Hence, these machines never require extra resources or time to perform their functions.
• Vehicle Mounted Lifts – This type of equipment is typically attached and mounted to certain vehicles such as trucks, railway cars, vans, and other vehicles.
The Most Common Engine Type
Most cherry pickers or boom lifts are powered by systems of levers and require some kind of engine or motor in order for the equipment to be operational. Nowadays, there are several contemporary engines likely to be compatible with any type of cherry picker. These modern engines include diesel-operated, petrol-operated, and electrically operated engines. The most common type is the diesel-operated engine, since it requires the cheapest type of fuel. Yet diesel engines tend to be quite noisy and produce a lot of fumes compared to petrol-operated cherry picker engines. Hybrid engines used as a power source are likewise available today. Though a hybrid engine may be costly, it is capable of switching between diesel and electricity as its power sources.
Buying or Renting a Cherry Picker Engine
Certainly, there are numerous cherry picker suppliers in the United States alone. But purchasing this equipment, or merely a cherry picker engine, is a huge investment, and many people might not be able to afford its high cost. That is why hiring cherry pickers from one of several companies and manufacturers is a practical alternative. Construction companies, suppliers, and other industrial businesses such as JLG, UpRight, SkyTrak, and Genie offer several types of cherry pickers. There are also plenty of choices when it comes to modifications, accessories, types, and designs of cherry pickers. Apart from providing a reliable cherry picker engine, suppliers also offer additional features for this equipment, such as control systems and power panels.
<urn:uuid:9a23a036-3ca1-4459-b41f-df382058a820>
{ "dump": "CC-MAIN-2018-09", "url": "http://cherry-picker-hire.com/cherry-picker-engine.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00064.warc.gz", "language": "en", "language_score": 0.9673325419425964, "token_count": 629, "score": 2.546875, "int_score": 3 }
On this President's Day, let's revisit America's founding document: the Declaration of Independence.

The Declaration of Independence includes broad ideological statements such as "We hold these truths to be self-evident, that all men are created equal," and claims that the British have violated "certain unalienable rights." But were the real reasons for the American Revolution economic? According to Lynd and Waldstreicher, the answer is yes. Wilson Quarterly reports:

"Scholars tend to view the ideological arguments for independence as building to a critical point and preoccupying the colonists thereafter. That's inaccurate, Lynd and Waldstreicher write: From the mid-18th century right up to the signing of the Declaration, Americans objected to a myriad of British imperial policies principally on economic grounds. The antitax sentiment of the Boston Tea Party in 1773 is well known, but Americans also protested British attempts to requisition resources during the Seven Years' War (1756–63), imperial currency manipulation that left the colonies strapped, and prohibitions on trade with the French West Indies, along with many other policies."

The authors claim that, to make the strongest case for their course of action, these early Americans subsumed their economic frustrations within a broader argument for sovereignty based on the violation of rights.

In today's presidential races, we also see economic arguments couched in ideological terms. President Obama's case for raising taxes is framed in terms of fairness: he says it's the 'height of unfairness' that the very wealthy can pay a lower percentage of their income in federal taxes than many in the middle class. Tea Party candidates who argue for lower taxes do so using the language of "fiscal responsibility, constitutionally limited government, and free market economic policies." The take-away is that political rhetoric – both now and in colonial times – is often used to justify fundamentally economic arguments.
<urn:uuid:85dfd2a6-d547-4280-8ed0-dcbc7afd03de>
{ "dump": "CC-MAIN-2016-30", "url": "http://healthcare-economist.com/2012/02/20/american-declaration-of-independence-ideology-or-economy/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823996.40/warc/CC-MAIN-20160723071023-00146-ip-10-185-27-174.ec2.internal.warc.gz", "language": "en", "language_score": 0.9307469129562378, "token_count": 410, "score": 3.75, "int_score": 4 }
Horsehead (right) and Flame (left) Nebulas Date Taken: 1/12/2010 Equipment: 5" Televue Telescope with SBIG 10XME camera; Exposures: RGB-24 min each, Ha-32 min, Lum-12 min. Processing: MaximDL and Photoshop Horsehead Nebula (right) and Flame Nebula (left) - 1,500 light years from Earth. The Horsehead Nebula (also known as Barnard 33 in bright nebula IC 434) is a dark nebula in the constellation Orion. The nebula is located just below (to the south of) Alnitak, the star farthest left on Orion's Belt, and is part of the much larger Orion Molecular Cloud Complex. The Horsehead Nebula is approximately 1500 light years from Earth. It is one of the most identifiable nebulae because of the shape of its swirling cloud of dark dust and gases, which is similar to that of a horse's head when viewed from Earth. The shape was first noticed in 1888 by Williamina Fleming on photographic plate B2312 taken at the Harvard College Observatory. The red glow originates from hydrogen gas predominantly behind the nebula, ionized by the nearby bright star Sigma Orionis. The darkness of the Horsehead is caused mostly by thick dust, although the lower part of the Horsehead's neck casts a shadow to the left. Streams of gas leaving the nebula are funneled by a strong magnetic field. Bright spots in the Horsehead Nebula's base are young stars just in the process of forming. The surrounding region also contains a multitude of different objects all unique in their own right. The bright emission nebula in the lower left is NGC 2024 (the Flame Nebula). Infrared studies have revealed a huge cluster of infant stars hidden behind the dust and gas of NGC 2024. The bright blue/green reflection nebula to the lower left of the Horsehead is NGC 2023. Interstellar dust reveals its presence by blocking light emitted from stars or nebulae behind it. Dust is composed mostly of carbon, silicon, oxygen and some heavier elements. Even organic compounds have been detected. Description Sources: Wikipedia and Robert Gendler
<urn:uuid:3d49f471-28df-4cee-8336-685b117b6030>
{ "dump": "CC-MAIN-2018-13", "url": "https://starmere.smugmug.com/Starmere-Nebula/i-hd25d7p/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646375.29/warc/CC-MAIN-20180319042634-20180319062634-00223.warc.gz", "language": "en", "language_score": 0.9195703864097595, "token_count": 456, "score": 3.625, "int_score": 4 }
By Jessica Mireille. Circuit. Published at Monday, December 18th 2017, 14:08:09 PM. A capacitor is a device that stores electrical energy. The most common form of capacitor is made of two parallel plates separated by a dielectric material. Charges of opposite polarity can be deposited on the plates, resulting in a voltage V across the capacitor plates. Capacitance is a measure of the amount of electrical charge required to build up one unit of voltage across the plates.

By Alix Loane. Car Wiring. Published at Saturday, January 06th 2018, 22:49:51 PM. As we stated, always use closed terminals. If your terminals have the plastic cover, remove that. Always solder the terminal where the wire end is installed. Never crimp the terminal and expect the wire to stay there forever. It won't.

By Sasha Sara. Diagram. Published at Saturday, January 06th 2018, 22:23:44 PM. Sometimes, to make schematics more legible, we'll give a net a name and label it, rather than routing a wire all over the schematic. Nets with the same name are assumed to be connected, even though there isn't a visible wire connecting them. Names can either be written directly on top of the net, or they can be "tags", hanging off the wire.

By Valentine Sybille. Diagram. Published at Saturday, January 06th 2018, 19:46:56 PM. Truly expansive schematics should be split into functional blocks. There might be a section for power input and voltage regulation, or a microcontroller section, or a section devoted to connectors. Try recognizing which sections are which, and following the flow of the circuit from input to output. Really good schematic designers might even lay the circuit out like a book, inputs on the left side, outputs on the right.

By Alix Loane. Diagram. Published at Saturday, January 06th 2018, 18:33:29 PM. If there's something on a schematic that just doesn't make sense, try finding a datasheet for the most important component. Usually the component doing the most work on a circuit is an integrated circuit, like a microcontroller or sensor. These are usually the largest component, oft-located at the center of the schematic.

By Jessica Mireille. Car Wiring. Published at Saturday, January 06th 2018, 18:20:29 PM. Plan out your wiring. Choose the path the wires will follow. Choose the locations of all of your switches, gauges, ignition box, battery, and charging posts. If your past layout was deficient, now is the time to rethink the entire plan.

By Valentine Sybille. Diagram. Published at Saturday, January 06th 2018, 17:30:41 PM. The unloaded PCB appears green because thin sheets of green plastic have been applied to both sides (otherwise the PCB would appear pale yellow). Called solder masks, these sheets cover all exposed metal other than the component pads and holes so that errant solder cannot inadvertently short (or electrically connect) the printed wires. All metal surfaces other than the exposed pads and holes (i.e., the wires) are underneath the solder mask. Not infrequently, blue or even red solder masks are used.

By Alix Loane. Diagram. Published at Saturday, January 06th 2018, 15:25:58 PM. A schematic shows connections in a circuit in a way that is clear and standardized. It is a way of communicating to other engineers exactly what components are involved in a circuit as well as how they are connected. A good schematic will show component names and values, and provide labels for sections or components to help communicate the intended purpose.
Note how connections on wires (or "nets") are shown using dots and non-connections are shown without a dot.
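The capacitance definition quoted in the capacitor excerpt above (the charge required to build up one unit of voltage across the plates) is simply the relation Q = C·V, and for two parallel plates the capacitance itself is C = ε₀εᵣA/d. A minimal numeric sketch of both relations follows; the plate area, gap, and dielectric constant used here are illustrative values, not figures from the original posts.

```python
# Illustrative worked example of Q = C*V and the parallel-plate formula.
# All component values below are assumed for demonstration purposes.
EPS_0 = 8.854e-12  # vacuum permittivity in farads per metre

def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS_0 * relative_permittivity * area_m2 / gap_m

def charge_for_voltage(capacitance_f, voltage_v):
    """Q = C * V: charge needed to raise the plates to a given voltage."""
    return capacitance_f * voltage_v

c = parallel_plate_capacitance(area_m2=0.01, gap_m=1e-4, relative_permittivity=4.7)
print(f"C = {c * 1e9:.2f} nF")                                   # about 4.2 nF
print(f"Q at 5 V = {charge_for_voltage(c, 5.0) * 1e9:.1f} nC")   # about 20.8 nC
```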
<urn:uuid:26ffd17d-a898-4930-a5aa-ed3eb1b4b7ec>
{ "dump": "CC-MAIN-2018-30", "url": "http://altrushare.com/tag/basic-electronic-projects-using-arduino/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00510.warc.gz", "language": "en", "language_score": 0.9172371625900269, "token_count": 856, "score": 3.6875, "int_score": 4 }
Guatemala Mayans: From victims of discrimination to perpetrators? Fearful of losing their culture and land, ethnic Maya people in Guatemala — who have faced centuries of discrimination themselves — drove out a group of 230 ultra-Orthodox Jews, experts say. The Jewish group’s departure from San Juan La Laguna, on the banks of Lake Atitlan some 200 kilometers (125 miles) from the capital Guatemala City, followed failed efforts reach a deal Wednesday. “We are very pleased with the decision made by that group to avoid conflicts with (local) people,” Miguel Vasquez, spokesman for the San Juan Council of Elders, told AFP by phone. Most members of the small Jewish community are from the United States, Israel, Britain and Russia, and around 40 are Guatemalan. Approximately half are children. Since October, the local indigenous population has accused the Orthodox Jews of discriminating against them and of violating Mayan customs. Maya elders also said the Jewish community sought to impose their religion and was undermining the Catholic faith predominant in the village. Rabbi Uriel Goldman, a representative of the Jewish group, told Prensa Libre newspaper his community had taken up residence temporarily in a Guatemala City hotel until it can find a place to relocate to in an outlying part of the capital area. – History repeats itself – Guatemala, a mountainous and scenic nation in Central America, cannot quite agree on how indigenous it is. The government insists 42 percent of citizens belong to ethnic Maya tribes, traditional farmers who mainly speak Maya languages; indigenous leaders insist they represent 60 percent of the 15 million Guatemalans. If the indigenous are right, they are starkly underrepresented in what is supposed to be a federal democracy. During three centuries of Spanish colonialism, Mayans were marginalized. After independence in the early 1800s, they spent almost another two centuries living in relative isolation, with a Spanish-speaking ruling class in Guatemala City who long referred to Mayans as dolts for not speaking Spanish. Yet many rural Guatemalans — most indigenous live in rural areas on their traditional land — have never been to school in any language. Instead of embracing equal rights, including to education, in a democratic era, as recently as the 1990s, the traditional elite opted not to embrace bilingualism; not to push to guarantee rural educational equality; and not to have a strategy for integrating indigenous people into national life. In Guatemala’s 36-year civil war that ended in 1996, some 200,000 people were killed — 93 percent of them at the hands of the government’s armed forces, according to a United Nations report. The report also found that 83 percent of victims were ethnic Mayans. “Having gone through history losing land to expropriation, which has contributed to their poverty, … and the state having been dysfunctional where they are concerned, really exacerbated” indigenous people’s reaction in this culture clash, Guatemalan Mental Health League chief Marco Garabito, a sociologist, told AFP. The likelihood that more members of the Jewish community would keep coming triggered the Mayans’ intense fears they could lose more of their lands. But on Friday, the Human Rights Prosecutor’s office said it regretted the “forced departure” of the Jewish group. “There can be no justification for … anyone claiming to have the right to threaten or expel foreigners from Guatemalan territory, or make them relocate,” it said in a statement. 
“The Jews are being attacked because of their ethnicity,” said anthropologist Estuardo Zapeta. “That’s discrimination, plain and simple.” – Unfamiliar orthodoxy – The Lev Tahor community was founded in 1980 by Israeli Shlomo Helbrans, seeking to practice an austere interpretation of Judaism. The community faced legal problems in the United States and Canada before running up against indigenous opposition in Guatemala. Canadian media reports also said red flags had been raised by the group’s treatment of children. But the group maintains its way of operating is nothing new. Maya leaders were confounded by the group’s customs and practices, offended that they did not respond when they were greeted by locals. “They don’t believe in Jesus Christ or the Virgin Mary. They do not work. They dress all in black. And they scare off tourists. They don’t sleep at night, and they are out walking around on the streets when we were asleep,” said the indigenous council’s Vasquez. The Jews said they were targeted by an “aggressive” subgroup of the Maya leadership. “We are peaceful people. And to avoid anything more regrettable, we decided to leave that town,” said Misael Santos, another representative of the Jewish group.
<urn:uuid:ee65b59d-5fe6-49d6-b4d4-570f3dc09ca3>
{ "dump": "CC-MAIN-2017-39", "url": "http://www.rawstory.com/2014/09/guatemala-mayans-from-victims-of-discrimination-to-perpetrators/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689411.82/warc/CC-MAIN-20170922235700-20170923015700-00304.warc.gz", "language": "en", "language_score": 0.9652279019355774, "token_count": 1009, "score": 2.953125, "int_score": 3 }
BLACK SOCIAL HISTORY Friday, 29 March 2013 BLACK SOCIAL HISTORY: EARLY BLACK FOOTBALL PLAYERS - BREAKING DOWN THE RACE BARRIER: In the North, Big Ten schools offered opportunities to black athletes like Slater, Jesse Owens at Ohio State and Ozzie Simmons of Iowa. "What Jackie Robinson was to baseball, at a much earlier date Duke Slater was to collegiate football," Chicago Sun-Times columnist Dick Hackenberg wrote on December 13th, 1960, two days after a dinner celebrating Slater's accomplishments. In 1951 Slater became the first Black player inducted into the inaugural College Football Hall of Fame. He was a three-time All-Big Ten tackle for Iowa and, according to Iowa athletics records, could grasp an opponent in a vice-like grip with his longer-than-normal arms.
<urn:uuid:272aa370-e170-4773-8aac-ef79a365ef71>
{ "dump": "CC-MAIN-2017-22", "url": "http://sittingbull1845.blogspot.com/2013/03/black-social-history-early-black.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608668.51/warc/CC-MAIN-20170526144316-20170526164316-00434.warc.gz", "language": "en", "language_score": 0.9520435333251953, "token_count": 168, "score": 2.84375, "int_score": 3 }
Does the internet help or hinder studying? A new survey indicates that technology is having both a positive and negative effect on university students' efficiency. The internet has proved to be an incredible research resource (Wikipedia, anyone?), but a recent survey commissioned by McGraw-Hill Education indicates that technology often hinders the study process. More than 500 university students participated in the survey, which sought to better understand students' study habits and the influence of learning technology on studying. In today's society, we are inundated by technology. Phones, MP3 players, Facebook, Twitter and Snapchat all vie for our attention, with the allure of being able to check up on what our friends are getting up to 24 hours a day. Nearly 40% of students reported that they find the internet, and social media networks in particular, are the biggest distraction when studying. Over half said they use computers, tablets and phones for non-study activities, such as texting friends or updating their online profiles, while they were supposed to be working. It's not all bad news though, the survey indicates that despite many students not always using technology to their advantage, it can, in fact, improve the study process. Those students who take advantage of the latest study technologies, such as adaptive learning programs, report that they feel less stressed and more productive. More than 50% of students said they felt "better prepared" and that they have "improved studying efficiency" as a result of using study technology. "Studying effectively – and with the right type of technology – is one of the best ways to ensure that students succeed " said Brian Kibby, president of McGraw-Hill Higher Education. "But focus is the key." So close down Facebook and turn your phone on silent if you need to get your head down. Unless, of course, you've just made a really great sandwich, in which case your friends definitely need to know about it straight away.
<urn:uuid:652672ba-eb47-446c-847a-fff7c0ff4fb6>
{ "dump": "CC-MAIN-2016-30", "url": "http://www.christiantoday.com/article/does.the.internet.help.or.hinder.studying/34771.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829320.70/warc/CC-MAIN-20160723071029-00175-ip-10-185-27-174.ec2.internal.warc.gz", "language": "en", "language_score": 0.9631988406181335, "token_count": 401, "score": 2.84375, "int_score": 3 }
BALD HEAD ISLAND -- Local scientists are working to let the public know just how important barrier islands are. Without barrier islands like Bald Head Island we'd get hit even harder than we already do by hurricanes. People who work at the island's conservancy want to make sure everyone knows how crucial these islands are. Beautiful beaches, marshes, and forests make up Bald Head Island. It's a popular vacation spot. But it and other barrier islands play a much bigger role. Suzanne Dorsey with the Bald Head Island Conservancy said, "They protect against wind damage from hurricanes, they protect against storm surge, and they protect against wave action." Dorsey is the executive director at the Bald Head Island Conservancy. She and her co-workers plan to open a barrier island study center. Dorsey said, "We need to protect what we have, we need to learn about it and the way that we need to do that is we need to encourage research and education." "If you don't have your barrier islands intact and healthy then the infrastructure on the mainland is going to suffer," Dorsey said. Dorsey says Brunswick County, including the nuclear plant, would suffer the full impact of a hurricane if it weren't for the barrier islands. Locals seem to be pleased about the step forward with the research. Bonnie Ezzelle works on Bald Head Island. She said, "This is just a precious place. There's nothing like it anywhere else and so it makes me feel good that somebody's out there, taking care of it." With climate change on the minds of so many, Dorsey says the plans for the new barrier island study center are more important now than ever. "If we want to protect our barrier islands, if we want to make sure we have these beautiful places to come and vacation, to live, but also to protect our coastal communities, we need to make sure that we understand them and that we're doing everything we can to protect them," Dorsey said. Construction of the $2.5 million center is scheduled to begin in the fall. They hope to complete it in 2008.
<urn:uuid:be767b6c-f8b4-40f0-b8f2-0766f16ae1d8>
{ "dump": "CC-MAIN-2013-20", "url": "http://www.wwaytv3.com/scientists_stress_importance_of_barrier_islands/07/2007", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702452567/warc/CC-MAIN-20130516110732-00013-ip-10-60-113-184.ec2.internal.warc.gz", "language": "en", "language_score": 0.9633467197418213, "token_count": 435, "score": 3.09375, "int_score": 3 }
MEDLINE Indexing Online Training Course
If an article discusses the basic biology of a gene/protein from certain organisms, indexers must also link the gene/protein to the appropriate record in the NCBI database called Entrez Gene. The gene/protein must be the main focus of the article. The list of organisms for which gene links are made is growing constantly. Detailed instructions for creating these links are given in Chapter 37 of the Indexing Manual (NLM staff access only).
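The linking step itself is an editorial decision made by the indexer, but the target records can be looked up programmatically through NCBI's E-utilities. The sketch below is only an illustration of retrieving candidate Entrez Gene identifiers for a gene symbol; the symbol and organism shown are arbitrary examples and are not taken from the training course.

```python
# Hypothetical illustration: find candidate Entrez Gene record IDs for a symbol.
# Uses NCBI E-utilities esearch against the "gene" database.
import json
import urllib.parse
import urllib.request

def find_gene_ids(symbol, organism):
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    term = f"{symbol}[sym] AND {organism}[orgn]"
    query = urllib.parse.urlencode({"db": "gene", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{base}?{query}") as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]

# Example lookup (assumed values): the human BRCA1 gene.
print(find_gene_ids("BRCA1", "Homo sapiens"))
```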
<urn:uuid:0c194be2-fbc9-4607-9080-2d8f19a1abd6>
{ "dump": "CC-MAIN-2016-50", "url": "https://www.nlm.nih.gov/bsd/indexing/training/GENE_010.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542655.88/warc/CC-MAIN-20161202170902-00274-ip-10-31-129-80.ec2.internal.warc.gz", "language": "en", "language_score": 0.9228567481040955, "token_count": 98, "score": 2.734375, "int_score": 3 }
Polycystic Ovary Syndrome – PCOS

PCOS at a glance
- PCOS – also called Stein-Leventhal syndrome – is a common endocrine disorder that causes a woman's eggs to fail to mature and grow normally, causing cysts to form within the ovaries.
- PCOS symptoms can include irregular or absent periods, weight gain, irregular hair growth, hair loss, acne, and infertility.
- There is no cure for PCOS but it can be treated to manage symptoms, and infertility treatments can help to produce a pregnancy.

What is PCOS?
Polycystic ovary syndrome (PCOS) is a common endocrine disorder that affects about five percent of women of reproductive age, starting as early as a woman's teenage years. It causes infertility and other serious health repercussions later in life. PCOS causes a woman's eggs to fail to mature and grow normally. When that happens, the follicles stop growing and form cysts within the ovaries. Over the years, those cysts build up and fail to release eggs. They cause the overproduction of male hormones like testosterone, which in turn can cause acne and abnormal hair growth. Women with PCOS have low success with regular ovulation and pregnancy.

Symptoms of PCOS
Women with PCOS may experience symptoms as early as their teenage years, including:
- Irregular or absent menstrual periods
- Lack of periods for 2 to 12 months
- Weight gain or difficulty losing weight
- Hair growth on the face, back, or chest (hirsutism)
- Male pattern baldness or thinning hair on the scalp
- Dark pigmentation in the armpits and back of the neck.
Later life repercussions of PCOS may include heart disease, hypertension, diabetes, or even endometrial or uterine cancer. PCOS has also been called Stein-Leventhal syndrome.

Unfortunately, there is no cure for polycystic ovary syndrome. But PCOS symptoms – including infertility – can be treated. Weight reduction improves the frequency of ovulation, improves fertility, lowers the risk of diabetes, and lowers male hormone levels in many women with PCOS. In the long term PCOS can lead to metabolic syndrome, which causes obesity and high insulin levels, so treatment must also focus on reduction of the risk of diabetes and also heart disease, which can result from diabetes. Changing or adapting nutritional and exercise habits is critical. In overweight women, another key is the reduction of insulin levels with medications. Patients must learn to manage other symptoms of PCOS. Irritating symptoms like acne, hirsutism, and hair loss often respond to metformin. If PCOS is being treated for reasons other than infertility, other medication can be used more liberally. Laser hair removal and plucking may be necessary for cosmetic reasons.

Treating infertility from PCOS
The first steps in any infertility treatment will be an examination of the woman's reproductive system and fertility testing. Overweight women will be advised to begin a healthy weight-loss program before or during infertility treatment. Studies have shown that some women experience spontaneous return of their ovulation and regulation of their cycles by simply losing weight through healthy eating and exercise, and once pregnancy is achieved, a healthy diet can help maintain normal weight. If that does not work, ovulation-inducing fertility medications may be used. Once ovulation is achieved, a woman with PCOS may then begin with either intrauterine insemination (IUI) or in vitro fertilization (IVF).

Can PCOS be prevented?
Physicians have not yet found a way to determine which girls may develop PCOS after the beginning of menstruation. The earlier a young woman is diagnosed and begins managing PCOS, the less likely the long-term complications of infertility, heart disease, hypertension, and diabetes. For women who are not trying to conceive, hormone therapy through birth control pills or other hormonal contraceptives can be helpful. And all PCOS sufferers, regardless of desire to conceive, benefit from a healthy lifestyle of regular exercise and good nutrition. Suffering from polycystic ovary syndrome can cause emotional distress for a woman. The outward symptoms of skipping periods, hair growth, hair loss, and weight gain attack a woman's self-image, not to mention her concerns about having children. A woman should consult a physician as soon as the symptoms listed above raise any suspicion of PCOS.
<urn:uuid:cc0b5ce7-9bcd-49e3-9400-6c551b8f3e16>
{ "dump": "CC-MAIN-2019-04", "url": "https://ivfga.com/patients/library/female-infertility/pcos", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657907.79/warc/CC-MAIN-20190116215800-20190117001800-00038.warc.gz", "language": "en", "language_score": 0.9373373985290527, "token_count": 930, "score": 3.046875, "int_score": 3 }
Dr. Kelly Ross, Pediatrician at St. Louis Children’s Hospital, talks about how to transition your child into a back to school sleeping schedule. First, it is important to begin easing your child into a regular sleeping schedule a couple of weeks in advance. It is not fair to expect your child to fall asleep at a bed time the night before the first day of school if there was no preparation. Most parents forget that just like going into your first day of work, the first day of school can be a huge stressor for kids and it can be even more difficult for them to fall asleep if a regular sleeping schedule is not in place in advance. You want your kids to be at their very best for the first day of school, but that is difficult when they are sleep deprived. That is why it is so important to be proactive in getting your child in a regular sleeping schedule before the first day. Many parents ask, “At what point do you hand this responsibility over to the kids?” Dr. Ross explains that her kids are teenagers, and they have even learned the consequences of not getting enough sleep. You don’t want to force your kids into a regular sleeping schedule, but it is important that they learn these processes in a gentle, happy sort of way. When your child is starting to be able to tell time, it is a good idea to practice by having an alarm clock in the room. However, it is too much to expect younger children to wake up on their own with an alarm clock. As kids mature and head toward middle school or high school, waking up on their own with an alarm clock is a reasonable expectation. About a month before that change occurs, you want to have your kids set their alarm and then follow up to make sure that they are actually getting out of bed.
<urn:uuid:b5dd32be-3dcf-476d-b986-4f51cebb3b2c>
{ "dump": "CC-MAIN-2017-09", "url": "http://childrensmd.org/videos/transitioning-kids-back-school-sleep-schedule-qa-stl-childrens-momdocs/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00420-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.9757729768753052, "token_count": 376, "score": 2.703125, "int_score": 3 }
As of the end of 2013, the country with the top wind power producing capacity in the world is China. Other top wind producing countries, in order, are the United States, Germany, Spain, India, the United Kingdom, Italy, France, Canada and Denmark. Some countries, such as the United Kingdom and Denmark, rely heavily on offshore wind power, whereas other countries build most of their wind power facilities on land. At the end of 2012, about 80 countries were operating around 225,000 wind turbines. Globally, wind power supplies about 2.5 percent of electricity, but the percentage is much higher in some isolated areas. Wind power supplies 30 percent of Denmark's electricity, more than 40 percent of electricity in some German states, 20 percent of electricity in southern Australia and 16 percent of electricity in Spain and Portugal. According to estimates by the Global Wind Energy Council, global use of wind energy is expected to reach 5 percent by 2015. Two of the largest wind farms in the world are in the United States: the Alta Wind Energy Center in California and the Roscoe Wind Farm in Texas. As of 2013, the largest offshore wind farm in the world is the London Array, located about 13 miles off the Kent coast in the United Kingdom. Public opinion is overwhelmingly for wind power as an alternative energy source, though the amount of support varies from country to country. In a Eurobarometer survey, 89 percent of European Union citizens expressed support for wind power, while a 2012 survey in the United States put popular support for wind power at 71 percent.
<urn:uuid:909031a3-32a4-4937-bfd3-c00d530b9bab>
{ "dump": "CC-MAIN-2017-51", "url": "https://www.reference.com/science/world-wind-power-used-72d028a79a3fe1fc", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00040.warc.gz", "language": "en", "language_score": 0.9404554963111877, "token_count": 323, "score": 3.125, "int_score": 3 }
Manitoba History: Historical Tour: Selkirk, Manitoba by Wendy G. Smulan Studying architecture is one method of examining history, but in addition to architecture are the stories of people that make those buildings. It is the stories of people in conjunction with the architecture that creates a complete picture of what life was in the past and how it impacts the present. Selkirk, Manitoba was established by a few wealthy and influential investors that had the vision of Selkirk becoming Manitoba’s capital of trade and industry. A political battle ensued between Selkirk and Winnipeg to be the central power in the province, which has made Selkirk’s history particularly interesting. The result is the richness of the town’s heritage and architecture that remain today as landmarks. Selkirk is particularly fortunate to have a highly active heritage committee that is devoted to recording and preserving the town’s significant architecture. The University of Manitoba, Faculty of Architecture and in particular the work of Dr. Bill Thompson and his students have also contributed a great deal to the documentation and study of the town’s architecture. Architecture is a symbol of a people’s culture, politics and community, and the landmarks of Selkirk help tell its story. The Selkirk Post Office was the town’s first public building, constructed between 1907 and 1909. Not only did the building house the post office located on the main floor, but also contained the fisheries office and Indian agency on the upper level, and the customs office on the lower level. The architectural firm hired to design the building was James Chisholm and Son, one of Winnipeg’s first architectural firms. Shortly after the turn of the century, the news was released that the town would be receiving a new public building. The construction of such a large building by the federal government was important to Selkirk and symbolised the town’s progress. Controversy arose over the location because many of the merchants argued it should be at the intersection of Eveline Street and Manitoba Avenue closer to their business establishments. Despite their protests, the government built the post office at the corner of Manitoba Avenue and Main Street. At the time of the building’s construction there were no buildings on the surrounding lots and the post office became Selkirk’s central land-mark. The post office remained as the centre of activity for many years until the government constructed a new post office in 1956. The old Post Office building was then sold and converted into apartments that remained occupied for 20 years and then was abandoned for several years. In 1984 the building was restored and opened as a community art centre, making it once again the centre for community activity. The Post Office is an example of the Beaux-Art Classical style, a style that was revived at the turn of the century in Paris, and then in North America from a renewed interest in classical aesthetics. The Beaux-Art style is identified by classical architectural elements seen on the building, such as: the dentils within the bracketed cornice, voussoirs, exaggerated key stones and stone string courses. The detailing of the cornice and masonry work on the building help to make it a monumental structure in Selkirk. Upon entering the Post Office from Main Street, one passes through a wooden, segmental arched doorway which opens into a reconstructed glass vestibule. A great deal of the structure, woodwork and terrazzo floor are original to the building. 
The preservation of this building as a community art centre demonstrates it to be a significant building in the foundation of the community, and its history. A different type of public building that serves the town and symbolises the success of the community are its churches. Knox Presbyterian Church is a fine example of a church that signifies Selkirk’s development as a community, being the first congregation in town. The church began in a small log cabin on Eveline Street and evolved into the large Gothic Revival structure that stands today. The initial structure was built in 1876 and saw two expansions, first in 1904 and second in the 1960s. The magnificence of this building is experienced in the interior chapel, where the stained glass glows with warm intensity and colour. The woodwork on the interior is highly detailed. Also noteworthy are the curved wooden pews. The Gothic Revival style is identified by the steep pitched roof, small rose window, and pointed arch windows and doors. One must note the detailed masonry mouldings around the pointed arch windows, the corbelling, and the exceptional stained glass that has been dedicated to the memory of local congregation members. The typical Presbyterian bell tower and spire, located originally on the southeast corner before the 1904 addition mark this building as one of Selkirk’s finest landmarks. One cannot pass through the town without taking notice of the brightly painted blue lift bridge. A rumour began in 1911 that a bridge was being built to connect Selkirk and East Selkirk across the Red River. It was not until the Depression in the 1930s that the bridge finally began to take shape after the federal, provincial and municipal governments agreed to share the cost. The opening of the bridge was delayed for several years while the governments argued over funding for the maintenance costs. In the spring of 1937, the bridge had not yet opened and the river became impossible to cross. Meanwhile, the governments had quietly reached a settlement and planned an official opening of the bridge. Frustrated, Ed Maloney, a local resident took matters into his own hands and lowered the span by the manual crank to allow people to cross. The bridge was in full use that day, only to be promptly closed by the government until the official opening only two days later. The bridge marks an interesting moment in the history of Selkirk and its development as a prairie town. Selkirk has an engaging political history because of the men who had a vision of Selkirk as Manitoba’s centre of trade, export and finance. However, that vision was short lived and by the turn of the century it was apparent that the main rail line would cross through Winnipeg and not Selkirk. A great battle was lost and one that residents of Selkirk still mention. Selkirk then changed its goals to become the centre of agriculture for the Interlake Region and a summer resort for Winnipeggers. In 1892 a group of businessmen formed the Selkirk Electric Railway Company to build a link between the town and Winnipeg. In 1908, the Winnipeg Selkirk and Lake Winnipeg Railway Company (W.S & L.W.), affiliated with Selkirk Electric Railway opened the line between Selkirk and Winnipeg with daily trips. A building with a significant history is the old Eaton General Store at Eaton Avenue and Eveline Street. W. H. Eaton, relative to Timothy Eaton, was one of many original investors in Selkirk and took a steadfast interest in the town’s growth and prosperity. 
Eventually, the Eaton General Store closed and the building became the train station, where visitors to Selkirk got their first glimpse of the town and local people gathered to travel into Winnipeg for its urban conveniences. The building changed hands after being the centre of transportation and it has not been preserved or restored to its original appearance. The stout, but sturdy building is typical of a boomtown building with a parapet to give it greater height. There has been added decoration on the front facade with brick corbelling above the entrance and the symmetrical windows. The building is obviously important to Selkirk’s development and growth. The Garden on Eaton, is a Queen Anne Revival style home that has been converted into a tea and craft house. The home is a good example of how historic buildings may be preserved and reused for contemporary purposes. The building has many of the characteristic features of the Queen Anne Revival style home found throughout Selkirk, with its tower and spire, intersecting gable roof, veranda (that has been enclosed), columns and asymmetrical design. Another noteworthy Queen Anne Revival style home is the former Stuart Residence on Eveline Street. The Selkirk Electric Company (S.EL.C.) was the first company to bring electricity to the town in 1904. The residence was built under the supervision of James Stuart, one of the founding members of S.EL.C. The home was built beside the electrical plant (which no longer exists) to house the company engineer, who happened to be James Stuart’s son. S.EL.C. had many difficulties providing reliable electricity to the town, and as a result of many complaints, the company was sold in 1906. James Stuart’s son continued to be engineer for the new company as James Stuart was a member of the new electric company, until 1915. The house is currently owned by the Town of Selkirk and is under review for municipal heritage designation. It is possible that in future the home will be converted into a museum for the town, separate from the Selkirk Marine Museum, which is only one block south of the Stuart residence. The Stuart residence was built from brick that came from a brick plant owned by James Stuart in Manitoba. The masonry work is very detailed with brick string courses and segmented arches over the doors and windows. The veranda is original to the home with its decorative balusters, and shades a small bay window. St. Clement’s Anglican Church on the south side of town was one of three churches in Selkirk affiliated with the Church of England, which during the 1870s attempted to convert local Natives to Christianity. The church stands alone surrounded by a large cemetery that contains the graves of many of the founding families of Selkirk. The building is a sturdy stone structure in the English Parish Gothic Revival style with crenellation crowning the bell tower and pointed arch windows and door. The entry to the chapel is through the bell tower where one may see the thickness of the stone walls by the inset windows. The chapel is simple, but has beautiful stained glass windows that are best seen from the interior. The masonry work is very rustic. Finer cut stones provide subtle contrast which highlight the door and windows as well as the crenellated battlements and moulding on the bell tower. At the corner of Eveline Street and Manitoba Avenue, the old centre of Selkirk’s business community when the town was first established, is the former Dominion Bank built in 1905. 
The building is a modification of the “Chicago School” style, it is symbolic of industrial growth and technological innovation, which marked Selkirk as being a modern town. The style is atypical for a bank, which may explain the suggestion that its original function was something other than a bank. The “Chicago School” architecture began in the mid nineteenth century in the Midwest and was predominantly used for commercial buildings. The building initially appears to be in the Classical style, with its cornice, frieze, dentils and pediment. However, the heavy cornice separating the first and second levels, the canted corner, and the window groupings in three signify the building as being inspired by the “Chicago School” architectural style. The building as it stands is in need of much repair and restoration, and the present owners of the building are currently working to do so. The building has great potential to be a more significant landmark to the town of Selkirk, but it also stands as a reminder of the commitment of the people of Selkirk to preserve their architectural and cultural history. The hard work and dedication of the people of Selkirk has been ongoing, and the rewards are the many well preserved and documented architectural landmarks that tell the story of the town’s historical development. The author wishes to express her deepest gratitude to Mr. Frank Hooker, Chairman, Selkirk Heritage Committee for sharing so much of his time and knowledge. In addition, she extends her appreciation to Dr. George R. Fuller of the University of Manitoba, Department of Interior Design for his suggestions and comments. Hooker, Frank (Chair of Selkirk Heritage Committee). Interview by author, 14 January 1997, Selkirk. Hooker, Frank (Chair of Selkirk Heritage Committee). Interview by author, 15 July 1997, Selkirk. Humphreys, B. A. The Buildings of Canada. Montreal: Reader’s Digest Association, 1990. Potyondi, Barry. Selkirk: The First Hundred Years. Winnipeg: National School Services, 1982. Rostecki, R. R. “Former Public Building: 406 Main Street, Selkirk, MB.” For the Historic Resources Branch, Province of Manitoba, September 1988. Selkirk Heritage Committee in conjunction with University of Manitoba, Faculty of Architecture. Compilation of data collected to document history of Selkirk buildings built prior to 1940. Smith, Alicia, in conjunction with the University of Manitoba, Faculty of Architecture, “The Former Stuart Residence: 478 Eveline Street, Selkirk, MB.” For the Selkirk Heritage Committee, May 1997. 1. R. R. Rostecki, “Former Public Building: 406 Main Street, Selkirk, MB.” (For the Historic Resources Branch, Province of Manitoba, September 1988), 2. 2. Ibid., 5. 3. Selkirk Heritage Committee in conjunction with University of Manitoba, Faculty of Architecture. Compilation of data collected to document history of Selkirk buildings built prior to 1940. 4. Barry Potyondi, Selkirk: The First Hundred Years (Winnipeg: National School Services, 1982), 138. 5. Ibid., 100. 6. Alicia Smith, “The Former Stuart Residence: 478 Eveline Street, Selkirk, MB.” (For the Selkirk Heritage Committee, May 1997), 1. 7. Ibid., 2. Page revised: 27 August 2020
<urn:uuid:b71abaa2-b8ff-48ea-bb21-9246e9611944>
{ "dump": "CC-MAIN-2020-40", "url": "http://www.mhs.mb.ca/docs/mb_history/34/selkirktour.shtml", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00114.warc.gz", "language": "en", "language_score": 0.9695913195610046, "token_count": 2919, "score": 3.203125, "int_score": 3 }
A type of malware called Mirai that can be used to create an IoT botnet is now available for download online. The malware can be used to launch DDoS attacks on e-commerce websites, bringing those businesses to a halt. This is a huge risk for e-commerce sites, and could cause chaos over the holiday period. Worst of all? Since the source code for the malware was leaked to the net last week, cybersec experts have noticed a marked rise in its use.

The Mirai malware is designed to exploit an existing vulnerability within IoT devices that has been understood for some time. There are millions of IoT devices on the market that are misconfigured and set to forward messages via the Transmission Control Protocol (TCP). Often, when people buy IoT devices they do not update the factory settings with the necessary password to protect those devices. It is these leaky IoT devices that are being exploited by hackers to launch attacks, including DDoS attacks on e-commerce businesses.

The powerful malware has already been used to launch some of the most savage DDoS attacks ever seen. Last week, Google had to step in to help protect the KrebsOnSecurity website from an incredible 620Gbps DDoS attack. That is massive, but is just one example of how DDoS attacks get worse year on year – without fail. The attack on Krebs isn't the biggest recent attack, either. Since security expert Brian Krebs was attacked last week, an even bigger attack has taken place on the French web host OVH. That attack surged at a rate of between 1Tbps and 1.5Tbps – staggeringly enormous. Those DDoS attacks on websites are unprecedented, and are directly linked to the use of the IoT botnet malware Mirai.

How does it work?

Ryan Barnett, a security researcher at Akamai, has explained the details. His firm noticed that attacks were coming from a huge number of IP addresses. This, coupled with the fact that customers of its content delivery system were having their sites systematically checked for existing password combinations, brought the problem to light. It is believed that those passwords are the spoils of major hacks like the one at Yahoo. The stolen passwords are often sold off on the deep web, and it is thought that they are now being tested on other websites (due to the fact that people often use the same password for multiple logins). Barnett comments, 'They were all formatted exactly the same, except that the username and password was different. So we knew that this was probably being controlled by a single entity that was launching these attacks.'

It was the realization that all those IP addresses were being controlled by a single hacker that led Akamai's team to discover the IoT botnet, 'Because we were able to see this across all these different customers, we were able to see the same IP addresses hitting multiple websites. When we mapped them back, that's when we were able to see that these were IoT systems.'

Barnett has gone on to explain that the problem itself can be traced back to security flaws in the IoT products that are being exploited by Mirai. Firstly, IoT products often ship with a default login such as 'admin'. This allows them to easily be made part of the botnet. The discovery is further evidence that IoT products need to be managed more effectively. Barnett's team says that manufacturers must be forced to make IoT consumers update those settings before the product will fulfill its purpose. Default settings must be abolished in favor of a compulsory initial setup procedure.
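The cross-customer pattern Barnett describes (the same source addresses probing login pages on many otherwise unrelated sites) is the kind of correlation a defender can express in a few lines. The sketch below is a simplified, hypothetical illustration of that idea only; the log format, threshold, and sample addresses are assumptions and do not come from Akamai.

```python
# Hypothetical sketch: flag source IPs seen attempting logins across many distinct sites.
# Log format, threshold, and sample data are illustrative assumptions.
from collections import defaultdict

def suspicious_sources(login_events, min_distinct_sites=10):
    """login_events: iterable of (source_ip, target_site) pairs."""
    sites_per_ip = defaultdict(set)
    for source_ip, target_site in login_events:
        sites_per_ip[source_ip].add(target_site)
    # A normal visitor touches a handful of sites; a botnet node testing stolen
    # credentials tends to show up across many unrelated customers.
    return {ip: len(sites) for ip, sites in sites_per_ip.items()
            if len(sites) >= min_distinct_sites}

# Fabricated sample data using documentation address ranges.
events = [("203.0.113.7", f"shop{n}.example") for n in range(25)]
events.append(("198.51.100.2", "shop1.example"))
print(suspicious_sources(events))   # {'203.0.113.7': 25}
```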
The second problem is the aforementioned TCP settings used in IoT products. Transmission Control Protocol is a Linux-based feature that IoT devices ship with. It is that message forwarding protocol that is exploited by the Mirai malware using the default admin passwords. During attacks, message forwarding is used to hide the origin of the onslaught by rapidly spreading the messages through the IoT botnet.

Consumers at fault?

The solution is for end users of IoT products to update their devices' passwords. The problem, however, is the sheer number of IoT products that have already been sold. That number, combined with a general lapse of security amongst the world's consumers, spells disaster. Often people buy IoT products like kettles or thermometers and simply plug and play, without ever checking the manual – never mind updating default passwords. If the IoT device does what the consumer hoped it would, that user is under the illusion that they have no cause for concern. What the Mirai software teaches us is that IoT products can be harnessed, en masse, to carry out attacks on third parties without IoT consumers ever having a clue that they are involved.

With the security problem so extensive, and consumers unlikely to suddenly become conscientious, the burden of responsibility has to lie with manufacturers. Only IoT device makers can effectively tackle the problem going forward. As for the devices in people's homes that are already being exploited? That problem is here to stay, and if last week's sudden uptake of the Mirai malware is anything to go by, then we might be in for quite the bumpy ride this holiday season.
<urn:uuid:3d5875bc-2f84-4029-bfb1-447b391b3355>
{ "dump": "CC-MAIN-2017-04", "url": "https://www.bestvpn.com/iot-botnet-mirai-ddos/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz", "language": "en", "language_score": 0.9676944017410278, "token_count": 1069, "score": 2.578125, "int_score": 3 }
Climate Change 2001: Synthesis Report: Third Assessment Report of the Intergovernmental Panel on Climate Change

The Climate Change 2001 volumes of the Third Assessment Report of the IPCC provide the most comprehensive assessment of climate change since its second report, Climate Change 1995. This Synthesis Report gives a comprehensive summary of the main points of the three separate volumes of the Report: The Scientific Basis; Impacts, Adaptation, and Vulnerability; and Mitigation. This synthesis will be particularly valuable for students and researchers who require a summary of the main issues in climate change, and will form a standard reference for policy decisions in governments and industry the world over for many years to come.
<urn:uuid:8c3737e5-dc13-4784-ad28-e8ac402999cb>
{ "dump": "CC-MAIN-2014-23", "url": "http://books.google.com/books?id=T7-NHgAACAAJ&dq=0521014956", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997886087.7/warc/CC-MAIN-20140722025806-00126-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.7626699209213257, "token_count": 391, "score": 2.578125, "int_score": 3 }
130 Amendments changed to requests

If a bill received from the House of Representatives, in which the Senate has made amendments, is returned by the House of Representatives with a suggestion that any of the amendments should be made the subject of a request by the Senate in accordance with section 53 of the Constitution, the Senate may forthwith, or on a future day, take the message into consideration in committee; and if any requests for amendments are made, the bill shall be returned to the House of Representatives with a message requesting that House to make the requested amendments. In dealing with any such requests the same procedure shall be followed as for requests made in the first instance. After the requests have been disposed of, if the amendments of the Senate have not been agreed to, the procedure with respect to amendments shall be followed.

Adopted: 9 September 1909, J.121, as SO 224B (to take effect 1 October 1909) but renumbered as SO 228 for the 1909 edition

1989 revision: Old SO 233 restructured as three paragraphs and renumbered as SO 130; language modernised and expression streamlined

See SO 129 for the background on this standing order, which was adopted at the same time. It was described as being necessary for cases when the Senate had made a mistake by making an amendment rather than a request. Often, however, this is an arguable point and it becomes a matter for negotiation between the Houses, which is what happened in relation to the Sugar Bounty Bill in 1903 when the Senate accepted a suggestion from the House of Representatives that a certain amendment should have been made in the form of a request. At this point, the Senate has already read the bill a third time. The standing order provides for the Senate to deal with any request it decides to make in the same way as it would have dealt with a request in the first instance. If the amendments have not been agreed to by the House, they are dealt with after the request has been settled, in accordance with SO 129.

Charles Boydell, Clerk of the Senate 1908-17, author of the first reference work on the Senate's powers under section 53 of the Constitution (Source: Commonwealth Parliamentary Handbook)

Boydell's small booklet on the powers of the Senate, the first of many scholarly writings by Senate clerks

Detail from Boydell's booklet on the powers of the Senate
<urn:uuid:d56f841d-0678-4a9e-b471-bfab41e13bf3>
{ "dump": "CC-MAIN-2018-51", "url": "https://www.aph.gov.au/About_Parliament/Senate/Powers_practice_n_procedures/aso/so130", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00561.warc.gz", "language": "en", "language_score": 0.9735895991325378, "token_count": 477, "score": 2.546875, "int_score": 3 }
Essays on style

These style tips can help you turn a bland and wordy college essay into an engaging narrative and bring your college application to life. The links below provide concise advice on some fundamental elements of academic writing.

Writing style: what is writing style? I started out thinking that writing style is a personal thing and that all writers have their own style …

My personal learning style tends to lean towards the hands-on approach; I have always tended to retain more information if I actually do the task that is assigned.

Formatting styles often bring students a lot of problems, as they must be followed in order to make an essay comply with the required style.

Writing an academic essay means fashioning a coherent set of ideas into an argument. Because essays are essentially linear—they offer one idea at a time—they must …

There are four different types of writing styles: expository, descriptive … a style of writing that focuses on describing a character or an event.

One of the most popular essay topics among students is the essay about life, where every student tries to describe his or her life, problems, priorities and outlooks.

Style is the way in which something is written, as opposed to the meaning of what is written; in writing, however, the two are very closely linked.

Essay-writing styles: in this section we describe the characteristics of different types of academic essay, and offer examples of those essays …

A 100% free AP test prep website offers study material to high school students seeking to prepare for AP exams; enterprising students use this website to learn AP …
<urn:uuid:b13b37de-e090-40c2-bd08-6da2c4b19018>
{ "dump": "CC-MAIN-2018-43", "url": "http://fppaperawpo.hyve.me/essays-on-style.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512421.5/warc/CC-MAIN-20181019170918-20181019192418-00144.warc.gz", "language": "en", "language_score": 0.9464888572692871, "token_count": 486, "score": 2.9375, "int_score": 3 }
Definition of aerodyne: a heavier-than-air aircraft (such as an airplane, helicopter, or glider) — compare aerostat
Origin and Etymology of aerodyne
First Known Use: circa 1906
<urn:uuid:59ee8ad3-afb9-485f-8582-3bb4d9301b5d>
{ "dump": "CC-MAIN-2017-04", "url": "https://www.merriam-webster.com/dictionary/aerodyne", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00561-ip-10-171-10-70.ec2.internal.warc.gz", "language": "en", "language_score": 0.8598750829696655, "token_count": 88, "score": 2.765625, "int_score": 3 }
Subject: Traffic - especially goods traffic - is constantly increasing throughout Europe's road network. Forecasts are very clear: by 2020, freight transport will have increased by more than 70% in the European Union. The result will not only be gridlock on the roads, but also damage to the environment, more accidents and the danger that European business will become less competitive. Is there a solution? Yes: the sea. It appears to be an increasingly credible alternative for transporting goods between Member States. The European Union is aiming to develop what it calls "motorways of the sea" all around Europe. In concrete terms, the idea is to establish high-quality maritime links between a limited number of selected ports located at strategic points on Europe's coastline. This will reinforce economic and territorial cohesion between outlying and insular areas and the centre of the European Union. All this means starting to think differently…
- Download of the video
- On-line viewing of the video (RealVideo format):
<urn:uuid:5c760008-9221-4e12-b565-4301e99fdac0>
{ "dump": "CC-MAIN-2015-32", "url": "http://ec.europa.eu/dgs/energy_transport/videos/transport/2005_12_motorways_of_the_sea_en.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987228.91/warc/CC-MAIN-20150728002307-00092-ip-10-236-191-2.ec2.internal.warc.gz", "language": "en", "language_score": 0.9315754175186157, "token_count": 221, "score": 2.578125, "int_score": 3 }
What Language Will They Speak on Mars?
Chinese, on present showing. Never mind that seven men – Russian (4), European (2) and Chinese (1) – are now two months into a 520-day isolation trial in Moscow, simulating a manned mission to Mars. That's for show. Political willpower will settle the issue.
In 1964 the rocket engineer Wernher von Braun forecast a human visit to Mars by 1984. That might well have happened had the US not cancelled its proposed Orion rocket in 1965 – the year after von Braun made his prediction. The trouble was that Orion would have had nuclear propulsion, not merely by nuclear motors, but by nuclear bombs. So it had to be abandoned in the aftermath of the nuclear test-ban treaty, much to the annoyance of Freeman J. Dyson and other enthusiasts.
Here's a diagram from my book Spaceships of the Mind (1978) which accompanied the BBC-OECA series with the same title, produced by Dick Gilling of BBC-TV. Assembled in Earth orbit, Orion would have carried about 2000 10-kiloton nuclear fission bombs, released at a rate of one a second to explode close behind a large spaceship. With a pusher plate absorbing the shocks, the spacecraft would quickly reach a speed that would take about 20 astronauts around Mars and back to Earth in just six months. It may seem daft now but Orion was a recognition, at the very start of the Space Age, that if human beings are ever to become serious about space travel, they'll have to think nuclear. That's still the case, although nuclear fusion will be preferable, of course, with ignition as far from the Earth as possible.
When von Braun contributed to the New Scientist's 1964 series on "The World in 1984" he remained mute about Orion although he glanced at the nuclear option. At the time he was director of NASA's Marshall Space Flight Center in Huntsville, Alabama, with the Apollo missions to the Moon at the top of his agenda. Here, for a start, are two early extracts from his article entitled "Exploration to the Farthest Planets":
Wernher von Braun on technology (1964)
Man may have landed on the surface of Mars by 1984. If not, he will surely have made a close approach for personal observation of the red planet. Likewise, manned 'fly-bys' to Venus will have been made. Lunar landings will have long since passed from the fantastic achievement to routine occurrence. Astronauts will be shuttling back and forth on regular schedules from the earth to a small permanent base of operations on the moon. A part of the activity on the lunar surface may well be the operation of an astronomical observatory, taking advantage of the favourable observation conditions there. …
Saturn V, the largest launch vehicle under development in 1964 in America, will have been able, before 1970, to shove a payload of 100,000 pounds to earth-escape velocity. But for the manned exploration of Mars and the build-up of a sizeable lunar base, a vehicle is needed that will haul ten times as much payload, including men and their life-support equipment. That is why a launch rocket far more powerful than Saturn V is under development in 1984. While chemical propulsion is still used for the first stage of large launch vehicles, improved engines and new fuels give higher specific impulse – more thrust per pound of fuel. Nuclear heat propulsion is used for upper stages, doubling the size of payloads that can be lifted free of the earth's gravity.
The sustained low thrust and high fuel economy of nuclear-powered electric propulsion systems serves to push unmanned probes to the outermost planets of the solar system. Instrumented payloads have been landed on some of the nearer planets. There may be one on one of Jupiter's satellites, and perhaps one on an asteroid, and they are busily sending back data on surface composition, atmospheric environment and the like. Investigations of the comets may have developed into a particularly fascinating chapter of unmanned interplanetary rocketry. We shall be much nearer to the answer to the mystery of the origin of the solar system. The existence of a low order of life on Mars will probably have been proven, and the significance of the seasonal changes of the Martian canals established. Manned orbiting space laboratories with closed ecological systems have supported pioneering crews comfortably in space for an uninterrupted stretch of two years. The hazard of particle radiation, in particular that posed by giant solar flares, has been eliminated with efficient new shielding methods. Fuel cells, solar and nuclear systems provide ample power for extended space flights and for surface operations on the moon. Men and women in space keep in constant touch with friends at home through effective communications, even when they are scores of millions of miles away. …
To lead back to my question "What language on Mars?" nothing is more apt than Tom Lehrer's Wernher von Braun. Recalling his role in Hitler's V2 terror weapon, the song ends: You too may be a big hero, / Once you have learned to count backwards to zero. / "In German oder English I know how to count down, / Und I'm learning Chinese," says Wernher von Braun.
Among recent developments, President Obama has scaled back his predecessor's plans for manned spaceflight – which still use only chemical rockets, of course. Meanwhile China has overtaken Japan to become the world's second largest economy, with commentators saying it will surpass the USA by 2025 or so. And as the Beijing Olympics illustrated, China's present leaders are entirely ready to vie for global superiority by extravagant showmanship. In the 21st Century, the colonization of Mars will be the greatest show off Earth.
The West lost its way in manned spaceflight after the success of the Apollo missions. The Russians did too, and the International Space Station, which has consumed much treasure and some lives, is frankly a bore for the general public. More important than any financial constraints, in my opinion, is the fading of that feeling of the 1960s that a new frontier was opening up for humanity at large – when Apollo really did seem like a giant leap for mankind. We're back to relying on science fiction like "Star Trek" to keep the dream alive.
Scientists stifle the sense of adventure, I'm sorry to say. I was at a space-science meeting in Florence in 1961, on the day when Yuri Gagarin became the first man to fly in space. The city's Mayor drove up with crates of bubbly, but he was kept hanging about in the lobby because there was no question of interrupting a session on space plasmas with this merely human news. And just last week Lord (Martin) Rees, President of the Royal Society, declared: "It's hard to see any particular reason or purpose in going back to the Moon or indeed sending people into space at all." The cart parks itself in front of the horse.
Space science has been fruitful enough, but it's always been a passenger on space technology initiated for political and technological purposes and sustained by strongly competitive rivalry between nations and blocs, which becomes cooperative only when convenient. Although scientists have never understood that, the engineer von Braun certainly did. You may not go along with the way he worded it, but there's no mistaking his conviction.
Wernher von Braun on motivation (1964)
Instruments continue to be indispensable in the exploration of space. But man has proven himself irreplaceable as an explorer of the moon, and is getting ready to explore the rest of the solar system in person. A man's brain is still the ultimate in micro-miniaturization in size, weight, memory storage, and complex thinking operations. A large electronics computer might be superior at adding, subtracting, and in doing man's routine clerical work. But, even in 1984, it remains for the brain of man to correlate unexpected observations, to perceive solutions to novel situations and to take independent action in the light of new data collected by his instruments. It is clear that man himself, and not just instruments, must explore the planets.
Gradually, space exploration has become a kind of standard behind which dynamic men with their courage, fighting instincts and talents have begun to rally for their advancement. Wars, which had somewhat similar 'rallying' effects, are no longer feasible between industrialized nations nor are they a suitable yardstick for their strength – now that any military exchange with weapons of mass destruction would mean total annihilation of friend and foe alike. Just as the Crusades saved Europe much bloodshed by diverting the energies of its fighting men to a far-away objective, so space exploration provides a worthwhile outlet for the pent-up energies of man in the late twentieth century.
Until recently, huge defence programmes had provided much of the stimulus for research and development work without which industrial progress comes to a halt. In 1984, the limitless scientific and technological challenges of the space-exploration programme have taken over this vital, invigorating role. The 'spin-off' products of the space programme, direct or indirect, are visible everywhere. More citizens of the world than ever before are taking part in the affairs of government. Well-informed, thinking men will continue to support this intriguing and profitable endeavour of space exploration. How far we go in space – and how fast – will continue to be affected by the measure of public support. Exploration of the planets, and later of the stars, may not be the one and only peaceful force to pull man and his culture forward. But it is the only one I know (in 1964) in which all men can enjoy both the excitement of conquest and the technological, economic, and spiritual benefits. If mankind in 1984 is freer in thought and spirit, as well as politically and economically freer of the shackles of the environment, I firmly believe it will, in large measure, be thanks to the benefits of space exploration.
The Chinese have let it be known that they'll send astronauts to the Moon as soon as possible. Unless there's a big change in the West's half-hearted attitude to manned spaceflight, China may beat everyone to Mars quite easily. And it might be very rash to shrug and bid them bon voyage (or yī lù shùn fēng).
The opening up of the Solar System to human travel and settlement has often been likened to the maritime explorations that led to European political dominance in the colonial era. It was an opportunity that China missed, simply by the erosion of will that followed the earlier voyages of Admiral Zheng He. Here’s how I summarized the tale in my book Timescale. Early guns were cumbersome, and the ideal vehicle for them was a sailing ship. Ambitious Chinese saw that the time was ripe for domination of the world by gun-carrying ships. They pulled their high technology together and built dozens of large sailing junks with multiple masts, steered by sternpost rudders, navigated by magnetic compasses, and armed with guns. In AD 1405 a powerful fleet set off to impress the barbarians, and a succession of expeditions overawed half the known world, gathering treasure from as far away as Mecca and Africa. Had that naval policy persisted, this book would be written in Chinese. Officials and accountants persuaded the emperor after less than thirty years to put a stop to it, and eventually destroyed even the records of the voyages. It was bureaucracy’s most breathtaking accomplishment. … The essence of the Chinese maritime technology of ship handling, navigation, and gunnery was known in Europe. The Portuguese flotillas that began groping along the African coast were ludicrously small and ill-found, but as events showed, the world could be snatched by diminutive carracks, without grand fleets of the Chinese sort. Like their horsemen ancestors coming off the steppes, Europeans made up in daring, avarice, and mutual rivalry for what they lacked in imperial wealth and sophistication. The breakout of the European navigators can best be dated from 1492, when a westbound Spanish flotilla stumbled upon the Americas, mistaking them for Asia. Portuguese seamen heading the other way reached India by sea in 1498 and China in 1514. The first circumnavigation of the planet was completed by a Spanish ship, Vittoria, in 1522. So I can’t help wondering if our grandchildren will see an inversion of the events of the 15th Century, with China ruling the sky as Europeans once ruled the sea. N. Calder, Spaceships of the Mind, BBC Publications,1978 N. Calder (ed): The World in 1984, Penguin, 1985 Lehrer’s full lyrics http://www.lyricsdownload.com/tom-lehrer-wernher-von-braun-lyrics.html Rees quoted in The Guardian: see http://www.guardian.co.uk/science/2010/jul/26/martin-rees-space N. Calder, Timescale: An Atlas of the Fourth Dimension, Viking 1983
<urn:uuid:89c089d1-9491-4a89-b9ed-63adea92b1af>
{ "dump": "CC-MAIN-2015-22", "url": "https://calderup.wordpress.com/2010/08/02/what-language-on-mars/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928562.33/warc/CC-MAIN-20150521113208-00191-ip-10-180-206-219.ec2.internal.warc.gz", "language": "en", "language_score": 0.9527987241744995, "token_count": 2772, "score": 2.6875, "int_score": 3 }
It had been shown (Coquille, Napoleon and England, 1904) that Andreossy repeatedly warned Napoleon that the British government desired to maintain peace but must be treated with consideration. de la "Coquille," zoologie, p. 418), and now very generally adopted in English - of one of the most characteristic forms of New Zealand birds, the Apteryx of scientific writers. "Coquille," ut supra) heard of it; and a few years later J. A number of small streams, among them the Nehalem, Coquille and Umpqua rivers, cut their way through the Coast Range to reach the ocean. The Coquille river is navigable for about 37 m., the Yaquina river for 23 m.
<urn:uuid:533481de-5760-42a8-8ef2-e8ab7806a4f4>
{ "dump": "CC-MAIN-2021-21", "url": "https://www.yourdictionary.com/coquille", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989616.38/warc/CC-MAIN-20210513234920-20210514024920-00216.warc.gz", "language": "en", "language_score": 0.9391452670097351, "token_count": 159, "score": 2.65625, "int_score": 3 }
Obstructive sleep apnea (OSA) contributes a major health burden to society due to its high prevalence and substantial neurocognitive and cardiovascular consequences. Estimates suggest that at least 10% of adults in North America are afflicted with OSA, making it probably the most common respiratory disease in the developed world (Peppard et al. Am J Epidemiol. 2013;177:1006). Nasal CPAP is a highly efficacious therapy that has been shown to improve neurocognitive and cardiovascular outcomes. However, CPAP is not always well tolerated. Alternative therapies, such as oral appliances and upper airway surgery, have highly variable efficacy, and evidence of important clinical benefits are uncertain. Therefore, efforts are ongoing to determine optimal alternative strategies for therapy. We ultimately believe that a thorough analysis of a sleep recording combined with demographic data and other readily available clinical data (perhaps plasma biomarkers) may yield sufficient information for us to know why OSA is occurring and what interventions might be helpful for an individual patient. Currently, our use of the polysomnogram to derive only an apnea hypopnea index does not take full advantage of the available data. An apnea hypopnea index can be readily obtained from home sleep testing and does not truly provide much insight into why a given individual has OSA, what symptoms are attributable to OSA, and what interventions might be considered for the afflicted individual. By analogy, if the only useful data derived from an ECG were a heart rate, the test would rapidly become obsolete. Along these lines, if the only role for the sleep clinician was to prescribe CPAP to everyone with an AHI greater than 5/h, there would be little need or interest in specialized training. In contrast, we suggest that rich insights regarding pathophysiology and mechanisms should be gathered and may influence clinical management of patients afflicted with OSA. Thus, we encourage more thorough analyses of available data to maximize information gleaned and, ultimately, to optimize clinical outcomes.
<urn:uuid:ade83148-9609-4554-9a25-2d3c0b73d371>
{ "dump": "CC-MAIN-2018-13", "url": "https://www.mdedge.com/chestphysician/article/160320/society-news/osa-endotypes-and-phenotypes-toward-personalized-osa-care", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651465.90/warc/CC-MAIN-20180324225928-20180325005928-00272.warc.gz", "language": "en", "language_score": 0.9253107905387878, "token_count": 404, "score": 2.59375, "int_score": 3 }
Malthus and Climate Change: Betting on a Stable Population A standard assumption in integrated assessment models of climate change is that population and productivity are growing, but at a decreasing rate. We explore the signifcance of the assumption of population and productivity growth for greenhouse gas abatement. After all, there has been no long run slow down in the growth of productivity over the past few centuries, and the rate of population growth has actually been increasing for the past 19 centuries. Even if either of these growth rates were expected to slow, by how much is subject to great uncertainty. We show computationally that such continued growth greatly increases the severity of climate change. Indeed we nd that climate change is a problem in large part caused by exogenous population and productivity growth. Rapid reductions in growth make climate change a small problem; smaller reductions in growth imply climate change is a very serious problem indeed. Analogously, reductions in the growth rate of population can be effective in controlling climate change. (This abstract was borrowed from another version of this item.) If you experience problems downloading a file, check if you have the proper application to view it first. In case of further problems read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient as the files may be large. As the access to this document is restricted, you may want to look for a different version under "Related research" (further below) or search for a different version of it. References listed on IDEAS Please report citation or reference errors to , or , if you are the registered author of the cited work, log in to your RePEc Author Service profile, click on "citations" and make appropriate adjustments.: - Kolstad, Charles D., 1996. "Learning and Stock Effects in Environmental Regulation: The Case of Greenhouse Gas Emissions," Journal of Environmental Economics and Management, Elsevier, vol. 31(1), pages 1-18, July. - Charles I. Jones, 1995. "Time Series Tests of Endogenous Growth Models," The Quarterly Journal of Economics, Oxford University Press, vol. 110(2), pages 495-525. - Weitzman Martin L., 1994. "On the Environmental Discount Rate," Journal of Environmental Economics and Management, Elsevier, vol. 26(2), pages 200-209, March. - Weitzman, M.L., 1993. "On the 'Environmental' Discount Rate," Harvard Institute of Economic Research Working Papers 1625, Harvard - Institute of Economic Research. - Alan Manne & Richard Richels, 1995. "The Greenhouse Debate: Econonmic Efficiency, Burden Sharing and Hedging Strategies," The Energy Journal, International Association for Energy Economics, vol. 0(Number 4), pages 1-38. - Harberger, Arnold C, 1998. "A Vision of the Growth Process," American Economic Review, American Economic Association, vol. 88(1), pages 1-32, March. - Stephen C Peck & Thomas J. Teisberg, 1992. "CETA: A Model for Carbon Emissions Trajectory Assessment," The Energy Journal, International Association for Energy Economics, vol. 0(Number 1), pages 55-78. - Harford, Jon D., 1997. "Stock Pollution, Child-Bearing Externalities, and the Social Discount Rate," Journal of Environmental Economics and Management, Elsevier, vol. 33(1), pages 94-105, May. - Harford, Jon D, 1998. "The Ultimate Externality," American Economic Review, American Economic Association, vol. 88(1), pages 260-265, March. 
Full references (including those not matched with items on IDEAS) When requesting a correction, please mention this item's handle: RePEc:eee:jeeman:v:41:y:2001:i:2:p:135-161. See general information about how to correct material in RePEc. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Dana Niculescu) If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about. If references are entirely missing, you can add them using this form. If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form. If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation. Please note that corrections may take a couple of weeks to filter through the various RePEc services.
<urn:uuid:4f1b7e02-6fb5-4f1e-a420-b5ba0a3bc98b>
{ "dump": "CC-MAIN-2017-51", "url": "https://ideas.repec.org/a/eee/jeeman/v41y2001i2p135-161.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00258.warc.gz", "language": "en", "language_score": 0.8752138614654541, "token_count": 1059, "score": 2.671875, "int_score": 3 }
(Note: A Closer Look At Your Health airs at 6:50 a.m. most Tuesdays on KBOI News Radio 670. This is an edited transcript of the segment from April 19.) This week is a good time to ponder that question because it’s National Infant Immunization Week, and World Immunization Week is next week. It’s a good time to talk about making sure you and your family are fully protected against infectious diseases. This week, the focus is on infants. Why infants specifically instead of all children? While it’s important that all children have received the recommended vaccinations, giving babies the recommended immunizations by the time they are 2 is the best way to protect them from 14 serious childhood diseases, including whooping cough and measles. Parents are encouraged to talk to their child’s doctor to make sure their babies’ immunizations are up-to-date. Some parents may not trust that vaccines are safe, so they may not immunize their children. What would you say to those parents? We know that parents want to do what’s best for their children, and if they have concerns about the safety or necessity of a particular vaccine, they should talk to their children’s doctors about that. Generally, vaccines are very safe, and they are monitored continuously to make sure they stay that way. But there are possible side effects, which are listed on the fact sheet given to us when we vaccinate our kids. Serious side effects are extremely rare, but vaccines do occasionally cause mild reactions, like an achy arm or a low fever. Those symptoms will usually go away within a few hours or days. Choosing not to immunize your child also is a risk, for your child and for other children who might come into contact with your child. So it’s important for infants and children to get their immunizations. How about adults? Adults who are immunized are not only protecting themselves, they’re also protecting the people around them who might be vulnerable to diseases they can carry home. When everyone in a community who can get immunized does get immunized, it increases the level of protection for those who can’t, including people with weakened immune systems or newborn babies who are too young to get vaccinations. Pertussis is great example of this: Babies can’t get their first vaccination until they’re 2 months old. While most adults who get pertussis will recover and might not even know they had it, they can infect babies, who may develop serious complications and even die. If adults are immunized, they are much less likely to carry the infection home. Adults also need immunizations to protect themselves against other diseases, depending on their age and health conditions and whether they work in healthcare or travel abroad. How do you know which immunizations you or your children need? There is a lot of information on the Department of Health and Welfare’s website, www.immunizeidaho.com, for both adults and children. But if you have questions, you should talk to your medical provider. The bottom line is that protecting our families and ourselves from infectious, vaccine-preventable diseases is one of the most important things everyone can do to keep their loved ones healthy. Immunizations have had an enormous impact on improving the health of children and adults in the United States. Protect yourself and your family by choosing to immunize. It’s the most powerful defense that is safe, proven and effective.
<urn:uuid:37506f60-2518-48dd-82ce-b4fe183747e5>
{ "dump": "CC-MAIN-2017-51", "url": "https://dhwblog.com/2016/04/19/are-your-children-current-on-their-immunizations/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513611.22/warc/CC-MAIN-20171211144705-20171211164705-00269.warc.gz", "language": "en", "language_score": 0.971062421798706, "token_count": 733, "score": 3.25, "int_score": 3 }
Salvador Dalí - King, Elliott H; Teixidor, Montse A; Hine, Hank; Jeffett, William - Yale University Press - Related Categories - Art and Architecture Published in association with the High Museum of Art The Late Work Edited by Elliott H. King; With contributions by William Jeffett, Montse Aguer Teixidor, and Hank Hine Salvador Dalí (1904–1989) was one of the most famous and controversial artists of the 20th century. Although he was prolific for more than sixty years—creating 1,200 oil paintings, countless drawings, sculptures, theatre and fashion designs, book illustrations, and numerous writings—the nearly universal current critical judgment is that his work reached its zenith in the early 1930s, when he was affiliated with the Surrealist movement. The forty years of work executed after 1940—the bulk of his oeuvre—is often seen as repetitious, reactionary, and overly commercialized. Such criticisms mainly arose from his 1941 reinvention of himself as a “classicist,” his embrace of Catholicism, and his support for General Franco—postures that distanced him from notions of modernism and the avant-garde. This handsomely illustrated volume focuses on Dalí’s work after 1940, presenting it as a multifaceted oeuvre that simultaneously drew inspiration from the Old Masters and the contemporary world. Beginning in the late 1930s with the transition from Dalí’s well-known Surrealist canvases to the classicism he announced in 1941, the volume traces the artist’s work in illustration, fashion, and theatre, predating commercial ventures by such celebrity artists as Andy Warhol. Essays evaluate the significance of Dalí’s “nuclear mysticism” of the 1950s, his enduring interest in science, optical effects, and illusionism, his collaborations with photographer Philippe Halsman (and his brief forays into Hollywood to work with Alfred Hitchcock and Walt Disney), and visit the two major repositories of his work—the Dalí Theatre-Museum in Figueres and the Salvador Dalí Museum in St. Petersburg. Elliott H. King is a lecturer in European modern art at the University of Colorado at Colorado Springs. Montse Aguer Teixidor is Director of the Centre for Dalinian Studies at the Fundació Gala-Salvador Dalí, Figueres. Charles Henri (Hank) Hine is Director and William Jeffett is Chief Curator of Exhibitions at the Salvador Dalí Museum, St. Petersburg, Florida. High Museum of Art, Atlanta(08/07/10-01/09/11)TITLES IN RELATED CATEGORIES
<urn:uuid:77942684-81de-45ff-9dd4-9a0a27a467f5>
{ "dump": "CC-MAIN-2014-41", "url": "http://yalepress.yale.edu/yupbooks/book.asp?isbn=9780300168280", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662882.4/warc/CC-MAIN-20140930004102-00099-ip-10-234-18-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9414121508598328, "token_count": 578, "score": 2.59375, "int_score": 3 }
Submitted by Benjie Magallano on March 8, 2018 - 2:43pm A 1,260-galloon tank is filled by two pipes in as many as the smaller pipe brings in galloons per minute. How long will it take each pipe to fill the tank if the larger pipe brings in one galloon per minute more than the smaller pipe? Submitted by Benjie Magallano on March 8, 2018 - 2:34pm A company has a certain number of machines of equal capacity that produced a total of 180 pieces each working day. If two machines breakdown, the workload of the remaining machines is increased by three pices per day to maintain production. Find the number of machines?
<urn:uuid:c4584be0-7f8b-4596-abc2-664a7c48cbee>
{ "dump": "CC-MAIN-2019-30", "url": "https://www.mathalino.com/tag/forum/linear-equation-one-unknown", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00235.warc.gz", "language": "en", "language_score": 0.9521452784538269, "token_count": 147, "score": 3.046875, "int_score": 3 }
An entomologist at New College, Oxford ("New" because its only a few centuries old), discovered beetles infesting the oak beams supporting the roof of the Great Hall. It was fairly urgent that these be replaced before the roof collapsed--but anyone who has looked at the price of oak lately can tell you that this was not something the college budget was prepared for. Since oak from a commercial supplier was out of the question, someone suggested that the college Forester be sent for. His job was to administer the various scattered tracts of land that had been deeded to the college when it was founded. The trustees hoped he might know of suitable trees on college land. It turned out that there was indeed a suitable stand of mighty oaks. They had been planted when the college was founded, and down the centuries each Forester had told his successor: "You don't cut those oaks; those are for when the beetles get into the beams in the Main Hall." Posts: 36029 | From: Admin | Registered: Feb 2000 | IP: Logged | New College itself looked into this one. College Oaks quote:This story regards the replacement of the oak beams in the college dining hall, and is occasionally given as an example of admirable forward planning. In its mythical form the story is often attributed to the anthropologist Gregory Bateson and may be found in a number of places: * Brand, Stewart How Buildings Learn Viking, 1994 * McDonough, William A Centennial Sermon: Design Ecology Ethics and The Making of Things [snip] When the college archivist was asked about this story she came back with the following information. In 1859, the JCR [Junior Common Room--basically, the students] told the SCR [Senior Common Room--sort of like faculty] that the roof in Hall needed repairing, which was true. (As an aside, at this time, there were few enough people that Hall contained a grand piano; this can be seen in the Joseph Nash watercolour of the hall illustrating the Introduction to the Treasures pages.) In 1862, the senior fellow was visiting College estates on `progress', i.e., an annual review of College property, which goes on to this day (performed by the Warden [the head of the College]). Visiting forests in Akeley and Great Horwood, Buckinghamshire (forests which the College had owned since 1441), he had the largest oaks cut down and used to make new beams for the ceiling. It is not the case that these oaks were kept for the express purpose of replacing the Hall ceiling. It is standard woodland management to grow stands of mixed broadleaf trees e.g., oaks, interplanted with hazel and ash. The hazel and ash are coppiced approximately every 20-25 years to yield poles. The oaks, however, are left to grow on and eventally, after 150 years or more, they yield large pieces for major construction work such as beams, knees etc. New College, BTW, was founded in 1379, about 200 years after Oxford University came into existence. Kathy "oaky doaky" B. -------------------- The plural of "anecdote" is not "data." Posts: 4255 | From: Sacramento, CA | Registered: Feb 2000 | IP: Logged |
<urn:uuid:ac223b0a-fdb9-4122-be45-2273cf0c5f3e>
{ "dump": "CC-MAIN-2017-43", "url": "http://msgboard.snopes.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=99;t=000102;p=1", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823282.42/warc/CC-MAIN-20171019103222-20171019123222-00295.warc.gz", "language": "en", "language_score": 0.9747136235237122, "token_count": 690, "score": 2.78125, "int_score": 3 }
Life has existed on Earth for nearly four billion years, shaped by massive extinction events. In the short span of the last 10,000 years, humans have become important agents in shaping global environmental change. The question this course considers is straightforward: Have humans been modifying the environment in ways that will, in the not distant future, cause another worldwide extinction event? There are no simple, much less uncontested, answers to this question. We will have to consider the ways we have altered habitats and ecosystem processes. We will also consider the economic consequences of disturbed ecosystems and assess contemporary policy responses and solutions. One lecture and one discussion section per week. Limited to 50 students. Spring semester. Professors Dizard and R. Levin.2017-18: Offered in Spring 2018 (Offered as BIOL 230 and ENST 210.) A study of the relationships of plants and animals (including humans) to each other and to their environment. We'll start by considering the decisions an individual makes in its daily life concerning its use of resources, such as what to eat and where to live, and whether to defend such resources. We'll then move on to populations of individuals, and investigate species population growth, limits to population growth, and why some species are so successful as to become pests whereas others are on the road to extinction. The next level will address communities, and how interactions among populations, such as competition, predation, parasitism, and mutualism, affect the organization and diversity of species within communities. The final stage of the course will focus on ecosystems, and the effects of humans and other organisms on population, community, and global stability. Three hours of lecture per week. Requisite: BIOL 181 or ENST 120 or equivalent. Not open to first-year students. Limited to 65 students. Fall semester. Professor Temeles.2017-18: Offered in Fall 2017 (Offered as HIST 104 [C] and ENST 220.) This course considers the ways that people in various parts of the world thought about and acted upon nature during the nineteenth century. We look historically at issues that continue to have relevance today, including: invasive species, deforestation, soil-nitrogen availability, water use, desertification, and air pollution. Themes include: the relationship of nineteenth-century colonialism and environmental degradation, gender and environmental change, the racial dimensions of ecological issues, and the spatial aspects of human interactions with nature. We will take at least one field trip. In addition, we will watch three films that approach nineteenth-century environmental issues from different vantage points. Two class meetings per week. Spring semester. Professor Melillo.2017-18: Not offered (Offered as PHIL 225 and ENST 228) As our impact on the environment shows itself in increasingly dramatic ways, our interaction with the environment has become an important topic of cultural and political debate. In this course we will discuss various philosophical issues that arise in such debates, including: What obligations, if any, do we have to future generations, to non-human animals, and to entire ecosystems? How should we act when we are uncertain exactly how our actions will affect the environment? How should we go about determining environmental policy? And how should we implement the environmental policies we decide upon? What is the most appropriate image of nature? Limited to 30 students. Spring semester. 
Professor Emeritus Kearns.2017-18: Not offered (Offered as ECON 111E and ENST 230.) A study of the central problem of scarcity and of the ways in which micro and macro economic systems allocate scarce resources among competing ends and apportion goods produced among people. Covers the same material as ECON 111 but with special attention to the relationship between economic activity and environmental problems and to the application of micro and macroeconomic theory tools to analyze environmental issues. A student may not receive credit for both ECON 111 and ECON 111E. Two 80-minute and one 50-minute lecture/discussion per week. Each section is limited to 30 Amherst College students. Fall semester. Professor Sims.2017-18: Offered in Fall 2017 (Offered as MATH 130E and ENST 240.) This course is an introduction to applied statistical methods useful for the analysis of data from all fields. Brief coverage of data summary and graphical techniques will be followed by elementary probability, sampling distributions, the central limit theorem and statistical inference. Inference procedures include confidence intervals and hypothesis testing for both means and proportions, the chi-square test, simple linear regression, and a brief introduction to analysis of variance (ANOVA). This course covers the same statistical concepts as Math 130, but has an environmental focus through examples. ENST majors are strongly encouraged to take this version of the course, but it is open to all students. Four class hours per week (two will be held in the computer lab). Labs are not interchangeable between sections due to course content. Limited to 20 students. Fall semester. Professor Wagaman.2017-18: Offered in Fall 2017 and Spring 2018 Contesting values of and struggles over the control of “nature” are at the heart of environmental politics, and differently positioned political, economic, and social interest groups contend for and exert power through the U.S. environmental policy-making process. In this course we will examine the politics of U.S. environmental policies, focusing on how local, regional, and national governmental institutions, non-governmental organizations and interest groups, and some publics (but not all) define environmental problems and actionable solutions. We will examine the relationship between science, policy and politics, and critically evaluate when and how "objective" scientific truths are mobilized for particular agendas--while not for others--and what "citizen science" means with respect to the U.S. environmental policy process. The class will be divided into two parts: Part I will begin with key environmental writings, and move into an overview of the institutions, actors, and concepts that shape our policy process. Part II will use a case study approach to ground our understanding of how multi-scalar interactions, plurality and uneven power relations influence how and why some issues and interests are validated in the policy process, while others are not. Case studies may include: fracking, Keystone XL pipeline, Endangered Species listings and New England cod fishery regulations. Recommended requisite: ENST 120. Limited to 35 students. Fall semester. Pick Visiting Professor Stewart.2017-18: Not offered Our global environment as a subject of concern has emerged in recent decades with the rise of scientific and media attention to the ways ecological issues like climate change and biodiversity loss matter in the daily lives of global citizens. 
But are all “global environmental citizens” equally responsible for and influenced by what are currently considered global environmental challenges? Why is it that some forms of nature are considered global while others are resolutely local? Are international agreements and development and conservation organizations effective at addressing the problems they intend to solve, or do they create new problems that should be accounted for in our understanding of global environmental politics? In this course, we will explore these questions and others by examining various ecological crises – climate change, deforestation, fisheries management, air and water pollution, hazardous waste disposal, among others – from critical perspectives that raise questions about key political issues, including markets, states, science, power, knowledge and social movements. This course is organized into thematic case studies, through which we will examine the production and negotiation of environmental problems by diverse social actors and institutions, including: producers and consumers, members of different socio-economic groups, actors of institutions and social movements, and citizens of diverse polities. Limited to 35 students. Spring semester. Pick Visiting Assistant Professor Stewart.2017-18: Offered in Spring 2018 The nascent field known as “conservation social science” is emerging among the major conservation organizations, like the World Wildlife Fund and The Nature Conservancy, as they realize the need to move beyond their traditional biological foundations towards the social sciences. Conservation landscapes and species of interest are embedded in complex, and often long-standing, human-environmental relationships that require the retooling of conservation science to better understand and address integrated challenges. This shift towards a “people are the solution” conservation framework requires knowledge about the ecological and social concerns and implications of conservation, which is a well-suited pursuit for interdisciplinary Environmental Studies scholars. This course prepares students to engage with this emerging field by understanding what conservation social science means in the history and trajectory of conservation, and what its foci and approaches should be in the coming years. We begin the class with a historical review of the "greening" of the World Bank and the scaling up of community-based natural resource management (CBNRM) during the 1980s, which brought "the environment" and the "community" together in development and conservation agendas. Moving forward, we review critical social science literatures that examine the social impact of conservation to refine meaningful ways forward for community-centered conservation endeavors. Key themes will include: participation, traditional ecological knowledge, ecological baselines, sustainable yields and sustainability. Requisite: ENST 120. Limited to 35 students. Fall semester. Pick Visiting Professor Stewart.2017-18: Not offered What we know and how we know about "the environment" is influenced by cultural, political, historical and social contexts. Why are some knowledges about the environment perceived to be more accurate, objective and true than others? How might our collective understandings of environmental change shift if multiple forms of knowledge--"western" scientific, indigenous, etc.--were mobilized in the production, dissemination and application of environmental knowledge? 
These questions are both academic and policy-oriented and sit at the interface of political ecology and science studies scholarship on nature/society and conservation and development practice: environmental management contestations and outcomes are shaped by what counts as valid knowledge. In this seminar we will examine how attention to the politics of knowledge potentially shifts the current formations of environmental studies and policy–in theory and practice--towards more integrated and democratized engagements with social and environmental change. This course is anchored in the field of political ecology, which is a sub-field of geography that is concerned with the complex power dynamics of knowing and making claims on "the environment." Our readings and discussions will examine critical perspectives on nature/society boundaries; the role of "western" scientific knowledge in the politics of conservation and development; and meaningful ways to integrate "western" scientific and indigenous environmental knowledges in environmental studies. Requisite: ENST 120; recommended requisite: ENST 250. Limited to 35 students. Spring semester. Pick Visiting Professor Stewart.2017-18: Not offered (Offered as HIST 402 [c] and ENST 401.) Wine is as old as Western civilization. Its consumption is deeply wedded to leading religious and secular traditions around the world. Its production has transformed landscapes, ecosystems, and economies. In this course we examine how wine has shaped the history of Europe, North Africa, and the Americas. Through readings, scientific study, historical research, and class discussion, students will learn about such issues as: the environmental impact of wine; the politics of taste and class; the organization of labor; the impact of imperialism and global trade; the late nineteenth-century phylloxera outbreak that almost destroyed the European wine industry; and the emergence of claims about terroir (the notion that each wine, like each culture, is uniquely tied to a place) and how such claims are tied to regional and national identity. Through class discussion, focused research and writing workshops, and close mentoring, each student will learn about wine while designing and executing an independent research project. We will also get our hands dirty with soil sampling, learn the basics of sediment analysis in the laboratory, and have a go at fermentation. Two meetings per week. This is a research seminar open to juniors and seniors. Priority given to history and environmental studies majors. History majors may take this course either as a research seminar or in place of HIST 301 “Writing the Past.” Limited to 20 students. Spring semester. Professors López and Martini.2017-18: Not offered The dependence of many countries on marine organisms for food has resulted in severe population declines in cod, bluefin tuna, swordfish, and abalone, as well as numerous other marine organisms. In this seminar we will examine the biological, sociological, political, and economic impacts of the global depletion of fisheries. Questions addressed will include: What is the scope of extinctions or potential extinctions due to over-harvesting? How have overfished species responded to harvest pressures? How are fisheries managed, and are some approaches to harvesting better than others? How do fisheries extinctions affect the societies and economies of various countries and marine ecosystems? How do cultural traditions of fishermen influence attempts to manage fisheries? 
Does aquaculture offer a sustainable alternative to overfishing? What is aquaculture’s impact on marine ecology? Three class hours per week. Requisites: ENST 120 or BIOL 230/ENST 210 or consent of instructors. Not open to first-year students. Limited to 20 students. Omitted 2013-14. Professors Temeles and Dizard.2017-18: Not offered Environmentalists are divided between those who believe there must be a fundamental change in our values and our devotion to the market and those who believe our values and the market offer the best hope for achieving sound environmental policy. If we are to achieve sustainable management of natural resources, is it necessary that we first transform ourselves and the basis of our social organization or do we already possess the tools to accomplish the task, in which case fundamental transformations might actually make things worse? In this course, we will join this debate and closely examine the claims and counterclaims made for each position. We will examine specific issues, ranging from reducing greenhouse gases to regulating genetically modified crops, in hopes of working our way toward an assessment of policy choices. Students will be expected to select an environmental issue (not necessarily one on which our course readings will focus) on which they will write a term paper that comes to grips with our options and that will suggest, albeit tentatively, which option(s) seem most promising. Limited to 25 students. Not open to first-year students. Spring semester. Professor Dizard and Senior Lecturer Delaney. 2017-18: Not offered Independent reading course. Fall and spring semesters. The Department.2017-18: Offered in Fall 2017 and Spring 2018 The Senior Seminar is intended to bring together majors with different course backgrounds and to facilitate original independent student research on an environmental topic. In the early weeks of the seminar, discussion will be focused on several compelling texts (e.g., Rachel Carson’s Silent Spring or Alan Weisman’s The World Without Us) which will be considered from a variety of disciplinary perspectives by members of the Environmental Studies faculty. These discussions are intended to help students initiate an independent research project which may be expanded into an honors project in the second semester. For students not electing an honors project, the seminar will offer an opportunity to integrate what they have learned in their environmental studies courses. The substance of the seminar will vary from year to year, reflecting the interests of the faculty who will be convening and participating in the seminar. Open to seniors. Fall semester. Professors Martini and Sims.2017-18: Offered in Fall 2017 Spring semester. The Department.2017-18: Offered in Spring 2018
<urn:uuid:d45f8c87-05cf-4d7a-be1d-4b2cbc0d6c7f>
{ "dump": "CC-MAIN-2017-39", "url": "https://www.amherst.edu/academiclife/departments/environmental_studies/courses/1314S?display=curriculum", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686043.16/warc/CC-MAIN-20170919221032-20170920001032-00301.warc.gz", "language": "en", "language_score": 0.924971878528595, "token_count": 3195, "score": 3.765625, "int_score": 4 }
A new partnership between Wolfram Research and the Raspberry Pi Foundation bundles the Wolfram Language and Mathematica with the Raspbian OS included on every Raspberry Pi. The Raspberry Pi—a credit-card sized, $25 Linux ARM computer that was originally conceived as a means to inspire a new generation of programmers—recently shipped its two millionth unit. This is the next step in Wolfram’s large-scale plan to apply sophisticated computation everywhere, in a universally accessible way. The Wolfram Language on Raspberry Pi provides a unique knowledge-based programming environment for STEM education and general computing, and is a striking demonstration of the capability of running the Wolfram Language on an embedded computer anywhere. Users can easily connect to devices that do things in the world—analyzing and uploading sensor data, controlling an autonomous system, analyzing and routing traffic, and millions of other embedded applications. Speaking today at the business and technology leader forum D2: The Future of Data and Devices in Boston, Massachusetts, Wolfram Research Founder and CEO Stephen Wolfram said, “In effect, this is a technology preview: it’s an early, unfinished glimpse of the Wolfram Language. Quite soon it is going to start showing up in lots of places, notably on the web and in the cloud.” Earlier in the day, at the education-focused Computer-Based Math™ Education Summit (CBM) at UNICEF headquarters in New York, Conrad Wolfram, Managing Director of Wolfram Research Europe, and Eben Upton, a Founder and Trustee of the Raspberry Pi Foundation, highlighted the STEM education aspects of the pilot release. “So pleased we can associate the power of Mathematica‘s math with the coding excitement of the Raspberry Pi,” said Conrad Wolfram. “Coding is central to modern math as much as math is often needed to code. Both are central to our fundamental reform of math education—CBM—and today’s announcement puts all the elements together with a secret ingredient: fun.” Added Upton, “Since we launched in 2012, we have come to realize that there is a broader opportunity to engage young people in all aspects of STEM education, from physics and biology to geography, and to use programming itself as a tool to develop the kinds of problem-solving skills that children need to deal with the world around them. We’re great fans of the work that Wolfram Research has been doing in promoting CBM as a new paradigm in mathematics education, and are excited to be able to offer every Raspberry Pi user the chance to get hands-on experience with the Wolfram Language.” The Wolfram Language–Raspberry Pi integration includes a new Device API to connect to serial devices, the on-board GPIO, and the RaspiCam digital camera. A Remote Development Kit for use on any desktop installation of Mathematica can also be downloaded from the Wolfram website. “I’m very excited to see what kinds of things people invent with the Wolfram Language on the Raspberry Pi—and I look forward to reading about some of them in the Wolfram+Raspberry Pi section on Wolfram Community and other places,” wrote Stephen Wolfram in a post on his personal blog. About the Raspberry Pi Foundation The Raspberry Pi Foundation is a UK-registered charity organization founded in 2009, which aims to serve as a catalyst for making cheap, accessible, programmable computers available everywhere—including developing countries that can’t afford the power and hardware needed to run a traditional desktop PC. 
Since the launch of the first Raspberry Pi in 2011, the organization has sold over two million devices. About Computer-Based Math™ Founded by Conrad Wolfram in 2011, Computer-Based Math™ (CBM) aims to build a completely new math curriculum with computer-based computation at its heart—alongside a campaign to refocus math education away from historical hand-calculating techniques and toward relevant and conceptually interesting topics. Find out how you can get involved with the global CBM initiative: Wolfram is the company where computation meets knowledge. For over 25 years, the organization has pursued a long-term vision to develop the science, technology, and tools to make computation an ever-more-potent force in today’s and tomorrow’s world. The company is the developer of Mathematica, the world’s most powerful integrated computation system, and Wolfram|Alpha, the widely used and continually growing computational knowledge engine. Wolfram is also the creator of the Computable Document Format (CDF), an interactive, computation-based knowledge container that is the core technology behind the 9000+ examples in the Wolfram Demonstrations Project and digital content offerings by all major STEM publishers. For more information, visit the company website:
<urn:uuid:10abfeb2-29a3-4cf0-8bb9-c0b53d81f72a>
{ "dump": "CC-MAIN-2016-22", "url": "http://company.wolfram.com/news/2013/wolfram-language-and-mathematica-now-available-on-every-raspberry-pi/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275437.19/warc/CC-MAIN-20160524002115-00147-ip-10-185-217-139.ec2.internal.warc.gz", "language": "en", "language_score": 0.909868597984314, "token_count": 996, "score": 2.8125, "int_score": 3 }
Automobile insurance from Massachusetts companies is based on the principle of no fault. What this means is that if you are involved with other vehicles in a car accident, regardless of who is at fault, each insurance company pays for the damages experienced by its insured. There is no need to go to court, no tangle over who is right and who is wrong, and none of the other difficulties associated with distributing funds to those who need them.

What happens if this principle is applied to life? What if, instead of spending time assigning blame or fault with all the resentment and anger that that can produce, no fault were placed upon anyone? If that were the case, it could mean that any resentment or anger I might feel from a perceived injury, whether physical, emotional, or psychological, could be seen in a different light.

Think about it. If a cat scratches me, is it the cat’s fault, or is it simply the nature of a cat to scratch? If a small child breaks my precious piece of china or even pulls the dog’s tail, whom can I blame? Children are often careless and break things. Especially when they are very young, they may not recognize that dogs don’t like to have their tails pulled.

In my life there have been many people who, metaphorically speaking, stepped on my toes because of who they were. They didn’t do it on purpose. They were just being themselves. Can I blame them for being themselves? Do I resent them for their actions, or do I simply recognize that it’s not their fault that they are inclined to be forgetful, careless, ill-informed, or whatever else caused the problem?

I may do a disservice if I place blame on another instead of recognizing that he or she only acts as he or she is capable of acting at the time. The same is true of myself. I can take responsibility for my actions; I can try to do better next time; yet I do not need to fault myself.

It is my firm belief that at any given time people do only what they are capable of doing and that there is no need to assign fault. Blaming causes resentment and anger and tends to prolong the original difficulty. I might gently call attention to what was said or done, or discuss it, yet only if it seems important. It’s not my job to judge the actions of another.

Perhaps this is why statues and other images of Justice are usually blindfolded. She holds scales symbolizing fairness. Perhaps she sees with the eyes of the heart rather than her physical ones. To be fair I need to take into consideration all the factors in a situation, not only my perceptions. When I can accept that there really is no fault, that it simply is the way it is, then compassion and forgiveness will guide my response.
<urn:uuid:e88bc8c4-cfa7-4099-9b45-75f0d1417926>
{ "dump": "CC-MAIN-2018-30", "url": "https://tashasperspective.com/2014/03/24/living-a-no-fault-life/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589237.16/warc/CC-MAIN-20180716080356-20180716100356-00033.warc.gz", "language": "en", "language_score": 0.9769921898841858, "token_count": 598, "score": 2.515625, "int_score": 3 }
We know that both genetics and environment have an effect on human development. We have genetic tendencies, but we are also influenced by factors such as the environment and parenting style. Choose a factor—such as IQ, autism, ADHD, or addiction to drugs or alcohol—that is influenced by both nature and nurture. Then discuss your beliefs on how much each of those sides influences your chosen factor. Back up your viewpoint with at least one example of current research or an expert opinion.

Your short paper should demonstrate an understanding of the topic, as evidenced by discussing the required critical elements, sharing your opinion, and applying current research. Analyze the topic, integrate text and reference material into the short paper, and apply course concepts in your discussion. Demonstrate your critical thinking ability by giving your thoughts and opinions on the topic, and explain why you hold those beliefs.

Specifically, the following critical elements must be addressed:
- Nature versus nurture for a chosen factor
- Application of current research
- Critical analysis

Guidelines for Submission: Use at least one reference in addition to the course textbook. It must be from a reliable source, such as a journal or a professional association or university site. Follow these formatting guidelines: one to two double-spaced pages, 12-point Times New Roman font, one-inch margins, and references in APA format at the end of the paper. MUST BE ORIGINAL WORK
<urn:uuid:7fd47a3d-1d85-415a-bead-c4b385317efa>
{ "dump": "CC-MAIN-2018-47", "url": "https://www.uniessayhelp.com/2015/08/24/we-know-that-both-genetics-and-environment-have-an-effect-on-human-development/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746112.65/warc/CC-MAIN-20181119212731-20181119234731-00111.warc.gz", "language": "en", "language_score": 0.9197024703025818, "token_count": 285, "score": 3.421875, "int_score": 3 }
About 850 tadpoles were released into two streams in the Santa Monica Mountains.
The U.S. Fish and Wildlife Service will analyze the impact of certain pesticides on wildlife in the United States.
Alaskan wood frogs can freeze and thaw due to sugar buildup in their bodies.
The giant salamander is listed as near threatened by the IUCN.
Study says frogs that survive exposure to chytrid build resistance to the fungus.
Not all frogs can jump as far as "The Celebrated Jumping Frog of Calaveras County."
The California red-legged frog is now the official state amphibian of California.
The varying color patterns of dyeing dart frogs affect how potential predators see the frogs.
Man catches and releases American bullfrog back into his private pond.
Decades of strife prevented herpetological expeditions in the Mekong region until the late 1990s, the WWF says in its report, "Mysterious Mekong."
Footballer admits in interview with BBC's Michael Palin that he doesn't like frogs.
This little treefrog was found in a bunch of kale.
They are named dancing frogs due to the male's leg kicking during breeding season.
Australian angler Angus James was about to release the jungle perch when he noticed a live green treefrog in its mouth.
The amphibians have been devastated by pollution, pesticides, and the chytrid fungus.
Scientists use midwife toad skin mucus to develop a defense against the chytrid fungus.
NASA rocket launch sends frog flying into the air.
The Coqui frog has become established on the Big Island of Hawaii.
Streambed salamander discovered in Arkansas' Lake Catherine State Park.
Study shows frog mortality rates ranging from 100 percent after just one hour to 40 percent over 7 days.
The California red-legged frog may become the official amphibian of California.
Gracixalus lumarius, a new frog species, hails from the Vietnamese highlands.
Study noted an 8 percent decline in salamander size from 1980 to 2012.
50-year-old Oahu resident charged with smuggling poison dart frogs.
A single Tandayapa Andean toad was first discovered in 1970 and rediscovered in 2012.
Grand opening celebration will include a frogging expedition in Strawberry Canyon.
Celebrate this most interesting herp.
Stanley Park Ecology Society members spot rare albino Ensatina salamander.
The new marsupial frog, Gastrotheca dysprosita, hails from the Cerro Barro Negro mountain in Peru.
Frog docents will warn hikers not to trample frog breeding grounds in Marin County, Calif.
Salado and Georgetown salamanders of Texas get Endangered Species Act protections.
Manú National Park in Peru is home to 287 species of reptiles and amphibians.
Scientists document the first acoustical and statistical analysis of the frog calls of Nasikabatrachus sahyadrensis.
Beelzebufo may have been a voracious little dinosaur eater, or not.
There are just three populations of the northern cricket frog in New York.
When túngara frogs call out to find a mate, bats sense it.
The Adventure Aquarium in Camden, NJ, opens a frog exhibit featuring more than 20 frog species.
California red-legged frogs haven't been seen in the Santa Monica Mountains in more than 40 years.
Scientists move 40 Litoria lorica to a new location four kilometers from the only known area in which the frogs can be found.
Pesticide is found in 75 percent of stream water and 40 percent of all groundwater tested in a study.
Australia's Department of Environment and Conservation and the University of Western Australia develop an iPhone app that enables you to tell the difference.
Fence will deter ATVs and other vehicles from traversing a pond where thousands of spotted salamander eggs were destroyed in March.
Animals have been decimated by livestock grazing, habitat destruction, invasive species, and disease.
Hyla japonica, normally green in color, is bright blue.
Amphibian looks like an earthworm, and its young feed on the mother's skin.
Reptile breeder Dean Boshoff captured video of Breviceps macrops while camping in South Africa's Northern Cape.
Tunnels constructed in 2011 help some 100-plus endangered Ambystoma californiense get to their breeding pond.
Wildlife and Fisheries Commission also proposes permit system for taking or killing the eastern diamondback rattler and two species of pine snake.
4th annual RAAD event showcased herpetology collections as well as reptile and amphibian organizations.
The Puerto Rican lowland coquí is one of the smallest treefrogs in the world.
U.S.G.S. biologists discover 19 new adult Rana muscosa frogs among 71 frogs and hundreds of tadpoles in a Mojave River pool of water.
"The Amphibians and Reptiles of Michigan" published by Wayne State University Press.
J.liv bacteria may help to protect amphibians from chytrid.
Fish and Wildlife Service says development could contribute to the demise of these species.
Public outreach demonstration enables Girl Scouts to interact with live animals and museum specimens.
Blood drawn from mountain yellow-legged frogs confirms cause of death.
Lack of plan further imperils Ambystoma californiense, conservation group says.
Herpetologists find a single Cardioglossa cyaneospila in a Burundi rainforest.
The population of Rana sevosa is estimated at around 100 adult frogs.
Mayor Ed Lee opposes the transfer and may veto the legislation.
Aussie Sunset Frogs Released From Perth Zoo in Hopes of Establishing a Population Outside Their Range: Perth Zoo-raised and hatched frogs released onto private property.
Park is home to a golf course, the California red-legged frog, and the San Francisco garter snake.
Exhibit features two of the rarest of salamanders.
The zooplankton eats the zoospore stage of Batrachochytrium dendrobatidis.
The radiated tortoise (Geochelone radiata) could become extinct within the next 20 years.
<urn:uuid:5966b47a-c895-46e4-b903-73696850338f>
{ "dump": "CC-MAIN-2015-18", "url": "http://www.reptilesmagazine.com/Article-Archive/index.php?tagID=353&operator=or", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656965.63/warc/CC-MAIN-20150417045736-00170-ip-10-235-10-82.ec2.internal.warc.gz", "language": "en", "language_score": 0.9032653570175171, "token_count": 1306, "score": 3, "int_score": 3 }