There Are No “Best Practices” for Democratic Transitions
I've read two pieces in the past two days that have tried to draw lessons from one or more cases about how policy-makers and practitioners can improve the odds that ongoing or future democratic transitions will succeed by following certain rules or formulas. They've got my hackles up, so I figured I'd use the blog to think through why.
The first of the two pieces was a post by Daniel Brumberg on Foreign Policy‘s Middle East Channel blog entitled “Will Egypt’s Agony Save the Arab Spring?” In that post, Brumberg looks to Egypt’s failure and “the ups and downs of political change in the wider Arab world” to derive six “lessons or rules” for leaders in other transitional cases. I won’t recapitulate Brumberg’s lessons here, but what caught my eye was the frequent use of prescriptive language, like “must be” and “should,” and the related emphasis on the “will and capacity of rival opposition leaders” as the crucial explanatory variable.
The second piece came in this morning’s New York Times, which included an op-ed by Jonathan Tepperman, managing editor of Foreign Affairs, entitled “Can Egypt Learn from Thailand?” As Tepperman notes, Thailand has a long history of military coups, and politics has been sharply polarized there for years, but it’s still managed to make it through a rough patch that began in the mid-2000s with just the one coup in 2006 and no civil war between rival national factions. How?
The formula turns out to be deceptively simple: provide decent, clean governance, compromise with your enemies and focus on the economy.
This approach is common in the field of comparative democratization, and I've even done a bit of it myself. I think scholars who want to make their work on democratization useful to policy-makers and other practitioners often feel compelled to go beyond description and explanation into prescription, and these lists of "best practices" are a familiar and accessible form in which to deliver this kind of advice. In the business world, the archetype is the white paper based on case studies of one or a few successful firms or entrepreneurs: look what Google or Facebook or Chipotle did and do it, too. In comparative democratization, we often get studies that find things that happened in successful cases but not in failed ones (or vice versa) and then advise practitioners to manufacture the good ones (e.g., pacts, fast economic growth) and avoid the bad (e.g., corruption, repression).
Unfortunately, I think these “best practices” pieces almost invariably succumb to what Nassim Taleb calls the narrative fallacy, as described here by Daniel Kahneman (p. 199):
Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen.
The narrative fallacy is intertwined with outcome bias. Per Kahneman (p. 203),
We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious only after the fact… Actions that seem prudent in foresight can look irresponsibly negligent in hindsight [and vice versa].
When I read Tepperman's "deceptively simple" formula for the survival of democracy and absence of civil war in Thailand, I wondered how confident he was seven or five or two years ago that Yingluck Shinawatra was doing the right things, and that they weren't going to blow up in her and everyone else's faces. I also wonder how realistic he thinks it would have been for Morsi and co. to have "provide[d] decent, clean governance" and "focus[ed] on the economy" in ways that would have worked and wouldn't have sparked backlashes or fresh problems of their own.
Brumberg’s essay gets a little more distance from outcome bias than Tepperman’s does, but I think it still greatly overstates the power of agency and isn’t sufficiently sympathetic to the complexity of the politics within and between relevant organizations in transitional periods.
In Egypt, for example, it’s tempting to pin all the blame for the exclusion of political rivals from President Morsi’s cabinet, the failure to overhaul the country’s police and security forces, and the broader failure “to forge a common vision of political community” (Brumberg’s words) on the personal shortcomings of Morsi and Egypt’s civilian political leaders, but we have to wonder: given the context, who would have chosen differently, and how likely is it that those choices would have produced very different outcomes? Egypt’s economy is suffering from serious structural problems that will probably take many years to untangle, and anyone who thinks he or she knows how to quickly fix those problems is either delusional or works at the IMF. Presidents almost never include opposition leaders in their cabinets; would doing so in Egypt really have catalyzed consensus, or would it just have led to a wave of frustrated resignations a few months down the road? Attempting to overhaul state security forces might have helped avert a coup and prevent the mass killing we’re seeing now, but it might also have provoked a backlash that would have lured the military back out of the barracks even sooner. And in how many countries in the world do political rivals have a “common vision of political community”? We sure don’t in the United States, and I’m hard pressed to think of how any set of politicians here could manufacture one. So why should I expect politicians in Egypt or Tunisia or Libya to be able to pull this off?
Instead of advice, I’ll close with an observation: many of the supposed failures of leadership we often see in cases where coups or rebellions led new democracies back to authoritarian rule or even state collapse are, in fact, inherent to the politics of democratic transitions. The profound economic problems that often help create openings for democratization don’t disappear just because elected officials start trying harder. The distrust between political factions that haven’t yet been given any reason to believe their rivals won’t usurp power at the first chance they get isn’t something that good intentions can easily overcome. As much as I might want to glean a set of “best practices” from the many cases I’ve studied, the single generalization I feel most comfortable making is that the forces which finally tip some cases toward democratic consolidation remain a mystery, and until we understand them better, we can’t pretend to know how to control them.
N.B. For a lengthy exposition of the opposing view on this topic, read Giuseppe Di Palma’s To Craft Democracies. For Di Palma, “Democratization is ultimately a matter of political crafting,” and “democracies can be made (or unmade) in the act of making them.”
|
Baccarat: A Casino Game of Skill
Baccarat, or just baccara, is a card game of Italian origin mainly played in casinos. It is a comparing card game, played between two contestants, usually called the banker and the player. Each baccarat coup has three possible outcomes: win, loss, and tie. With so many ways a coup can resolve, baccarat is one of the most interesting games to play in casinos.
Like other card games, casino baccarat has its own special rules, known as the 'preliminary results.' Because of this, all bets before the draw need to be equal in amount (no bonuses or extra bets can be made before the draw). The preliminary results are also used to set the amount of stake for each player taking part in the game.
Like all card games, baccarat requires strategy. Baccarat can be played with two decks of cards; the player must have two cards face up to be able to see any card that the banker does not have. This ensures that the banker cannot know which cards the player has, and vice versa. Some casinos allow three decks, but most allow two. The number of decks the players may use depends on the number of people who will be playing.
As well as the point values of each hand, many casinos provide other factors, such as a pre-determined 'edge,' which is often used to alter the point values of the game. This is done by determining the minimum number of chips that each player has to start with. Many casinos also assign a particular point value to the card sleeves. This value is often viewed as being between one and seven or eight on a regular baccarat table. If a player has the advantage of having more sleeves than his opponent, then that player is given the opportunity to use all the sleeves that he has, regardless of what others have in their hand.
The casinos might use standard card decks, or they might use a combination of standard and custom decks. Casino baccarat is commonly played using standard decks, and even money bettors may purchase their own individual decks. If you choose to purchase your own card deck, you will want to choose a good (and likely expensive) one, since a poor deck will make it more difficult to win. It is possible to find extremely good deals on high-quality decks through private sellers and auctions on the internet. Before purchasing your own card deck, however, you should carefully consider whether betting your money on the cards is worth it.
Baccarat is played on a rectangular table, and most gamblers prefer to place their bets at the center of the table, with their feet resting conveniently on the padded footstools. Many gamblers like to sit near the entrance to the casino, where they can easily see those walking out with a winning ticket. Others prefer to place their bets at the tables in front of them, where their decisions cannot easily be influenced by other players and the surroundings. However, if you are going to play at a public location, you are encouraged to move around the room so that everyone can see you and your decisions. A simple solution is to place your bets at the dealer's table, which is usually close to the door or along one of the walls of the casino.
The biggest edge in baccarat is the ability to control the amount of money that you put into the pot. This means that the player with the best edge is the player with the best cards, and, oftentimes, the player with the most chips also has the advantage. The house edge, or "edge," refers to the difference between how much the house pays you for each bet you win and the amount the casino keeps when you're done playing. Once you get all your chips in and walk away, you've lost the edge, so be sure to know when you're out of the pot!
While the earliest casino games were built around gambling and pleasure rather than skill, baccarat evolved into a game of skill, just as games like chess, poker and roulette have. Today, you'll find baccarat tables in high-end casinos, and they aren't very different from the ones you'll find in your neighborhood bar or video poker parlor. The same basic rules apply, although playing conditions differ and the house edge may be larger. Baccarat isn't appropriate for everyone, especially those who have trouble keeping their bets under control. However, for those who can keep their bets in check, it's one of the best casino games around!
|
Splunk® Enterprise
Knowledge Manager Manual
Use summary indexing for increased search efficiency
Summary indexes enable you to efficiently search on large volumes of data. When you create a summary index you design a scheduled search that runs in the background, extracting a precise set of statistical information from a large and varied dataset. The results of each run of the search are stored in a summary index that you designate. Searches you run against the completed summary index should complete much faster than similar searches run against the source dataset.
The summary index is "faster" because it is smaller than the original dataset and contains only data that is relevant to the search that you run against it. The summary index is also guaranteed to be statistically accurate, in part because the scheduled search that updates the summary runs on an interval that is shorter than the average time range of the searches that you run against the summary index. For example, if you want to run ad-hoc searches over the summary index that cover the past seven days, you should build and update the summary index with a search that runs hourly.
Summary indexing allows the cost of a computationally expensive report to be spread over time. For example, the hourly search to update a summary index with the previous hour's worth of data should take a fraction of a minute. Running the weekly report against the original dataset would take approximately 168 (7 days * 24 hours/day) times longer.
Types of summary indexes
You can create two types of summary indexes:
• summary events indexes
• summary metrics indexes
Both types of summary indexes are built and updated with the results of transforming searches over event data. The difference is that summary events indexes store the statistical event data as events, while summary metrics indexes convert that statistical event data into metric data points as part of their summarization process.
Metrics indexes store metric data points in a way that makes searches against them notably fast, and which reduces the space they take up on disk, compared to events indexes. You may find that a summary metrics index provides faster search performance than a summary events index, even when both indexes are summarizing data from the same source dataset. Your choice of summary index type might be determined by your comfort with working with metrics data. Metric data points might be inappropriate for the data analysis you want to perform.
To learn how to create both types of summary indexes, see Create a summary index in Splunk Web.
For more information about metrics, see Overview of metrics in Metrics.
Summary indexing use cases
The following sections describe some summary indexing use case examples.
Run reports over long time ranges for large datasets more efficiently
Your instance of the Splunk platform indexes tens of millions of events per day. You want to set up a dashboard with a panel that displays the number of page views and visitors each of your Web sites had over the past 30 days, broken out by site.
You could run this report on your primary data volume, but its runtime would be quite long, because the Splunk software has to sort through a huge number of events that are totally unrelated to web traffic in order to extract the desired data. Additionally, the fact that the report is included in a popular dashboard means it will be run frequently. This run frequency could significantly extend its average runtime, leading to a lot of frustrated users.
To deal with this, you set up a saved search that collects website page view and visitor information into a designated summary index on a weekly, daily, or even hourly basis. You'll then run your month-end report on this smaller summary index, and the report should complete far faster than it would otherwise because it is searching on a smaller and better-focused dataset.
Building rolling reports
Say you want to run a report that shows a running count of an aggregated statistic over a long period of time; for example, a running count of downloads of a file from a website you manage.
First, schedule a saved search to return the total number of downloads over a specified slice of time. Then, use summary indexing to save the results of that search into a summary index. You can then run a report any time you want on the data in the summary index to obtain the latest count of the total number of downloads.
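Outside of Splunk, the pattern is easy to sketch. The Python below is a minimal simulation, not Splunk's implementation, and its schema and field names are invented for illustration: an "hourly job" reduces raw download events to one summary row per hour, and the report then reads 168 small rows instead of rescanning every raw event.

```python
import random
from datetime import datetime, timedelta

# Simulated raw events: one record per file download (hypothetical schema).
start = datetime(2021, 1, 1)
raw_events = [
    {"time": start + timedelta(minutes=random.randrange(7 * 24 * 60)),
     "file": "report.pdf"}
    for _ in range(10_000)
]

# The "scheduled search": once per hour, store one small statistical row
# in a stand-in for the designated summary index.
summary_index = []

def summarize_hour(hour_start):
    """Reduce one hour of raw events to a single summary row."""
    hour_end = hour_start + timedelta(hours=1)
    count = sum(1 for e in raw_events if hour_start <= e["time"] < hour_end)
    summary_index.append({"hour": hour_start, "downloads": count})

for h in range(7 * 24):
    summarize_hour(start + timedelta(hours=h))

# The rolling report: aggregates 168 tiny rows, not 10,000 raw events.
total = sum(row["downloads"] for row in summary_index)
print(f"Downloads over the past 7 days: {total}")
```

In a real deployment the hourly job would be a scheduled transforming search that writes into the summary index; the sketch only shows why the report-time work shrinks.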
Does summary indexing count against your license?
Summary indexing data volume is not counted against your license, even if you have multiple summary indexes.
All summarized data has a special default source type. Events summarized in a summary events index have a source type of stash. Metric data points summarized in a summary metrics index have a source type of mcollect_stash.
If you use commands like collect or mcollect to change these source types to anything other than stash (for events) or mcollect_stash (for metric data points), you will incur license usage charges for those events or metric data points.
How event summary indexing works
When a scheduled search that has been enabled for summary event indexing runs on its schedule, Splunk software temporarily stores its search results in a file. The file name is derived from the name of the saved search; MD5 hashes of search names are used to cover situations where the search name is overlong.
From the file, Splunk software uses the addinfo command to add general information about the current search and the fields you specify during configuration to each result. Splunk Enterprise then indexes the resulting event data in the summary index that you've designated for it (index=summary by default).
Use the addinfo command to add fields containing general information about the current search to the search results going into a summary index. General information added about the search helps you run reports on results you place in a summary index.
|
Why do people interpret things as signs
The allegory is a stylistic device that we can identify in works of all kinds and literary genres. Allegory is the visualization of the abstract or the unreal: the general is depicted in the particular (the individual). This means that a complex issue is represented by a single thing or a pictorial text. Very often this happens through the use of personification and the accumulation of metaphors (→ example metaphors).
The term allegory is derived from the Greek (ἀλληγορία ~ allegoria) and can be roughly translated as 'other' or 'veiled speech'. The translation thus shows quite well what the allegory is basically about: a linguistic disguise of an abstract concept, which the recipient (reader, viewer) must decipher and interpret.
Accordingly, the allegory is a series of metaphors that extends over an entire text, though it can also be interrupted. The stylistic figure can thus be understood as a metaphor that goes beyond a single word and is therefore more than a simple shift in meaning.
Note: Basically, a metaphor is a shift in meaning. This means that two areas are connected that actually have nothing to do with each other. For example, if we are betrayed, our hearts are broken. Here breaking is connected with the heart, which is not literally consistent, since a heart cannot break. A linguistic image emerges (a metaphor) that we have to interpret.
Often, however, the allegory comes to light in the form of a personification. That is, an abstract term (justice, death, love) is presented in the form of a person (Lady Justice, the Grim Reaper, Cupid) who is endowed with typical properties and features of the term.
Note: Personification means that an inanimate thing is endowed with properties that otherwise only humans have, so that things act like people (e.g. the sun laughs). In the context of allegory, this ascription is a little more complex: not only is a single object brought to life, but an abstract fact is humanized.
Note: Accordingly, the allegory can function on a purely linguistic level (several metaphors, or a metaphor that goes beyond the simple shift of meaning) and on a figurative level (personifications). Let's take a closer look at both forms.
The figurative allegory
The pictorial allegory is found primarily in art, as it requires a pictorial representation. If it is known to the recipient, however, it can also be named in a text.
Since the allegory depicts abstract facts graphically, representations based on the stylistic device have been found in art since antiquity; they illustrate difficult, complex and abstract things and have to be deciphered by the viewer.
There are numerous allegorical representations of abstract concepts, especially in Greek and Roman culture (happiness, peace, love, fertility, money), which decorate temples, coins or triumphal arches in the form of personifications. Let us look at an example of a figurative allegory.
The picture shows the painting La liberté guidant le peuple (Liberty Leading the People) by the French painter Eugène Delacroix. It shows the bustle on the battlefield of the July Revolution of 1830, which was a brutal confrontation between French authorities and citizens. → literary epochs
In the center of the picture we see a bare-breasted woman, holding a French flag in her right hand and a bayoneted rifle in her left, storming a barricade. This allegorical female figure is Marianne, the national figure of France, and she symbolizes freedom. She is therefore an allegory.
Why is the woman a figurative allegory of freedom?
• Flag: In her right hand the woman raises the French flag, which is to be interpreted as a symbol of France. She holds it aloft, which is why the flag hovers over the scene.
• Rifle: In the other hand she holds a rifle, which can clearly be interpreted as a symbol of the fight, and which accompanies her in her charge.
• Cap: Her headgear is known as a Phrygian cap. The Jacobins wore it as a political statement at the time of the French Revolution. This so-called liberty cap became a symbol of freedom in France and Europe.
• Barricade: She is the first to climb over the barricade, with the people behind her. So it is she who turns, apparently victorious, against the authorities that endanger freedom.
• Nudity: Many things can be read into this, which is why we name only a few. In this context, nudity can stand for beauty, perfection and simplicity, and the female breast can also be interpreted as a symbol of the nurturing and protective mother.
• Conclusion: The woman combines several attributes of the freedom of the French people. This is made clear by the flag she carries while fighting (the rifle) and climbing victoriously over the barricade, and her cap likewise endows her with characteristics of freedom. Accordingly, she is the personification of the concept 'freedom' and thus an allegory of it. It is crucial that she unites several symbols that exemplify freedom; this compression of imagery and symbolism makes her an allegory!
More examples of pictorial allegories:
• Lady Justice: A woman who allegorically represents justice. She is shown blindfolded (all people are equal before her), with scales in one hand (to weigh the judgment precisely) and a sword in the other (to carry out the judgment). She thus unites several symbols of justice.
• Reaper: An allegory of death. He has no flesh on his bones (so he is no longer alive) and carries a scythe in his hands (to harvest people and bring them into the realm of the dead). He thus personifies death through several symbols.
The linguistic allegory
In contrast to the pictorial allegory, the linguistic allegory is limited to the text that describes it. The linguistic image can invoke a known pictorial allegory (e.g. Justitia), through which the visual finds its way into the text, but it can also arise purely metaphorically.
Before there was allegory as a stylistic device of rhetoric, there was allegoresis: the interpretation of literary works. One tries to uncover the hidden meaning and thus to grasp the actual message. (→ poem analysis, poem interpretation)
The concept goes back to antiquity, when the scandalous stories of the gods in Homer were defended with the argument that the texts meant something completely different from what they literally said. As a result, the Homeric epics were read as allegories.
The best-known interpretation of entire units of meaning as allegories can be found in the Bible. Hardly any passage here is taken literally; instead, passages are reinterpreted by theologians and scholars.
The attempt is therefore made to interpret what is hidden in the text and to decipher what could be meant by the words beyond their literal meaning. One looks at a level that lies behind the literal meaning and only becomes visible at second glance.
Typical genres of linguistic allegories
In principle, texts of all literary genres can be understood as an allegory if, in their totality, they stand for something else and combine several linguistic images. However, there are individual genres that very often serve as an allegorical representation.
• Fable: The fable is a short narrative populated by animal protagonists. These mostly stand for people and their characteristics, which means that fables sometimes depict an allegorical image of society → characteristics of the fable, mythical animals.
• Proverb: A piece of popular wisdom that gets by with just one sentence, reminiscent of the aphorism and the bon mot. Proverbs are often allegorical representations in miniature. ('The jug goes to the well until it breaks,' meaning: someone keeps something up until damage is done.)
• Satire / parody: Both are forms that exaggerate a state of affairs and often portray it very mockingly. An allegorical representation of a situation can of course be used here. The poet Heinrich Heine, for example, allegorically attacked the German politics of the Vormärz in his satirical verse epic Atta Troll (1843).
George Orwell likewise parodies the society of his time in Animal Farm (1945), which can be read as an allegory of the history of the Soviet Union, in which the popular October Revolution was ultimately followed by the dictatorial rule of Stalin.
• Biblical parables: As already described, the Bible is a work that is seldom understood literally, but is reinterpreted through allegory. The parable of the sower (Mark 4, 3-8) should serve as an example at this point.
Parable of the sower (Mark 4, 3-8)
"Listen! A sower went to the field to sow. When he was sowing, some of the grain fell on the road and the birds came and ate them. Another part fell on rocky ground, where there was little earth, and rose immediately because the earth was not deep; but when the sun rose, the seeds were scorched and withered because they had no roots. Another part fell into the thorns, and the thorns grew and choked the seeds and they gave no fruit.
Another part finally fell on good soil and bore fruit; the seed sprouted and grew up and carried thirtyfold, even sixtyfold and a hundredfold. "
For interpretation: We would like to limit ourselves to the essential aspects in order to show by way of example that this is an allegory.
• If we were to read the text literally, it would simply say that there is a sower who sows his seed, which in some places produces fruit and in other places withers.
• Understood as a parable for the kingdom of God and thus as an allegorical representation, we can reinterpret the text and thus discover the hidden level that stands behind it.
• Then the point is that faith is like a seed that can bear fruit. Perhaps it does not reach everyone, nor is it equally fruitful everywhere, but wherever it meets a believing heart, it will take root and carry faith into the world.
Difference: Allegory, personification, metaphor, symbol
Many stylistic devices were used in this post to explain the allegory. That is why we want to make the differences clear and, above all, look at the symbol.
• The metaphor is above all a shift in meaning. This means that terms are used in an improper context. According to this, one term is linked to another which at first glance does not fit at all → metaphor examples
• The personification endows an inanimate object with human properties and thus enlivens it. In the context of allegory, however, the representation of a complex situation as an acting person is usually meant. In this way it becomes an allegorical representation.
• The symbol is a thing that represents an abstract state of affairs: the cross stands for Christianity, the dove for peace. The allegory mostly combines several symbols, which is what makes the allegorical representation clear. However, the allegory does not merely stand for a thing; it is that thing itself (imagery).
Note: A clear distinction between allegory and other stylistic devices is not always possible. Sometimes the boundaries are blurred and the figures merge with one another.
Accordingly, no clear subdivision is possible, and in fact none is really necessary. Certain tendencies can be recognized among the stylistic devices, but a sharp drawing of boundaries would always be flawed. Goethe described this in his Maxims and Reflections (1833) as follows:
“Allegory transforms the appearance into a concept, the concept into an image, but in such a way that the concept remains limited and complete in the image, to be had and expressed in it. Symbolism transforms the appearance into an idea, the idea into an image, and in such a way that the idea in the image always remains infinite and, even when expressed in all languages, remains inexpressible.”
Effect and function of the allegory
It is sometimes difficult to ascribe a clear function or effect to a stylistic device, since you run the risk of interpreting the figure only in that light. Nevertheless, we would like to give some hints about the effect the allegory may have on the recipient (reader/viewer).
Overview: Effect and function of the stylistic device
• Allegory is the visualization of the abstract or the unreal. The general is shown in the particular (the individual), with metaphors or personifications often being used to make this representation possible.
• Consequently, an allegorical representation can illustrate a situation and make it more pictorial and thus also more understandable. Biblical parables, in particular, are a fine example of how allegories make complex ideas clearer.
• The visual representation in particular can enliven a difficult or complex circumstance, make it clearer and make it appear more tangible.
• It is also important that whoever wants to interpret an allegory must know the individual elements of the representation. Those who do not know that the scales in Justitia's hand are a symbol of weighing and that the sword is used to carry out the judgment will not understand that she is herself a personification of justice.
|
Category Archives: Blue Children
Royal Doulton’s Blue Children ware
Even as I write this I am aware of the misnomer surrounding the title of this piece, as Doulton never referred to this series as such; 'Blue Figures' was the title it gave. The label Blue Children reflects the popularity among collectors of pieces of this ware featuring children, and it was collectors who re-christened it.
Traditionally blue is the most popular colour for porcelain decoration, following an ancient Chinese tradition that still pervades today. The scenes featuring children and also young women in various backdrops would have been purchased outside of Doulton, and there are examples of what we know as Doulton scenes appearing on other manufacturers' items from around this time. One such example is the Royal Bayreuth factory, whose wares bear an uncanny likeness to their Doulton counterparts.
However, what sets the Doulton series apart from its competitors is of course the quality. Doulton’s printing process allowed for finer detail and certainly subtler colour variations, as well as added detail by Doulton artists specifically to the faces of the characters and also the often detailed backgrounds.
A precise date of introduction of this ware is not known, although late Victorian, ca. 1890, seems correct given the elaborate Victorian shapes of many of the earlier pieces. Signatures on these early pieces are also to be expected, J. Hughes being a common one. As was typical when the so-called 'print and tint' process was used, Doulton's major artists used pseudonyms. Thus J. Hughes was in fact John Hugh Plant. Similar examples exist for other artists, including E. Percy, which is recorded as being either Edwin Wood or Percy Curnock.
The end of the Victorian age, marked by Queen Victoria's death in 1901, brought a change in popular tastes, which meant that the elaborate rococo shapes used for Blue Children pieces became much more simplified, and the gilding too was often reduced to a simple gilded edging. Having said that, the shapes of vases etc. that were used are what I class as typically Doulton, in that they were not limited to this series but were used in seriesware production and also the top-end hand-painted wares.
Displays of Blue Children cannot fail to catch the eye, and there are some fantastic collections around the world. With a never-ending variety of shapes, collectors are well catered for! Here is a display made by a friend in South Africa; notice the great shapes. I particularly like the square vases!
|
Quick Answer: What Is The 3 Month T Bill Rate?
Which bank in Ghana is good for investment?
Standard Chartered Bank Ghana stands out as a great bank with the best investment account, providing clients with attractive interest rates. However, other investment banks in Ghana like Ecobank, Access Bank, Fidelity Bank and others give an excellent investment platform and provide beneficial investment accounts.
What is the current 2 year Treasury rate?
The 2 Year Treasury Rate is at 0.14%, compared to 0.14% the previous market day and 0.34% last year.
Why are T-bills low risk?
The risk-free rate is the rate of return of an investment with no risk of loss. Most often, either the current Treasury bill, or T-bill, rate or long-term government bond yield are used as the risk-free rate. T-bills are considered nearly free of default risk because they are fully backed by the U.S. government.
Why yields are falling?
A bond's yield is based on the bond's coupon payments divided by its market price; as bond prices increase, bond yields fall. Falling interest rates make bond prices rise and bond yields fall. Conversely, rising interest rates cause bond prices to fall, and bond yields to rise.
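That inverse relationship is just division. A minimal Python sketch with made-up numbers (not data from this page):

```python
def current_yield(annual_coupon: float, market_price: float) -> float:
    """Current yield = annual coupon payments / market price."""
    return annual_coupon / market_price

# A bond paying a $30 annual coupon:
print(current_yield(30, 1_000))  # 0.0300 -> 3.00% at par
print(current_yield(30, 1_100))  # 0.0273 -> price up, yield down
print(current_yield(30, 900))    # 0.0333 -> price down, yield up
```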
What is the safest form of investment?
For example, certificates of deposit (CDs), money market accounts, municipal bonds and Treasury Inflation-Protected Securities (TIPS) are among the safest types of investments. … Money market accounts are similar to CDs in that both are types of deposits at banks, so investors are fully insured up to $250,000.
What is a Treasury note?
A Treasury note is a U.S. government debt security with a fixed interest rate and maturity between one to 10 years. Treasury notes are available either via competitive bids, wherein an investor specifies the yield, or noncompetitive bids, wherein the investor accepts whatever yield is determined.
What is the current rate for Treasury bills in Nigeria 2020?
Current Nigeria Treasury Bills Rates – CBN NTB Primary Auction Results:

Auction date | Marginal rates
July 15, 2020 | 1.30%, 3.35%
July 1, 2020 | 1.78%, 3.39%
June 17, 2020 | 1.80%, 3.74%
June 10, 2020 | 2.00%, 4.02%
(5 more rows)
What is the 6 month T-bill rate?
The 6 Month Treasury Bill Rate is at 0.05%, compared to 0.03% the previous market day and 0.05% last year.
Is Treasury bill a good investment?
Treasury bills are one of the safest forms of investment in the world because they are backed by the government (here, the Ghana government) and are considered risk-free. Many other governments throughout the world use them as well.
How does a 10 year bond work?
The 10-year Treasury note is a debt obligation issued by the United States government with a maturity of 10 years upon initial issuance. A 10-year Treasury note pays interest at a fixed rate once every six months and pays the face value to the holder at maturity.
Can you lose money on T-bills?
Treasury bonds are considered risk-free assets, meaning there is no risk that the investor will lose their principal. In other words, investors that hold the bond until maturity are guaranteed their principal or initial investment.
Which bank in Ghana has the highest interest rate?
For agriculture, NIB and Republic Bank offered the highest interest rates of 34.8% and 33.5 – 34.2% respectively, while NIB, again, and CAL Bank offered the most expensive loans at rates of 29.5 – 34.3% and 29%, respectively, in the agriculture and manufacturing sectors.
What is the current interest rate on government bonds?
The US 10-Year Government Bond Interest Rate is at 1.25%, compared to 1.06% last month and 1.51% last year. This is lower than the long-term average of 6.03%.
What are Treasury bonds paying now?
What do Treasury bonds pay? A 30-year U.S. Treasury Bond is paying around a 1.25 percent coupon rate. That means the bond will pay $12.50 per year for every $1,000 in face value that you own.
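The $12.50 figure is simple coupon arithmetic; since Treasury bonds pay semiannually, it arrives as two $6.25 payments. A quick check in Python:

```python
face_value = 1_000
coupon_rate = 0.0125           # the ~1.25% coupon rate quoted above

annual_coupon = face_value * coupon_rate
print(annual_coupon)           # 12.5 -> $12.50 per year per $1,000 face value
print(annual_coupon / 2)       # 6.25 -> each semiannual payment
```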
What is the 13 week T bill rate?
13 Week Treasury Bill (^IRX)
Day's Range: 0.0130 – 0.0150
52 Week Range: -0.1080 – 0.2600
Avg. Volume: 0
What is today’s 5 year Treasury rate?
The 5 Year Treasury Rate is at 0.85%, compared to 0.82% the previous market day and 0.51% last year.
What is the T-Bill rate today?
Treasury securities | This week | Month ago
One-Year Treasury Constant Maturity | 0.08 | 0.08
91-day T-bill auction avg disc rate | 0.02 | 0.03
182-day T-bill auction avg disc rate | 0.04 | 0.05
Two-Year Treasury Constant Maturity | 0.15 | 0.11
(4 more rows)
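The "auction avg disc rate" rows quote the bank-discount convention, where the rate discounts the face value over the bill's term on a 360-day year. Here is a sketch of the standard textbook conversions (the formulas are general, not taken from this page):

```python
def tbill_price(face: float, discount_rate: float, days: int) -> float:
    """Purchase price under the bank-discount convention (360-day year)."""
    return face * (1 - discount_rate * days / 360)

def bond_equivalent_yield(face: float, price: float, days: int) -> float:
    """Yield on the actual dollars invested (365-day year)."""
    return (face - price) / price * 365 / days

# A 91-day bill at the 0.02% discount rate shown in the table:
price = tbill_price(10_000, 0.0002, 91)
print(round(price, 2))                                     # 9999.49
print(round(bond_equivalent_yield(10_000, price, 91), 6))  # ~0.000203
```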
What is the current T-Bill rate in Ghana?
Issue Date | Tender | Discount Rate
22 Mar 2021 | 1738 | 12.4934
22 Mar 2021 | 1738 | 12.7843
15 Mar 2021 | 1737 | 12.6141
(8 more rows)
What is the 10 year T Bill rate today?
The 10 Year Treasury Rate is at 1.63%, compared to 1.62% the previous market day and 0.88% last year. This is lower than the long-term average of 4.37%.
What is the minimum amount for Treasury bills in Nigeria?
What is the minimum amount for Treasury bills in Nigeria? The minimum amount of treasury bill you can buy depends on the stockbroker or bank. Some banks offer a minimum investment as low as ₦50,000, while some offer a minimum of ₦1,000,000.
|
Why do we say Holy Ghost?
I went down a bit of a linguistic rabbit hole today. One of the FB groups I'm on posted "Why do we say Holy Ghost? Is it a ghost?" After the first minute of smacking my head, I started really looking into the question. Ghost is an English word of Germanic origin, which was used to translate spiritus from Latin into Old English. It was first used in Old English as such: sē hālga gāst is Old English for "the Holy Ghost". Spirit didn't enter English as a translation for spiritus until the Middle English period (after the Norman Conquest, when the Normans brought more French and Latinate words into English). Ghost and geist are direct cognates, from English and German respectively. Geist has the meaning of spirit in the supernatural sense, as well as the meaning of an apparition or of something having a frightening appearance. It also has a sense of furor or agitation, a sort of ecstasy. Thus, a sense of being filled 'with the Holy Ghost' carries a sense of ecstatic experience.
The Friary: Work, Opus, Aspirations
For the past few weeks, the Sacred Flame has been reignited in me to re-address my Friary Work. I've been writing a lot of material and emails and general musings, all about the Friary Work. And as I've been doing that, I've been thinking a lot about the language I use. As a magician, the words have meaning, and choosing them carefully affects the outcome of the process in which I'm engaged. That got me thinking about the word 'Work'. In English, the word work can have some bad connotations: Something you are doing that you must do, something that takes a great deal of effort, labor, something you do in exchange for payment or gain. And we know language shapes the way we think. The following video gives examples of that. Perhaps my efforts in the Friary realm should not be considered work, but perhaps the Latin source for the term, Opus. We are engaged in the Magnum Opus, the Great Work, after all. And Opus has totally different connotations in
Logos is a fantastically complicated word. It is derived from a Greek word meaning "ground", "plea", "opinion", “law”, "expectation", "word", "speech", "account", "reason", "proportion", "discourse". To me, as we use it in the AJC, it has all of those meanings and more. It is the Ground of all being, the Word spoken when God at creation said “Let there be light” יְהִי אוֹר It is the plea of the Divine to remember from whence we came. It is the opinion of the Gnostics, that they know the unknowable, and approach the unapproachable; that they make effable the ineffable. It is the Law laid out for the conduct of morality, whether to be observed or broken. It is the expectation that although now we see through a glass darkly, then we shall see clear. It is the Word of God: Written, spoken, experienced, living. It is the Speech of God: The great Metatron and all of continual creation formed t
What is a Johannite: Spiritually Decisive
"And going on from there he saw two other brothers, James the son of Zebedee and John his brother, in the boat with Zebedee their father, mending their nets, and he called them. Immediately they left the boat and their father and followed him." -Matthew 4:21-22 As the Director of Communications for the Apostolic Johannite Church, I am the first point of contact for many people seeking a spiritual home that can accept eccentric ideas, yet has a strong tradition. I tend to encounter people who've been wandering from tradition to tradition. These people, their first experience is generally a very narrow version of Christianity, where if you don't toe the line, you are ostracized. So they reject Christianity altogether, and they look at Buddhism, or paganism, or nothing. Then, when they begin to heal, the Church calls them back, but they cannot accept the narrowness of the version of which they were raises. And so they find us. For some people, they've studied a l
Reading material
I am reading a ridiculous amount of books at the moment. I should really pick one and finish it. The Three Body Problem by Liu Cixin Complicity by Iain Banks Driftless Spirits by Dennis Boyer Northern Frights by Dennis Boyer Sword of the Legion by Jason Anspach, Nick Cole Annihilation by Jeff VanderMeer I really should finish one or more of these.
Holst: Jupiter
I am a fan of all of Holst's planet series, but nothing speaks to me like Jupiter, Bringer of Jollity. Anyone who's read my blog for any period of time knows I like Jupiter in all his aspects: King, Bringer of Jollity and plenty, Expander of boundaries and limits. The music is so upbeat! It fills one with joy, with grandeur, with expansion, at least in the first movement. Then it quiets, but even its quiet is joyous, and then it comes back in grand and sweeping, like a mountain vista in New Zealand. One can almost see a procession of gifts and brightly colored courtiers coming to pay homage to the king on the day of his coronation, or the day of his victory. A stately procession follows a flurry of preparation. Just when you think the grandeur of the second movement will come to its end, the small trilling notes of fairies and servants and swirling dance come back in, and then Jupiter is there, proclaiming his good will and beneficence to all. It makes me hap
Clothes Horse
There is nothing quite like wearing a good tailored suit. The cut simply falls correctly on the body, and the figure you cut, with a neatly tied tie and a pressed shirt, makes you think you can take on the world. A set of shiny dress shoes and some socks that speak to my personality, and I feel like a man about town, a captain of industry, a go-getter ready to take on the world. I can dream in a suit. That style of dress is out of place on the farm. On the farm, I like my Romeo boots with no laces that just slip on. A pair of old jeans, a flannel shirt if it's cold, a t-shirt if it's warm. A ball cap to keep the sun from my eyes. It's relaxed, it allows for work, and it makes me feel, well, like a man of the land. Someone who works hard to accomplish their goals. It grounds me. Even more removed are the vestments of the priest. Designed at the height of the Roman empire, the alb, stole, chasuble, and cincture put me in a sacred role, a role that is outside of my ev
Great balls of fire, I am feeling ill.
Grievous wails of despair come from my mouth
Groaning, sweating, swearing, failing of will.
Gone down a bad path, my health has gone south.
Give me release, succor, a cooling cloth
Growing is my unease that this illness
Growls and rolls in my belly, ceaseless froth.
Green mucus, bane color of un-wellness
Government healthcare does not cover me
Great witch doctors have no elixir cure
Good faith healers' prayers do not set me free.
Gross fluids flowing to make body pure.
Greeting the morning of my discontent
Going to find where my superb health went

Writing Prompt: October 1, Fearful symmetry
Pick a letter, any letter. Now, write a story, poem, or post in which every line starts with that letter.
|
Regner Ramos and Kleanthis Kyriakou
Funded by the Graham Foundation (2021-2023), and named after one of Puerto Rico's largest sugar refineries, which dates back 200 years (1820-2003), "Coloso" is a web-based, virtual factory for users to produce digital monuments commemorating LGBTQ+ spaces in Puerto Rico. It aims to destabilize monuments as static representations of patriarchal, colonial ideals. Instead of immobile statues and architectural elements, "Coloso" reimagines monuments as distributable digital icons and data. Taking the form of 3D digital models, drawings, and animations, "Coloso's" monuments exist freely on the internet where they can be generated, accessed, and shared by individuals, groups, and organizations. They can be uploaded and dropped into spaces in Google Maps, printed as posters, and laser-cut and 3D printed as objects to be folded, stacked, and assembled. Thus, through this project, the act of creating queer digital monuments both appropriates and rejects colonial ideals of memorialization; contests who and what is commemorated; democratizes who gets to decide and commission; and reimagines the very materiality of monuments.
"Coloso" critically reflects on the loss of LGBTQ+ spaces. But rather than simply looking back on them nostalgically, it gives them agency, distributing and immortalizing them through the internet for generations. Thus, the project aims to contribute to contemporary architectural discourse through the creation of a performative website that celebrates, commemorates, and registers LGBTQ+ architectures. By using the architectural typology of the monument as a tool to contest the displacement and ephemerality that define Puerto Rican queer spaces, "Coloso" is a subversive tool advocating for permanence, claim and ownership, and the right to the built environment.
The website performs as both a queer archive and an architectural research method. It enables user-generated content to materialize—using a unique kit of parts—into a digital “monument” which can be downloaded, screenshot, shared, laser-cut and/or 3D-printed. The project aims to explore digital and analogue, coding and making, process and play, and immateriality and permanence as queer, decolonial modes of conducting architectural research.
This project is collaborative between Regner Ramos and Kleanthis Kyriakou, through their research and design practice, Wet-Hard Agency.
Follow the project here.
|
Four knights vs queen challenge
by Frederic Friedel
9/8/2017 – It is a seven-piece ending over which Troitzky once toiled. In the meantime endgame databases have mastered it and can solve billions of non-trivial positions, each in an instant. But can a computer program be taught to generate interesting studies or problems with the given material? AI researcher Dr Azlan Iqbal has instructed his software to try, and it has produced a mate-in-five. We ask our readers: can you do better?
Endgame four knights vs queen
In an article Troitzky: In Memoriam by Prof. Nagesh Havanur, published in March last year, AI scientist Prof. Azlan Iqbal came across the following quote:
“When I imagine Troitzky in his Siberian forest, surrounded by howling wolves, analyzing night after night whether king plus four knights can always beat king plus queen, that is great. That’s what chess is all about, only you have to be a chess player to appreciate it. How can you explain to a non-chess player that within chess there is a little world of endgame studies… within which there is a microcosm made of utterly mad men analyzing four knights against queen.” – Tim Krabbé, Chess Curiosities
Inspired by this passage Dr. Iqbal, who has a Ph.D. in Artificial Intelligence and is a senior lecturer at Universiti Tenaga Nasional, Malaysia, decided to turn his attention to the endgame of four knights against a lone queen, which was part of the 7-man Lomonosov tablebases calculated in 2012 at the Computer Science department of Moscow State University. These contain the exact evaluations (draw, or moves to mate) for all positions with seven or fewer pieces on the board. The total number of positions is more than 500 trillion – 500,000,000,000,000! The true number is in fact several times larger, but many of the positions can be obtained from others by mirroring and rotating the board, so there is no need to store them on disk.
Storing 500 trillion positions, together with evaluations and the best move for each, would require 1000 hard drives with petabytes of data on them. Fortunately it is possible to compress the data, and 100 terabytes turned out to be enough to store the information. Of course that does not fit on regular desktop or notebook drives, so the Russians created a special site and made the information available for online access. And this is what Azlan Iqbal used to study the NNNN-Q endgame.
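The Lomonosov tables themselves are reachable only through that online service, but the general probing workflow can be illustrated locally with the open Syzygy tablebases (a different 7-piece format) and the python-chess library. In this sketch the FEN is just a hypothetical four-knights-versus-queen placement, and "./syzygy" is a placeholder path to downloaded Syzygy files:

```python
import chess
import chess.syzygy

# A hypothetical NNNN-vs-Q position (any legal 7-man placement works).
board = chess.Board("8/2q5/8/7k/8/2NNN3/3N4/3K4 w - - 0 1")

# Point this at a directory containing downloaded Syzygy table files;
# probing raises MissingTableError if the relevant table is absent.
with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)  # win/draw/loss for the side to move
    dtz = tablebase.probe_dtz(board)  # distance to a zeroing (pawn/capture) move
    print(f"WDL: {wdl}, DTZ: {dtz}")
```

Syzygy reports win/draw/loss and distance-to-zero rather than Lomonosov's distance-to-mate, but the query pattern is the same.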
First came a survey of chess problem and puzzle books published in the last 50 years or so. This did not reveal any existing compositions using the four knights vs. queen – hardly surprising since chess problems tend to avoid promoted pieces in the starting position. Azlan only managed to find a YouTube video showing a particularly long mate with the NNNN-Q material. No composer or source is mentioned, and not all the moves or even the length of the total solution are shown, but the initial position is real and valid. You can try playing it out against Fritz on this page.
A strong chess engine will show the position as essentially drawn, and it is extremely difficult to beat it. The 7-man Lomonosov tablebases, on the other hand, tell us that it is a forced mate in 89 moves. Interestingly, the position was published on YouTube in 2008, four years before the development of the first 7-piece endgame tablebase. The solution in the video may not be the most optimal one, but whoever composed the study was correct in thinking that it was indeed a forced mate – even though they could not have been certain of it.
To illustrate how important or delicate the initial position is using these pieces, take a look at the following position:
Here, a knight is simply moved from e1 to c8, but the Lomonosov tablebases evaluate the position as a draw, given optimal play by both sides! We do not give you the moves and, heaven forbid, all the possible variations of these positions, but you are welcome to play them out against our online Fritz program and see if it defends (or attacks) competently. Kudos if you can win the first position, compliments to the computer if it can successfully defend the second.
There are probably tens of billions of positions where none of the knights are under threat of immediate capture (or trapped or hanging). Most of these positions are likely to be wins or draws for White, some like the above in up to 89 moves. But it is clear that very few of these positions would be considered appealing to human chess enthusiasts.
So how to extract the interesting ones? This is where Chesthetica comes in. This program, well known to our readers, was developed by Dr Iqbal and uses Digital Synaptic Neural Substrate (DSNS) technology to try to identify positions that are not just sound but also aesthetically pleasing. The articles listed in the links at the bottom of this article trace the development of Chesthetica in great detail.
A recent addition to Chesthetica is that it can now compose studies using particular piece sets. In the above screenshot (click to enlarge) you can see that four knights and one queen are selected, along with certain other parameters meant to extract study-like positions. All four problem types are selected, and a number of conventions applied: no check and no capture in the key move, and no 'cooked' problems, i.e. no additional key move not intended by the composer. Applying these conventions helps ensure that the composition is not simply one where the key move was, say, forking the king and queen and then winning the queen. That would not be particularly interesting or have aesthetic merit.
Given the rarity and complexity of forcing a win with the pieces, it was quite surprising to Dr. Iqbal when Chesthetica managed to compose the following position in just ten days of computing, using a small personal computer.
You can try solving against Fritz and find the forced mate in five moves. Chesthetica evaluates this composition as scoring 3.086 aesthetically, which is rather good and definitely within the class of what most human players with domain competence would consider a chess problem.
We were not deeply impressed by the problem and wrote to Azlan: "The problem composed by Chesthetica does not appear to be valuable: Black loses the queen on move two, which is only slightly better than losing it to a fork on move one?!"
To this Dr Iqbal wrote back: "Indeed the problem may not be the most beautiful forced mate in five there could possibly be, using these pieces, but that was never the intention. The fact that a computer could compose such a thing at all is what is a milestone for computational creativity in this area. Do you know of any other program that can do that? How about any 10-12 year human child who has been playing for a few years? Why not set up the pieces for these children and see what they come up with? As for the one composed by Chesthetica, there is no check, capture or fork in the key move and so that's not bad for a small computer that did not refer to an omniscient 7-piece endgame tablebase in the composing process."
So here's the challenge: can any of our readers come up with a nice, aesthetically pleasing mate problem, let us say mate in five, using four knights against a lone queen? Problemists, can you do something exceptional – like no early capture of the queen? We will anxiously await your submission (subject: "Four knights" please).
For the best effort there will be a prize of a Deep Rybka program signed by some of the best players in the world.
Rules for reader comments
Not registered yet? Register
|
What Is Sign Language Interpreting?
Sign Language Interpreting
Written by Bernadine Racoma
April 27, 2021
Before we define sign language interpreting, let us first define what sign language is. Sign language is visual communication, using hand signals, gestures, body language, and facial expressions. It is the primary form of communication of people who are deaf and hard of hearing. It is also an effective communication technique for people with disabilities such as Down syndrome, cerebral palsy, apraxia of speech, and autism.
Sign language is not universal. Like spoken languages, sign languages developed naturally. There are about 300 different sign languages in use around the world.
Most countries that share the same oral language do not have the same sign language. For example, the United States has American Sign Language (ASL), while the United Kingdom uses British Sign Language (BSL). In Australia, they use Australian Sign Language (Auslan). Each sign language has its own unique grammar, vocabulary, and semantics.
What is sign language interpreting?
Most TV programs today have a sign language interpreter to comply with the rule to provide equal access to information to everyone, whether they can hear or have a hearing impairment. The interpreter uses sign language to convey the information in the program's audio to viewers.
What do sign language interpreters do?
Sign language interpreting guarantees the participation of people with hearing difficulties in business meetings, events, conferences, lectures, and other similar activities. When you offer sign language interpretation at your event, you give equal access to participants whose primary mode of communication is sign language.
In events, the sign language interpreters work between a sign language and a spoken language, which benefits two unique audiences: those who use spoken language and those who use sign language.
Sign language interpreters do not work from a booth like other interpreters. They need to be close to the speaker so they can hear them, and they need a high-quality audio feed to ensure clarity. When interpreting from one sign language into another, the interpreter must have an excellent view of the signer.
Sign language interpreters work in a one-on-one setting or in group situations, such as government, law offices, courtrooms, doctors’ offices, hospitals, schools, and performing arts.
The interpreter interprets all the content and the contextual information to achieve the goals of the speakers and to ensure that both sides have productive communication.
It takes exceptional skill to be a sign language interpreter. The interpreter must fully understand the subject and be able to translate the information accurately. Aside from English language proficiency, the interpreter must have excellent sign language skills, together with listening and communication skills.
The interpreter must also have an excellent memory, since accurate interpretation requires remembering in detail what the speaker said. In some situations, the interpreter will do research in advance if the subject is technical or complicated.
Is it hard to learn sign language interpreting?
Learning American Sign Language is like taking a foreign language course. At the very least, you will take six three-credit ASL courses over two or three years to gain a beginning-to-intermediate level of skill.
Course in signing
It takes two more years of ASL-English interpretation training to reach an intermediate-to-fluent level of skill. You can achieve fluency a few years after graduation, with plenty of practice and experience.
Here is a sample credit-based curriculum for ASL studies (Signing Naturally program).
• Level 100 comprises two semester courses, ASL 101 and ASL 102, with each three-credit course requiring 60 to 65 credit hours per semester.
• Level 200 comprises two courses, ASL 201 and ASL 202, taken one per semester. Like the first level, each course requires 60 to 65 credit hours.
• Level 300 comprises two courses, one per semester, with each one requiring 65 credit hours.
Some programs require students to take two additional 45-credit-hour courses.
After finishing and passing the full course, the student receives a diploma in ASL and Deaf Studies, equivalent to a Bachelor of Arts degree with a major in ASL and Deaf Studies. The student will then be a signer.
ASL-English interpretation course
If your goal is to become an ASL interpreter, you must pursue further studies. The course takes another two years of full-time study.
According to the University of Northern Colorado website, its extended campus offers a bachelor's degree program in ASL and English Interpreting. It is a 120-credit program requiring 11 consecutive semesters, because the school offers the interpreting courses in sequence and only once each year.
In years one and two, the program focuses on developing the English and ASL skills of the students. In years three and four, the focus is on interpreting skills.
Recently, the Registry of Interpreters for the Deaf (RID) introduced a certification program called the Certified Deaf Interpreter (CDI) for graduates of the five-year bachelor's degree program. It requires sign interpreters to finish a 40-hour training, meet the educational requirements, and pass the RID performance exam.
The National Association of the Deaf (NAD) also offers assessment and certification programs at five levels. The assessments for Level I and Level II are for novice sign interpreters; those at these two levels receive a profile/graph sheet but are not yet interpreters. The other assessment type is for those who have reached Level III (Generalist), Level IV (Advanced), and Level V (Master). Those who pass at the last three levels receive a certificate, a profile/graph sheet, and a wallet-sized copy of their certificate.
Can sign language interpreting help in social distancing?
Up to a certain degree, sign language interpreting can help with social distancing, because the deaf and hard of hearing can still understand the information through sign language without needing to be close to the other person.
But wearing a face mask can be detrimental for lip readers. As governments ease quarantine restrictions to re-open the economy, wearing face masks will be a normal practice, and this has become a problem for the deaf and hearing-impaired communities.
A sign language interpreter is not always available, which is another burden that makes life difficult for deaf and hard-of-hearing people.
Contact eTS if you need help in sign language interpreting.
Customer support and help, particularly for people who are deaf or hard of hearing, is very much the focus of attention today amid the health crisis. If your organization, office, facility, or event needs to comply with federal, state, and local regulations to provide equal access to information to all individuals regardless of their health and physical condition, get in touch with eTranslation Services. Tell us what you need and we will connect you with one of our certified sign language interpreters immediately. Email [email protected]etranslationservices.com or call us at (800) 882-6058.
|
Dangers of Vaporizing – Is it As Bad As it Sounds?
Many people have questions about the dangers of e-cigarettes and whether they are even safe to use. The truth is that while there are no confirmed dangers of e-cigs, many risks exist when using them. Although e-cigs look like regular cigarettes, they often contain potentially dangerous chemicals and other substances not usually found in regular tobacco cigarettes, and their long-term effects are largely unknown. Unfortunately, many of the negative health risks of e-cigs are simply not yet known. But here are a few facts that can help you decide if e-cigs are right for you:
– While there are no known dangers of e-liquid or juice yet, some experts have expressed doubts. It's possible that smoking traditional cigarettes delivers more toxins than vaporizing e-juice. And although some argue that liquid nicotine gum may be just as bad, they admit that there is no proven record of people who quit long term using either product. E-cigs are still a very new product, so it will take time to determine what their effects are and whether they are actually safer than traditional cigarettes.
– Many vapers declare that they no longer feel cravings for their favorite cigarettes once they switch to e-liquid. They also claim that they no longer have bad breath or oral problems from smoking. This is because the vapors of most e-liquids are considered relatively harmless to breathe, although some experts note that a few potent e-liquid chemicals could be harmful if inhaled. For this reason, it's best to follow the guidelines provided with the nicotine patch and other nicotine products.
– The flavors available in many vaporizing products may appeal to everyone. You can find tobacco flavors like cinnamon berry or chocolate, plus fruit and spice flavors like apple pie or banana cream. You can also find fruit and vegetable flavors such as carrot and celery. Some individuals claim that fruit juices make their teeth whiter, while others say it's an incredible way to control their tobacco cravings.
– If you are worried about the chemicals used in making e-cigs, here is what is known. E-cigs don't typically contain nicotine, but rather a range of other chemical compounds. Just like other tobacco products, they are full of toxins that could harm smokers and non-smokers alike. According to the surgeon general, these harmful chemicals "are known or suspected of causing cancer and a number of other health issues, including cardiovascular disease, stroke, nervous system damage, infertility and childhood development issues". The surgeon general also noted that many of these compounds are known or suspected of causing respiratory problems, such as asthma and allergies.
– While many people believe that vitamin E is an excellent natural alternative to tobacco smoke, the surgeon general notes that there is "limited evidence to support this belief". There is some evidence, however, that some e-cigarette users have increased their risk of dying from stroke. In one study, smokers who took a vitamin E supplement had a higher risk of stroke than non-smokers who didn't take any vitamin E supplements. For this reason, it's probably best to avoid e-cigarettes entirely, especially if you want to protect your health.
In a case report released by the American Heart Association, several heart attacks were attributed to the inhalation of secondhand tobacco smoke; one case was so severe that the patient died. This tragedy has led to calls for all adult American citizens to stop smoking cigarettes by 2021. In this case, however, the culprit was an electronic cigarette, not tobacco. According to the American Heart Association, e-cigarettes do not pose any particular danger to heart health when used in their proper form.
All things considered, the dangers of vaping products are relatively minor compared to the serious health problems linked to traditional cigarettes. However, many people are put off from switching to these kinds of products because of all the hype surrounding them. The truth is that these new devices are much safer than the traditional ones; they just lack the flavor and satisfaction that many smokers are used to getting from smoking. But if you look at the bigger picture, the benefits of becoming a healthier consumer far outweigh any negatives you may get from using them. So, if you're not already a fan of e-cigs or vapes, what are you waiting for?
|
Every Gesture is a World
Last update: 12 June, 2018
Every day, we voluntarily make a variety of gestures. We perform them in order to relay information that others can understand. The frequency with which this happens may lead us to believe that a gesture has inherent meaning. On the contrary, gestures alone have no meaning; the meaning is given by those who interpret them.
When we travel to places with cultures different from our own, the way others move and act may seem strange. We may not understand the meaning of their gestures, even if they are similar to gestures we make. The truth is, every gesture is its own world, or at least it can be.
Let’s take a wink as an example. This gesture consists of closing one eye for a short period of time, while the other remains open. It is an easy gesture to recognize but not so easy to interpret.
Imagine this situation: several people are drinking coffee on the terrace of a cafe. A boy winks at a girl who is looking at him. At another table, another boy who isn’t directly looking at anyone also winks. The girl returns the wink to the boy, while another girl, who looks at all three, winks.
How can we interpret this situation? If you want, think about the meaning you would give each wink so, as you continue reading, you can compare them with the real meaning. Here is the interpretation of this story:
• The first boy who winked did so intentionally. He wanted to attract the attention of a girl he was interested in. The gesture of winking intentionally has meaning as a courtship ritual and serves to show intentions, romantic interest. It is also a gesture that sometimes seeks to confirm or highlight complicity. In another context, it could also mean mockery, especially if it had been accompanied by sarcasm.
• The second boy who winked did it involuntarily. He has a tic that caused him to wink. Because it was a tic, he did not look at anyone.
• The girl who returned the wink to the boy did so because she wanted to fit in. She is from a different culture, in which the act of winking has no meaning. This girl returned the wink because she thought it was something culturally appropriate, as if it were a greeting.
• Finally, we come to the girl who sees the whole scene and winks. She winked simply because something has entered her eye. The act of winking, in this case, is a reflex act to remove the particle that has entered her eye. Therefore, the wink is involuntary.
This story, though it’s invented, shows that winks can have several meanings and be voluntary or involuntary. Have you guessed the meaning of a wink? Imagine how difficult it can be to interpret this sign. Without evidence of the meaning that a sign usually has in each context, guessing the meaning of a gesture can be very complicated.
Saussure: meaning and significance
For Ferdinand de Saussure, signs are composed of the union between a meaning (the signified) and a signifier. The meaning is what the sign means, while the signifier is the form that represents it; for a word, that includes both its spelling and pronunciation. In the case of gestures, the signifier is the gesture itself.
The union between signifier and meaning is unmotivated; their relation is arbitrary. This means that a simple hand movement can carry a meaning with which it has no intrinsic relationship. Hence it is sometimes difficult to know the meaning of gestures.
A gesture in another culture
Different cultures usually have gestures with different meanings. Some of these gestures which have a different meaning than what you may be familiar with are as follows:
• Giving a thumbs up typically means everything is fine. But in Germany, it signifies the number one, and in the Middle East it is a gesture of anger towards someone.
• Finishing all the food on your plate may mean, to you, that the food was very good. However, the same action in China and the Philippines indicates that the portion was scarce and the host was stingy.
• Extending the palm of the hand outwards is a sign we make to signal that someone should stop or wait. In Greece, placing the hand this way is a way of calling another person a criminal.
• Extending the index and pinky fingers while folding the others over the palm can indicate “rock on” in the United States. In Spain, however, this gesture is used to indicate that someone’s partner is being unfaithful to them. In Africa, this gesture indicates a curse. A similar hand gesture, in which the thumb and pinky are extended, is a common greeting in Hawaii.
• Giving flowers commonly serves to show love or sympathy. Most of us do not stop to count the number of flowers. However, in Russia, giving an even number of flowers signifies a wish for the receiver of the flowers to die.
The next time you travel, be careful with how you interpret gestures, and be much more careful about the gestures you make. When we do not know a culture or context, we can reach absurd conclusions. When in doubt, a question can help avoid conflict.
|
Harvesting and using propolis.
Sometimes called “bee glue”, this material has many properties that the bees make use of, and humans can make use of it as well.
The order in which these items appear does not indicate any hierarchy of importance.
Vibration reduction… By gluing adjacent parts together the whole structure of the nest is strengthened, but in particular any relative movement between parts that is caused by vibration is greatly reduced and the frequency of any vibration that does still occur is lowered due to the larger lengths and masses of the items that are glued together.
Hole filling is a natural response to the hole itself, but there is a significant advantage to the bees in using propolis to do this… The filled holes become smoother and less able to trap disease spores and bacteria, and any spores or bacteria present before the filling are effectively encapsulated by the antiseptic material, sealed and isolated from the bees. This feature alone would be a strong driving force in natural selection, which would select positively for bees that filled holes.
Propolis is sometimes used as an aromatic barrier by the bees.
Antibiotic, Antiseptic and Antifungal properties are much promoted by the followers of ‘alternative medicine’. I have used Tincture of propolis quite often myself as an agent to promote healing and I have found it useful in restoring sore throats to normal. But in the main I think that the medical properties of propolis are often overstated particularly with regard to aggressive conditions like cancer. There are allergy issues with propolis as well.
Propolis screens (sometimes called propolis grids, pictured above) can be used to collect the raw product. The commercially produced grid is rather like a perforated-sheet queen excluder, but is made of a polyethylene material three or four millimetres thick. Short, round-ended slots about four millimetres wide are punched in the plastic, and it is these slots that the bees fill with propolis if the screen is placed on the hive instead of a crown board.
If the screen is removed and frozen then a slight flexing will release lozenge like pellets that are the same shape as the slot. The punching action and the soft nature of the sheet material produce sloping sides to the slots which aid the removal of the pellets.
Netting made from polyethylene yarn can also be used for collection; the adhering propolis is removed by flexing after freezing, in a similar manner to the grid.
Humans have used propolis since Stone Age times, when it was used to secure flint arrowheads or spear points.
In more recent times it has been used in violin making, as a component of the varnish on prestigious violins.
|
Understanding Perspective in Photography
When a three-dimensional object is translated onto a flat 2D artwork, like a photograph, our eyes use little clues to help us get our bearings. Those clues make up the perspective of the image. For example, the simplest is one point perspective photography, where two parallel lines converge in the distance. Our eyes know the lines are parallel, so the only way they could meet at a vanishing point is by getting farther away. Little clues like this help us understand a photograph.
There are many other types of perspective–but the word is also used to describe the position and direction from which the image is taken. It is an element of composition in that the photo can be taken from different angles. Photographers are always looking to take photographs with a fresh perspective.
What is Perspective Photography?
Perspective is a complicated topic in photography only because it can mean these different things. Most photographers use the term loosely to mean how the photographer sees the scene. To change your perspective is to move the camera angle or to take a fresh approach.
Why is Perspective Important?
Understanding perspective photography is an excellent way to up your photo game. For one thing, you can begin to play with it in ways that few photographers understand. It can become a point of interest, or it can become the subject, as is the case in forced perspective photography.
Like many compositional elements, perspective is a natural reaction your audience will have to your work. It will be there whether you plan for it or not, so it benefits you to play with it and adjust it for the best results. It is also essential to understand so that you can identify when the perspective is skewed by something. If you know what’s wrong, you can quickly fix it.
Types of Perspective
The different types of perspective in photography are how our eyes and brains notice that objects are closer or farther away in a photograph. We see these things all the time in day-to-day life, but we seldom think about them. If you study a scene closely enough, you will start to identify things that can give the impression of being farther away when they are not.
Vanishing Points
The most apparent type of perspective, and the one most familiar to photographers, is made when sets of parallel lines appear in the photo. Linear perspective occurs when the two parallel lines seem to converge as they get farther away from the viewer. Our brain knows that they are parallel and therefore never touch, but they appear to. So, they must be getting farther away.
Railroad tracks disappearing into the distance is an example of one point perspective photography. The same effect comes from standing on a bridge, a road, or a straight path. The sides make two parallel lines, and they converge at a vanishing point.
You can also create photos with multiple vanishing points. The same rules apply, but the lines that make them won’t be parallel. In two point perspective photography, there are two vanishing points on a horizontal line. An example is standing at the corner of a building, where the corner is close to you, but the building’s sides get farther away. In this example, the two point perspective photography vanishing points are at the edges of the photo, not in the center. Three point perspective can be created with triangle-shaped lines, with each apex having its own vanishing point.
Relative Size
Our brains have a rough idea of how big things are, and we take these ideas with us when we view a new photograph. This is why it's usually desirable to include a person or a hand in a photograph: the viewer can immediately get some sense of scale. We compare objects in the photo to things we know about, but our brains also know that big objects far away look small and small objects up close look big. This is the concept of diminishing scale.
That's why forced perspective photographs are so appealing. They're captivating simply because the eye can't figure them out at first glance. When we're tricked, we're drawn in as we try to figure out what the photographer did.
Taking forced perspective images is usually a simple matter of placing the subjects in the right places relative to the camera. The big thing needs to be far away to appear small, and the small thing needs to be placed up close to appear big. The depth of field needs to be carefully controlled because if it’s too shallow, one of the objects will be out of focus.
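The geometry behind this placement is just the angular size of each object. Here is a minimal sketch (the heights and distances are invented for illustration) showing how a nearby person can subtend the same angle as a distant tower:

```python
import math

def angular_size_deg(height_m: float, distance_m: float) -> float:
    """Apparent angular size, in degrees, of an object of the given
    height seen from the given distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

# A 1.8 m person 3 m from the camera subtends the same angle as a
# 300 m tower 500 m away, so the two appear equally tall in the frame.
print(round(angular_size_deg(1.8, 3.0), 1))      # 33.4
print(round(angular_size_deg(300.0, 500.0), 1))  # 33.4
```

Matching the ratio of height to distance is all forced perspective really requires; the rest is framing and depth of field.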
Manipulate Perspective in Post
Editing software has a few tools built in to help you fix perspective issues. From basic warping fixes to complex resizing of individual elements, you can use these tools to alter the perspective after the fact. Most of the time, these items will be used to correct an error, but creative photographers have been making more and more photo art with these tools.
It’s good to keep this in mind so that you can look for and fix perspective issues in Photoshop. Sure, it’s better to catch things at the time of the shoot and get it right the first time. But having the option available means that you know what you can fix and what you can’t, and if something didn’t translate quite as you’d hoped in the image, you can still try to make it work.
With a little practice, Photoshop’s perspective warp tool can be used for all sorts of useful things. If you are placing new objects in a composite project, you can alter their perspective to match the background photo. You can take a telephoto image and warp it slightly to make it appear like a wide-angle picture. Or you can correct perspective issues in architecture photography by warping buildings and realigning them.
Add Something for Scale
A finished image stands alone; there is nothing beside it in the environment for reference. This is why it's so important to include something identifiable for reference. The audience needs some references to identify what sort of place the photo was taken in and how large it is. Is it zoomed in on a tiny spot, or are they looking at something huge? In nature, they would look around at what surrounds the scene. But in a photo, they have to figure it out based solely on what they see in the frame.
Sometimes, not knowing is a fun exercise, and the photographer can use the audience’s confusion to help captivate them. Sometimes landscape artists like to focus on textures or rock formations that are abstract. When shown alone, they cannot identify precisely what they are or how big they are, but they are beautiful nonetheless.
Use the Right Lens
The length of your lens affects a lot more than how close you can make your subject appear. Telephoto lenses tend to compress the perspective in an image, so you can make things appear closer to one another than they are. Wide-angle lenses do the opposite, spreading out the elements in the frame to make them stand apart.
Plan how the focal length will affect the perspective before you shoot the picture. Long telephoto lenses can produce some interesting effects on their own. One good example is making the moon appear large in landscape photography. With careful positioning of the subjects, you can make the moon appear as the star of the show, even though it appears pretty tiny when compared to the landscape.
Play with Forced Perspective
As mentioned above, forced perspective photography is when the photographer intentionally manipulates the perspective of the image. You can make your models look like they’re propping up buildings, or look like they’re holding the sun or moon like a marble.
Forced perspective is all about using scale to fool the viewer. The types of perspective described above can be used to your advantage to manipulate the world in a way that tricks the viewer’s eye.
|
Electronic Music – The New Age
Electronic music, more specifically any music created or modified by electronic, mechanical, or digital means, is not restricted to one specific category; it comes in a range of different styles and sub-genres, from electronica to industrial, trance, and techno. While any music created or modified by such means is often referred to as electronic music, it is more precise to reserve the term for work that makes essential use of those means.

Electronic music was first defined by the German artist Ulrich Schnauss, who took to the airwaves with his radio program entitled "Electronic Musik" in the late 1950s. The program featured a series of experimental recordings made by Schnauss with the goal of introducing some of the earliest forms of electronic music to the public; at that time, the genre was virtually unknown in the USA. In the following years and decades, Schnauss continued to present radio shows on various stations throughout the country.

As its popularity grew, many artists began to make use of this early form of electronic music. In some instances their music could be heard around the country, and much of it also traveled to countries beyond the United States, where it was often played in nightclubs, bars, and other venues. While this helped grow and solidify the popularity of the form, many of the artists themselves became skeptical of it, seeing it as a threat to their own creative integrity.

Today, however, modern electronic musicians are beginning to recognize the potential that such music has. Their work has become much more sophisticated, frequently integrating the newest technologies. Some use computers to synthesize their music as well as to compose and record their tracks, and some have integrated DJ turntables into their performances. Many artists, however, still prefer to make their beats using only hardware, including turntables, keyboards, and microphones, while others compose and mix their music entirely digitally.

While some critics argue that many electronic musicians have stripped away the identity and tradition of conventional music genres, others claim that electronic music has helped to revive and rejuvenate those styles. Many believe this is because the electronic style provides an outlet for artists to express their creativity, musical tastes, and preferences without having to make drastic changes to their personal styles. The ability to simply record over a track without altering their music has allowed some artists to create electronic music that is as original and unique as they wish it to be. Many critics also note that electronic music has given people the ability to explore sounds and ideas in new and exciting ways.

Because this kind of music is more individualized than most, it allows people to feel that they are truly making their own music. Many of these artists are also able to create more varied musical experiences than those who play the same tune repeatedly. Electronic music also allows artists to produce sounds that are often hard to find in other kinds of music, without having to buy costly equipment.
|
Decimal Pricing
The listing of a security's price in decimals instead of fractions. For example, decimal pricing means that a security would be listed as, say, $25.25, instead of $25 1/4. Decimal pricing makes prices and the market more understandable for investors and laypeople.
US stocks, derivatives linked to stocks, and some bonds trade in decimals, or dollars and cents. That means that the spread between the bid and ask prices can be as small as one cent.
The switch to decimal stock trading, which was completed in 2001, was the final stage of a conversion from trading in eighths, or increments of 12.5 cents.
Trading in eighths originated in the 16th century, when North American settlers cut European coins into eight pieces to use as currency. In an intermediary phase during the 1990s, trading was handled in sixteenths, or increments of 6.25 cents.
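The arithmetic of the conversion is simple; here is a minimal sketch (the function name is mine, not from any trading system):

```python
from fractions import Fraction

def fractional_to_decimal(dollars: int, num: int, den: int) -> float:
    """Convert a fractional quote such as 25 1/4 to a decimal price."""
    return float(dollars + Fraction(num, den))

print(fractional_to_decimal(25, 1, 4))   # 25.25
print(fractional_to_decimal(25, 1, 8))   # 25.125  (one eighth tick = 12.5 cents)
print(fractional_to_decimal(25, 1, 16))  # 25.0625 (one sixteenth tick = 6.25 cents)
```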
References in periodicals:
Over the past 50 years, with the development of a privately managed pension system, the deregulation of trading commissions and a move to decimal pricing, assets flowed away from bank deposits to investment accounts controlled by a growing asset-management industry.
The odds will be displayed in decimals as decimal pricing gives the Coral traders greater flexibility to offer you a better price.
Although many believe that decimal pricing has benefited small individual (retail) investors, concerns have been raised that the smaller tick sizes have made trading more challenging and costly for large institutional investors, including mutual funds and pension plans.
Trading costs, a key measure of market quality, have declined significantly for retail and institutional investors since the implementation of decimal pricing in 2001.
Although decimal pricing led to lower order preferencing on NASDAQ, the proportion of preferenced trades after decimalization is much higher than what some prior studies had predicted.
Last year's implementation of decimal pricing was a factor in creating this new opportunity.
Beyond the convenience of executing trades in dollars and cents, the conversion to decimal pricing is expected to bring savings for investors.
But the National Association of Securities Dealers, the parent of the Nasdaq, formally asked SEC Chairman Arthur Levitt to delay the implementation of decimal pricing till 2001.
The order required (1) the markets to submit a decimals pricing implementation plan by March 13, 2000 and (2) the options and equities markets to phase in decimal pricing by year-end.
The order, as issued in January and modified in March, requires the markets to submit a decimals-pricing implementation plan by mid-April and requires the options and equities markets to phase in decimal pricing by year-end.
Since then, various positive and negative effects have been attributed to the transition to decimal pricing. As part of this transition, the major stock markets chose one penny ($.01) as the minimum price variation for quoting prices for orders to buy or sell.
|
C validating textbox input
Client-side validation alone does not ensure security; therefore data needs to be validated on the server-side as well.
Forms frequently include required input that needs to be clearly identified using labels.
In some situations, such as validating custom controls or supporting legacy browsers, additional scripting may be necessary to validate user input.
Custom validation needs to notify users in an accessible way as described in the User Notifications part of this tutorial.
Application development frameworks, including the .NET framework, offer built-in validation support. Fortunately, there is an open source library that you can add to your .NET projects to easily leverage this powerful functionality.
Note that the label also displays “(required)” to inform users who don't use assistive technology or who use older web browsers that do not support the HTML5 required attribute. The attribute informs assistive technologies about required controls so that they are appropriately announced to the users (as opposed to validating the input). Most current web browsers automatically set its value to true.
HTML5 defines a range of built-in functionality to validate common types of input, such as email addresses and dates, and most current web browsers support these features and handle input validation. HTML5 validation also helps users input data by providing specific controls, such as date pickers and custom on-screen keyboards. Where possible, require user confirmation for irreversible actions, such as permanent deletion of data. These tutorials provide best-practice guidance on implementing accessibility in different situations.
In general, client-side validation results in a better user experience and makes resolving validation errors more understandable. However, not all web browsers support HTML5, or they may not support your custom validation scripts.
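Because client-side checks can be bypassed, the same constraints must be re-validated on the server. The following is a minimal, language-agnostic sketch in Python; the field rules and the email pattern are illustrative assumptions, not part of any particular framework:

```python
import re

# A deliberately loose email pattern; production validators are stricter.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_form(data: dict) -> dict:
    """Re-check on the server what client-side validation already checked."""
    errors = {}
    email = (data.get("email") or "").strip()
    if not email:
        errors["email"] = "This field is required."
    elif not EMAIL_RE.match(email):
        errors["email"] = "Please enter a valid email address."
    return errors  # an empty dict means the input passed

print(validate_form({"email": "user@example.com"}))  # {}
print(validate_form({"email": "not-an-email"}))      # {'email': 'Please enter a valid email address.'}
```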
|
I have read in TLFi, Word Reference and in Wiktionary that "commerce" is pronounced /kɔ.mɛʁs/. However, I hear an /o/ instead of an /ɔ/ in the corresponding audios in both WR and Wiktionary web pages. Is it just me or is this /ɔ/ not pronounced accurately as in "porte"? Maybe the difference is related to stress?
Maybe a regional pronunciation ? If I imagine "commerce" with a closed [o], that sounds to my ears like a feature from the South of France. – Greg Oct 7 '20 at 9:40
• @Greg I expected TLFi, Word Reference and Wiktionary to point to the standard (Parisian AFAIK) French pronunciation. Don't you? – Alan Evangelista Oct 7 '20 at 10:12
• Does this answer your question? Do I have to learn /o/ or /ɔ/ separately? – jlliagre Oct 7 '20 at 13:00
• @jiliagre I'm not asking if I have to learn the 2 phonems, I want to know what native French speakers use in this specific case. If there are regional differences, I want to know them. – Alan Evangelista Oct 7 '20 at 13:17
• Native French speakers might use both pronunciations. There are regional and individual variations. I suspect that people who say \ʁoz\ for rose are more likely to say \kɔ.mɛʁs\ while people who say \ʁɔzə\ are more likely to say \ko.mɛʁsə\. – jlliagre Oct 7 '20 at 13:36
In French, the pronunciation of /o/ and /ɔ/ is quite close. In the case of "commerce", it is effectively an /o/; the o is pronounced like "eau" (water) and not as in "porte".
However, depending on the region, the accents differ and many pronunciations are "distorted" or at least less accentuated.
I found another recording recorded by someone of French origin (and not English like on Wiktionary), maybe that will help you to distinguish:
• It's odd that, even in the link you mentioned, in which I clearly hear /ko.mɛʁs/ in the audio, the IPA transcription says /kɔ.mɛʁs/. I thought that someone would have noticed and fixed that inconsistency at some point. – Alan Evangelista Oct 7 '20 at 10:10
• Indeed, according to my answer, I will use /ko.mɛʁs/ instead. Maybe I am not pronouncing it correctly myself ... As I said, the distinction between open and closed o is quite close, even lost depending on the region and the words. See: fr.wiktionary.org/wiki/Annexe:Prononciation/… if your read French. – Armand Oct 7 '20 at 10:28
• Thanks for the useful link! As a self taught languages learner, I find very frustrating when dictionaries use outdated (aka "traditional") IPA transcriptions. – Alan Evangelista Oct 7 '20 at 10:37
As a French language learner, you shouldn't focus on differences that no native French speaker really cares about.
What is important are mispronunciations that would either lead to a different word than the one expected (rare) or prevent the listener from easily understanding, or understanding at all, what you want to say.
Using /o/ or /ɔ/ in commerce isn't either of these cases so just use whatever vowel you like. Nobody will notice.
The standard pronunciation is [kɔ mεʀs], as found in the TLFi. Nevertheless, there are variations, and [o] is also found; this is also true for a large number of other nouns, for instance "restaurant". For example, my personal pronunciation for these two is /o/ (as internalised, not as a result of practice). Either one is acceptable. However, in "restaure" there is just one possibility, and that is /ɔ/. In "restaurez", again both are possible. Apparently there exists a phonetic principle for deducing where the variation is possible and even likely, but I am not aware of it. It could have to do with the vowel that follows, whether it is a nasal or not.
• Personally, I always heard that the "o" is pronounced /ɔ/ if the vowel of the next syllable is a silent "e". Note that this is not the only case where the "o" is pronounced /ɔ/, but I don't know an example of such an "o" where it is pronounced /o/ – Abel Milor Oct 7 '20 at 10:05
• @AbelMilor An example is "rose"; standard: [ʀo:z], ie north, central; extreme southern part: /rɔz/. Here is an interesting reference in the way of showing a little of the unsuspected variety there is in the pronunciation of French: buzzfeed.com/fr/julesdarmanin/… – LPH Oct 7 '20 at 17:24
|
Corn: The Grain That Built America
Corn–born in the Americas, domesticated in the Americas, first cultivated in the Americas, and most of its uses developed in America. No other food exemplifies this country like corn. In its honor, this is the first in a series of short articles exploring the history and culinary aspects of this versatile native grain.
The origin of corn was a mystery for many years, since nowhere in the world can it be found growing wild. It was only in the 1950s that Nobel Prize winner Dr. George Beadle and a team of botanists, geneticists, and archeologists were able to identify a Mexican grass called teosinte as corn's ancient ancestor. They also determined that it was first domesticated as early as 9,000 years ago in the Balsas River Valley near Puebla, Mexico.
By the beginning of the thirteenth century, corn cultivation had spread throughout Mexico and into the U.S. Southwest. And by the end of the century it had migrated through middle and eastern America as far north as southeastern Canada, quickly becoming a major food staple of Native Americans, along with squash, beans, and a few other indigenous plants.
Native American folklore is filled with stories about the origin of corn (known by the Native Americans as mahiz or maize, meaning “that which sustains us”). Most of these colorful stories, once preserved from generation to generation through oral interpretation, have only recently been written down. Here is one such story by a member of the Cherokee nation.
Long, long ago when the Earth was very young, an old woman lived with her grandson in the shadow of a great mountain. The old woman gave her grandson a bow and arrow, and he went out and killed a small bird for them to eat. “You will be a great hunter!” said the grandmother. “We will have a feast.” She went into the small storeroom behind their house and brought out some dried corn. With the bird and the corn she made a delicious soup. Every day the boy would hunt, and every day the grandmother would bring corn to add to the pot. One day the boy looked into the storehouse and saw that it was empty; but that evening the grandmother brought the corn as usual. The boy was so curious that the next evening he peeked carefully into the storehouse when the grandmother went for the corn. The grandmother rubbed her hand along the side of her body, and out popped the corn from her side. The boy was confused and afraid. When the grandmother came out she understood that he had seen her.
“Now I must die,” said the grandmother, “but you must do all I tell you so that when I am gone you still will have food. After I am dead you must clear the land behind the house where the sun shines longest and brightest. Drag my body over the land seven times and bury me in the field.” The next morning she was dead.
The boy did exactly as the grandmother had told him. Everywhere a drop of the grandmother’s blood fell, a small plant grew. The boy kept the land clear around the plants. They grew tall and strong and soon had tassels which reminded the boy of his grandmother’s long hair. The wind rustling the long leaves sounded like her voice. Soon the plants grew heavy with ripe corn, enough to feed the boy and the people.
As European settlers began to arrive in America, they soon recognized the proficiency with which Native Americans cultivated their crops and quickly adopted their agricultural techniques. Fields consisted of small mounds of tilled earth about a meter apart in which kernels of corn were planted. Several weeks later, beans and squash were planted between the mounds. This resulted in a more sedentary method of farming: cornstalks provided support for the bean vines, and squash leaves helped with moisture retention and pest and weed control. Native Americans referred to this as the “three sisters.”
While there are many types of corn, the most common of these are flint, dent, sweet and popcorn.
Flint corn, also known as “Indian corn” or “ornamental corn,” is generally multi-colored, ranging from white to red to black. The kernels of flint corn have a hard outer layer said to be as hard as flint, hence the name. Its low water content makes it resistant to freezing, and it is therefore well suited to New England and the more northern parts of the United States. Flint corn is one of the three kinds of corn cultivated by Native Americans.
Dent corn, or “field corn” as it is called by some, is one of the most widely cultivated crops in the world. It can be either white or yellow and gets its name from the indentation on the side of each mature kernel. While often used as livestock feed, it is also used to make processed foods such as starch, oil, and sweetener, and industrial products such as glue, ink, and cosmetics.
Sweet corn, sometimes referred to as “table corn,” is so named because it contains more sugar than other types and is grown for human enjoyment. It is rarely used for livestock feed, flour, oil, or industrial purposes.
Popcorn is actually a type of flint corn. It has a moist, starchy center and a hard shell that explodes when heated. It is the soft, starchy, white center that you enjoy at home or in movie theaters. Popcorn is the oldest kind of corn, dating back to 3600 BC.
Corn is not only an important part of our diet but of our lives as well. It is almost impossible to go through a day without being touched in some way by corn, from the eggs we enjoy for breakfast to the burger and soft drink we have for lunch. Corn is even in the fuel used to operate our automobiles.
Over the next few weeks, we will take a look at some of the foods and food products brought to us by the grain we call corn.
|
Your Elder Years Strategy
I recently saw two advertisements from companies trying to help people “stop Medicaid from taking your retirement.” The ads paint a frightening picture and offer advice on how to avoid trouble. Typically, when you read between the lines of these advertisements, you see they’re usually advising you to take one or two strategies: develop some kind of trust, or invest in some kind of financial product. I’m not suggesting that these are bad ideas or that the people sending you these advertisements are bad people. I do suggest that the concern for long-term care is not about buying a product. It’s about establishing a strategy for how you wish to age.
When I talk about long-term care, I’m talking about skilled care that is typically provided in a nursing home. Under the right circumstances, it can also be provided in your own home. It involves skilled therapies, case management, and intensive care plans. Long-term care is not routine, day-to-day, outpatient healthcare. Additionally, it costs a lot of money. Unless you are very wealthy or have and maintain high quality long-term care insurance, there is only one way to pay for this type of care: Medicaid. Many people are under the mistaken belief that Medicare (routine government provided health insurance for people over age 65) will pay for all of their healthcare needs, but Medicare will only pay for a short period of skilled care following a hospital stay. It will never be longer than several months, and is typically only a few weeks. After that, Medicare assumes you either no longer need care or that you have some other way to pay for it.
In order to qualify for Medicaid, you can only have a limited amount of property and a limited amount of income. While there is some property that doesn’t count against you (like your home if your spouse is living in it), you typically have to “spend down” your excess property in order to meet eligibility criteria. Many people have heard that Medicaid will “look back” five years from the date on which you apply to see if you gave away any of your property in order to qualify for Medicaid. Often, you will get advice about how to give away property and still manage the five-year look back period.
Don’t let advertisements mislead you. There is a better, more thoughtful way to consider how you will pay the cost of long-term care. First, know what care costs. According to Genworth Financial, Inc., in 2016 the average cost of a semi-private nursing home room was $6,388.00 per month (estimated to be $8,585.00 per month in 2026). The cost of an in-home health aide was $3,813.00 per month (estimated to be $5,124.00 per month in 2026). You will contribute some of your monthly income to your care, so the amount of money you need to come up with will be somewhat lower. Still, most people cannot afford that level of care. How long can you afford to pay for care? How long will you need it?
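As a back-of-the-envelope way to answer the affordability question, here is a minimal sketch; the savings and income figures are hypothetical, and only the monthly cost comes from the article:

```python
def months_of_care(savings: float, monthly_income: float, monthly_cost: float) -> float:
    """Months of care that savings can cover once monthly income
    is contributed toward the cost."""
    shortfall = monthly_cost - monthly_income
    if shortfall <= 0:
        return float("inf")  # income alone covers the cost
    return savings / shortfall

# The article's 2016 average of $6,388/month for a semi-private room,
# with assumed savings of $150,000 and income of $2,500/month:
print(f"{months_of_care(150_000, 2_500, 6_388):.1f} months")  # ~38.6 months
```

That is roughly three years, close to the average duration of care cited in the next paragraph; a real plan would also account for inflation and investment returns.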
Know your risks, both financial and biological. Anticipate what your finances will look like at the time you think you may need care. According to the US Department of Health and Human Services, someone turning 65 today has approximately a 70% chance of needing long-term care, and will use that care for three years. Of those people, 35% will get that care in a nursing home. On average, they will be in a nursing home for one year.
So, what is your chance of needing long-term care? Does your family history give you some insight? What care would you predict for yourself based on your own health care history? Most importantly, where do you want to receive long-term care if you need it? Do you have friends or relatives who can provide a safe level of care? Is your home properly planned and accessible so you can “age in place” in your home? These are all questions that can only be answered by looking at you as a human being, not just as an accumulation of your money and property. It requires legal, financial, and geriatric care professionals.
So let’s take another look at the advertisements that I received. Maybe you’ve seen these, too. You may have even attended a seminar or sought professional advice about your planning options. Did they talk to you about care management? Did they talk to you about aging in place? Did they analyze the risks you have for care and the options you have for paying for that care? If the answers to those questions are, “No,” my advice is to engage a professional who will consider these questions as part of a comprehensive plan for your long-term needs.
You need not wait until you’re 65 or older to consider this. You should start planning early and consider how to structure your life for your elder years. For example, my wife and I (in our mid-50s) recently remodeled our kitchen. We made sure there was plenty of room between the cabinets in case one of us ever needed a wheelchair. We placed the appliances low, within reach of a wheelchair. Any professionals with whom you engage for long-term care planning should look comprehensively at your life to determine not only how best to pay for care, but how to best receive that care.
|
How to Clean Exterior Windows Without a Ladder
Cleaning your downstairs exterior windows is a chore, but at least you can reach them. However, there are ways of cleaning the upstairs windows that do not require you to perch precariously on a ladder. You may be able to clean them from inside. If this is not possible, purchase one of the bottles of window cleaning detergent that attaches to a hose.
Clear all debris, cobwebs and loose dirt from the windows. If you can do this from inside, use the brush from a brush and dustpan set. If you have windows that reverse for easy exterior cleaning, you can vacuum this away using a brush attachment, providing the window is completely dry. If you cannot reach the glass from inside, use a broom with a long, extending broom shaft while standing directly underneath the window. Achieve extra reach by tightly taping the bottom of the broom shaft to the top of another one.
Clean reversible windows with a steam cleaner, or a solution of mild detergent such as dishwashing liquid. In the case of the steam cleaner, use the squeegee attachment. Polish the window with a clean, good quality microfiber cloth to achieve extra shine. Alternatively, use a microfiber cloth dipped in mild detergent solution and wrung out; finish by polishing with a dry microfiber cloth. Use a cloth dipped in mild detergent solution followed by a dry microfiber cloth if you are cleaning one window by leaning out of an adjoining one.
Attach the outdoor window cleaning detergent to your garden hose, if you cannot reach the exterior windows from inside. Remove the yellow plug and turn the control on the bottle to "rinse." After thoroughly rinsing the whole window, turn the control to "clean." Ensure the suds cover the entire area of the glass. Wait for 15 seconds, and rinse the window thoroughly a second time to ensure you wash away all traces of detergent.
Clear away any debris left lying on pathways or lawns directly underneath your exterior upstairs windows. Empty and dry your bucket, or drain your steam cleaner. If you used the hose window cleaning detergent, replace the yellow plug and store it in a dry place. Roll up your hose and put it away.
|
The recent cold weather in large parts of our country (South Africa) made me remember a tasting I had with a winemaker. As far as icewine was concerned, he got the concept of minimum and maximum temperatures all mixed up and insisted that he harvested his frozen grapes at a minimum temperature of -7˚C. After realising that his mind was frozen from all the alcohol he had consumed during our tasting, I accepted the futility of trying to explain that he was actually referring to the maximum temperature requirement.
It is believed that frozen grapes were already being harvested in Roman times. Other reports indicate icewine production in Germany as far back as 1794. Another documented case states that German vintners anticipated a very harsh fall in 1829, and grapes were left hanging on the vines for later use as animal feed. After discovering that these grapes yielded very sweet must, icewine was born! Austria, Canada and certain states in the USA also produce icewine, with China, the USA, South Korea, Hong Kong and Singapore being the top markets.
In Canada, all icewine must have at least part of it made from grapes that have been frozen naturally on the vine and then pressed while still frozen, without any intervention (no artificial freezing is allowed after harvesting). Several challenges daunt the winemaker here. The grapes must survive animal, insect and bird activity, while combating mould and raisining. Only healthy, frozen grapes are harvested, which considerably limits the amount of grapes that can be processed. The cost of icewine is thus high, as illustrated in an extreme case by the Canadian producer Royal DeMaria: five cases of Chardonnay icewine were released in 2006 with a price tag of C$30,000 per half bottle!
On the technical side (my favourite side), I've just read a very interesting article about yeast adaptation in Riesling icewine juice fermentation. Juice with concentrations of up to 46 degrees Brix was fermented. This is quite challenging for many yeasts, and it should come as no surprise that juice above 42 degrees Brix could not be fermented to 10% v/v ethanol. Also, the acetic acid produced as a function of sugar consumed was positively correlated with the glycerol produced. Glycerol and acetic acid are well-known markers of yeast stress, and acetic acid can represent up to 20% of the TA of an icewine. Those of you familiar with icewine will know what I'm talking about when I say that it has a 'slight' bite to it…
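For a sense of why 46 degrees Brix is so challenging, here is a minimal sketch using a commonly quoted rule of thumb for potential alcohol; the conversion factor is an approximation that varies between references, and it is not taken from the article:

```python
def potential_abv(brix: float, factor: float = 0.59) -> float:
    """Very rough potential alcohol (% v/v) if all the sugar implied
    by the Brix reading were fermented; factors of roughly 0.55-0.64
    are quoted in winemaking references."""
    return brix * factor

for brix in (22, 42, 46):
    print(f"{brix} Brix -> ~{potential_abv(brix):.1f}% v/v potential alcohol")
```

A 46 Brix juice would imply well over 25% potential alcohol, yet the stressed yeast stalls near 10% v/v, leaving the enormous residual sweetness that defines icewine.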
A new threat to the chilly tradition that is icewine making is however gaining ground. Charles-Henri de Coussergues, Quebec icewine maker, says: “The danger now is that other wine regions start using the name ‘icewine’ for a product made the artificial way.” He is of course referring to a technique called ‘cryoextraction’. In essence, grapes are artificially frozen (-7˚C or lower) and the rest of the process is similar to traditional icewine making. The benefits here are larger production at less cost and better control over grape quality. Also, one doesn’t have to wait around for winter to do its work. But then the old-schoolers insist that traditional icewine just tastes better and more complex, possibly because of the extended hang time under harsh conditions.
The frosty debate between the ‘naturals’ and the ‘cryo-extractors’ continues. What do you think? Is there room for both these schools of thought in the already crowded wine market?
Bernard Mocke is a technical consultant for Oenobrands.
|
Researcher Creates 'Artificial Tongue' That Can Detect Fake Whiskey
How much are you willing to bet on your ability to tell the difference between a top shelf whiskey and something your best mate’s Grandad cooked up in his shed?
Uwe Bunz, a researcher at the University of Heidelberg, has developed a ‘synthetic tongue’ that can differentiate between whiskeys based on their brand, age, blend and even country of origin with astonishing accuracy!
While Bunz admits his invention can’t identify an unknown sample, it can compare a sample to other known samples. "If you buy a crate of expensive whiskeys," he said, "you can test if they are actually what you think they are."
The artificial tongue is actually a fluorescent solution. You mix the solution into the whiskey sample you want to identify and then watch for results. "Our human tongue consists of 6 or 7 different receptors -- sweet, salty, bitter, sour, umami, and hotness -- and they're able to identify food by differential reactions of those elements," Bunz explains. "The combination of differential receptors gives you an overall taste impression of what you eat."
The solution gave off a unique reaction to each of the 33 different whiskeys it sampled, so it can be used to quickly verify the authenticity of drinks. Bunz plans to create a tongue for red wine but says the applications for this type of solution are endless: testing counterfeit drugs and perfumes are just two uses that could have big impacts.
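The underlying idea is simple pattern matching: each fluorescent dye in the solution responds differently to a given whiskey, and the vector of responses acts as a fingerprint. Below is a minimal sketch of that matching logic, not Bunz's actual method; the dye count, brand names and readings are invented for illustration.

```python
# Toy 'sensor array' matcher: identify a sample by its nearest known fingerprint.
import numpy as np

# Each known whiskey is a vector of fluorescence responses, one per dye channel.
known = {
    "Brand A 12yo":  np.array([0.9, 0.2, 0.4, 0.7]),
    "Brand A 18yo":  np.array([0.8, 0.3, 0.5, 0.9]),
    "Brand B blend": np.array([0.3, 0.8, 0.6, 0.2]),
}

def identify(sample: np.ndarray) -> str:
    """Return the known whiskey whose fingerprint is closest to the sample."""
    return min(known, key=lambda name: np.linalg.norm(known[name] - sample))

suspect = np.array([0.85, 0.25, 0.45, 0.75])   # readings from a bottle under test
print(identify(suspect))                        # -> Brand A 12yo
```

A real array would use more channels and replicate measurements to average out noise, but the comparison step is the same: a sample either sits close to a known fingerprint or it doesn't, which is why the method can verify a crate of known whiskeys without being able to name an unknown one.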
Fake whiskey business
Fake whiskey might sound like a joke, but it can be a big-dollar business for the right people. Earlier this year a 41-year-old man was arrested in London after allegedly trying to sell hundreds of thousands of pounds' worth of fake spirits at auction.
A raid on a residential home led to the discovery of a meticulous bottling operation that involved refilling old bottles of whiskey and rum with cheaper alcohol.
The crooks were unearthed after Isabel Graham-Pool, the director of a spirits auction site, started to notice bottles of fake whiskey being integrated into genuine lots. She set up a visit with the seller under the pretense of wanting to purchase the goods. She described the visit, saying, “What we saw at the property was a significant collection, hundreds of bottles, of supposedly valuable liquids that if genuine were unlikely to be available on such a scale. This was an immediate red flag and our doubts were justified when we began scrutinising individual bottles. It was only when we really examined the bottles that we noticed things, like the labels didn't look quite right, the colour of the liquid didn't look quite the same as others, or the level of the liquid was just a bit higher than you’d expect for a bottle of that age or producer.”
Sources: ScotchWhisky, CellPress
|
Book Review: Lion of Liberty: Patrick Henry, by Harlow Giles Unger
One of the most famous Founding Fathers who never became President.
Lion of Liberty: The Life and Times of Patrick Henry
Da Capo Press, 2010, 322 pages
Known to generations of Americans for his stirring call to arms, “Give me liberty or give me death,” Patrick Henry is all but forgotten today as the first of the Founding Fathers to call for independence, the first to call for revolution, and the first to call for a bill of rights. If Washington was the “Sword of the Revolution” and Jefferson, “the Pen”, Patrick Henry more than earned his epithet as “the Trumpet” of the Revolution for rousing Americans to arms in the Revolutionary War. Henry was one of the towering figures of the nation’s formative years and perhaps the greatest orator in American history.
To this day, many Americans misunderstand what Patrick Henry’s cry for “liberty or death” meant to him and to his tens of thousands of devoted followers in Virginia. A prototype of the 18th- and 19th-century American frontiersman, Henry claimed individual liberties as a “natural right” to live free of “the tyranny of rulers”—American, as well as British. Henry believed that individual rights were more secure in small republics than in large republics, which many of the other Founding Fathers hoped to create after the Revolution.
Henry was one of the most important and colorful of our Founding Fathers—a driving force behind three of the most important events in American history: the War of Independence, the enactment of the Bill of Rights, and, tragically, as America’s first important proponent of states’ rights, the Civil War.
Patrick Henry is known as the firebrand who said "Give me liberty or give me death." He never became President, but he did serve as Governor of Virginia (three consecutive one-year terms during the Revolution, and twice more in the 1780s), and was a powerful figure before and after the American Revolution.
Also, the dude had eighteen children (of whom all but two survived) and 77 grandchildren. He's estimated to have over 100,000 descendants. If anyone deserves to be known as the "Father of his country," it's Patrick Henry, not childless George Washington.
If this be treason, make the most of it!
Like so many of the Founding Fathers, Patrick Henry got his start as a lawyer, where his gift for oratory was evident early in his career, as was his willingness to rebel against the king. In a case in which his father was the presiding judge, Patrick Henry represented a group of tobacco farmers whose debts to the Anglican Church had been reduced by a bill passed by the Virginia House of Burgesses, but which the clergy, not happy about having their payments reduced, petitioned London to overturn. The British authorities did so, the clergy sued for payment, and Patrick Henry railed in court against the king and the church, calling them tyrants and bad for society. Despite opposing counsel accusing him of treason, Henry's oratory worked: the jury came back and awarded the plaintiffs damages of one farthing. This would not be the last time that he would convince a jury to effectively ignore the law and the facts of the case in favor of an emotional response.
It was the Stamp Act that really got Patrick Henry rolling. To cries of "Treason!" when he seemed to be calling for King George's head, he said, "If this be treason, make the most of it!"
And it was on. If Patrick Henry were alive today, he would probably be a Trump-like figure, shitposting on Twitter and calling his opponents enemies of the people.
Another complicated Virginian
A lot of the most prominent Founding Fathers were Virginians, including four of the first five presidents. Patrick Henry was a peer of Washington, Jefferson, Madison, and Monroe; mostly a friend of Washington, an opponent of Madison, and a sometimes ally, sometimes adversary of Jefferson and Monroe.
Like many of those men, Henry managed to simultaneously oppose slavery and own slaves. He frequently wrote about how his moral and religious beliefs convinced him that slavery was evil, and the cognitive dissonance tore at him. He never purchased slaves directly, but slaves often came attached to farms he purchased. He regarded abolition as even crueler than slavery, since the only alternative he could conceive of was throwing freed slaves out onto the streets to fend for themselves.
In his early lawyerly days, he defended Baptists (then a bunch of persecuted upstarts) against the Anglican Church, and was a strong advocate of a separation between Church and State. This would not prevent him later in his career from trying to make Christianity the state religion, only to be opposed by Jefferson and Madison.
During the revolution, when Virginia was vulnerable to British invasion, Virginians were quite unimpressed by Jefferson's leadership. When Patrick Henry took charge, he basically declared martial law, asking the Virginia legislature to give him powers he had previously claimed were assumed only by tyrants. Unger spends a lot of time defending Henry here against the obvious charge of hypocrisy, pointing out that almost every politician reversed course sometimes when the facts on the ground changed. And he's not wrong, but it was rather amusing, in a biography of one of the most fire-breathing, pro-freedom Founding Fathers, to read about how democracy doesn't work in an emergency and Patrick Henry was totally justified in effectively declaring himself a military dictator, even if only temporarily.
You thought crazy wives in the attic were fictional
Patrick Henry's first wife, Sarah, had six children before she went mad. Sarah Henry apparently suffered from severe mental illness and depression which only got worse as she got older, until his friends were recommending she be sent to an insane asylum. Since insane asylums at that time were horrific hellholes, Henry, to his credit, hid her in his attic instead.
Well, okay, not quite. But Sarah Henry did spend the last few years of her life quietly tucked away in their mansion and probably cared for by slaves.
Henry would marry a second wife, Dolly, who gave him twelve more children. As Unger puts it, Dolly and the kids had to run and hide from the British early in the war, but once General Cornwallis was driven out of Virginia, they returned...
And from then on, whenever Henry returned home, he made certain that if his wife was not already pregnant from his last visit, she most certainly would be by the time he left.
Dolly apparently coped with spending her entire life pregnant better than his first wife did. By all accounts, they both really, really loved children, and had a happy marriage. He was known to be a passionate fiddler and played for and with his children often. He was also strict and scrupulous and there were never any stories of infidelity.
How not to get rid of a troublemaker
After the war, Thomas Jefferson was still pissed at Patrick Henry for opening an inquiry into Jefferson's disgraceful retreat from the capital during his time as governor (Jefferson was really a pretty shit wartime governor), even though Henry insisted it wasn't personal. Henry was also rabble-rousing, so Jefferson and Madison and the rest of the Virginia legislature decided to make him governor again, where he'd be powerless.
This seems to be a thing that happened quite a lot in American history: take a zealous, charismatic politician you want to sandbag and give him an office where you think he can't do any damage. Somehow this rarely works out as intended.
Henry was enormously popular, and started issuing executive orders right and left that flat out ignored legal restrictions on his power. Not all of his schemes worked (he tried to subsidize mixed marriages with Indians in an effort to integrate them with white society), but he did block land surveys and do his best to prevent more encroachments on Indian land. He also suspended capital punishment in Richmond.
He opposed ratification of the Constitution and considered Washington's expansion of executive powers to be everything he'd feared, as he was a vehement advocate of states' rights. This was ironic, since he was one of Washington's most loyal friends and would later oppose the "anti-Federalists." While he lost the fight against the Constitution, many of his objections were eventually incorporated into the Bill of Rights. He convinced James Monroe to run for Congress against Monroe's friend (and Henry's foe) James Madison. Monroe agreed, resulting in a tepid campaign in which Madison won anyway. Henry almost challenged the governor who succeeded him, Edmund Randolph, to a duel over another political argument.
After retiring from public office, he resumed his career as a lawyer, where he continued to use emotionally manipulative rhetoric. At one trial, he convinced a jury to acquit a murderer by getting them to cry over how much it would grieve the defendant's aging parents to see him put to death. The judge made the jury reconsider their verdict after pointing out that they didn't have to put the defendant to death. Despite acquiring a somewhat disreputable name as a lawyer who'd get anyone off for a fee, Henry became very wealthy and one of the greatest landholders in Virginia.
I found this book, and Patrick Henry's life, quite interesting. He was, like many of the Founding Fathers, gifted and full of passion and convictions, and also flawed and capable of ignoring his principles when they were inconvenient. I'd rank him as a better man by far than Jefferson, and probably more sincere than anyone save perhaps Madison, who was less passionate but far more brilliant.
Harlow Giles Unger writes short and rather dry biographies, but they're informative and a good way to fill in some gaps about non-presidential figures.
Also by Harlow Giles Unger: My review of John Quincy Adams.
My complete list of book reviews.
|
Monday Message Board
5 thoughts on “Monday Message Board
1. America, Compromised: Lawrence Lessig explains corruption in words small enough for the Supreme Court to understand
Lessig proposes as lucid and devastating a theory of corruption as you’ll ever find, a theory whose explanatory power makes today’s terrifying news cycle make sense — and a theory that demands action.
“From this historic perspective, Lessig painstakingly builds up an argument about how inequality has fueled corruption, which has fueled inequality — and how the bankrupt ideology of the Chicago School corrupted every institution, forcing each of us to make one tiny compromise after another, until we arrive at the present moment.
Lessig’s use of case-studies alternated with broad statistical and political analysis flips back and forth from the microcosmic to the macrocosmic, from individuals and institutions to the whole society and back again, in a story that is as compelling as it is infuriating.”
2. The article mentions:
“Lessig is well-known for having formulated the “four forces” theory of social change: that the world is moved by markets (what is profitable), norms (what is considered ethical), code (what is technically possible) and laws (what is legal).”
I am a little puzzled by the term “code” here. Surely what is meant is technology?
Of course, natural forces and natural resources (abundance, scarcity, substitutability etc.) affect social change too, so there is a need for an expanded list. What affects human societies are the following (in summary):
(a) natural forces in general;
(b) the biosphere (incl. climate, which is worth noting these days);
(c) ecology;
(d) biology (including non-human and human evolution);
(e) human knowledge, skills and beliefs – true or not (science, technology, philosophy, arts, politics, religion, ideology);
(f) customs and institutions (incl. laws);
(g) ethics;
(h) markets.
Some of these overlap, and they all interact to generate social change. What we particularly have to remember is that the important forces which affect the direction of civilization are not all endogenous: far from it, in fact.
Having said all that, Lessig is right about the descent of the US (and not just the US) into systemic corruption. Inequality certainly plays a major role in this. Only a democratic, socialist and egalitarian society stands a reasonable chance of minimising corruption.
3. Hi Ikonoclast
Yes. Just 4 simple points and the world will be great. Meadows (below) and Lessig imo need to caveat with “depending on culture”. I love system dynamics à la Jay Forrester. Here are a few items from Meadows's 12-point list. Top of the list (numbered from the bottom!) is paradigm shift. Almost in black swan territory – paradigms and culture.
PLACES TO INTERVENE IN A SYSTEM (in increasing order of effectiveness):
7. The gain around driving positive feedback loops.
3. The goals of the system.
1. The power to transcend paradigms.
4. Share prices have fallen so hopefully this will discourage Trump from further escalating the trade war he has started which causes real hardship for people who rely on income from work rather than investments.
“The decline in the paper value of your assets means more to me than real suffering ever could…”
|
Reader Comments
One shot keto - What can I eat to lower my blood pressure immediately?
by sherly sylvia (2021-04-17)
Hypertension can be caused by poor eating habits that must be corrected. Today we tell you what to eat if you have high blood pressure.
High blood pressure, or hypertension, is no joke, even though it is a condition that usually manifests without pain, which leads many sufferers not to take it as seriously as they should. My mother was one of them: despite the doctor's warnings, she was one of those who thought "nothing happens in just one day, I'm fine" and never passed up her piece of cheese or ham when the occasion arose... until three years ago, when one of those excesses triggered a spike in blood pressure that caused an acute heart attack she could not overcome.
Following a diet, whatever it may be, can be very bearable, but it is not always easy, and even less so when the only information the doctor gives you is a small sheet that says "Diet X", which in this case would be a low-sodium diet, that you have to interpret as best you can while figuring out how not to get depressed in front of what will be your new menus. Since I do not want anyone to get sad or depressed when sitting at the table, today I am going to talk about what to eat if you have high blood pressure, or how to interpret the sheet the doctor gives you, to try to lower your blood pressure and prevent it from rising further.
What to eat if you have high blood pressure
The first thing the doctor will have told you if you have high blood pressure is not to add salt to your meals, or to reduce it as much as possible, because sodium has now become your worst enemy: you can take at most 1,000 mg of sodium per day, the equivalent of about 2.5 g of common salt. And no, that does not mean you can use 2.5 g of salt to season your dishes, because sodium is found in many other places, and that also counts.
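The 2.5 g figure follows from basic chemistry: sodium makes up roughly 39% of table salt (NaCl) by mass. A quick check in a short Python sketch, using standard atomic masses (nothing here beyond arithmetic):

```python
# Convert a sodium allowance into its table-salt equivalent.
NA_MASS, NACL_MASS = 22.99, 58.44   # standard atomic/molecular masses (g/mol)

def salt_from_sodium(sodium_mg: float) -> float:
    """Grams of table salt containing the given milligrams of sodium."""
    return sodium_mg * (NACL_MASS / NA_MASS) / 1000.0

print(f"{salt_from_sodium(1000):.2f} g of salt")   # ~2.54 g, matching the 2.5 g figure
```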
Vegetables and high blood pressure, which ones are and which ones are not?
In any healthy diet it is necessary to consume fruits and vegetables. These generally do not contain sodium in large amounts, except for the following, whose sodium content may be relevant:
• Swiss chard, 150 mg Na / 100 g
• Celery, 100 mg Na / 100 g
• Spinach, 80 mg Na / 100 g
• Asparagus, carrots and beets, 60 mg Na / 100 g
Na = Sodium
It should also be borne in mind that we are talking about fresh and unprocessed food, since treatments such as dehydration cause the sodium content per serving to increase considerably.
On the other hand, you can enjoy garlic and onions whenever you want -which will become great allies when it comes to seasoning your dishes- and red and citrus fruits.
Foods you should not eat if you have high blood pressure
All kinds of processed food should be avoided, whether packaged or canned, since even products labelled low in salt can still be a considerable source of sodium, which may come from other salts used as preservatives, such as sodium benzoate, or as flavorings, such as monosodium glutamate (E-621).
Avoid sausages and cured meats such as chorizo, loin or ham and all kinds of smoked or salted fish such as salmon or cod.
Pickled vegetables such as pickles, onions or olives are also a source of sodium.
Industrial snacks are also strictly off-limits, as they are loaded with salt, but that does not mean you can never enjoy popcorn while watching a movie at home. It is as easy as making it yourself: buy the corn in bulk and pop it in the microwave as we explain in this post, then, to give it flavor, instead of salt add ground pepper, sweet or spicy paprika, or even a sweet touch with a little cinnamon.
Commercial seasonings and sauces also have to stay in the store, but you can make your own by combining extra virgin olive oil, a vinegar you like and any spice. Soy sauce is practically pure salt and is not recommended, nor are bouillon concentrates, however much they claim to be low in salt.
Cheeses also contain a lot of salt, and some fresh cheeses are packaged in brine; these should not be part of your diet either. If you choose cheese, go for fresh varieties that are low in salt.
Dark chocolate, a permitted whim
If you're a dark chocolate lover, you're in luck, because a few ounces every now and then won't affect your blood pressure.
Be careful with the water and what you drink
Mineral water is one of the healthiest beverages on the planet, but be careful: if you have high blood pressure, make sure it is weakly mineralized and that its sodium content is below 5 mg Na/l. Also be careful with sodas and carbonated drinks, and check their sodium content before consuming them.
The best advice when you have high blood pressure
Always read the labels of what you put in the shopping cart and if you have any questions, always consult your doctor.
The word "keto" comes from "ketogenic", the name of a metabolic process in the human body. Ketogenic diets are a class of low-carbohydrate (low-carb) diets and work by lowering the intake of carbohydrates. Typically, it is advised to reduce total carbohydrate intake to 50 grams per day and net carbohydrate intake to 20-30 grams per day to stay on a ketogenic regime.
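For what it's worth, "net carbs" in this context are normally computed as total carbohydrates minus fiber. Here is a tiny sketch of that bookkeeping against the thresholds just mentioned; the meal figures are made up for illustration:

```python
# Check a day's intake against typical keto thresholds (50 g total, 20-30 g net).
def net_carbs(total_g: float, fiber_g: float) -> float:
    """Net carbohydrates as commonly defined in keto circles."""
    return total_g - fiber_g

meals = [(12.0, 4.0), (18.0, 6.0), (9.0, 3.0)]   # (total carbs g, fiber g) per meal
total = sum(t for t, _ in meals)
net = sum(net_carbs(t, f) for t, f in meals)
print(f"total: {total} g, net: {net} g")          # 39 g total, 26 g net
```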
|
7 Ways to Cope with Anxiety Naturally
Anxiety is one of the most common disorders in the United States. With some 40 million Americans struggling with some form of anxiety, the disorder affects more than 10% of the population. Yet, less than 35% of us ever seek out treatment. Despite that, anxiety is a highly treatable disorder, and tactics like cognitive behavioral therapy, mindfulness-based stress-reduction therapy, and other behavioral therapies can have a significant impact on quality of life. Therefore, it’s always important to seek out a doctor and then treatment. Often, anxiety treatment will require medication. However, that medication is normally temporary, helping to alleviate the worst of the symptoms so you can get back on your feet and learn coping mechanisms. In other cases, you might not be able to take or use medication, might not need medication given your symptoms, or might want to try (with the advice of your doctor) to cope without medication.
If so, there are plenty of paths you can take. Most of them involve going to therapy, learning good coping skills, and building behavior patterns that reduce anxiety as much as possible. 7 of the most helpful ways to cope with anxiety naturally include:
Professional Therapy
It’s important to go to a doctor as the first step in any treatment plan. Anxiety overlaps with thyroid disorders, nutritional deficiencies, substance use disorders, and other serious health problems. You cannot self-diagnose, although an anxiety self-test may help you see if you have any anxiety symptoms. If you’ve already been diagnosed with anxiety, you still might want to go to a professional therapist to get a new assessment and recommendation.
Additionally, modern anxiety treatment normally uses medication as a last resort. If you don’t respond well to behavioral therapy, you’ll likely be recommended medication; however, that normally comes after 3-6 months of therapy first. Behavioral therapy offers an approach that helps you reduce the impact of anxiety by changing how you respond to anxious thoughts. This can mean learning how to stop and get out of negative thought patterns, learning how to break off downward spirals, learning how to distract yourself when you feel anxiety, and examining and understanding underlying problems and patterns. Eventually, therapy should teach you skills that reduce the impact of anxiety long-term; however, many people need consistent follow-ups and ongoing therapy at touchpoints throughout their lives.
Identify Triggers
Understanding what triggers your anxiety (and, if possible, why) is a powerful step toward coping with it. Triggers are the things that set off anxious thoughts or cycles. For example, you might be triggered by people criticizing you, by having chores, by traffic, etc. Triggers are complex and not always directly related to what you end up being anxious about. Sometimes there is no real reason. Sometimes you’re anxious because you’re afraid of being anxious.
Sitting down with a therapist or with yourself and working to identify what your triggers are can be freeing. It also means you can prepare to be triggered when those things come up. If you walk into cleaning up a mess thinking, “I know I get triggered by this and here’s how I’m going to cope”, you’re already prepared. Of course, the important thing there is to prevent that foreknowledge of triggers from resulting in more anxiety.
Exercise
If your anxiety is very bad, it will likely get in the way of being able to exercise. If you have things enough under control, exercise can be a very good treatment for anxiety. In fact, exercise is often prescribed in clinical settings. Why? Regular exercise increases endorphin production, meaning your body naturally produces more of the dopamine and serotonin you need to overcome anxiety. It also means increased blood oxygenation, which boosts mood and energy levels. Exercise can also help shift your attention, pulling you out of downward spirals and anxious thoughts.
However, you don’t have to spend hours at the gym to see benefits. Most doctors recommend 30-60 minutes of light to moderate exercise, 5+ days per week. That means a light workout at the gym, cycling, swimming, walking, or light jogging. You don’t have to be exhausted; you don’t have to lose or gain weight; you just have to move. Building good exercise habits can be difficult, especially at first. Therefore, it’s always a good idea to aim for exercise that’s fun. Dancing, swimming, and yoga are popular choices for that.
Eat Well
Did you know that nutrition plays a large part in anxiety and mood? Some nutrient deficiencies can even mimic anxiety. Good nutrition helps to balance your mood, balance your energy levels, and gives you a healthy base from which to recover. If you do have a nutritional deficiency, it’s important to see a nutritionist for specialized advice. Otherwise, you can follow regular dietary advice, use the daily guidelines, and make sure that roughly 80% of your food is healthy. While this will take months to take effect, it does help.
Learn Stress Management
Stress management is one of the most powerful ways to cope with anxiety naturally. At the same time, there are hundreds of tactics for doing so. Some, like mindfulness, mindfulness-based stress reduction, and stress management courses, are very formal. They’re also often recommended either as part of primary therapy for anxiety or as complementary therapy for it. Practices like mindfulness are highly beneficial for persons with anxiety because they teach you to get out of thought loops and to spend more time experiencing rather than thinking.
At the same time, relying entirely on formal techniques is a mistake. Good habits help you release and reduce stress: for example, building good time management, keeping your home clean, taking time out to de-stress, building good communication habits with your friends and family, taking on work and responsibilities in manageable ways, and managing expectations.
It also means taking care of yourself. For example, if you don’t get enough sleep, you will be stressed. Not getting enough sleep changes how your body produces and absorbs endorphins, which means getting too little or too much (less than 7 or more than 9 hours) sleep in a night can mess with your mood and increase anxiety. Building good habits, like going to bed at the same time every day, turning off devices an hour before bed, or meditating or reading before bed can greatly improve sleep if you struggle with getting enough of it.
Organize Your Life
Organizing your home, office, and life is an easy way to reduce stress in your daily life. More things means more to worry about, more to stress about, and more to keep track of. Taking time to organize spaces, get rid of things you don’t need, remove habits and hobbies you don’t want or need from your life, and otherwise making life easier on yourself is important for coping with anxiety long-term. This can mean taking 15 minutes every morning to clean up and organize, it can mean organizing your schedule to reduce stress, and it can mean finding better ways to do things that might stress you out.
Take Time Out
Managing anxiety can be a lot of work. It can mean taking on hobbies, committing to exercise, setting aside time to clean, going to therapy. It’s also important to take time out to do nothing. Relax, calm down, and just do nothing. That can be difficult in and of itself. It’s hard not to take downtime to clean, or to pick up after kids, or do chores. Taking time to do nothing might mean reading a book, sitting in the bath with music, watching your favorite show, etc. But, making time to just relax and do nothing should be an important part of your daily routine. That’s especially true if you normally guilt yourself for taking time out or struggle to make yourself do things. Planning time for that gives you space to learn how to relax without the guilt, so you can actually relax.
For many of us, anxiety isn’t going away. You might need medication to manage it. You might be able to cope with symptoms with therapy and good habits. Either way, it’s important that you see a therapist, get advice, and work for long-term health. Good luck.
If you or your loved one have questions about dual diagnosis treatment, please contact Laguna Shores today. We are here to support you. Reaching out for help with addiction and mental health challenges takes courage – but you can do it.
|
The Warmest Places In Britain
If you are fed up with the cold and wet weather then why not take a look at our latest infographic, which reveals the warmest and driest places in the UK. Using figures available from the Met Office, we ranked cities by warmth using a single combined metric based on maximum average temperature, dry days and total rainfall. Those living in London should slap on the sun factor as the English capital beat all other UK cities to claim the hot top spot, thanks to a maximum temperature of 15.3 degrees Celsius, 256 dry days and only 557 mm of rainfall annually. Other cities that made the top 5 include Cambridge, Chelmsford, Worcester and Canterbury. On the other hand, those living in Glasgow should wrap up warm and always think about carrying an umbrella. The Scottish city has won the title of coldest and wettest city in the UK: Glasgow has a maximum average temperature of just 12.2 degrees Celsius, experiences 195 dry days and receives 1,124 mm of rainfall annually, more than double the amount received in London. Those looking for warmth should also avoid St Davids, Newry, Leeds and Bradford, which join Glasgow in the top 5 coldest cities in the UK. Take a look at your city below and see how it compares. Are you surprised by the results? Only cities with a complete available data set have been included.
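The infographic doesn't spell out how the three measures were folded into a single metric, so the sketch below is just one plausible reconstruction using the two cities' figures quoted above: standardize each variable and sum the scores, counting rainfall negatively. The equal weighting is my assumption, not the methodology actually used.

```python
# One plausible 'combined warmth metric': sum of z-scores, rainfall negated.
from statistics import mean, stdev

cities = {                # (max avg temp degC, dry days, annual rainfall mm)
    "London":  (15.3, 256, 557),
    "Glasgow": (12.2, 195, 1124),
}

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

temps, dry, rain = (zscores([c[i] for c in cities.values()]) for i in range(3))
scores = {name: t + d - r for name, t, d, r in zip(cities, temps, dry, rain)}
print(sorted(scores, key=scores.get, reverse=True))   # warmest first
```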
Infographic by Revealed: Warmest Cities in the UK
|
How Germany and China Saved the World from Fossil Fuels
In 2020, 132bn watts of new solar generating capacity were installed around the world; in many places solar panels are now by far the cheapest way to produce electricity. This transformation… was the result of a decisive shift in German government policy happening to coincide with China becoming the dominant force in global manufacturing.
By 2012 Germany had paid out more than €200bn in subsidies for solar energy production. It had also changed the world. Between 2004 and 2010 the global market for solar panels grew 30-fold as investors in Germany and the other countries which followed its lead piled in… By 2012 the price of a panel was a sixth what it had been in 2004, and it has gone on falling ever since… In sunny places new solar-power installations are significantly cheaper than generating electricity from fossil fuels. Installed capacity is now 776gw, more than 100 times what it was in 2004.
That does not mean Germany got exactly what it wanted. Solar power is not the decentralised, communal source of self-sufficient energy the Greens dreamed of; its provision is dominated by large industrial installations. And the panels on those installations are not made by the German companies the Social Democrats wanted to support: Chinese manufacturers trounced them…But they do provide the world with a zero-carbon energy source cheaper than fossil fuels, and there is room for many more of them…
The industry boasts no giants comparable to those in aircraft manufacture or pharmaceuticals, let alone computing; no solar company has a market capitalization of more than $10bn, and no solar CEO is in danger of being recognized on the street. It is a commodity business in which the commodity’s price moves in only one direction and everyone works on very thin margins. Good for the planet—but hardly a gold mine.
Excerpt from How governments spurred the rise of solar power, Economist Technology Quarterly, Jan 9, 2021
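Two numbers in the excerpt, the 30-fold market growth between 2004 and 2010 and the fall in panel prices to one-sixth of their 2004 level by 2012, are enough for a rough Wright's-law (experience curve) estimate. Treating market growth as a proxy for growth in cumulative production is an assumption made purely for illustration:

```python
# Back-of-the-envelope experience-curve estimate from the excerpt's figures.
import math

capacity_ratio = 30      # ~30x market growth, used as a cumulative-output proxy
price_ratio = 1 / 6      # prices fell to one-sixth over roughly the same period

b = math.log(price_ratio) / math.log(capacity_ratio)   # experience exponent
learning_rate = 1 - 2 ** b                             # cost drop per doubling
print(f"~{learning_rate:.0%} cost reduction per doubling")   # ~31%
```

That lands close to the 20-30% learning rates often cited for solar photovoltaics, consistent with the excerpt's point that subsidized demand, rather than any single breakthrough, drove prices down.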
|
Friday, October 25, 2019
Sudden Infant Death Syndrome I Essay -- Crib Death SIDS
Sudden Infant Death Syndrome
SIDS (Sudden Infant Death Syndrome) is a traumatic and tragic disease that affects thousands of babies throughout the world every year. There is no way of explaining the death of a child that has SIDS, and there are no real ways of predicting if it could happen to any baby. What makes SIDS even worse is that its exact cause is still unknown. Advanced research in the last 30 years has dramatically reduced the number of deaths. SIDS not only affects the infants but also the families of the infant, and it proves to be a very tough and emotional experience for them.
So what exactly is SIDS? The term SIDS was finally defined in 1969 as the sudden death of an infant or child, which is unexpected by history and in which a thorough post-mortem examination fails to demonstrate an adequate cause of death (Culbertson 3). Basically, this is another way of saying that it is not known why these babies die. SIDS is not a new disease, contrary to what some people might believe; it has been happening throughout time, and unexplained deaths of babies are even recorded in the Bible. SIDS was probably the most neglected disease ever recorded in the history of man. It wasn't until recently that major steps were taken to figure out why babies were dying so unexpectedly and what we could do to prevent it from happening.
So what exactly causes SIDS, and is there anything we can do to prevent it? As of right now, the cause of SIDS is unknown, and there are no consistent warning signs that might alert us to the risk of it. However, scientists and researchers have discovered many things that might contribute to the causes of SIDS. SIDS almost always occurs at night when the infant is sleeping. A higher incidence of SIDS is seen among premature and low-birth-weight children. Women who smoke and let their children be exposed to smoke give their children a higher risk of SIDS. Finally, there is a much higher rate of SIDS when infants are placed on their stomach to sleep (Culbertson, 8-10). One of the biggest recommendations physicians make to new parents today is to let their babies sleep on their back. Putting them on their back greatly decreases the risk of SIDS. These are just some of the things that have been... [...]
Everything we know, all the information that is produced and published, is all just theory, because we don't even know what happened and what caused the death. So where do we go from here? What can be done to stop this terrible disease? Who knows. All we can do is sit back and hope someone's 'theory' is the right one and be thankful that this disease only affects 1-3 infants per thousand born. There are so many questions but not nearly enough answers, and until that day we can only do what the experts tell us to do and hope that SIDS will become almost non-existent.
Culbertson, Krous, Bendell, eds. Sudden Infant Death Syndrome: Medical Aspects and Psychological Management. Baltimore: The Johns Hopkins University Press, 1988.
Bergman, Abraham B., M.D. The 'Discovery' of Sudden Infant Death Syndrome: Lessons in the Practice of Political Medicine. New York: Praeger Publishers, 1986.
Guntheroth, Warren G., M.D. Crib Death: The Sudden Infant Death Syndrome. Second Revised Edition. Mount Kisco, New York: Futura Publishing Company, 1989.
Injury Prevention Committee, Canadian Paediatric Society. Reducing the Risk of Sudden Infant Death. Paediatrics & Child Health.
No comments:
Post a Comment
|
American Orchestra Performs in North Korea
More than 100 musicians traveled to Pyongyang for the historic first performance by an American symphony orchestra. Transcript of radio broadcast:
This is IN THE NEWS in VOA Special English.
The New York Philharmonic orchestra performed in North Korea’s capital, Pyongyang, this week. It was the first performance by an American symphony orchestra in the communist state. More than one hundred performers made the trip to Pyongyang, led by the Philharmonic's musical director, Lorin Maazel. The historic event was broadcast live on television and radio in North Korea. It can also be seen on the Internet.
More than one thousand North Koreans attended the concert Tuesday night. North Korean leader Kim Jong Il did not attend. However, other top North Korean officials did.
The New York City orchestra performed the North Korean national song and America's national anthem, "The Star-Spangled Banner." The musicians played famous music, including Antonin Dvorak’s "New World Symphony" and George Gershwin’s "An American in Paris." The performance ended with a version of "Arirang," a Korean folk song that is considered an unofficial national anthem in both North and South Korea.
North Korea’s government usually bans music that is not approved by officials. As a result, jazz, rock and most Western classical music are not permitted.
The North Korean government has always described the United States as a hostile aggressor. But the American orchestra’s visit was widely described as a form of musical diplomacy. Some Americans hoped the friendly cultural exchange would help improve relations between the United States and North Korea.
South Korea’s Foreign Ministry praised the event as a chance to improve understanding and trust between North Korea and the United States. South Korean experts say much has changed in North Korea since leaders from the North and South met in two thousand. They say expanded contacts have increased the flow of information about the rest of the world into North Korea. Many experts say the country is not as disconnected as it once was. They say events like the Philharmonic performance make important gains in opening North Korea even further.
However, the White House spokeswoman said the performance neither hurt nor helped American diplomatic efforts. She said relations between the two countries will only improve when North Korea provides information about its nuclear programs. It was supposed to provide such information about two months ago to the United States, South Korea, Japan, China and Russia. Those countries have promised aid and improved diplomatic relations if North Korea ends all of its nuclear programs.
(MUSIC: "An American in Paris")
|
Why Is Rome, Italy the Best Place to Visit?
The city of Rome is one of the oldest cities in Europe and the world that still exist to this day, having been founded an estimated 28 centuries ago. Today the city is the capital of Italy and boasts a total population of 2.9 million people.
In this article, we are going to lay out the facts about why Rome, Italy is the best place to visit.
It Was the Epicenter of the Roman Empire.
Rome is said to have been founded in 753 B.C., but it wasn’t until over 700 years later that the Roman Empire came into existence. The Roman Empire is one of the most famous empires in history for being so far ahead of its time and for being the only state ever able to conquer and control such large parts of Europe: at its peak, the Romans controlled more than half of all the territory that makes up the continent.
The history of the Roman Empire is very well documented in its capital city of Rome, where you can go out and see hundreds of still-standing Roman buildings such as:
• Pantheon, originally a Roman temple before being converted to a church. It was built by Emperor Hadrian and is known as one of the best-preserved Roman buildings.
• Colosseum, one of the best-known structures built by the Romans. It was used for theater plays but more notoriously for its gladiator battles, which ranged from prisoners of war and slaves fighting for their freedom, to gladiators fighting exotic beasts such as tigers and lions, to mock navy battles in which the Colosseum would be filled with water and real galley warships transported into the building. Although it is no longer in use, it remains an architectural marvel: built almost two thousand years ago, it was so big that it could hold 50,000 people at the time.
• Circus Maximus, another very famous structure, was a stadium built for chariot racing. It was an estimated 621 meters in length, with room for races of multiple laps.
• Castel Sant’Angelo, the tomb of the Roman Emperor Hadrian, built on the bank of the Tiber River that runs through the city of Rome. At the time it was built it was the tallest building in Rome, and it served as a fortress many times throughout history.
• Baths of Diocletian, a massive public bathing complex that was built near the end of the Roman Empire before it split into two separate entities.
The Vatican.
Aside from being the capital of one of the biggest empires in human history, Rome also holds particular importance in the Christian world, as the independent microstate of the Vatican is located within the city. There are an estimated 1.2 billion Roman Catholics in the world, which shows how much influence the Vatican has.
The Vatican is officially the smallest country in the world; however, it is home to many of the world’s best-known landmarks such as St. Peter’s Basilica, the Sistine Chapel and Saint Peter’s Square. Furthermore, you can traverse the entire country in a day with ease, with thousands of tour guides being available for hire thanks to the country’s huge popularity as a tourist destination.
Additionally, you can see the Pope in person either by attending on a Christian holiday or simply by coming on a Sunday, when the Pope appears on his balcony, gives a speech and leads a prayer.
Trying out Authentic Italian Food.
Italy is known for its iconic food such as pasta, pizza, lasagne as well as its cheeses, wines, and salami.
By visiting Rome you can indulge in high-quality authentic Italian food such as carbonara, a famous style of pasta that originates from the city of Rome. Carbonara is known for its light cheesy sauce, often paired with an Italian cured meat called guanciale.
Another authentic Italian treat that you must try when visiting Rome is gelato, the Italian style of ice cream, with many versions to choose from and literally thousands of gelato parlors located in Rome alone.
If you are looking to try something that is not widely available outside of Italy, unlike carbonara and gelato, then a good dish to try is porchetta, a boneless pork roast that has been specially prepared to be soft and full of flavor so that it almost melts in your mouth when you eat it.
Porchetta is considered a specialty in Italy and can be eaten in a variety of ways such as in sandwiches and baguettes as well as in savory cakes and pies.
A Thriving Fashion Hub.
Italy is home to many of the biggest fashion brands in the world, such as Gucci, Dolce & Gabbana, Armani, Prada and Versace, to name just a few. If you decide to go to Rome and you are an avid fashion lover, it is the perfect location for you. The city is home to hundreds of luxury clothing stores that feature limited-edition products that can only be bought in Italy.
Apart from the world-famous luxury brands that we mentioned above, Rome is also home to many excellent shopping centers where you can buy more affordable clothing as well as old cobbled streets where you can find independent boutiques that sell high-quality handmade clothing made in Italy that allows you to truly buy some genuine Italian fashion pieces.
Fashion catwalk shows are a regular occurrence in Rome, with many of the world’s best-known models, designers and celebrities converging on the city. It is not commonly known, but the everyday public can buy tickets for such events at fairly affordable prices of just a few hundred dollars, which is not a lot of money for a once-in-a-lifetime experience: mingling with leading fashion-world figures and seeing in real life some of the most expensive clothing pieces in the world.
Also read an interesting article: What Should A Traveler See In Milan, Italy? 7 Best Places.
|
Germaine Greer is right about trans-women
Germaine Greer does not think new clothes, new hormones, or sex-reassignment surgery can turn men into women (or, I assume, women into men). She is right about that, and a Cardiff University controversy about her planned lecture this month is a tsunami in a teaspoon.
Of course gender is not fixed at birth. Simone de Beauvoir was right that no one is born a woman. Possibly, no one is even born female. Sex is a cluster-concept, a bundle of attributes, some of which do not develop until puberty or later. And gender is another cluster-concept. Gender is constituted by norms and values that are conventionally considered appropriate for people of a given sex. Gender is a lot more vague than sex, and a lot more historically and geographically variable.
But gender has another interesting feature. It is path dependent. To be a woman is for the pertinent norms and values to apply as a result of a certain life history. Being a woman is not only ‘socially constructed’, as they say, it is also constructed by the path from one’s past to one’s present. In our society, to be a woman is to have arrived there by a certain route: for instance, by having been given a girl’s name, by having been made to wear girl’s clothes, by having been excluded from boys’ activities, by having made certain adaptations to the onset of puberty, and by having been seen and evaluated in specific ways. That is why the social significance of being a penis-free person is different for those who never had a penis than it is for those who used to have one and then cut it off.
The path dependence of gender is not unique. Many social categories are shaped by the way they come to take hold. It is one thing to grow up with English as one’s mother tongue, another to speak English as a second language; one thing to be born to privilege, another to be a ‘self made man’; one thing to be raised a Jew, another to be an adult convert. Admittedly, it would be silly to say that fluent learners of English are utterly different from native speakers, that millionaire parvenus have nothing in common with trust-fund babies, or that converts are simply not Jews. These things aren’t black or white. But by the same token it would be just as silly to say they are all simply white. And that is the sense in which MTF transgendered people are not women.
But that is Greer’s point. She says, ‘I just don’t think that surgery turns a man into a woman. (…) I mean, an un-man is not necessarily a woman.’ People focus on her first sentence at the expense of the second. Greer is not saying that MTF people are stuck being men, no matter how they feel, what they choose, how they are seen, or how they are treated. She is not saying that the oppression of transgendered people has nothing in common with the oppression of women. She is saying that ceasing to be a man does not make one a woman. These things aren’t black or white.
Obviously, the fact that something is true need not stop people taking offense at it. But there is actually no evidence of widespread offense at Greer’s remarks. I called the controversy a ‘tsunami in a teaspoon’ because, contrary to what you might suppose from the press, the students were mostly untroubled by Greer’s comments. Not one in a hundred even felt moved to click on an anti-Greer petition. No serious opposition was mounted; no policy of exclusion was formulated. There was no ‘hecklers’ veto’; in fact, there was a pretty effective hecklers’ veto veto.
So this is all rather puzzling. Greer’s remarks are correct and are neither dangerous nor hateful. The number of critics of students who supposedly want to ‘no-platform’ speakers dwarfs the number of students who want to ‘no-platform‘ anyone. Maybe the transgender tsunami hit the press, not because of some seismic event in our universities, but because commentators want threats to freedom of speech and inquiry to come from a politically safe source. And what safer, softer, target than an imaginary recrudescence of virulent PC-ism in our student unions?
31 thoughts on “Germaine Greer is right about trans-women
1. A good discussion about gender and social constructs, though you go off the rails at the end. A couple of thousand students signed a petition for her talk to be cancelled because of offense at her views which, as you say, aren’t offensive. That’s pathetic. And you don’t need to spend much time in cyberspace to see lots of ranting and raving about Greer’s “transphobia.”
• …not having got a response yet, I’ll elaborate. What I see is one person asking Weinberg, “Do you endorse the whole post or just the paragraph?”, another saying “Your original post looked like it endorsed more than just that paragraph?”, and another person saying that your post contains a lot of offensive stuff. All of which is speech responding to speech. I can’t see anything authoritarian about it, unless vigorous criticism is authoritarian.
I also think that the criticism, far from being stupid, is obviously correct, but that’s more of a judgment call.
And someone who has felt off since childhood in relation to their assigned role has lived a path. Unfortunately that doesn’t solve the problem of people who want to see things as binaries.
It would have been interesting watching the confusion if Rachel Dolezal had been a white man identifying as a black woman. Somehow Dolezal has few defenders. Why?
“commentators want threats to freedom of speech and inquiry to come from a politically safe source.” Actually, yes. Criticizing Dolezal is politically safe; agreeing with Greer is not. And many people remember the marchers for “free speech” in Paris but few of those seem to notice that speaking out for BDS is now illegal in France.
3. (…)
The claim that the oppression of transgendered people has nothing in common with the oppression of women seems incorrect in the light of everything that we know about intersectionality. Greer has a point, but she is wrong.
(Incidentally, Greer didn’t help her point by making it personal about Jenner. To be fair, this wasn’t entirely her fault; Kirsty Wark, the Newsnight interviewer, brought it up, to which Greer instantly replied, “Must you?”)
What I don’t understand is when it became not okay to be wrong.
Isn’t that what academics are supposed to do? You’re supposed to broadcast your theory to the widest audience, and defend it vigorously, in the knowledge that others will probably disagree and perhaps even show it to be wrong. I can’t think of anyone who has consistently done this better than Germaine Greer. Her consistent willingness to put herself “out there” and change her mind as needed is something we should all aspire to.
While that remark of Greer’s was wrong, it was not transphobic. And more to the point, what she said should have been the start of a new thread of the conversation, not the end of it.
• I don’t think Greer claims that the oppression of transgendered people has nothing in common with the oppression of women. It obviously does. It also has a lot in common with the oppression of gay people, and the oppression of people of colour. That doesn’t make transgendered people gay, or black.
• I thought her point was that the *experience* of trans women is not the experience of women. Menstruation, pregnancy, childbirth, breastfeeding, so many things that are distinctive to the experience of so many women and around which so much of feminist activism has centered, these are all things that someone who has been a man for 50 years and became a woman two weeks ago is not going to have experienced. And this seems just obviously right. So much so that it’s hard to believe anyone would contradict it.
This article, by an old-guard feminist, also made many of the same points. And she also was pilloried for it.
4. Almost everyone has some views on any topic which are neither hateful nor dangerous, and it’s good that you should draw attention to the fact that this is also true of Germaine Greer. But, given that she has also written things like this:
I’m less puzzled than you are about why people are upset.
If you’re confident that there’s nothing hateful about calling people ‘ghastly parodies’, I’ll defer to your learned opinion: but not without noting that others who are less deferential might reasonably feel otherwise.
Similarly with ‘deluded’ – obviously, we shouldn’t stigmatize people with mental illnesses, but to describe someone in the vocabulary of mental illness is very often a stigmatising move. (Incidentally, this kind of rhetoric also strikes me as rather inconsistent with the more eirenic ‘these matters aren’t black and white’ line that you take when representing Greer’s position. So I wonder whether you are characterising her position accurately here.)
Here’s a link for the article from which the quotation was taken, by the way:
• Les, I agree with Brian inasmuch as I don’t think this is ‘an imaginary recrudescence’. There is quite a lot of this censoriousness about right now, both among actual student politicians and among the overgrown student politicians – the SJWs – who inhabit parts of the web. Their groupthink makes the student politicians of thirty years ago (parodied in The Young Ones) look pretty sensible. I agree with you, on the other hand, that a focus on this recrudescence mainly serves to distract us from much deeper problems we face. While we are fretting about the more comic symptoms of rampant me-me-me individualism, especially symptoms exhibited by daft young identity-politicians who falsely believe themselves to be progressives, we have less time to focus on the much darker aspects of contemporary capitalism for which these young people are shills or stooges. Squabbles about the cultural superstructure are what keep the economic base out of the firing line.
• Greer is a radical feminist. She’s said much worse things about men. I wonder if all the folks who are so outraged now were outraged then. Or do they only care about meanness when it is directed towards some people but not others?
5. Hi Dr. Green,
Your discussion of the issue in question, as Brian worded it, is illuminating. I was wondering what you think about my initial reception of what you wrote here:
1. My impression is that there is an ontological dispute about gender implicit in this debate, but that more people are interested in spinning the debate as an issue of free speech or of transphobia rather than a need to get our ontology of gender right; but I don’t expect politics to ever know what ontology means. In any case, my second impression is that the ontological dispute has become a nasty, verminous verbal dispute à la Eli Hirsch. More precisely, parties to the debate argue over what constitutes being a woman, and they debate over whether any alleged necessary conditions of womanhood are, in fact, necessary conditions; of course, they talk over each other in disagreement, not always agreeing on the conditions that each side of the debate thinks are necessary.
2. Dr. Green, you write: “in our society, to be a woman is to have arrived there by a certain route . . .”
You fail to mention some obstacles to your thesis: When does a person arrive at being a woman? What are the relevant similarities or significant dissimilarities people face on their journey to being molded into what a society thinks exemplifies the female gender? I feel these questions are worth asking because otherwise we run the risk of imagining gender as a tattoo first sketched and outlined at birth by society, filled in over time until it is complete and practically irremovable without extensive surgery that in most cases fails to convince people of the tattoo’s (gender’s) erasure or replacement. I call such a picture a risk because gender is never as real as a tattoo is, so there are some relevant dissimilarities implicit in that picture.
6. I’m curious about how path-dependence is supposed to work here. Consider the following case:
Carson was born in 1993. She has XX chromosomes and a vagina. She was listed as ‘female’ on her birth certificate. Her parents are both self-identified feminists, and they made sure not to impose gendered limitations on her. She was never forced to wear a dress or go around covered in pink. Her parents sent her to a progressive school with strict policies about gender-neutrality, since the teachers and other parents share feminist commitments. When Carson reached puberty, she did not have her period. She was diagnosed with Mullerian agenesis; she will never menstruate or be able to conceive a child. As a young adult, Carson chooses to keep her hair short and has a generally ‘butch’ appearance. Some people sometimes mistake her for a young man, but she confidently and consistently self-identifies as female.
According to you, is Carson a woman? If so, please identify the ‘path’ that makes her so.
• Feminists who are critical of aspects of transgender ideology have a name for this type of derailment. It’s called “co-opting intersex narratives”, or COINing for short. Even if the phenomenon of intersexuality shows that there are some difficulties in the application of the concepts “woman” and “man” or “male” and “female”, this really isn’t relevant, because the number of people who identify as trans far exceeds the number of people who are intersex. It is far clearer that Caitlyn Jenner is not a woman, but of course it’s becoming verboten to say what should be obvious nowadays.
• Greer’s point is not that someone like Carson is not a woman; it is that someone like Jenner is not. She evidently believes that a person who is biologically one sex and mentally the other is neither.
But I don’t think that her position can be maintained. I think your question about the “path” points to the conclusion that it is not the path that makes the man or woman, but the biology (putting aside those people whose biology is indeterminate).
So when you say “She (was born with) XX chromosomes and a vagina,” that’s pretty much an end to it, to anyone but an academic. She’s a woman.
And Jenner was born with XY chromosomes and a penis, so he’s a man. Cutting off his penis (assuming he ever actually gets that done) doesn’t make him a woman. Greer is right – just for the wrong reason.
7. I don’t think this is a convincing argument for excluding trans women. First, you acknowledge that gender is a cluster concept that comes with “a bundle of attributes” but then you only focus on the historical attributes that are unique to non-trans women. There are two problems with this:
First, there is a semantic problem. If you acknowledge that gender is a cluster concept, why are the many actually shared attributes not sufficient? The whole point of cluster concepts is that not everyone needs to have every attribute. So why exactly do we need to think of the historical attributes that are not shared as necessary conditions? For example, why do you cite “having been made to wear girl’s clothes” as important and “new clothes” as not important? Why not consider the sub-bundle of attributes that is actually shared sufficient for using the term “woman”?
Second, there is an ethical problem. Even if you were right that “woman” is currently used in a way that these historical attributes are necessary conditions, this may still be a harmful linguistic practice. If our current talk about women indeed excludes trans women, why not adopt a less harmful way of using the term “woman” in which the shared attributes become sufficient? Linguistic practices can be oppressive but this is no reason to justify them. This point is not new in the literature (remember Fausto-Sterling’s “First, Do No Harm”? Or Haslanger’s “ameliorative project”?). Greer’s attempt to exclude trans women as “parodies” of “real women” seems like a prime example of using philosophical tools to sell harmful linguistic practices as metaphysical truths.
• Many readers are concerned about the ethical problem David Ludwig mentions in his thoughtful post, and also about another set of ethical problems.
The first problem is that, if I am correct that our concept of ‘woman’ excludes MTF transgendered people, then it is a morally pernicious concept and should be reformed. I think it is clear that I neither denied nor affirmed that. I do assume that there is a fact of the matter about what our concept of woman is (and that it is not ‘in one’s head’), though I say that it is vague (and vaguer than ‘female’ – a different concept). But I obviously don’t think it is vague along the paths I mention. I think there is lots of evidence that ‘woman’ is path-dependent. No one thinks we could cure ‘gender discrimination’ at the bar by encouraging male lawyers to transition to MTF; most people, on discovering that an MTF person was once a man, will revise or qualify a number of judgments and inferences they make about that person; and so on.
Now, supposing I’m right, is it a matter of regret, and if so, how should we address it? I share the common view that we have too few stable and publicly recognised gender categories, and – with more hesitation – the view that these categories anyway just box people in. What to do? Make the boxes bigger? More boxes? No boxes? Our concept of race has similar properties, and I think that it is a matter of regret that we have that concept at all. I don’t see how it improves things if we modify the concept of ‘black’ or ‘African American’ to include Rachel Dolezal. Here I favour no boxes. I’m also tempted by the view that ‘woman’ should not be extended, since that entrenches the salience of the (morally defective) concept. So it’s no boxes or more boxes. I’m not sure we can get to no boxes – I was persuaded by one of Haslanger’s arguments for the salience of sex, and there is a conceptual relation between sex and gender. So I lean towards more boxes.
Some TG/TS people feel the same way. They want to be accepted and recognised *as TG/TS*, or *as FTM* etc., and not boxed into the binary at all. They feel it oppressive that, just because they are unwilling or perhaps unable to see themselves as ‘men’, they must therefore see themselves as ‘women’. I see merit in their argument. And also merit in the argument that we should always use the pronouns, predicates, etc. that people would like us to use when speaking of them. That is not just required by courtesy, but by respect.
The other ethical questions have to do with the nature of hostility towards and discrimination against TG/TS people, and the sort of remedies that we should pursue. I made no claims about any of that. I cannot defend the point here, but I think sex discrimination and gender discrimination (and sexual orientation discrimination) are different things. Of course they are related. An anti-discrimination measure useful in one area sometimes proves useful in others. But sometimes they don’t, and sometimes they are counterproductive. (I’ve written about these matters elsewhere.) These are complex questions of political philosophy and legal strategy. None of the replies to my OP has so far seemed to cast any light on the political-legal questions, at least not so far as I can see. And some of them are obscurantist, self-indulgent, posturing.
I see great danger in making the word “harm” this elastic. It certainly has the potential to render a classically liberal politics impossible, and for all of its faults, it strikes me as the best politics we’ve yet devised. The others all wind up becoming authoritarian at one point or another.
More Mill and less Marcuse is what I’m suggesting. At least if we want to remain a free people.
8. Dr. Green,
People see what they expect to see. And while clothes “make the man,” they apparently don’t make the woman. It is true that “woman” is a product of social conditioning. But there are instincts, perhaps undeveloped, or perhaps atrophied, that every transperson feels; and these perceptions, archetypes, and instincts are not those of the classic expectations of the gender they were born into.
The issue is what to do about it. For a transperson, to not do something about it is to live an invisible and practically unlived life. For everyone else, it’s really not your concern, or why is it? Why does every bloke and lassie think they are experts on someone else’s experience and what it means?
If you woke up one day and deduced you were actually an exo-archeologist, it would not make you one. And that is the essential missed point in all these kinds of useless opinion pieces: most anyone who actually looks at these folks, in depth, personally, eventually learns to see what they were experiencing and “gets it.” Transgenderism is a great example of existential experience. For everyone else, married to their own experience and projecting it onto others: how would they like it if transpeople were able to create an exclusion zone around others’ gender identity?
Be that as it may, apparently people are just not busy enough. I think the wise course is to honor and respect the heartfelt experience of all people, you, Greer, and everyone else. And we should work hard to understand you and how you got to be the way you are. In the end, it is only through understanding in an open-minded way that we can widen our world to include other perspectives.
In the Americas, the “native” Americans were quite accepting and respectful of transgender people, treating them as honored members of their tribes. Perhaps that is an example that all of the high-flying academics can study in order to maintain the reservoir of respect that we have for them.
• I agree with you: if one day I woke up with the firm conviction that I am an exo-archeologist, it would not make me one. Nor could I become one by dressing like an exo-archeologist, changing my CV to create the impression that I am an exo-archeologist, etc. There is usually a difference between ‘feeling like an X’, ‘dressing like an X’, ‘identifying as an X’, etc. and actually being an X. I think that is true when x = woman.
9. It’s funny, all of this argument about who is and who isn’t a woman is almost purely academic. While people argue back and forth about whether or not I, a transwoman, GET to use the labels “woman” or “female”, I just do… every single day. The people I interact with view me as a woman, treat me as a woman, call me “mom”, ask when I had my last menstrual cycle when I’m at the doctor’s office, open doors, and affix glass ceilings above me. From my perspective, whether or not I am a woman is academic nonsense, and has very little to do with my real life.
I think, to deny me access to public women’s spaces (restrooms) – simply because I was placed in a certain category by my doctor and family at birth – would harm me and my children. It would be the equivalent of a scarlet letter, or the African car on the “separate but equal” train. I don’t care about privately-held “womyn’s spaces”. A private group has the right to define me any way they want. Go ahead, call me “a man” or “a transgender”… whatever you need to do to exclude me and feel like you’re with your own kind (whatever that means). I’m not interested in people who employ “one-drop” rules anyway (one drop of male privilege means you’re not a woman). I’m not going to force my way into spaces where I am not welcome.
In my lived experience, however, I think of myself as a woman. I “feel” like a woman (and I definitely know what that means to me personally). I am a “mom” to my kids. And finally, I will fight tooth and nail for the public status of female/woman because I understand that the public-at-large is not a gender-studies class, and will never accept anything but the gender binary. It runs too deep, in every human society on Earth. I live in one such society, and if I want to be a participant, I use the agreed-upon labels. That’s the way it is.
• This strikes me as the right view. Life isn’t a philosophy class, and the question of how transgendered people should be treated cannot be determined by figuring out whether an MTF transgendered person is, or is not, best thought of as a kind of woman. I guess your last point – that the ‘gender binary’ can’t realistically be overcome – explains why some transgender people feel such a strong need to be treated as cisgender people. But not all. Some think that asserting their (say) ‘womanhood’ just makes things worse. Maybe it is like those people of mixed race who thought that justice should not require passing as white?
• I think the trans community is complex, and includes people who simply want to blend in and belong to the current social paradigms, and those who wish to challenge, change and/or destroy those paradigms. The motivation for transition ranges too, from those who feel they are innately cross-gendered (not unlike intersex), to those who are engaged in a pursuit of political/idealistic social change. I put myself squarely in the first group, and whether or not some people respect my insight or experiences, I still live every day of my life as a woman. And society views me as a woman. My life is much better for it too. Less depression, more enthusiasm, better relationships, more opportunity, less internal conflict. That a human being is able to enjoy their life post-transition in a way they could never do before, that means something to me.
10. Thank you for an interesting post. I’m not sure if I agree that trans women aren’t ‘real’ women, but I do believe that trans women are not equivalent to cis women. The trend towards the employment of the terms ‘trans’ and ‘cis’ only reinforces the sense of there being a necessary categorical distinction between the two. Also, something that is frequently missed when these issues are discussed is that gender is not just about commonality; it’s about exclusion. While the trans woman may wish to assert membership of the category ‘woman’, there are women who might wish to exclude the category ‘men’ (for instance, in a women’s refuge), so it’s not sufficient to say that ‘my gender is nobody else’s business’. This, allied with a shift (in some sectors) towards identifying gender as essentially metaphysical, existing independent of physical correlates of any kind, suggests to me that gender as a category will disappear rather than opening membership to any who want it. (The developed form of this argument can be found here:
|
What Is Code Refactoring? Definition, Benefits and Best Practices
Software Code Refactoring
Code refactoring is the process of restructuring software source code to improve its internal structure and non-functional characteristics while leaving its external behavior unchanged.
Code refactoring aims to simplify the design of existing code, improve its readability, and make it more efficient and maintenance-friendly. The process is a kind of software upgrade needed to improve several non-functional qualities: maintainability, performance, security, and scalability.
The process commonly consists of a series of small steps called “micro-refactorings”. At each of these steps, a small alteration is made to the source code that leaves the code simpler and cleaner while system functionality remains the same.
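To make the idea of a micro-refactoring concrete, here is a minimal before-and-after sketch in Python. The order-total example and all of its names are invented for illustration; they are not from any particular codebase.

```python
# Before micro-refactoring: correct, but the intent is buried in one expression.
#
#     def order_total(items):
#         return sum(i["price"] * i["qty"] for i in items) * 1.2
#
# After two small steps -- naming the magic number and extracting a helper --
# the external behavior is unchanged, which is the whole point of refactoring.
VAT_RATE = 0.2

def line_total(item):
    """Price of a single order line before tax."""
    return item["price"] * item["qty"]

def order_total(items):
    subtotal = sum(line_total(i) for i in items)
    return subtotal * (1 + VAT_RATE)

# Same observable result as the original version.
assert order_total([{"price": 10.0, "qty": 3}]) == 36.0
```

Each step is small enough to review at a glance, and the assertion documents that the observable result did not change.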
Refactoring process in Agile
What Are the Benefits of Code Refactoring?
As we mentioned above, code refactoring does not change the software’s external behavior. The product functionality remains the same, and users will not see any difference.
Why is it necessary to refactor then? There are several reasons for it:
1. Simplified support and code updates. Clean code is much easier to update and improve. Developers can quickly make new functionality available to users, as well as save the support budget, since maintenance will require less working time from the developers involved.
2. Saved time and money in the future. Code refactoring reduces the likelihood of errors in the future and simplifies the implementation of new software functionality. Instead of making sense of tangled code or fixing bugs, developers can start implementing the required functionality at once.
3. Reduced complexity for easier understanding. If the team takes on a new employee, or the entire development team changes altogether, it will be easier for new developers to comprehend the code and make the necessary alterations faster.
4. Maintainability and scalability. At times, programmers simply avoid making alterations to messy code because they do not clearly understand what consequences those modifications will lead to. The same is true for scalability. Removing this obstacle is another benefit of code refactoring.
Improving the design of code
In a nutshell, then, both businesses and developers receive two main benefits here: they reduce the time and money spent on further work on the software product, and they can more easily understand how everything works.
When Is It Time to Refactor Your Software’s Code?
Time to Refactor Your Code
It is not difficult to know the right time for refactoring. Here are some common situations when it is worth doing:
• Technical debt gets accumulated. If similar tasks start to take more time to complete than they did at the project launch, even though their complexity has not changed, that is an obvious symptom of accumulated technical debt. It means the project contains more and more complex and confusing pieces of code and architectural failures, and scaling the project itself becomes difficult.
• It is necessary to scale. Let us presume the product works alright, but it takes too much time to add new functionality, or various issues start appearing as a result of its implementation.
• It is necessary to make the code more understandable. It takes years to develop some software products, and logically, team personnel changes over time. Refactoring makes any code easier to comprehend for new team members.
• It is necessary to reduce upgrade and support costs. Through a business prism, this point is the most important. As we wrote above, clean and well-structured code takes less time to update and maintain.
Besides, if you can, you may micro-refactor regularly. For instance, you may spend the last hour of your working day on this activity several times a week.
Refactoring in Agile: Best Practices
In the Agile methodology, refactoring is normal practice: with every new iteration, the code becomes more difficult to maintain and expand if you do not strive to constantly make it cleaner and easier to understand.
Here are some important working principles that qualify as best practices:
1. Move one step at a time. Never try to do everything at once. Refactor the code as a series of small micro-modifications so as not to affect the product’s functionality.
2. Test. The refactoring process should go hand in hand with tests to make sure the alterations made did not result in new bugs (a minimal sketch of such a test follows this list).
3. Refactoring should not add new functionality. Never mix this process with modifications to product functionality or adding new features. Refactoring is a task used to make the code cleaner and more understandable so that those very functions can be implemented more easily and quickly.
4. Plan your work and focus on progress. Any code becomes obsolete over time. Consequently, you should accept that such a process will never be 100% complete, and it is therefore worth seeing it as regular project maintenance.
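As a minimal sketch of principle 2, a characterization test can pin down current behavior before any restructuring begins. The `shop` module and `order_total` function here are hypothetical stand-ins for the code being refactored; `pytest` is assumed only for its `approx` helper.

```python
import pytest  # assumed available; plain assertions would also work

from shop import order_total  # hypothetical module holding the code being refactored

def test_order_total_unchanged():
    # Expected values recorded from the implementation *before* refactoring.
    cases = [
        ([], 0.0),
        ([{"price": 10.0, "qty": 3}], 36.0),
        ([{"price": 5.0, "qty": 1}, {"price": 2.5, "qty": 4}], 18.0),
    ]
    for items, expected in cases:
        assert order_total(items) == pytest.approx(expected)
```

Run the suite after every micro-step; a failing test tells you immediately which small change broke the behavior.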
Refactoring tips and checklist
Refactoring does not offer immediate benefits, and its advantages for business are not always obvious. However, in the long run, you will get better code as well as a calmer, more productive work environment, and that makes the investment in such work reasonable.
If, having read this article, you understand that code refactoring can be useful for your software product, please contact us, and Lvivity’s expert professionals will provide you with detailed advice.
Lvivity Team
|
Inventor Alfred Bird
Alfred Bird
Born: 25 August 1811
Died: 15 December 1878 (aged 67)
Alfred Bird 1837
Alfred Bird was born in Nympsfield, Gloucestershire, England in 1811. He invented Bird's Custard Powder in 1837.
He was a food manufacturer and pharmacist. He set up a chemist's shop in Birmingham in 1837.
Apple Pie & Custard
Alfred's wife, Lady Bird, was allergic to egg and yeast but she adored custard. Alfred Bird set about trying to find a way of formulating an egg free custard for his wife. After much experimenting he found that cornflour powder would thicken to form a custard-like sauce when mixed with milk and heated.
Birds Custard
He formed 'Alfred Bird and Sons Ltd' and later went on to create formulas for baking powder, blancmange powder, jelly powder, and egg substitute.
Alfred Bird Plaque
|
Accounting for Alcohol – part 4, a brewery in Okinawa and its role in economic development.
This is post #4 in my summary of a recent edited book. Chapter 4 is written by Kazuhisa Kinoshita and details the role of the Orion Brewery in the economy of the Okinawa region after the Second World War.
The Orion Brewery, while small in terms of the overall Japanese market, helped rebuild Okinawa and the dreams of the young. Okinawa was home to a large US air force base from the 1950s through to the 1970s, and the base became part of the local economy. Supplies to the base were a large source of income for the local economy, and in this environment a “local” product helped generate a sense of identity and belonging for local people. This product came in the form of beer from Orion.
From an accounting perspective, the chapter looks at costs and output – in essence, cost-volume-profit analysis. The remote location of the Okinawa islands increased the cost of building a brewery, and the limited market size constrained sales. The effect of beer duties also had to be included in the decision, and the regional government were favourably disposed towards a lower beer duty. The end result was the construction of the Orion brewery in 1957, and it is still active to this day.
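As a rough illustration of the cost-volume-profit logic the chapter describes, here is a small sketch; every figure below is invented for illustration and is not taken from the book.

```python
# Hypothetical cost-volume-profit (break-even) sketch for a remote brewery.
fixed_costs = 120_000_000      # annual cost of running the plant (yen, invented)
price_per_case = 2_000         # selling price per case (yen, invented)
variable_cost_per_case = 900   # ingredients, packaging, freight (yen, invented)
beer_duty_per_case = 500       # duty per case (yen, invented)

contribution = price_per_case - variable_cost_per_case - beer_duty_per_case
break_even_cases = fixed_costs / contribution
print(f"Break-even volume: {break_even_cases:,.0f} cases per year")  # 200,000

# A lower regional duty of 300 yen would raise the contribution margin to
# 800 yen and cut the break-even volume to 150,000 cases -- which is why the
# duty mattered so much in a market limited by the islands' size.
```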
|
Public Speaking vs. Casual Conversation
Public speaking and casual conversation are comparable in that both are done to inform, persuade, or engage an audience. For example, have you ever craved Mexican food when the rest of your group wanted Italian? You find yourself waging a plausible argument for tacos by describing their cost, benefits, or ease of ordering. What you’re doing is attempting to persuade and inform your companions to side with your cravings, possibly even finding evidence that Mexican food would be a more beneficial choice. You may even offer to pay for the dinner. In any case, you are, in a way, attempting to alter thinking or behavior through the same channels as a public speaker.
They are different in that public speaking is more organized, is spoken in formal language, and involves a formal delivery. When we are engaged in a casual discussion with friends or colleagues, it generally isn’t formal and more often than not involves back-and-forth communication. You say something; your friend responds. Pretty straightforward.
In casual conversation, the language isn’t formal and is more forgiving. Although casual conversation may be used to educate or persuade, ordinarily it is for entertainment. Public speaking is more organized and, for the most part, takes on more formal language. The audience does not engage with the speaker. It is structured and has time limits, requiring careful planning and research. Its delivery also has a purpose, which can be to persuade, educate, or even entertain.
When we speak in public to inform, we are delivering information to the audience that they don’t already know. This may be how to do something, like deep-frying a turkey, or relaying information about a critical event. At other times, as public speakers, we might attempt to persuade an audience by endeavoring to change the way the audience believes or behaves. There are times when the speech is made predominantly to entertain, to add humor, or to delight the audience. This may be a speech given at a celebration dinner or at a special occasion. It may be done like a roast, to poke light-hearted fun at another individual. For instance, when roasting a certain individual for their achievements, a few light-hearted jabs are made throughout the speech to entertain and hype up the crowd. Casual conversation may serve similar purposes, but not in such a calculated way.
|
NEO Coin And Ways to Obtain It
NEO can be defined as an ‘open-source blockchain‘ that was among the pioneers in the Chinese market. It supports both digital assets and digital identities and uses smart contracts to function. NEO is marketed as a way to link your real-life ID and digital identity, allowing you to enhance the safety and security of your online identity.
NEO is traded at numerous exchanges, where the most popular crypto pairs are NEO to BTC, NEO to USDT, and NEO to ETH. Pairs like NEO to GO are less common, and can be found on a handful of platforms.
The Distinction Between NEO Token Functions & NEO GAS Functions
When you start researching NEO, you will find that there are NEO GAS and NEO coin. So, it is essential to understand the distinction between the two tokens.
NEO coin is a governance coin used to control the system and vote. NEO GAS is a utility token used to cover the cost of transaction fees and enact smart contracts.
When it comes to NEO mining, you can mine GAS by using your NEO token. What is really great about GAS is that the token is shared proportionally among NEO owners. This means you can expect to get GAS without having to do anything.
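That proportional claim is simple arithmetic. Here is a toy sketch; the holding and generation figures are invented, though NEO’s total supply is commonly stated as 100 million.

```python
# Toy illustration of proportional GAS distribution (figures invented).
total_neo = 100_000_000   # total NEO supply
my_neo = 1_000            # hypothetical holding
gas_generated = 5_000     # hypothetical GAS generated network-wide in a period

# Each holder accrues GAS in proportion to their share of all NEO.
my_gas = gas_generated * my_neo / total_neo
print(f"Accrued GAS this period: {my_gas:.4f}")  # 0.0500
```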
NEO is keen to be seen as a smart economy and is working hard to prove their value to their investors.
How NEO Offers a Smart Economy
NEO has made it their mission to offer a genuine, smart economy to the users. They see this as an integral part of what makes them unique on the blockchain.
The idea behind this smart economy is to provide all parties with the chance to enter into various contracts without the need to know and trust each other. A computer code that is used to do this is impartial and unbiased.
NEO‘s founder, Da Hongfei, shared a vision of economic applications that are able to run on the NEO blockchain utilizing smart contracts. These contracts are designed to support the enforcement of the NEO ecosystem laws.
A Purchasing Guide
NEO‘s driving force is entirely community based with lots of developers working on it across the globe. That’s why NEO coin has proved to be a popular choice and is likely to continue growing steadily.
If you are looking to get hold of coins then you must have either fiat or any cryptocurrency ready. To buy NEO, you will need to register at the exchange site and pick the amount you want to purchase using your selected payment method. This process is simple and quick, allowing you to become a happy owner of NEO as soon as possible. Once purchased, you will need to securely store your NEO in your personal wallet.
NEO is usually traded against the top cryptocurrencies like Bitcoin and Ether. But now, a highly sought-after pair is NEO/GO.
Where Is The Best Place To Exchange NEO for GO
Both of these cryptocurrencies are fairly new to the scene, so it is to be expected that there aren’t many places where you can exchange both of them in a safe manner. You will need to use a secure exchange to complete a NEO-to-GO transaction, and the one place we’ve found that you can accomplish this is Godex.
Godex offers an anonymous, fast and fully automated buying process. It uses a real-time price tracker to ensure it is on top of all the best rates being offered by competitors.
The team behind Godex has a solid belief in the world of cryptocurrency and has made it their mission to provide a service that helps all crypto enthusiasts get the currency they need. The platform offers:
• A service that is easy to use and does not require form filling or account opening to make the exchange you need;
• A speedy transfer capability so that you can get your NEO to GO exchanged without having to wait hours to see it appear in your wallet;
• The ability to access the best rates 24/7. While exchanging, the platform freezes the rate so that you are protected from any fluctuations in prices.
One of the best aspects of Godex is that there are no limits to how much you can exchange unlike many other services. This means that you can sell as many NEO coins as you want without having to complete multiple transactions at different times.
The other great thing about Godex is that they respect your right to anonymity and never ask you to share personal information when you enter the site or when you complete exchanges.
Finally, you can be sure that your trades are safe since Godex uses strong security protocols, SSL encryption, and DDoS protection mechanisms.
However you choose to convert your NEO, the coin is well worth considering and offers a decent ROI for anyone who was an early adopter. Start trading and enjoy the returns you are likely to make if you play your cards right.
|
What Is A Synonym And Antonym For Rich?
What is the synonym and antonym of empty?
bare, vacant, stripped, empty-handed, pillaged, glazed, lifeless, ransacked, looted, glassy, white, void, plundered, vacuous, clean, fullness, blank.
emptiness, full, validate, existence, valid.
Whats is an antonym?
: a word of opposite meaning The usual antonym of good is bad. Other Words from antonym Some Differences Between Synonyms and Antonyms More Example Sentences Learn More about antonym.
What is a very rich person called?
A person who possesses great material wealth. Synonyms: fat cat, wealthy man, wealthy person, someone, plutocrat, billionaire, man of means, mortal, somebody, Croesus, person, have, millionaire, individual, millionairess, multi-billionaire, rich man, soul, affluent.
What is another word for lockdown?
What is another word for lockdown? Solitary confinement, confinement, custody, detention, imprisonment, incarceration, holding cell, isolation, solitary, the hole.
What are the 10 examples of antonyms?
What’s a fancy word for SAD?
Synonyms for sad: unhappy, despondent, disconsolate, discouraged, gloomy, downcast, downhearted, depressed, dejected, melancholy.
What can I say instead of sad?
What’s a word for feeling empty?
TFD – sad or lonely, especially from being deserted or abandoned. MW – bereft, forsaken.
What are the 50 examples of synonyms?
What is a antonym for rich?
rich (adjective): Wealthy; having a lot of money and possessions. Antonyms: lean, needy, poor.
What is a synonym for rich?
Synonyms: wealthy, affluent, moneyed, cash rich, well off, well-to-do, with deep pockets, prosperous, opulent, substantial, propertied.
What is the opposite of no one?
no one (pronoun). Antonyms: everyone.
What is opposite of strong?
Antonyms: weakness, frail, weak, tender, powerless, delicate.
What is the opposite of old?
Antonym of Old: Young, Modern.
|
An Overview of Anxiety and Prescription Medication
Anxiety and Prescription Medication
Anxiety is a quite normal response by humans to certain situations that may be stressful or dangerous. It often increases the level of caution and awareness of the immediate environment or setting. There is a lot to know about anxiety disorders as well as anxiety and prescription medication.
Almost everyone will experience anxiety at one stage or another for different reasons. The feeling does not last long for most people, but millions of others have to deal with different forms of anxiety disorders.
Over 40 million American adults have been diagnosed with various forms of anxiety disorders and other psychological conditions. Read more to find information specific to anxiety and prescription medication.
What are anxiety medications and how do they work?
Anxiety disorders occur when there is an imbalance of certain chemicals in the brain, neurotransmitters such as serotonin, norepinephrine, and gamma-aminobutyric acid (GABA). These chemicals influence a person’s state of mind, well-being, and how relaxed he or she is at any given time.
The treatment of anxiety disorders commonly involves the use of multiple types of drugs such as antidepressants, anti-anxiety medications (also known as anxiolytics), and beta-blockers.
The main function of antidepressants and anxiolytic medications is to effectively balance specific chemicals in the brain known generally as neurotransmitters. Beta-blockers and other forms of anxiety medication are primarily used to treat the symptoms that usually accompany an anxiety attack.
What are the signs and symptoms of anxiety?
Anxiety disorders present themselves in several ways, and the symptoms can differ considerably in extreme cases. However, the body reacts to anxiety in some specific ways, because an anxious person is always on high alert, trying to identify any potential threat or danger. The fight-or-flight response common to everybody kicks in. Below is a list of some of the common signs and symptoms of anxiety:
• An urge to avoid anxiety triggers
• Feeling nervous and restless
• Obsessive-compulsive disorder (OCD)
• Having trouble dealing with issues other than what makes you anxious
• Having digestive or gastrointestinal problems like gas, constipation, or diarrhea
• Insomnia
• Increased sweating
• Feelings of panic, danger, or dread
• Performing certain behaviors over and over again
• Post-traumatic stress disorder (PTSD)
• Rapid heart rate
• Rapid breathing, or hyperventilation
• Trembling or muscle twitching
• Weakness and lethargy
Panic attacks
A panic attack is a sudden onset of fear or distress that usually peaks within minutes and is defined by some of the symptoms below:
• Chest pains
• Choking sensation
• Dizziness, light-headedness, or feeling faint
• Feeling of detachment from oneself or reality, known as depersonalization and derealization
• Fear of losing control or ‘going crazy’
• Fear of dying
• Feeling hot or cold
• Numbness or tingling sensations (paresthesia)
• Nausea or gastrointestinal problem
• Palpitations
• Sweating
• Shaking or trembling
• Shortness of breath or feeling like you are being smothered
Anti-Anxiety Medications
While most anti-anxiety drugs are used to effectively treat the symptoms of anxiety disorders and panic attacks, it should be noted that they are not cures, but simply a means to manage the condition. The anti-anxiety medications prescribed by medical professionals usually depend on the specific kind of anxiety disorder. Specific anxiety and prescription medication relationships include:
Benzodiazepines
When talking about anxiety and prescription medication, it’s important to talk about benzodiazepines. Drugs like Ativan (lorazepam), Valium (diazepam), Xanax (alprazolam), and Klonopin (clonazepam) fall into this category and usually cause relaxation when taken. They are used to treat anxiety disorders, panic disorders, and social anxiety disorders.
Beta-Blockers
These medications (acebutolol, atenolol, propranolol) generally block the effects of epinephrine, reducing heart rate and blood pressure. They are prescribed for use on a short-term basis and usually reduce physical anxiety symptoms like sweating and trembling.
Buspirone
Lesser known in discussions of anxiety and prescription medication is buspirone. This medication, BuSpar (buspirone), is prescribed for long-term use to treat anxiety disorders. It usually takes days before its effect is felt, and it must be taken daily. It is less addictive and less sedating.
Selective Serotonin Reuptake Inhibitors (SSRIs)
Prescription medications like Paxil (paroxetine), Prozac (fluoxetine), Zoloft (sertraline), and Lexapro (escitalopram), among many others, are all in this category. As the name implies, SSRIs boost the amount of serotonin in the brain, which improves mood.
Serotonin-Norepinephrine Reuptake Inhibitors (SNRIs)
The serotonin-norepinephrine reuptake inhibitors (SNRIs) include medications like Effexor (venlafaxine), Cymbalta (duloxetine), and Pristiq (desvenlafaxine), which also increase the levels of serotonin and norepinephrine to improve mood.
Tricyclic Antidepressants
Tricyclic antidepressants include Tofranil (imipramine), Elavil (amitriptyline), Pamelor (nortriptyline) and Anafranil (clomipramine), and they are still considered effective in treating anxiety.
|
Controlling Pests in Your Home
Why You Need to Control Pests in Your Home
Pest control is vital, as different species of insects and rodents are carriers of fatal infections. Pests can also cause physical damage in your home by destroying your furniture and other home accessories. If you want the best methods of control, you can opt for the best pest control Phoenix has to offer.
The following are the reasons why pest control is necessary and inevitable:
• To protect the health of the occupants of the house
Insects, rats, and other pests often get into food and leave contaminants. If you eat that contaminated food, you are likely to develop a severe illness. The pests are a danger to you, your family and your pets.
Rodents, for example, carry dangerous pathogens such as hantavirus and salmonella. If such an animal contaminates your food, you are likely to acquire a serious disease. You should ensure you employ tactics that keep such pests away from your home.
• To prevent the destruction of household items
Termites are notorious for feeding on wooden furniture in the house, leading to the decay of such items. Rats feed on clothes and leave ugly marks on your favourite garments. Rats and mice damage the insulation on wires in your home as they try to build nests among the cables. Having such pests in your home makes the value of your house and items depreciate.
You should hire a professional to ensure that all pests are eliminated in your home in a professional manner. The experts ensure that their natural habitats are destroyed to prevent them from reproducing in your homestead. They also kill the existing pests that are causing damage in your home.
• To prevent allergens in the house
Pests tend to carry allergens to your home. The allergens can deteriorate existing conditions such as asthma and those with breathing difficulties. The feces of the pests can also cause skin infections for people who have sensitive skin.
Animals like wasps and bees can sting you and deposit allergens on the skin surface. The sting can react with your antibodies and result in severe inflammation associated with mild to chronic pain.
• Feed on your food
Another significant effect of pests is that they feed on your household food. If they continuously feed on your food stock, you will incur costs to buy more. Also, if you find that food has been contaminated, you will have to dispose of it, leading to food wastage. Thus, pests in your home lead to higher food costs and increased food disposal, which is uneconomical.
Bottom Line
It would be best if you strove to control pests in your home to avoid such effects. A pest-free house is good for your health, and the value of the home does not depreciate. However, the methods you use to control the pests should not put the occupants of the house in jeopardy. You need to hire professionals to ensure that all pests are eradicated safely and professionally.
|
Magnetic Moon, Two-Sex Birds, Homemade Fusion and More Mysterious News Briefly — October 8, 2020
Mysterious News Briefly — October 8, 2020
Researchers using DNA collected from ancient latrines and cesspools found that the prokaryotic and eukaryotic intestinal flora of people in the Middle Ages were loaded with bacteria, protozoa, fungi, and parasitic worms that were deadly back then but easily treated with antibiotics today. Kudos to these dedicated researchers who also manage to come up with the best words for sh*t.
A new study confirmed that the magnetic characteristics of the Moon most likely come from an ancient core dynamo, not plasma generated by meteoroid impacts as some scientists suggest. With the number of impact craters on the Moon, that kind of magnetic pull would affect not just Earth’s tides but its paper clips too.
Neandertal adults are depicted as having barrel chests, and a new study shows that Neandertal babies were born with them, suggesting this was an inherited trait and not just a physical development due to the heavy breathing required to survive 50,000 years ago. This might explain why there are no cave paintings of Neandertals bench-pressing bison.
Biologists in Pennsylvania discovered an extremely rare rose-breasted grosbeak (Pheucticus ludovicianus) that has male plumage on one side and female on the other – a clear sign of a genetic anomaly known as bilateral gynandromorphy. How soon before it appears on RuPaul’s Drag Race?
SpaceX is trying to convince residents of Boca Chica in southeastern Texas – who are mostly retirees — to sell their homes to the company and move because it’s too dangerous to live near Elon Musk’s private resort for launching Starship-Super Heavy rockets. They need to convince Musk it’s too dangerous to build it there and put the resort on Mars instead.
It hasn’t gotten much publicity lately, but the hole in the ozone over Antarctica reached 8.8 million square miles (23 million sq. km.) this year — more than twice the size of the U.S. Once again, 2020 sets the bar even higher for 2021.
A 12-year-old middle school student in Memphis, Tennessee, broke a Guinness World Record when he became the youngest person ever to create a nuclear fusion reactor and he did it inside his family’s house. “We’re not worthy!” said every science fair participant for the next 20 years.
A team of doctors and engineers at Nanjing University developed a new hydrogel that they say repairs nerve damage in animals and may work just as well in humans. In the meantime, mice are probably stocking up as a way to defeat mousetraps.
Viruses similar to the one that causes rubella, or German measles, were found in three different species of animals that died in a German zoo, leading researchers to believe the German measles jumped from animals to humans. Is this the beginning of the end for German shepherds as pets?
|
What is Homeopathy
Homeopathy is a system of alternative medicine that has been in worldwide use for 200 years. It is a system that puts the patient at the centre of their own healthcare and, by working with the body’s innate ability to heal itself, can bring forth true health by restoring the balance between the mental, emotional and physical bodies.
Homeopathy is based on the principle that “like cures like”. A substance that causes symptoms in an individual when taken in large doses can be taken in smaller doses to treat similar conditions. This concept has been around for centuries and is well documented in medical fields such as biology and toxicology.
An example of the above principle would be the desensitisation of allergic patients by administering to them small doses of pollen, or the use of the stimulant Ritalin to treat attention deficit hyperactivity disorder.
Today, 450 million people around the world use homeopathy as their principal form of medicine.
What can homeopathy do for you
Homeopathy can support you with conditions such as:
• Autoimmune conditions and long term chronic illness
• Emotional stress, Anxiety, Low self esteem, Depression, Fears, OCD and Phobias.
• Digestive problems such as Irritable Bowel Syndrome (IBS), Heartburn, Constipation etc.
• Skin conditions such as psoriasis, eczema, boils, acne, ulcers etc
• Allergies such as Hay Fever and Asthma
• Hormonal imbalances such as PCOS, PMT, Menopause, Infertility, Hypo or Hyperthyroidism, Prostate problems
• Childhood illnesses such as common colds, recurrent Otitis, Chicken pox, Urine tract infections etc
Homeopathy can be used with children, babies, during pregnancy or breastfeeding and it can be used alongside conventional medication.
Homeopathy is a sustainable and environmentally friendly form of medicine, and the production of homeopathic medicines creates no toxic waste.
|
Cancer-Free Pets
Five Ways to Help Keep Them Healthy
by Karen Shaw Becker
Veterinarians are seeing cancer in more and younger pets these days than ever before. According to the American Veterinary Medical Association, approximately one in four dogs will develop cancer at some point in life, including almost half of dogs over the age of 10. But taking practical steps can help lower a pet’s risk.
Don’t allow a dog or cat to become overweight. Studies show that restricting the number of calories an animal eats prevents and/or delays the progression of tumor development across species. Fewer calories cause the cells of the body to block tumor growth, whereas too many calories can lead to obesity, which is closely linked to increased cancer risk in humans.
There’s a connection between too much glucose, insulin resistance, inflammation and oxidative stress, all factors in obesity and cancer. It’s important to remember that fat doesn’t just sit in a pet’s body harmlessly—it produces inflammation that can promote tumor development.
Feed an anti-inflammatory diet. Anything that creates or promotes inflammation in the body increases the risk for cancer. Current research suggests cancer is actually a chronic inflammatory disease fueled by carbohydrates. The inflammatory process creates an environment in which abnormal cells proliferate.
Cancer cells require the glucose in carbohydrates to grow and multiply, so work to eliminate this cancer energy source. Carbs to remove from a pet’s diet include processed grains, fruits with fructose and starchy vegetables like potatoes.
Keep in mind that all dry pet food (“fast food”) contains some form of potentially carcinogenic, highly processed starch. It may be grain-free, but it can’t be starch-free because it’s not possible to manufacture kibble without using some type of starch. The correlation between consuming fast foods and cancer has been established in humans, so it’s wise to incorporate as much fresh, unprocessed food into an entire family’s diet as can be afforded.
Cancer cells generally can’t use dietary fats for energy, so high amounts of good-quality fats are nutritionally beneficial for dogs fighting cancer, along with a reduced amount of protein and no carbs—basically a ketogenic diet.
A healthy diet for a pet is one that’s anti-inflammatory and anti-cancer, and consists of real, preferably raw, whole foods. It should include high-quality protein, including muscle meat, organs and bones. It should also include high amounts of animal fat, high levels of EPA and DHA (omega-3 fatty acids) and a few fresh-cut, low-glycemic veggies. This species-appropriate diet is high in moisture content and contains no grains or starches.
Also make sure the diet is balanced following ancestral diet recommendations, which have much more rigorous standards (higher amounts of minerals and vitamins) than the current dietary recommendations for pets. A few beneficial supplements like probiotics, medicinal mushrooms, digestive enzymes and super green foods can also be very beneficial to enhance immune function.
Reduce or eliminate a pet’s exposure to toxins and minimize chronic stress. These include chemical pesticides like flea and tick preventives, lawn chemicals linked to cancer (weed killers, herbicides, etc.), tobacco smoke, flame retardants, household cleaners and air-scenting products like candles and plug-ins. Because we live in a toxic world and avoiding all chemical exposure is nearly impossible, a periodic detoxification protocol can also benefit a pet.
Research points to the benefits of identifying and removing sources of chronic stress in an animal’s life. Focusing on providing environmental enrichment and opportunities for dogs to just be dogs (play, sniff and run) on a daily basis is important in keeping them happy and healthy.
For dogs, especially large or giant breeds, hold off on neutering or spaying until the age of 18 months to 2 years. Studies have linked spaying and neutering to increased cancer rates in dogs. Even better, investigate alternative ways to sterilize a pet without upsetting their important hormone balance.
Refuse unnecessary vaccinations. Vaccine protocols should be tailored to minimize risk and maximize protection, taking into account the breed, background, nutritional status, lifestyle and overall vitality of the pet. Vaccines may cause cancer, and titer testing is a responsible way to ensure a pet has adequate immunity in place of over-vaccinating on an annual basis.
Karen Shaw Becker, DVM, a proactive and integrative veterinarian in the Chicago area, consults internationally and writes Mercola Healthy Pets.
|
Think Outside the House: Expanding Spring Cleaning
Spring cleaning traditionally heralds a new beginning, an opportunity to take stock of hearth and home and a time of renewal regardless of the season. Clearing figurative cobwebs is as important as sweeping away real ones, and while most folks focus on giving their abode a thorough airing and scrubbing, there’s plenty to tend to outside before the heat of summer sets in.
Clear out potentially dead grass and leaves and other organic matter near the sides of the house to prevent termites and other insect infestations. Collect the organic matter, add in food scraps and compost it all to benefit the garden. Composting sends the nutrients of loose ingredients into the soil as a natural fertilizer. Eartheasy reports it can help divert as much as 30 percent of household waste from the garbage can.
Make sure to check the top and outer walls of the house. Upraised nails in a shingled roof, deteriorated shingles, or gaps where plumbing vent pipes penetrate the surface—possibly due to high winds, falling branches or ice thawing in colder climes—can produce small breaks and holes for water to seep through onto the tops of ceilings. That can lead to mold as summer temperatures rise and water leaks into the interior of the house. The Old House Web says collars of vent pipes should be tight, as “some older [ones] can loosen over time and even some newer rubber collars crack and leak long before the shingles fail.” Also, check the gutters to make sure they are clear of packed leaves and tree branches.
Don’t forget the family car, which may need its own spring cleanup. Go green with a natural soap to remove slush and grime, and then take a close look at the toll the past year has taken. Pebbles and rocks may have been kicked onto the sides of the car, resulting in small chips and abrasions of the paint from which rust might spread. The nonprofit Car Care Council recommends covering the areas as quickly as possible and if necessary to use a little clear nail polish—nontoxic, of course—as a quick fix for minor paint damage until a proper touch-up can be scheduled.
Then there’s the undercarriage. Salt particles used in treating roads and highways in icy regions may be lodged in crevices, where they can corrode metal and functional parts. Make sure the hose sprayer also reaches these areas.
|
Money alone can’t change Indigenous living conditions
Lorrie Goldstein, of the Toronto Sun, wrote a sterling piece on how the billions thrown at Indigenous people in the 21st century alone by all levels of government have not changed their living conditions.
If money could solve Indigenous people’s lack of potable water, housing services, firefighting equipment and communication systems, or their unemployment and suicide rates, it would have happened by now.
Governments spend billions ineffectively in Canada’s troubled relationship with First Nations.
What was the knee-jerk response to the discovery of the remains of 215 children in an unmarked burial site at a former residential school? More money.
Has government consulted with individual bands to determine their most urgent needs?
It blahs “no relationship is more important than the relationship with Indigenous.”
The latest budget promises a “historic investment of $18 billion over the next five years to improve their quality of life.”
Governments will spend a total of $24.5 billion on Indigenous programs in 2021-22, on top of $12.9 billion in 2016-17, $15.4 billion in 2017-18, $17 billion in 2018-19, $20.5 billion in 2019-20, and a further $22.7 billion, for a total of $113 billion.
Have Indigenous living conditions improved?
Every year, spring flooding causes Kashechewan residents to be evacuated.
In 1990, Archbishop Desmond Tutu was shocked at the living conditions in some northern communities, which reminded him of the shanty towns of his fellow Blacks in South Africa.
How would you react if you had six or eight or 15 members of your family and community commit suicide in one year?
How is it possible that Indigenous reserves still lack clean water?
For all the promises the federal government has made over decades, the quality of Indigenous lives has not come even close to that of the majority of Canadians.
Unemployment, poverty, disease, drug and alcohol addiction, suicide and incarceration rates remain far above Canadian norms.
Endless land claims remain unresolved, resulting in peaceful protests that disrupt our economy through rail and highway blockades.
This despite “scathing reports from Canada’s late auditor-general Michael Ferguson in 2016-18 on Canada’s incomprehensible failure to close the socioeconomic gap between its First Nations people and other Canadians.”
He called it “an abject failure of leadership going back decades at the federal, provincial, territorial and First Nations levels, with most of the responsibility falling on the federal government.”
He said “the federal bureaucracy doesn’t monitor the results of its spending on Indigenous programs to see if the money is accomplishing what it’s supposed to accomplish.”
Surely this comment applies to more than Indigenous programs.
Instead, Ferguson reported “these programs are managed to accommodate the people running them rather than the people receiving the services; the focus is on measuring what civil servants are doing rather than how well Canadians are being served.”
What business would keep spending like this with no results?
Ferguson said “We don’t even see that they know how to measure those gaps that the funding is supposed to address, and until we do, Canada will continue to squander the potential and lives of much of its Indigenous population.”
A Facebook cartoon of a white church atop a mountain of children’s bones with the words “Love your Neighbour” drew no comments.
It is sad that tragedies of Indigenous people receive less attention and concern than tragedies involving new Canadians.
Reach Gene Monin at
|
Azimuthal Projections Info
Example azimuthal map
Azimuthal projection from ARRL headquarters
This all started when I was interested in calculating the distance and bearing between amateur radio stations. I found this reference on calculating the great circle distance and bearing. Ultimately, this interest led to a web form for producing azimuthal maps.
For those who have never heard of an azimuthal map, it is a special kind of map that prioritizes correctly showing the great circle distance and bearing from the center reference point. Azimuthal maps are particularly useful for ham radio operators with a directional antenna. For example, if you’re in Connecticut and you want to talk with someone in Cameroon (Africa), the azimuthal map will tell you to point your antenna to a bearing of 90° on a compass (technically you also must adjust for the difference between magnetic north and true north).
It occurred to me that this bearing and distance calculation was the fundamental tool for making an azimuthal projection. All I needed was a database of land and political boundaries expressed in latitude/longitude pairs.
An azimuthal projection is always made from a particular reference point on the globe, and I can convert the points in the land and political boundary database into bearing and distance using the great circle calculations. This gives me a collection of points in polar coordinates (r, θ), which is exactly what I need for the azimuthal projection.
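To make that conversion concrete, here is a minimal Python sketch of the two calculations, using the standard spherical-earth formulas (including the atan2 form of the great-circle distance, which is the well-conditioned variant mentioned below). The function and constant names are my own illustration, not the site’s actual Ruby code:

import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; a spherical-earth approximation

def to_polar(ref_lat, ref_lon, lat, lon):
    """Convert a (lat, lon) point into polar coordinates (r, theta)
    relative to a reference point: r = great-circle distance in km,
    theta = initial bearing in radians clockwise from true north."""
    p1, l1 = math.radians(ref_lat), math.radians(ref_lon)
    p2, l2 = math.radians(lat), math.radians(lon)
    dl = l2 - l1

    # atan2 form of the great-circle distance: numerically stable
    # for both very small and near-antipodal separations.
    y = math.hypot(
        math.cos(p2) * math.sin(dl),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl),
    )
    x = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dl)
    r = EARTH_RADIUS_KM * math.atan2(y, x)

    # Initial bearing from the reference point toward the target.
    theta = math.atan2(
        math.sin(dl) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl),
    ) % (2 * math.pi)
    return r, theta

# Example: bearing from Connecticut (41.7N, 72.7W) to Cameroon (5.7N, 12.4E)
r, theta = to_polar(41.7, -72.7, 5.7, 12.4)
print(round(r), "km at bearing", round(math.degrees(theta)))  # roughly east, ~90 degrees

Note that theta is measured clockwise from north, so it matches compass bearings directly.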
For the land and political boundary information, I used the database from the CIA World Databank II; however, it required some massaging. First, it was too detailed, and I had to filter the data to an appropriate resolution. Next, I wanted land masses and water bodies to be represented by closed paths. The CIA World Databank II is a collection of unconnected paths, so I had to write a program to patch them together and reorder the points to run clockwise. Having closed paths of points in clockwise order is necessary for coloring water and land differently. Lastly, I had to identify which closed paths represent land and which represent water. Unfortunately, this work is incomplete: you’ll notice some lakes that aren’t colored blue.
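One common way to enforce that clockwise ordering is the shoelace (signed-area) test. The sketch below is my own illustration of the idea, not the actual patching program, and it treats each ring as planar, which is an approximation for geographic data:

def signed_area(ring):
    """Shoelace formula: positive for counter-clockwise rings,
    negative for clockwise ones (in y-up coordinates)."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        total += x1 * y2 - x2 * y1
    return total / 2.0

def ensure_clockwise(ring):
    """Reverse the ring if it is wound counter-clockwise."""
    return ring[::-1] if signed_area(ring) > 0 else ring

ccw_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(ensure_clockwise(ccw_square))  # reversed into clockwise order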
Initially, my Ruby program generated raw PostScript. However, I recognized that it would be easier for people to use if it generated a PDF. Rather than using a PostScript-to-PDF converter, I changed to generating PDFs directly using the Ruby PDF::Writer library. By using PostScript or PDF, I get vector graphic output that scales from small to very large sizes without requiring huge raster graphics files. There is a limit to the resolution of the continent & political outline data, so at very large sizes the outlines may not appear smooth.
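As a rough illustration of the same idea (my own Python sketch using the third-party reportlab library, not the author’s Ruby PDF::Writer code), one can map projected polar points into page coordinates and draw vector paths directly into a PDF:

import math
from reportlab.pdfgen import canvas  # assumes the reportlab package is installed

PAGE = 595     # A4 width in points; map drawn in a square PAGE x PAGE area
MAX_R = 20015  # km, roughly half of Earth's circumference (the map edge)

def to_page(r, theta):
    """Map (km, radians clockwise from north) to PDF page coordinates,
    with the reference point at the center and north at the top."""
    scale = (PAGE / 2) / MAX_R
    x = PAGE / 2 + r * scale * math.sin(theta)
    y = PAGE / 2 + r * scale * math.cos(theta)
    return x, y

c = canvas.Canvas("azimuthal.pdf", pagesize=(PAGE, PAGE))
path = c.beginPath()
# outline: a short made-up list of (r_km, theta_rad) points from the projection step
outline = [(5000, 0.0), (6000, 0.8), (7000, 1.6)]
path.moveTo(*to_page(*outline[0]))
for pt in outline[1:]:
    path.lineTo(*to_page(*pt))
c.drawPath(path)  # stroked as a vector path, so it scales without pixelation
c.save()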
Ultimately, I used the great circle distance formula from Wikipedia. According to the article, it’s accurate for both long and short distances.
The hardest part of the whole project was getting the water blue. There are still some maps where the program gets it wrong, and you’ll see weird coloring. Things tend to go wrong when the reference point is inside a particularly small region of land or water. In these cases, floating-point inaccuracy seems to make the coloring approach fail.
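The land/water decision ultimately reduces to point-in-polygon tests against the closed paths, and that is exactly where floating-point trouble creeps in when the test point lies very close to a boundary. Below is a generic even-odd ray-casting sketch (my own illustration, not the site’s actual heuristic):

def point_in_ring(px, py, ring):
    """Even-odd ray casting: count how many ring edges a horizontal
    ray from (px, py) crosses; an odd count means the point is inside."""
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print(point_in_ring(2.0, 1.0, triangle))  # True
print(point_in_ring(5.0, 1.0, triangle))  # False

When the test point sits almost exactly on an edge or vertex, the crossing comparisons become sensitive to rounding, which is consistent with the failures described above.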
My plans for future improvements include:
• add small tables of bearing & distance for important world cities around the margins
• add US and world cities to the map (particularly for the area map)
• adjust political boundaries and labels to be more accurate
• add more views to allow people to report lakes that aren’t colored correctly
373 thoughts on “Azimuthal Projections Info”
1. Hoping to make a map centered on my location, but apparently your server cannot be found… It is 1/3/2021
2. Hi Tom…
I tried several times using this string: 33° 41′ 14″ N, 117° 49′ 33″ W
Kept getting “Internal Server Error”. Also tried different sizes, with the same result.
Thanks for providing such a nice service.
Jon KA6MOK
1. Ah, that works. I guess you can’t use N/S or E/W to indicate + or – anymore? Good to see it’s working otherwise.
Thanks again!
2. Oops… re-read the instructions, and it was probably my formatting; the page won’t parse the degree, minute, and second characters, right? Anyway, yet another lesson in RTFM… 😉
3. This is an excellent resource! Something I wish it had was an option to fill in Maidenhead grid squares with either two or four places (e.g. CM or CM86, depending on map resolution)
4. Thank you for this! I needed to calculate the angle between two locations when drawn on a polar azimuthal equidistant map, and your tool allowed me to calculate it real easy.
One question: would you have similar software for maps in other polar azimuthal projections like stereographic and gnomonic?
5. Hi,
could you add the equator line, to show the part of the other hemisphere, and choose it with a flag?
Anyway great app.
73 de ik8udd, Ermanno
6. Hi,
Many thanks for this great tool.
If I understood it correctly, here is the place to report bugs etc. 🙂
I have a remark/bug and a little wish 🙂
When I generate a map with, for example, 1500 km from JN39, I have all of continental Europe’s countries in blue and the U.K. & Ireland in white. So it takes Brexit into account 😉
little wish: Ability to generate Great circle maps with 4 char locators written on it for VHF / UHF work 🙂
Many thanks
1. I never found a bulletproof way to see if the center was enclosed in land or not. In most cases, I was able to fix the blue by using a nearby location or a different distance. You can also turn off the blue.
7. Hi Tom,
I really enjoy using your Azimuthal maps and find it helpful. I was wondering for a future feature if you could plot “real time” MUF as seen at but overlaid onto the Azimuthal data for a selected location. It would let you see in real time if propagation is likely open between the selected location and any place in the world easily.
Thanks again!
8. Bug report: When I use the parameters Center: 22°18’0″N 114°0’0″E, Radius: 10000 km to generate the map, the part from France to Greenland becomes bugged, with national and land boundaries broken.
9. In addition to the previously reported problem, Fiji is also bugged in multiple different maps that show it, including the map with Center: 22°18’0″N 114°0’0″E, Radius: 10000 km.
1. Yes, bad things happen when there is a small island on the other side of the world from the center. Small islands are defined by very few points, and some of the approximate methods used in the map generation fail. You can often resolve the problem by moving your center a little.
1. I think the problem with Fiji is not merely because of small islands. Instead, it seems to me there are some islands near Fiji that cross the 180-degree longitude and aren’t handled correctly, causing an abnormal white quadrilateral shape around the area.
10. In addition, there are problems with the map data. The map doesn’t show the name of Hong Kong when city labels are enabled, but instead shows “Xianggangdao”, and it displays “Macao, China” as the country name. It doesn’t display the name of Taiwan, and for the Taiwanese cities of Taipei and Kaohsiung it spells the names as Taibei and Gaosyong, a less common transliteration.
The map is also missing the boundary between the Koreas and the boundaries between Russia and Kazakhstan, Ukraine, Belarus, the Baltic countries, Georgia, Armenia, and Azerbaijan, as well as the borders between Israel, Jordan, Lebanon, and Egypt. The map is also missing the borders between Saudi Arabia, the UAE, Oman, Yemen, and Bahrain, with Yemen’s label also missing. The border between Somalia and Ethiopia is partially missing, and there is a strange gap in the border between Iraq and Saudi Arabia, near Kuwait.
The map is also missing the label for “Pyongyang” and other North Korean cities, with Nampo the only one shown. The map is also missing every city in Laos.
The map labelled Northern Mariana but missed Guam. It also shortened the name of the Federated States of Micronesia to Micronesia, which IMO shouldn’t happen because Micronesia is also the name of the bigger region it’s in.
It displayed the Australian capital as “Canberra-Queanbeyan (ACT-NSW)”, which I am not sure what it means. The map is also missing the New Zealand capital.
South Sudan is missing from the map, together with its cities.
The national label for Singapore is also missing.
Anchorage in Alaska is shown as “Anchorage municipality”, which is quite redundant.
1. The sources used for land and country borders are old. They have many flaws, as you point out.
I have a new program based on another more recent database, but I haven’t had time to make it robust enough for deployment via WWW.
1. Load the PDF in Inkscape, then use the PNG export feature to make a PNG bitmap at the desired resolution. GIMP can convert the PNG into a JPEG if that’s desired. On Linux, there are lots of other tools that can convert a PDF to a JPEG.
11. Feature Request: It would be nice, if instead of generating the map on a single A0/A2/A4 paper, it can also generate the map and cut it up into 2×2/3×3/4×4 pieces of A4 paper in the PDF, with an additional index page showing their order.
12. Another feature request: Add blue marble satellite photo, or topographical map, as optional map layer to the generation product, similar to gcmap dot com’s option
13. Hi Tom,
this is a great, very useful tool. I generated a map for VHF/UHF contest use around my locator using radius 1.200 or 1.600km. Would it be possible, not to have only Maidenhead greater fields but also squares (00..99) ? E.g. with thin red lines and numbers ?
I would really appreciate such a feature.
vy 73 de Rolf
1. If you would like to edit the program to add this capability, I can give you access to the code and provide the outline data.
14. How do I send you an azimuthal map that shows the center of a circle that just touches 3 extreme points on the coasts of the Lower 48, the smallest circle that surrounds all territory of the Lower 48?
15. I keep getting a “500 Internal Server Error” when attempting to create a map. I attempted with both a city and state and the actual latitude and longitude.
Your fellow amateur radio buff Geoff Fox posted a link to your Azimuthal Projections map web site, so I’m guessing you have had a multitude of requests.
I will bookmark the site and try later.
16. Hi, not sure if I’m doing something wrong or the map is doing something wrong, LOL. I type in coordinates, distance, etc., click show city labels, and submit, and the map gives me grids and towns, but it’s all blue. No map or anything else shows up. Is that right? In a video I watched from LCARA Ham Radio, they had lakes and roads and more on their map.
What am I doing wrong?
1. There are some heuristics (approximate methods) to determine which side of the graph is inside the shape (land) and outside the shape (water). Sometimes, this goes horribly wrong, and land and water are inverted or just blue everywhere. Sometimes you can get something better by turning off the water fill (deselecting blue background).
17. I’m not seeing the actual map, just blue. I do see the city names but no map. Any help is much appreciated.
18. How can I download it so that the texts and the lines will be layers instead of just an image on a PDF?
Like I want to open it in GIMP with everything as a layer so I can turn it off and on as needed
1. I understand what you’re asking for, having used GIMP myself, but I don’t believe it’s achievable without a lot of programming work. If you want to do the programming work, I can make the source code and data files available to you.
I think Inkscape is a better tool to use on the PDF because Inkscape knows how to read scalable vector graphic file formats. You may be able to do some of the post processing you want without any changes by me.
19. Hi! I love this map tool, but I’m running into a problem. If I center on the North Pole (90N 0W) and render the maximum distance, the heading degrees are messed up. For example, New Zealand renders at the top of the map, at a heading of 0 degrees.
This is entirely wrong as compared to, say, Gleason’s map or even just historical land claims in Antarctica. For example, New Zealand historically claims the Ross Dependency (150 degrees west to 160 degrees east). This would be on the OTHER side of Antarctica from New Zealand according to your headings.
Maybe I’m missing something (reawakened an interest in geography in my old age, haha) but it seems to me like your headings are inverted from top to bottom in my end result projection.
Anyways, I’d be very appreciative of some insights or thoughts. I’m wondering if I incorrectly formatted my input or missed some means of offsetting/inverting the headings. I can make this work for my uses, but it means I’ll have to overwrite all the headings by hand.
1. This reminds me of an old puzzle. You’re in a hut with all southern views. You look out the window and see a bear. What color is the bear?
An azimuthal map from the north pole is basically undefined. If you’re at the north pole, every direction you look is south (a bearing of 180 degrees). In some respects, I suppose the map should collapse into a line where everything is at bearing 180.
Navigation or map specialists may have a way to handle this, but it’s beyond me. If you’re planning a trip near the poles, I suggest working with somebody with more expertise than me.
20. First off, this is a great tool! Thank you for building it. Unfortunately, some islands and bodies of water are colored inconsistently.
For example, this request:
Grid location: CN87uo
Distance: 1000
The various islands (Vancouver Island, the San Juan islands, etc.) are white instead of blue. Some major bodies of water are white (in northern California) but most are blue (in British Columbia, Washington, Idaho, Montana, etc.).
1. Yeah, the method for determining which side is the inside of the land isn’t perfect. You can turn off the blue water to get something more reasonable.
21. Awesome work! I haven’t noticed problems from the several locations that I’ve used as center. I have two suggestions that should be easy to implement.
– an option to add tropics, polar circles, poles and equator (independently from lat/long grid)
– an option to have no political borders at all. A clean map of only landmass and water.
1. Glad that it has worked well for you. Thanks for your suggestions. I have no idea if I’ll ever get around to implementing them.
22. Great tool, but the map and the country boundaries are a bit too old. Like 30 years or so. 🙁 Yugoslavia has not existed since 1991!
23. Two comments:
First, in your first paragraph from this page, you state, “technically you also much adjust for the difference between magnetic north and true north”. “Much” should be “must.”
Second, a world map on an 8.5″x 11″ paper is rather small. Is it possible to make an option to print portions of the map on each of several sheets, to be pasted together after printing into a larger map?
1. Thank you so much for reporting the typo! I believe there are some tools to print large PDFs across multiple pages using “tiling”.
|
Biology 11 Chapter 12 Circulation
1. The transport system of the human body is known as?
2. The organ that pumps the blood to the whole body is?
3. The heart, trachea, esophagus and associated structures form a middle portion known as?
4. The closed sac that surrounds the heart is known as?
5. The protective fluid between the membranes of the pericardium is known as?
6. The phenomenon in which a cell regains its shape after being plasmolysed is known as?
7. Movement of cell sap involving the cytoplasmic connections of adjacent cells is known as?
8. The source of energy in photosynthesis is?
|
Biology- 11 Chapter 3 Enzymes
1. An enzyme combines with its substrate to form?
2. Once a reaction has occurred, the complex breaks up into?
3. At the end of the reaction, the enzyme?
4. These are thermolabile catalysts, protein in nature, which can work in living tissues and also outside the tissues?
5. What happens to an enzyme when it takes part in a chemical reaction?
6. All enzymes are proteins, so each enzyme has its own?
7. Enzymes needed for the synthesis of DNA and RNA are located in the?
8. The energy that is required by molecules to react with one another is known as?
9. The temperature that promotes maximum activity of an enzyme is known as?
|
Definition:Summation/Propositional Function
From ProofWiki
Let $\struct {S, +}$ be an algebraic structure where the operation $+$ is an operation derived from, or arising from, the addition operation on the natural numbers.
Let $\tuple {a_1, a_2, \ldots, a_n} \in S^n$ be an ordered $n$-tuple in $S$.
Let $\map R j$ be a propositional function of $j$.
Then we can write the summation as:
$\ds \sum_{\map R j} a_j = \text{ The sum of all $a_j$ such that $\map R j$ holds}$.
If more than one propositional function is written under the summation sign, they must all hold.
Such an operation on an ordered tuple is known as a summation.
Note that the definition by inequality form $1 \le j \le n$ is a special case of such a propositional function.
Also note that the definition by index form $\ds \sum_{j \mathop = 1}^n$ is merely another way of writing $\ds \sum_{1 \mathop \le j \mathop \le n}$.
Hence all instances of a summation can be expressed in terms of a propositional function.
Iverson's Convention
Let $\ds \sum_{\map R j} a_j$ be the summation over all $a_j$ such that $j$ satisfies $\map R j$.
This can also be expressed:
$\ds \sum_{j \mathop \in \Z} a_j \sqbrk {\map R j}$
where $\sqbrk {\map R j}$ is Iverson's convention.
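For a concrete instance (an illustrative example of my own, not part of the definition): let $n = 5$ and let $\map R j$ be the assertion "$j$ is odd and $1 \le j \le 5$". Then:
$\ds \sum_{\map R j} a_j = a_1 + a_3 + a_5$
while in Iverson's convention the same sum is written:
$\ds \sum_{j \mathop \in \Z} a_j \sqbrk {\text{$j$ is odd and } 1 \mathop \le j \mathop \le 5} = a_1 \cdot 1 + a_2 \cdot 0 + a_3 \cdot 1 + a_4 \cdot 0 + a_5 \cdot 1 = a_1 + a_3 + a_5$
since the bracket contributes a factor of $1$ exactly when the propositional function holds and $0$ otherwise.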
The set of elements $\set {a_j \in S: 1 \le j \le n, \map R j}$ is called the summand.
The sign $\sum$ is called the summation sign and sometimes referred to as sigma (as that is its name in Greek).
Also see
• Results about summations can be found here.
Historical Note
The notation $\sum$ for a summation was famously introduced by Joseph Fourier in $1820$:
Le signe $\ds \sum_{i \mathop = 1}^{i \mathop = \infty}$ indique que l'on doit donner au nombre entier $i$ toutes les valeurs $1, 2, 3, \ldots$, et prendre la somme des termes.
(The sign $\ds \sum_{i \mathop = 1}^{i \mathop = \infty}$ indicates that one must give to the whole number $i$ all the values $1, 2, 3, \ldots$, and take the sum of the terms.)
-- 1820: Refroidissement séculaire du globe terrestre (Bulletin des Sciences par la Société Philomathique de Paris Vol. 3, 7: pp. 58 – 70)
However, some sources suggest that it was in fact first introduced by Euler.
|
Signs You Need a Detox + a Daily Gut Rejuvenator
The idea of detoxing is all around us. Every health clinic, health magazine, health blog, and even social media is talking about why you need to detox and cleanse your body. They all tell you different ways to do it too. There are juice cleanses, gall bladder cleanses, fasts, sugar detoxes, bone broth cleanses; how do you know what is right for you with all the detoxes out there?
It is no secret that we live in a world full of toxins. We are in a time now where we are exposed to higher levels of toxins every day. The air we breathe, the chemicals in our foods, the release of chemicals in our homes and workplaces all add to our toxic load. It may not be just one incident of exposure to a toxin, but the daily small exposures that increase your toxic load and lead to many common health consequences. Being aware of the sources of your daily accumulation can allow you to make less toxic choices when it comes to the foods you eat, the personal care items you choose, and the environments in which you live and breathe.
Along with toxic exposure in our environments, homes, and foods, our bodies also create toxins. These toxins are by-products of our metabolism, metabolic waste from the production of energy, excess hormones, metabolic particles from incomplete digestion, infection (yeast, bacteria, viruses, and parasites), and free radicals (damaged cells). So, every day, our bodies have their own toxic load of metabolic waste that must be cleared.
Your Body’s Natural Detoxification Needs Your Help!
Because of so much exposure, it is important for our bodies to break down these toxins and clear them out. Our major detoxification organs, such as the kidneys, liver, lungs, lymph, and skin, have tons of work to do every day.
Your body naturally detoxifies on a daily basis, but to accomplish this it needs the right foods and a variety of nutrients to support the detoxification process. If these nutrients are not available, your body may have a difficult time detoxifying. Also, if you are under constant stress, are eating inflammatory foods, such as sugar, refined carbohydrates, bad fats, are living off stimulants, such as sugar, coffee, and sodas, or have a chronic health condition, then the detoxification process is slowed down and toxins are not effectively eliminated. When this happens, the toxins recirculate and are held in the tissues. Then, as more toxins enter the body, those stored toxins do not get eliminated. Over time, the toxins accumulate, and eventually the cells receive fewer nutrients, the energy factory of the cells (mitochondria) become impaired, and you feel it.
It is obvious that the world we live in just keeps getting more toxic. The question is not if you should do a detox; it’s when and how. Symptoms of toxicity will show up as all sorts of common health issues, so start listening to your body and look for signs. If you are experiencing any one of the following symptoms, it’s a good time for you to start a detox.
Are you experiencing digestive distresses?
If you have symptoms like bloating, gas, burping, constipation, or diarrhea, they are not only uncomfortable, but they are a major red flag that things are not functioning correctly. This is one of the most obvious signs that you need a complete diet overhaul. A detox will improve your digestion by giving your gut a break and it will start you on your way to healing a leaky gut.
Do you have sugar cravings or an insatiable sweet tooth?
Most people will crave something, but when your cravings are centered on carbohydrates and sweets it’s a sign that things are out of tune. Constant cravings often end up feeding pathogenic bacteria in the gut. When processed foods are a staple in your diet your blood sugar and your hormones are on a roller-coaster ride which increases your cravings even more. A detox will replace those junk foods with nutrient dense whole food sources that will help balance hormones.
Do you feel run down or easily exhausted?
We are all too busy for our own good these days but being constantly tired is not normal. It can be an indicator of hormonal imbalance, adrenal fatigue, and thyroid dysfunction. Excess toxin buildup, inflammation, gut and microbiome imbalances, and too much stress can cause a sluggish liver, problems with your adrenals, low thyroid function, and make it hard for you to be in a relaxed state. A detox will encourage proper liver function and boost your energy.
Are you experiencing brain fog, difficulty concentrating or remembering things?
A sluggish liver, too many carbohydrates, excess toxins, a leaky gut, too much stress: these all decrease your ability to focus, feel clear, and remember details on a regular basis. When your body can’t eliminate toxins efficiently, or fast enough, they build up and increase inflammation. This inflammation can damage the blood-brain barrier, causing a “leaky brain,” brain fog, and slowed cognitive function. A detox will help decrease brain inflammation, flush out toxins, and enhance cognitive function.
Are you having difficulty managing stress?
Stress causes cravings, decreases digestive function, impairs detox abilities, and contributes to all kinds of health problems. You don’t want to be in a sympathetic state more often than a parasympathetic state, especially when you are eating, digesting, or trying to sleep. Stress can act as a trigger for everything from depression, anxiety, and anger to heart problems, blood sugar imbalance, and autoimmune conditions. While a food detox can help increase energy and decrease physical symptoms of stress, detoxing from excess stimuli like Wi-Fi, social media, and blue light can do wonders for your mental state.
Are you struggling to lose extra weight?
Once you make the commitment to improve your health, weight loss becomes a side benefit. Many people just focus on losing the weight, but you need to get healthy to lose weight, not lose weight to get healthy! Focusing on healing your gut, increasing metabolic health, and decreasing inflammation will help you access stored body fat. A detox will help to balance your gut microbiome too. When you don’t have enough good bacteria, bad bacteria take over, leading to inflammation and a slowed metabolism.
Do you have acne, eczema, or other skin issues?
Your skin is one of the major detox pathways of your body. Skin troubles are most often a sign of excess toxin buildup and a slowed detoxification system. When you have too many toxins it can overload your liver (your body’s main detoxing organ), creating sluggish detox pathways and resulting in more toxin build-up, which can show up in your skin. Also having a leaky gut and imbalanced microbiome can result in inflammation, which also appears on your skin. Your skin detoxes through sweating, so all of those problems can appear on the surface in the form of acne and other issues.
Do you eat healthy but still not feel healthy?
Many of us think that we are eating healthy, and may be doing fairly well, but are still feeling run down and don’t have our spark. Often it’s unknown food sensitivities that are damaging the gut and causing inflammation and toxin build-up. Even the healthiest foods can irritate and cause a flare-up. If you have poor gut health, your immunity will be weak as well. Doing a detox along with an elimination diet can give your body a break and help you determine exactly which foods cause you problems. A detox can also help flush out toxins that build up over time and contribute to health troubles.
Are you having difficulty falling asleep or staying asleep?
Underlying toxin build-up, sluggish liver and gall bladder function, and inflammation can all lead to hormonal issues that worsen circadian rhythms. The foods you eat and when you eat them will significantly affect your sleep. By paying attention to meal timing, reducing excess carbohydrates and processed foods, eliminating toxins and reducing inflammation you can greatly improve your sleep cycle. With a simple detox, you’ll be getting 7-9 hours of uninterrupted sleep before you know it.
Are you feeling moody, anxious or depressed?
Depression and anxiety oftentimes stem from a leaky gut, excessive inflammation, and compromised digestion. Undigested food particles and harmful by-products of pathogenic bacteria can cross the blood-brain barrier, triggering an inflammatory/autoimmune response. This can affect nerve conduction and neurotransmitter signaling, and can also decrease the production of key “feel good” neurotransmitters like serotonin and dopamine. A detox can help heal the gut and brain barriers to improve cognitive function, increase your joy, and create a sense of calm.
Failure to eliminate toxins efficiently from the body can lead to systemic toxicity, which ultimately results in damage to internal organs or tissues. It also results in a compromised immune system that is unable to effectively fight more serious charges on our health.
Additional Symptoms of Toxicity
Take a close look at the symptoms above and make a note of anything you are experiencing. If you can check more than one, I’d say its time to do something about it. You can even start by encouraging some gentle cleansing every day. And of course, by changing your diet and lifestyle you can make detoxing a part of your whole life.
Daily Detox Tip: What is the best way to start your day to encourage daily detox?
My advice is to have 12-24 ounces of room temperature or warm water each morning when you wake up to get your digestion started. In this glass of water, add any combination of the following options, according to your liking and tolerance:
1 Tablespoon of apple cider vinegar
1 Tablespoon of lemon or lime juice
¼–½ tsp of unrefined sea salt, especially if you experience fatigue or muscle cramping
1-2 drops peppermint essential oil or a few fresh mint leaves
1-2 drops lemon essential oil or some lemon peel
I call this drink my Pure Health Rejuvenator or Morning Gut Primer.
The typical Standard American Breakfast, consisting of juice, coffee, toast with or without cereal, is a sure way to induce cravings and irritability a couple of hours later. Not to mention the lack of nutrients and amount of refined food that will disrupt the body’s natural detox processes.
Some people think that making the switch to freshly squeezed fruit juice, gluten-free toast or multigrain cereal is a step towards greater health, but even with those changes a meal like this will imbalance blood sugar levels, create surges of insulin, disrupt hormones, and slow detox processes for the rest of the day.
Your body is still in its nightly detoxifying mode until 9 or 10 in the morning, so what you eat during this time matters more than most people realize. Ideally, it is best to wait to eat a couple of hours after you wake up, which provides time for a more complete detox process.
Once you feel ready to eat, I recommend starting the day with a breakfast dense in healthy fats, protein, and fiber. This not only allows for the management of blood sugar and insulin output, but it also creates consistent energy, fewer cravings, and more fat burning during the rest of the day.
There are tons of recipes and menu plans available online and in books; however, these often include imbalanced macronutrient options, such as just bacon and eggs without any vegetables, eggs with cheese and no fiber source, or low-fat sweetened yogurt or processed cottage cheese as the protein source. While these types of breakfasts may help to keep your blood sugar steady, they are deficient in micronutrients and antioxidants and do not provide good support for your immune system and detox processes. Additionally, if you are prone to constipation or sluggish bowels, a breakfast without any fiber may keep you sluggish for the whole day.
Better choices for your first meal of the day, called “breaking your fast”
• Protein from grass-fed, pasture raised animals or plant protein sources
• Healthy fats focusing on essential fatty acids balanced for omega-3 and -6
• Fiber from low starch vegetables, seeds, and nuts
• Greens for antioxidants and micronutrients
• Probiotics: homemade yogurt, kefir, fermented vegetables
When you eat meals balanced in macronutrients, your energy soars and you feel satiated for hours. Following the basic macronutrient guidelines above will help you personalize your breakfast options for the best blood sugar and hormone balance, and it allows the detox pathways to stay active. The goal is not to be hungry for 5 to 6 hours following your first meal of the day. Try breaking your fast with my Keto Green Smoothie, Baked Avocados, or No-Oat Oatmeal. These recipes are loaded with micronutrients and antioxidants, are low in carbohydrates, and are high enough in protein to leave you feeling well-nourished and balanced all day.
Does just the thought of a detox make you stressed or give you anxiety?
When the idea of detoxing and giving up certain foods makes you worried or anxious, that can be a sign of an unhealthy emotional attachment to food. Some of us use food to soothe, or as a reward or punishment, rather than as the healthy fuel our bodies need to thrive each day. Doing a real detox program can be a great way to help you determine whether your eating habits have emotional roots and to start addressing those habits. Are you ready to take some steps to change your relationship with what and how you eat?
If you want to learn more about your own health or feel ready to take your body to the next level, check out my Sweet Release Detox program or my Pure Keto Reset program, or sign up for a free discovery session with me to determine the best starting place for you.
|
Can My Pet Get Sick From Eating Raw Food?
When many people think of raw meat they also think of the many sicknesses raw meat can cause in humans (e.g. salmonella, E. coli), but what they fail to realize is that their pet dog’s or cat’s anatomy is completely different from our own!
Dogs and cats are predominantly carnivorous. Their systems are designed to handle and thrive on raw meat and have evolved over thousands of years doing so.
There are a couple of reasons why this is…
First, the hydrochloric acid in their stomachs is about 10 times stronger in concentration than that of any human, meaning most germs and bacteria that enter the stomach cannot survive.
Secondly, their gastrointestinal tract is very short in comparison to humans. This means any food that goes in through the mouth passes via stool in a matter of hours rather than days, giving any potential bacteria no chance to get settled in.
Think about what your dog or cat would be eating if for some reason they were stuck in the wild: mice, rats, rabbits, birds, possums, etc. They wouldn’t be waiting for a human to catch and cook it for them either! So I’m sorry to say, but your dog or cat was not born different from his brothers and sisters in a way that means he “cannot handle raw meat”; they are all the same. Some may have preferences in taste or in how WELL they handle it (solely kibble-fed pets take a bit longer to adjust), but they are all the same on the inside. So yes, even raw meat for cats is perfectly normal.
Having said all that, salmonella or E. coli can still infect pets, but it is extremely rare, and would usually be brought on by poor hygiene: not washing bowls where bacteria are thriving, eating poop, etc.
|
How To Swap Bad Habits for Healthier Ones
We all have some bad habits that we know we need to break in order to make better choices and lead a healthier lifestyle. Our habits, whether good or bad, all follow a pattern with three steps, easy to remember with the words reminder, routine, and reward. So in order to break a bad habit, or establish a good habit, you need to break this cycle. Think about the routine you have around a particular habit, and avoid it if it makes you more likely to fall back into the bad habit. When you do this, you can develop a pattern for some new and much healthier habits.
Changing habits is important for improving yourself as well as helping your health (both physical and mental health). So here are some of the things that you can do to help yourself to break any bad habits and how you can establish and get into the routine of starting healthier habits.
If you want to establish some healthy habits, then in order to help you to succeed with them, you need to create a solid foundation to help you to achieve them. You also need to make sure that you have the confidence to achieve it. If you are worried about achieving something, or you never think it will be possible, it is going to be much harder to ever make it part of your routine. For example, if you want to quit smoking, and you really want to for your health, you could go cold turkey and just stop. But if you don’t have the confidence to do that, then you could look to use something like CBD oil vape pens instead. It is an alternative that could help with something like anxiety, without the same addictive nature that cigarettes have, helping you to wean off the habit over time.
Make a plan
You are going to make it much more difficult for yourself to establish a new routine for something if you don’t set out a plan. Say that you want to establish a new exercise regime, for example. You can’t just say that is what you are going to do if you want it to work. You need to take time to plan it all out, so that you can really get going with it. What kind of exercise are you going to do? What time of day are you going to do it? If you know that at 7 am three mornings a week you are going to go running around the block, and then at 6 pm on Thursdays you are going to do an online Zumba class, then it is going to be much more likely to happen. Plan, decide, and put things in place to help you to establish your healthier habits.
Being ready to change is one of the key things. Nothing will happen if you don’t have the right mindset. So really take time to think about what you want to achieve, and then it will be much simpler to make it happen.
|
FAQ's about waste
What do you mean by waste segregation?
Waste segregation means keeping biodegradables and non-biodegradables separately, so that non-biodegradable waste can be recycled and biodegradable waste can be composted.
Why should I do it?
So that it reduces waste that gets landfilled and reduces pollution to air and water. Segregation enables the different processes (composting, recycling and incineration) to be applied to different kinds of waste.
How do I practice waste management at home?
• Keep separate containers for biodegradable and non-biodegradable waste in the kitchen.
• Keep glass/plastic containers rinsed of food matter.
• Send biodegradable waste out of the home daily.
• Store and send non-biodegradable waste out of the home once or twice a week, as applicable in your community.
• Store household hazardous waste, sanitary waste and inerts in a separate bin.
• Store e-waste separately.
What is non-biodegradable waste?
Paper, plastics, metal, glass, rubber, thermocol, Styrofoam, fabric, leather, rexine, wood – anything that can be kept for an extended period without it decomposing.
Will non-biodegradable waste smell if I store it for a week?
Not if it is clean and dry. Make sure that plastic sachets of milk, curds, oil, dosa/idli batter, or any food item are cleaned of all their contents and dried before being put in the non-biodegradable waste bag. Then they will never stink.
What are the first few steps to initiating waste management in an apartment complex?
• Form a group of like-minded people.
• Explain waste segregation to your family / neighbours in your apartment building.
• Get the staff in the apartment building to also understand the concept.
• Get separate storage drums for storing the non-biodegradable waste and biodegradable waste.
• Have the non-biodegradable waste picked up by the non-biodegradable waste collection centre or your local scrap dealer.
Will I have cockroach, rat and fly problems?
This will happen only if any food residue or organic matter is present in the non-biodegradable waste. Clean and dry non-biodegradable waste will not attract any vermin.
How do I store pizza and cake boxes?
Clean the pizza or cake boxes of all food residues – with a biodegradable kitchen cloth, or rinse them quickly in water – and let them dry out before putting them in the non-biodegradable bin.
How do I store pickle and sauce bottles?
Sauce bottles should be rinsed thoroughly with water. Pickle bottles need to be cleaned with soap and water, as they contain oil. Basically, no food residue must be left in the bottles. Clean them as you would to reuse them.
What do I do with milk packets, dosa/idli packets and yoghurt containers?
Clean them thoroughly. Open out the milk, yoghurt and dosa/idli batter packets completely at one end and wash out all the residue. They can be put to wash with the dishes in the sink, then dried, and put into the non-biodegradable waste bin.
If I order takeaway from a local eatery, do I have to rinse the plastic bags/containers?
Yes! Any plastic containing any food has to be rinsed, or washed with soap and water if required, before being put into the non-biodegradable waste bag.
Should I rinse my juice containers / Tetra Paks?
Yes, otherwise ants will be attracted to the sugar in the juice.
Will my biscuit/bread packet attract ants? How do I store them?
Make sure all the bread/biscuit crumbs are shaken out of the packet, so they do not attract ants. If the biscuits are too oily, the packet may need to be washed with soap and water.
What do I do with old clothes/shoes/handbags/belts/toys?
If they are still in usable condition, they should be given to any organization that collects them. If they are totally unusable, or extremely damaged, they are categorized as non-biodegradable waste.
If clothes are soiled with body fluids, they become sanitary waste. If they are soiled with paint or any chemicals, they are HHW (household hazardous waste). Both sanitary waste and HHW should be stored in the hazardous waste bin.
What do I do with old bed linen, mattresses, pillows, etc.?
Same as above.
What do I do with old furniture or a broken glass table?
Old furniture can be recycled. If not, it can be disposed of as debris or rubbish (inerts), along with broken glass.
What do I do with old crockery, non-stick pans, etc.?
If they are not broken, they are recyclable non-biodegradable waste. If broken, debris or rubbish (inerts).
What do I do with old taps or broken sanitary ware?
Old taps – recyclable non-biodegradable waste.
Broken sanitary ware – debris or rubbish (inerts).
What do I do with old brooms, floor-cleaning cloths, dry mops and bathroom cleaning brushes?
The bathroom cleaning brush is sanitary waste.
What is the best method of storing non-biodegradable waste?
Store it in a bag or bin in the utility area, after cleaning and drying, till it is picked up.
What is e-waste?
E-waste or electronic waste consists of batteries, computer parts, wires, electrical equipment of any kind, electrical and electronic toys, remotes, watches, cell phones, as well as bulbs, tube lights and CFLs.
How do I store e-waste?
Store it in a separate container that is kept closed, away from moisture, and in which nothing else is kept. It has to be collected periodically as a separate waste stream.
What do I do with my tube lights, CFLs and other bulbs?
Tube lights, bulbs and CFLs fall under the category of e-waste.
What is biodegradable waste?
Biodegradable waste consists of kitchen waste – including vegetable and fruit peels and pieces, tea leaves, coffee grounds, eggshells, bones and entrails, and fish scales – as well as cooked food (both veg and non-veg).
Can I compost at home?
Of course! Home composting can easily be done in any aerated container. Specialized kits are also commercially available.
I don’t have time to compost at home; what are my alternatives?
If you live in a large apartment building, a community composting system like tank composting or pit composting could be set up for all the biodegradable waste from the residents. If not, the biodegradable waste can be given out every day to the collection staff.
If I don’t use a plastic liner, how do I dispose of my food waste in the bin?
Before the advent of the bin liner, we would all put our garbage directly in the bin and wash it every day. That is what we will have to do now. The bin can be lined with a newspaper liner or a layer of sawdust if you don’t want to put the biodegradable waste directly into it.
What is hazardous waste?
HHW or household hazardous waste includes toxic substances such as paints, cleaning agents, solvents, insecticides and their containers, and other chemicals, as well as biomedical wastes like used syringes, expired medicines, thermometers and used cosmetics.
What is biomedical waste?
This includes used menstrual cloths, sanitary napkins, disposable diapers, bandages and any material that is contaminated with blood or other body fluids.
How do I dispose of sanitary pads and diapers?
They should be wrapped in newspaper, marked with a red cross and given along with the hazardous waste.
How do I dispose of expired medicines, injections, razors, condoms and soiled cotton?
Expired medicines and injections, used syringes and razors come under HHW or household hazardous waste. Condoms, soiled cotton, etc. come under sanitary waste – they should be wrapped in newspaper, marked with a red cross, and given along with the hazardous waste.
What do I do with waxing strips and cosmetics?
Used waxing strips are sanitary waste – they should be disposed of along with hazardous waste. Cosmetics come under household hazardous waste.
I have just painted my room. How do I dispose of half-used paint cans? What about pesticides, cleaning solutions and mosquito repellents?
They come under HHW or household hazardous waste. They should be stored separately from biodegradable and non-biodegradable waste.
How do I dispose of dog poop? In the case of loose motions, what is the best way to dispose of it?
It is considered sanitary waste, to be disposed of along with hazardous waste. In case the dog or cat has loose motions, do the same with the cloth used to mop up the liquid poop.
How do I dispose of human hair and nails?
They are considered sanitary waste.
What do I do with garden waste?
Small amounts can be mixed with the biodegradable waste. If it is a substantial quantity, it has to be handed over separately to the collection staff for use in a composting unit.
|
Special Reviews
The Sequence of the Human Genome
Science 16 Feb 2001:
Vol. 291, Issue 5507, pp. 1304-1351
DOI: 10.1126/science.1058040
Decoding of the DNA that constitutes the human genome has been widely anticipated for the contribution it will make toward understanding human evolution, the causation of disease, and the interplay between the environment and heredity in defining the human condition. A project with the goal of determining the complete nucleotide sequence of the human genome was first formally proposed in 1985 (1). In subsequent years, the idea met with mixed reactions in the scientific community (2). However, in 1990, the Human Genome Project (HGP) was officially initiated in the United States under the direction of the National Institutes of Health and the U.S. Department of Energy with a 15-year, $3 billion plan for completing the genome sequence. In 1998 we announced our intention to build a unique genome- sequencing facility, to determine the sequence of the human genome over a 3-year period. Here we report the penultimate milestone along the path toward that goal, a nearly complete sequence of the euchromatic portion of the human genome. The sequencing was performed by a whole-genome random shotgun method with subsequent assembly of the sequenced segments.
The modern history of DNA sequencing began in 1977, when Sanger reported his method for determining the order of nucleotides of DNA using chain-terminating nucleotide analogs (3). In the same year, the first human gene was isolated and sequenced (4). In 1986, Hood and co-workers (5) described an improvement in the Sanger sequencing method that included attaching fluorescent dyes to the nucleotides, which permitted them to be sequentially read by a computer. The first automated DNA sequencer, developed by Applied Biosystems in California in 1987, was shown to be successful when the sequences of two genes were obtained with this new technology (6). From early sequencing of human genomic regions (7), it became clear that cDNA sequences (which are reverse-transcribed from RNA) would be essential to annotate and validate gene predictions in the human genome. These studies were the basis in part for the development of the expressed sequence tag (EST) method of gene identification (8), which is a random selection, very high throughput sequencing approach to characterize cDNA libraries. The EST method led to the rapid discovery and mapping of human genes (9). The increasing numbers of human EST sequences necessitated the development of new computer algorithms to analyze large amounts of sequence data, and in 1993 at The Institute for Genomic Research (TIGR), an algorithm was developed that permitted assembly and analysis of hundreds of thousands of ESTs. This algorithm permitted characterization and annotation of human genes on the basis of 30,000 EST assemblies (10).
The complete 49-kbp bacteriophage lambda genome sequence was determined by a shotgun restriction digest method in 1982 (11). When considering methods for sequencing the smallpox virus genome in 1991 (12), a whole-genome shotgun sequencing method was discussed and subsequently rejected owing to the lack of appropriate software tools for genome assembly. However, in 1994, when a microbial genome-sequencing project was contemplated at TIGR, a whole-genome shotgun sequencing approach was considered possible with the TIGR EST assembly algorithm. In 1995, the 1.8-Mbp Haemophilus influenzae genome was completed by a whole-genome shotgun sequencing method (13). The experience with several subsequent genome-sequencing efforts established the broad applicability of this approach (14, 15).
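As a toy illustration of the shotgun idea only (not Celera's assembler, which must additionally cope with sequencing errors, repeats, and the mate-pair constraints discussed below), one can shred a sequence into random reads and greedily merge fragments by their longest suffix-prefix overlap. All names below are my own Python sketch:

import random

def shotgun_reads(genome, read_len, coverage):
    """Sample random fixed-length reads to the requested fold-coverage."""
    n_reads = coverage * len(genome) // read_len
    return [genome[i:i + read_len]
            for i in (random.randrange(len(genome) - read_len + 1)
                      for _ in range(n_reads))]

def overlap(a, b, min_olap):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), min_olap - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(reads, min_olap=15):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(set(reads))  # drop duplicate reads
    while len(frags) > 1:
        best_k = 0
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b, min_olap)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break  # whatever remains stays as separate contigs
        merged = frags[best_i] + frags[best_j][best_k:]
        frags = [f for idx, f in enumerate(frags)
                 if idx not in (best_i, best_j)] + [merged]
    return frags

random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(300))
contigs = greedy_assemble(shotgun_reads(genome, read_len=50, coverage=8))
# With enough coverage the largest contig approaches the full genome;
# the undersampled ends are the kind of gap that "finishing" must close.
print(len(contigs), max(len(c) for c in contigs), "of", len(genome))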
A key feature of the sequencing approach used for these megabase-size and larger genomes was the use of paired-end sequences (also called mate pairs), derived from subclone libraries with distinct insert sizes and cloning characteristics. Paired-end sequences are sequences 500 to 600 bp in length from both ends of double-stranded DNA clones of prescribed lengths. The success of using end sequences from long segments (18 to 20 kbp) of DNA cloned into bacteriophage lambda in assembly of the microbial genomes led to the suggestion (16) of an approach to simultaneously map and sequence the human genome by means of end sequences from 150-kbp bacterial artificial chromosomes (BACs) (17, 18). The end sequences spanned by known distances provide long-range continuity across the genome. A modification of the BAC end-sequencing (BES) method was applied successfully to complete chromosome 2 from the Arabidopsis thaliana genome (19).
In 1997, Weber and Myers (20) proposed whole-genome shotgun sequencing of the human genome. Their proposal was not well received (21). However, by early 1998, as less than 5% of the genome had been sequenced, it was clear that the rate of progress in human genome sequencing worldwide was very slow (22), and the prospects for finishing the genome by the 2005 goal were uncertain.
In early 1998, PE Biosystems (now Applied Biosystems) developed an automated, high-throughput capillary DNA sequencer, subsequently called the ABI PRISM 3700 DNA Analyzer. Discussions between PE Biosystems and TIGR scientists resulted in a plan to undertake the sequencing of the human genome with the 3700 DNA Analyzer and the whole-genome shotgun sequencing techniques developed at TIGR (23). Many of the principles of operation of a genome-sequencing facility were established in the TIGR facility (24). However, the facility envisioned for Celera would have a capacity roughly 50 times that of TIGR, and thus new developments were required for sample preparation and tracking and for whole-genome assembly. Some argued that the required 150-fold scale-up from the H. influenzae genome to the human genome with its complex repeat sequences was not feasible (25). The Drosophila melanogaster genome was thus chosen as a test case for whole-genome assembly on a large and complex eukaryotic genome. In collaboration with Gerald Rubin and the Berkeley Drosophila Genome Project, the nucleotide sequence of the 120-Mbp euchromatic portion of the Drosophila genome was determined over a 1-year period (26–28). The Drosophila genome-sequencing effort resulted in two key findings: (i) that the assembly algorithms could generate chromosome assemblies with highly accurate order and orientation with substantially less than 10-fold coverage, and (ii) that undertaking multiple interim assemblies in place of one comprehensive final assembly was not of value.
These findings, together with the dramatic changes in the public genome effort subsequent to the formation of Celera (29), led to a modified whole-genome shotgun sequencing approach to the human genome. We initially proposed to do 10-fold sequence coverage of the genome over a 3-year period and to make interim assembled sequence data available quarterly. The modifications included a plan to perform random shotgun sequencing to ∼5-fold coverage and to use the unordered and unoriented BAC sequence fragments and subassemblies published in GenBank by the publicly funded genome effort (30) to accelerate the project. We also abandoned the quarterly announcements in the absence of interim assemblies to report.
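For intuition about these coverage numbers, the classic Lander-Waterman model treats read starts as Poisson, so at c-fold coverage the expected fraction of the genome left uncovered is e^(-c). A back-of-the-envelope check (my own aside, not a calculation from the paper):

import math

GENOME_BP = 3e9  # the ~3 billion bp cited in the text

for c in (5, 8, 13):  # coverage levels discussed here
    uncovered = math.exp(-c)  # expected uncovered fraction of bases
    print(f"{c}x: {uncovered:.4%} uncovered, "
          f"about {uncovered * GENOME_BP / 1e6:.1f} Mbp in gaps")

This is why even modest increases in coverage shrink the expected gap total dramatically, and why the remaining gaps cluster where local coverage fluctuates below the mean.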
Although this strategy provided a reasonable result very early that was consistent with a whole-genome shotgun assembly with eightfold coverage, the human genome sequence is not as finished as the Drosophila genome was with an effective 13-fold coverage. However, it became clear that even with this reduced coverage strategy, Celera could generate an accurately ordered and oriented scaffold sequence of the human genome in less than 1 year. Human genome sequencing was initiated 8 September 1999 and completed 17 June 2000. The first assembly was completed 25 June 2000, and the assembly reported here was completed 1 October 2000. Here we describe the whole-genome random shotgun sequencing effort applied to the human genome. We developed two different assembly approaches for assembling the ∼3 billion bp that make up the 23 pairs of chromosomes of the Homo sapiens genome. Any GenBank-derived data were shredded to remove potential bias to the final sequence from chimeric clones, foreign DNA contamination, or misassembled contigs. Insofar as a correctly and accurately assembled genome sequence with faithful order and orientation of contigs is essential for an accurate analysis of the human genetic code, we have devoted a considerable portion of this manuscript to the documentation of the quality of our reconstruction of the genome. We also describe our preliminary analysis of the human genetic code on the basis of computational methods. Figure 1 (see fold-out chart associated with this issue; files for each chromosome can be found in Web fig. 1 on Science Online at www.sciencemag.org/cgi/content/full/291/5507/1304/DC1) provides a graphical overview of the genome and the features encoded in it. The detailed manual curation and interpretation of the genome are just beginning.
To aid the reader in locating specific analytical sections, we have divided the paper into seven broad sections. A summary of the major results appears at the beginning of each section.
1. Sources of DNA and Sequencing Methods
2. Genome Assembly Strategy and Characterization
3. Gene Prediction and Annotation
4. Genome Structure
5. Genome Evolution
6. A Genome-Wide Examination of Sequence Variations
7. An Overview of the Predicted Protein-Coding Genes in the Human Genome
8. Conclusions
1 Sources of DNA and Sequencing Methods
Summary. This section discusses the rationale and ethical rules governing donor selection to ensure ethnic and gender diversity, along with the methodologies for DNA extraction and library construction. Plasmid library construction is the first critical step in shotgun sequencing. If the DNA libraries are not uniform in size, are chimeric, or do not randomly represent the genome, then the subsequent steps cannot accurately reconstruct the genome sequence. We used automated high-throughput DNA sequencing and a computational infrastructure that enabled efficient tracking of enormous amounts of sequence information (27.3 million sequence reads; 14.9 billion bp of sequence). Sequencing and tracking from both ends of plasmid clones from 2-, 10-, and 50-kbp libraries were essential to the computational reconstruction of the genome. Our evidence indicates that the accurate pairing rate of end sequences was greater than 98%.
Various policies of the United States and the World Medical Association, specifically the Declaration of Helsinki, offer recommendations for conducting experiments with human subjects. We convened an Institutional Review Board (IRB) (31) that helped us establish the protocol for obtaining and using human DNA and the informed consent process used to enroll research volunteers for the DNA-sequencing studies reported here. We adopted several steps and procedures to protect the privacy rights and confidentiality of the research subjects (donors). These included a two-stage consent process, a secure random alphanumeric coding system for specimens and records, circumscribed contact with the subjects by researchers, and options for off-site contact of donors. In addition, Celera applied for and received a Certificate of Confidentiality from the Department of Health and Human Services. This Certificate authorized Celera to protect the privacy of the individuals who volunteered to be donors as provided in Section 301(d) of the Public Health Service Act 42 U.S.C. 241(d).
Celera and the IRB believed that the initial version of a completed human genome should be a composite derived from multiple donors of diverse ethnic backgrounds. Prospective donors were asked, on a voluntary basis, to self-designate an ethnogeographic category (e.g., African-American, Chinese, Hispanic, Caucasian, etc.). We enrolled 21 donors (32).
Three basic items of information from each donor were recorded and linked by confidential code to the donated sample: age, sex, and self-designated ethnogeographic group. From females, ∼130 ml of whole, heparinized blood was collected. From males, ∼130 ml of whole, heparinized blood was collected, as well as five specimens of semen, collected over a 6-week period. Permanent lymphoblastoid cell lines were created by Epstein-Barr virus immortalization. DNA from five subjects was selected for genomic DNA sequencing: two males and three females—one African-American, one Asian-Chinese, one Hispanic-Mexican, and two Caucasians (see Web fig. 2 on Science Online at www.sciencemag.org/cgi/content/full/291/5507/1304/DC1). The decision of whose DNA to sequence was based on a complex mix of factors, including the goal of achieving diversity as well as technical issues such as the quality of the DNA libraries and availability of immortalized cell lines.
1.1 Library construction and sequencing
Central to the whole-genome shotgun sequencing process is preparation of high-quality plasmid libraries in a variety of insert sizes so that pairs of sequence reads (mates) are obtained, one read from each end of each plasmid insert. High-quality libraries have an equal representation of all parts of the genome, a small number of clones without inserts, and no contamination from such sources as the mitochondrial genome and Escherichia coli genomic DNA. DNA from each donor was used to construct plasmid libraries in one or more of three size classes: 2 kbp, 10 kbp, and 50 kbp (Table 1) (33).
Table 1
Celera-generated data input into assembly.
In designing the DNA-sequencing process, we focused on developing a simple system that could be implemented in a robust and reproducible manner and monitored effectively (Fig. 2) (34).
Figure 2
Flow diagram for sequencing pipeline. Samples are received, selected, and processed in compliance with standard operating procedures, with a focus on quality within and across departments. Each process has defined inputs and outputs with the capability to exchange samples and data with both internal and external entities according to defined quality guidelines. Manufacturing pipeline processes, products, quality control measures, and responsible parties are indicated and are described further in the text.
Current sequencing protocols are based on the dideoxy sequencing method (35), which typically yields only 500 to 750 bp of sequence per reaction. This limitation on read length has made monumental gains in throughput a prerequisite for the analysis of large eukaryotic genomes. We accomplished this at the Celera facility, which occupies about 30,000 square feet of laboratory space and produces sequence data continuously at a rate of 175,000 total reads per day. The DNA-sequencing facility is supported by a high-performance computational facility (36).
The process for DNA sequencing was modular by design and automated. Intermodule sample backlogs allowed four principal modules to operate independently: (i) library transformation, plating, and colony picking; (ii) DNA template preparation; (iii) dideoxy sequencing reaction set-up and purification; and (iv) sequence determination with the ABI PRISM 3700 DNA Analyzer. Because the inputs and outputs of each module have been carefully matched and sample backlogs are continuously managed, sequencing has proceeded without a single day's interruption since the initiation of the Drosophila project in May 1999. The ABI 3700 is a fully automated capillary array sequencer and as such can be operated with a minimal amount of hands-on time, currently estimated at about 15 min per day. The capillary system also facilitates correct associations of sequencing traces with samples through the elimination of manual sample loading and lane-tracking errors associated with slab gels. About 65 production staff were hired and trained, and were rotated on a regular basis through the four production modules. A central laboratory information management system (LIMS) tracked all sample plates by unique bar code identifiers. The facility was supported by a quality control team that performed raw material and in-process testing and a quality assurance group with responsibilities including document control, validation, and auditing of the facility. Critical to the success of the scale-up was the validation of all software and instrumentation before implementation, and production-scale testing of any process changes.
1.2 Trace processing
An automated trace-processing pipeline has been developed to process each sequence file (37). After quality and vector trimming, the average trimmed sequence length was 543 bp, and the sequencing accuracy was exponentially distributed with a mean of 99.5% and with less than 1 in 1000 reads being less than 98% accurate (26). Each trimmed sequence was screened for matches to contaminants, including sequences of vector alone, E. coli genomic DNA, and human mitochondrial DNA. The entire read for any sequence with a significant match to a contaminant was discarded. A total of 713 reads matched E. coli genomic DNA and 2114 reads matched the human mitochondrial genome.
1.3 Quality assessment and control
The importance of the base-pair level accuracy of the sequence data increases as the size and repetitive nature of the genome to be sequenced increases. Each sequence read must be placed uniquely in the genome, and even a modest error rate can reduce the effectiveness of assembly. In addition, maintaining the validity of mate-pair information is absolutely critical for the algorithms described below. Procedural controls were established for maintaining the validity of sequence mate-pairs as sequencing reactions proceeded through the process, including strict rules built into the LIMS. The accuracy of sequence data produced by the Celera process was validated in the course of the Drosophila genome project (26). By collecting data for the entire human genome in a single facility, we were able to ensure uniform quality standards and the cost advantages associated with automation, an economy of scale, and process consistency.
2 Genome Assembly Strategy and Characterization
Summary. We describe in this section the two approaches that we used to assemble the genome. One method involves the computational combination of all sequence reads with shredded data from GenBank to generate an independent, nonbiased view of the genome. The second approach involves clustering all of the fragments to a region or chromosome on the basis of mapping information. The clustered data were then shredded and subjected to computational assembly. Both approaches provided essentially the same reconstruction of assembled DNA sequence with proper order and orientation. The second method provided slightly greater sequence coverage (fewer gaps) and was the principal sequence used for the analysis phase. In addition, we document the completeness and correctness of this assembly process and provide a comparison to the public genome sequence, which was reconstructed largely by an independent BAC-by-BAC approach. Our assemblies effectively covered the euchromatic regions of the human chromosomes. More than 90% of the genome was in scaffold assemblies of 100,000 bp or greater, and 25% of the genome was in scaffolds of 10 million bp or larger.
Shotgun sequence assembly is a classic example of an inverse problem: given a set of reads randomly sampled from a target sequence, reconstruct the order and the position of those reads in the target. Genome assembly algorithms developed for Drosophila have now been extended to assemble the ∼25-fold larger human genome. Celera assemblies consist of a set of contigs that are ordered and oriented into scaffolds that are then mapped to chromosomal locations by using known markers. The contigs consist of a collection of overlapping sequence reads that provide a consensus reconstruction for a contiguous interval of the genome. Mate pairs are a central component of the assembly strategy. They are used to produce scaffolds in which the size of gaps between consecutive contigs is known with reasonable precision. This is accomplished by observing that a pair of reads, one of which is in one contig, and the other of which is in another, implies an orientation and distance between the two contigs (Fig. 3). Finally, our assemblies did not incorporate all reads into the final set of reported scaffolds. This set of unincorporated reads is termed “chaff,” and typically consisted of reads from within highly repetitive regions, data from other organisms introduced through various routes as found in many genome projects, and data of poor quality or with untrimmed vector.
Figure 3
Anatomy of whole-genome assembly. Overlapping shredded bactig fragments (red lines) and internally derived reads from five different individuals (black lines) are combined to produce a contig and a consensus sequence (green line). Contigs are connected into scaffolds (red) by using mate pair information. Scaffolds are then mapped to the genome (gray line) with STS (blue star) physical map information.
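To make the mate-pair constraint concrete, the following minimal Python sketch computes the gap and relative orientation implied by a single mate pair whose reads land in two different contigs. The function name, data layout, and coordinate conventions are illustrative assumptions on our part, not the production assembler's code.

```python
def implied_contig_link(read_a, read_b, insert_mean, insert_stddev):
    """Estimate the gap between two contigs from one mate pair.

    Each read is a dict {'contig_len': int, 'offset': int, 'forward': bool},
    where 'offset' is the read's 5' position within its contig. Mates are
    sequenced from opposite ends of the insert and point toward each other.
    """
    # Span from read A's 5' end to the right end of its contig.
    a_tail = read_a['contig_len'] - read_a['offset']
    # Span from the left end of contig B to read B's 5' end.
    b_head = read_b['offset']
    # Insert length = a_tail + gap + b_head, so:
    gap = insert_mean - (a_tail + b_head)
    # Mates should lie on opposite strands once the contigs are coplaced;
    # if they appear on the same strand, contig B must be flipped.
    flip_b = (read_a['forward'] == read_b['forward'])
    return gap, 3 * insert_stddev, flip_b

# One 10-kbp mate pair implies a ~1.9-kbp gap; contig B keeps its orientation.
print(implied_contig_link({'contig_len': 5000, 'offset': 1000, 'forward': True},
                          {'contig_len': 7000, 'offset': 4100, 'forward': False},
                          10000, 500))
```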
2.1 Assembly data sets
We used two independent sets of data for our assemblies. The first was a random shotgun data set of 27.27 million reads of average length 543 bp produced at Celera. This consisted largely of mate-pair reads from 16 libraries constructed from DNA samples taken from five different donors. Libraries with insert sizes of 2, 10, and 50 kbp were used. By looking at how mate pairs from a library were positioned in known sequenced stretches of the genome, we were able to characterize the range of insert sizes in each library and determine a mean and standard deviation. Table 1 details the number of reads, sequencing coverage, and clone coverage achieved by the data set. The clone coverage is the coverage of the genome in cloned DNA, considering the entire insert of each clone that has sequence from both ends. The clone coverage provides a measure of the amount of physical DNA coverage of the genome. Assuming a genome size of 2.9 Gbp, the Celera trimmed sequences gave a 5.1× coverage of the genome, and clone coverage was 3.42×, 16.40×, and 18.84× for the 2-, 10-, and 50-kbp libraries, respectively, for a total of 38.7× clone coverage.
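As a quick illustration of how these coverage figures relate, the arithmetic below reproduces the quoted sequence coverage from the read count and mean read length, and shows the corresponding clone-coverage calculation for a 50-kbp library. The clone count used is a hypothetical value chosen to match the quoted figure, not a number taken from Table 1.

```python
GENOME = 2.9e9                       # assumed genome size, bp

reads, mean_len = 27.27e6, 543
print(reads * mean_len / GENOME)     # ~5.1x sequence coverage

# Clone coverage counts the whole insert of each end-sequenced clone.
clones_50k = 1.09e6                  # hypothetical 50-kbp clone count
print(clones_50k * 50_000 / GENOME)  # ~18.8x clone coverage
```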
The second data set was from the publicly funded Human Genome Project (PFP) and is primarily derived from BAC clones (30). The BAC data input to the assemblies came from a download of GenBank on 1 September 2000 (Table 2) totaling 4443.3 Mbp of sequence. The data for each BAC is deposited at one of four levels of completion. Phase 0 data are a set of generally unassembled sequencing reads from a very light shotgun of the BAC, typically less than 1×. Phase 1 data are unordered assemblies of contigs, which we call BAC contigs or bactigs. Phase 2 data are ordered assemblies of bactigs. Phase 3 data are complete BAC sequences. In the past 2 years the PFP has focused on a product of lower quality and completeness, but on a faster time-course, by concentrating on the production of Phase 1 data from a 3× to 4× light-shotgun of each BAC clone.
Table 2
GenBank data input into assembly.
We screened the bactig sequences for contaminants by using the BLAST algorithm against three data sets: (i) vector sequences in UniVec core (38), filtered for a 25-bp match at 98% sequence identity at the ends of the sequence and a 30-bp match internal to the sequence; (ii) the nonhuman portion of the High Throughput Genomic (HTG) Sequences division of GenBank (39), filtered at 200 bp at 98%; and (iii) the nonredundant nucleotide sequences from GenBank without primate and human virus entries, filtered at 200 bp at 98%. Whenever 25 bp or more of vector was found within 50 bp of the end of a contig, the tip up to the matching vector was excised. Under these criteria we removed 2.6 Mbp of possible contaminant and vector from the Phase 3 data, 61.0 Mbp from the Phase 1 and 2 data, and 16.1 Mbp from the Phase 0 data (Table 2). This left us with a total of 4363.7 Mbp of PFP sequence data: 20% finished, 75% rough-draft (Phase 1 and 2), and 5% single sequencing reads (Phase 0). An additional 104,018 BAC end-sequence mate pairs were also downloaded and included in the data sets for both assembly processes (18).
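The vector-excision rule described above can be stated compactly in code. The sketch below is our paraphrase of that rule only (function and variable names are ours), and the match coordinates are assumed to come from a separate BLAST screen.

```python
def trim_vector_tip(contig, match_start, match_end):
    """Excise a contig tip when >= 25 bp of vector lies within 50 bp of an end.

    contig: sequence string; match_start/match_end: 0-based, half-open
    coordinates of a vector hit reported by the contaminant screen.
    """
    if match_end - match_start < 25:
        return contig                     # hit too short to trigger the rule
    if match_start < 50:                  # vector near the 5' end
        return contig[match_end:]         # drop the tip up to the match
    if len(contig) - match_end < 50:      # vector near the 3' end
        return contig[:match_start]
    return contig                         # internal hits handled elsewhere
```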
2.2 Assembly strategies
Two different approaches to assembly were pursued. The first was a whole-genome assembly process that used Celera data and the PFP data in the form of additional synthetic shotgun data, and the second was a compartmentalized assembly process that first partitioned the Celera and PFP data into sets localized to large chromosomal segments and then performed ab initio shotgun assembly on each set. Figure 4 gives a schematic of the overall process flow.
Figure 4
Architecture of Celera's two-pronged assembly strategy. Each oval denotes a computation process performing the function indicated by its label, with the labels on arcs between ovals describing the nature of the objects produced and/or consumed by a process. This figure summarizes the discussion in the text that defines the terms and phrases used.
For the whole-genome assembly, the PFP data was first disassembled or “shredded” into a synthetic shotgun data set of 550-bp reads that form a perfect 2× covering of the bactigs. This resulted in 16.05 million “faux” reads that were sufficient to cover the genome 2.96× because of redundancy in the BAC data set, without incorporating the biases inherent in the PFP assembly process. The combined data set of 43.32 million reads (8×), and all associated mate-pair information, were then subjected to our whole-genome assembly algorithm to produce a reconstruction of the genome. Neither the location of a BAC in the genome nor its assembly of bactigs was used in this process. Bactigs were shredded into reads because we found strong evidence that 2.13% of them were misassembled (40). Furthermore, BAC location information was ignored because some BACs were not correctly placed on the PFP physical map and because we found strong evidence that at least 2.2% of the BACs contained sequence data that were not part of the given BAC (41), possibly as a result of sample-tracking errors (see below). In short, we performed a true, ab initio whole-genome assembly in which we took the expedient of deriving additional sequence coverage, but not mate pairs, assembled bactigs, or genome locality, from some externally generated data.
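A minimal sketch of the shredding step follows: stepping a 550-bp window by half its length yields a uniform 2× tiling of each bactig. The implementation details are our own illustration of the idea, not the production shredder.

```python
def shred(bactig, read_len=550, depth=2):
    """Cut an assembled bactig into overlapping faux reads at ~depth-fold coverage."""
    step = read_len // depth              # 275-bp step -> each base covered ~2x
    if len(bactig) <= read_len:
        return [bactig]
    reads = [bactig[i:i + read_len]
             for i in range(0, len(bactig) - read_len + 1, step)]
    if (len(bactig) - read_len) % step:   # keep the final bases covered
        reads.append(bactig[-read_len:])
    return reads
```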
In the compartmentalized shotgun assembly (CSA), Celera and PFP data were partitioned into the largest possible chromosomal segments or “components” that could be determined with confidence, and then shotgun assembly was applied to each partitioned subset wherein the bactig data were again shredded into faux reads to ensure an independent ab initio assembly of the component. By subsetting the data in this way, the overall computational effort was reduced and the effect of interchromosomal duplications was ameliorated. This also resulted in a reconstruction of the genome that was relatively independent of the whole-genome assembly results so that the two assemblies could be compared for consistency. The quality of the partitioning into components was crucial so that different genome regions were not mixed together. We constructed components from (i) the longest scaffolds of the sequence from each BAC and (ii) assembled scaffolds of data unique to Celera's data set. The BAC assemblies were obtained by a combining assembler that used the bactigs and the 5× Celera data mapped to those bactigs as input. This effort was undertaken as an interim step solely because the more accurate and complete the scaffold for a given sequence stretch, the more accurately one can tile these scaffolds into contiguous components on the basis of sequence overlap and mate-pair information. We further visually inspected and curated the scaffold tiling of the components to further increase its accuracy. For the final CSA assembly, all but the partitioning was ignored, and an independent, ab initio reconstruction of the sequence in each component was obtained by applying our whole-genome assembly algorithm to the partitioned, relevant Celera data and the shredded, faux reads of the partitioned, relevant bactig data.
2.3 Whole-genome assembly
The algorithms used for whole-genome assembly (WGA) of the human genome were enhancements to those used to produce the sequence of the Drosophila genome reported in detail in (28).
The WGA assembler is a pipeline composed of five principal stages: the Screener, Overlapper, Unitigger, Scaffolder, and Repeat Resolver. The Screener finds and marks all microsatellite repeats with less than a 6-bp element and screens out all known interspersed repeat elements, including Alu, LINE, and ribosomal DNA. Marked regions are searched for overlaps, whereas screened regions are not searched but can be part of an overlap that involves unscreened matching segments.
The Overlapper compares every read against every other read in search of complete end-to-end overlaps of at least 40 bp and with no more than 6% differences in the match. Because all data are scrupulously vector-trimmed, the Overlapper can insist on complete overlap matches. Computing the set of all overlaps took roughly 10,000 CPU hours with a suite of four-processor Alpha SMPs with 4 gigabytes of RAM. This took 4 to 5 days in elapsed time with 40 such machines operating in parallel.
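As a sketch, assuming the alignment itself is computed elsewhere, the acceptance test for a complete (dovetail) overlap reduces to a few checks. Names and argument conventions below are ours, not the Overlapper's interface.

```python
def accept_overlap(align_end_a, len_a, align_start_b, overlap_len, diffs):
    """Accept an overlap only if it is end-to-end, >= 40 bp, and <= 6% different.

    align_end_a: where the alignment stops on read A (must reach A's end);
    align_start_b: where it starts on read B (must be B's start).
    """
    if align_end_a != len_a or align_start_b != 0:
        return False                      # not a complete dovetail overlap
    if overlap_len < 40:
        return False
    return diffs / overlap_len <= 0.06    # at most 6% mismatch

print(accept_overlap(700, 700, 0, 120, 5))   # True: ~4.2% differences
```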
Every overlap computed above is statistically a 1-in-10¹⁷ event and thus not a coincidental event. What makes assembly combinatorially difficult is that while many overlaps are actually sampled from overlapping regions of the genome, and thus imply that the sequence reads should be assembled together, even more overlaps are actually from two distinct copies of a low-copy repeated element not screened above, thus constituting an error if put together. We call the former “true overlaps” and the latter “repeat-induced overlaps.” The assembler must avoid choosing repeat-induced overlaps, especially early in the process.
We achieve this objective in the Unitigger. We first find all assemblies of reads that appear to be uncontested with respect to all other reads. We call the contigs formed from these subassemblies unitigs (for uniquely assembled contigs). Formally, these unitigs are the uncontested interval subgraphs of the graph of all overlaps (42). Unfortunately, although empirically many of these assemblies are correct (and thus involve only true overlaps), some are in fact collections of reads from several copies of a repetitive element that have been overcollapsed into a single subassembly. However, the overcollapsed unitigs are easily identified because their average coverage depth is too high to be consistent with the overall level of sequence coverage. We developed a simple statistical discriminator that gives the logarithm of the odds ratio that a unitig is composed of unique DNA or of a repeat consisting of two or more copies. The discriminator, set to a sufficiently stringent threshold, identifies a subset of the unitigs that we are certain are correct. In addition, a second, less stringent threshold identifies a subset of remaining unitigs very likely to be correctly assembled, of which we select those that will consistently scaffold (see below), and thus are again almost certain to be correct. We call the union of these two sets U-unitigs. Empirically, we found from a 6× simulated shotgun of human chromosome 22 that we get U-unitigs covering 98% of the stretches of unique DNA that are >2 kbp long. We are further able to identify the boundary of the start of a repetitive element at the ends of a U-unitig and leverage this so that U-unitigs span more than 93% of all singly interspersed Alu elements and other 100- to 400-bp repetitive segments.
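Under a Poisson model of read arrivals, this log-odds discriminator has a simple closed form: if read starts arrive at rate r = N/G per base in unique DNA and at 2r in a two-copy repeat, the log of the likelihood ratio for a unitig of length ρ containing k read starts reduces to rρ − k ln 2. The sketch below is our rendering of that calculation; the thresholds and names are illustrative.

```python
import math

def log_odds_unique(rho, k, n_reads, genome_len):
    """ln[ P(k starts | unique) / P(k starts | 2-copy repeat) ].

    rho: unitig length in bp; k: number of read starts it contains.
    Poisson(k; r*rho) vs. Poisson(k; 2*r*rho) simplifies to r*rho - k*ln 2.
    """
    r = n_reads / genome_len          # genome-wide read-start rate per bp
    return r * rho - k * math.log(2)

# A 10-kbp unitig at the global read-start density vs. double density:
r = 27.27e6 / 2.9e9                   # ~0.0094 starts per bp
print(log_odds_unique(10_000, round(r * 10_000), 27.27e6, 2.9e9))      # > 0: unique
print(log_odds_unique(10_000, round(2 * r * 10_000), 27.27e6, 2.9e9))  # < 0: repeat
```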
The result of running the Unitigger was thus a set of correctly assembled subcontigs covering an estimated 73.6% of the human genome. The Scaffolder then proceeded to use mate-pair information to link these together into scaffolds. When there are two or more mate pairs that imply that a given pair of U-unitigs are at a certain distance and orientation with respect to each other, the probability of this being wrong is again roughly 1 in 10¹⁰, assuming that mate pairs are false less than 2% of the time. Thus, one can with high confidence link together all U-unitigs that are linked by at least two 2- or 10-kbp mate pairs, producing intermediate-sized scaffolds that are then recursively linked together by confirming 50-kbp mate pairs and BAC end sequences. This process yielded scaffolds that are on the order of megabase pairs in size, with gaps between their contigs that generally correspond to repetitive elements and occasionally to small sequencing gaps. These scaffolds reconstruct the majority of the unique sequence within a genome.
For the Drosophila assembly, we engaged in a three-stage repeat resolution strategy where each stage was progressively more aggressive and thus more likely to make a mistake. For the human assembly, we continued to use the first “Rocks” substage where all unitigs with a good, but not definitive, discriminator score are placed in a scaffold gap. This was done with the condition that two or more mate pairs with one of their reads already in the scaffold unambiguously place the unitig in the given gap. We estimate the probability of inserting a unitig into an incorrect gap with this strategy to be less than 10⁻⁷ based on a probabilistic analysis.
We revised the ensuing “Stones” substage of the human assembly, making it more like the mechanism suggested in our earlier work (43). For each gap, every read R that is placed in the gap by virtue of its mated pair M being in a contig of the scaffold and implying R's placement is collected. Celera's mate-pairing information is correct more than 99% of the time. Thus, almost all, but not every one, of the reads in the set belong in the gap, and when a read does not belong it rarely agrees with the remainder of the reads. Therefore, we simply assemble this set of reads within the gap, eliminating any reads that conflict with the assembly. This operation proved much more reliable than the one it replaced for the Drosophila assembly; in the assembly of a simulated shotgun data set of human chromosome 22, all stones were placed correctly.
The final method of resolving gaps is to fill them with assembled BAC data that cover the gap. We call this external gap “walking.” We did not include the very aggressive “Pebbles” substage described in our Drosophila work, which made enough mistakes so as to produce repeat reconstructions for long interspersed elements whose quality was only 99.62% correct. We decided that for the human genome it was philosophically better not to introduce a step that was certain to produce less than 99.99% accuracy. The cost was a somewhat larger number of gaps of somewhat larger size.
At the final stage of the assembly process, and also at several intermediate points, a consensus sequence of every contig is produced. Our algorithm is driven by the principle of maximum parsimony, with quality-value–weighted measures for evaluating each base. The net effect is a Bayesian estimate of the correct base to report at each position. Consensus generation uses Celera data whenever it is present. In the event that no Celera data cover a given region, the BAC data sequence is used.
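A minimal sketch of quality-weighted consensus calling for a single alignment column follows, assuming Phred-style qualities (error probability 10^(−q/10)), a uniform error model, and equal priors. The production algorithm is more elaborate, and all names here are ours.

```python
import math
from collections import defaultdict

def call_consensus_base(column):
    """column: list of (base, quality) pairs from the reads covering one position."""
    loglik = defaultdict(float)
    for base, q in column:
        p_err = 10 ** (-q / 10)                 # Phred quality -> error probability
        for candidate in "ACGT":
            if candidate == base:
                loglik[candidate] += math.log(1 - p_err)
            else:
                loglik[candidate] += math.log(p_err / 3)  # uniform error model
    return max(loglik, key=loglik.get)          # maximum-likelihood base

print(call_consensus_base([("A", 30), ("A", 20), ("C", 10)]))  # -> A
```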
A key element of achieving a WGA of the human genome was to parallelize the Overlapper and the central consensus sequence–constructing subroutines. In addition, memory was a real issue: a straightforward application of the software we had built for Drosophila would have required a computer with 600 gigabytes of RAM. By making the Overlapper and Unitigger incremental, we were able to achieve the same computation with a maximum instantaneous usage of 28 gigabytes of RAM. Moreover, the incremental nature of the first three stages allowed us to continually update the state of this part of the computation as data were delivered and then perform a 7-day run to complete Scaffolding and Repeat Resolution whenever desired. For our assembly operations, the total compute infrastructure consists of 10 four-processor SMPs with 4 gigabytes of memory per cluster (Compaq's ES40, Regatta) and a 16-processor NUMA machine with 64 gigabytes of memory (Compaq's GS160, Wildfire). The total compute for a run of the assembler was roughly 20,000 CPU hours.
The assembly of Celera's data, together with the shredded bactig data, produced a set of scaffolds totaling 2.848 Gbp in span and consisting of 2.586 Gbp of sequence. The chaff, or set of reads not incorporated in the assembly, numbered 11.27 million (26%), which is consistent with our experience for Drosophila. More than 84% of the genome was covered by scaffolds >100 kbp long, and these averaged 91% sequence and 9% gaps with a total of 2.297 Gbp of sequence. There were a total of 93,857 gaps among the 1637 scaffolds >100 kbp. The average scaffold size was 1.5 Mbp, the average contig size was 24.06 kbp, and the average gap size was 2.43 kbp, where the distribution of each was essentially exponential. More than 50% of all gaps were less than 500 bp long, >62% of all gaps were less than 1 kbp long, and no gap was >100 kbp long. Similarly, more than 65% of the sequence is in contigs >30 kbp, more than 31% is in contigs >100 kbp, and the largest contig was 1.22 Mbp long. Table 3 gives detailed summary statistics for the structure of this assembly with a direct comparison to the compartmentalized shotgun assembly.
Table 3
Scaffold statistics for whole-genome and compartmentalized shotgun assemblies.
2.4 Compartmentalized shotgun assembly
In addition to the WGA approach, we pursued a localized assembly approach that was intended to subdivide the genome into segments, each of which could be shotgun assembled individually. We expected that this would help in resolution of large interchromosomal duplications and improve the statistics for calculating U-unitigs. The compartmentalized assembly process involved clustering Celera reads and bactigs into large, multiple megabase regions of the genome, and then running the WGA assembler on the Celera data and shredded, faux reads obtained from the bactig data.
The first phase of the CSA strategy was to separate Celera reads into those that matched the BAC contigs for a particular PFP BAC entry, and those that did not match any public data. Such matches must be guaranteed to properly place a Celera read, so all reads were first masked against a library of common repetitive elements, and only matches of at least 40 bp to unmasked portions of the read constituted a hit. Of Celera's 27.27 million reads, 20.76 million matched a bactig and another 0.62 million reads, which did not have any matches, were nonetheless identified as belonging in the region of the bactig's BAC because their mate matched the bactig. Of the remaining reads, 2.92 million were completely screened out and so could not be matched, but the other 2.97 million reads had unmasked sequence totaling 1.189 Gbp that were not found in the GenBank data set. Because the Celera data are 5.11× redundant, we estimate that 240 Mbp of unique Celera sequence is not in the GenBank data set.
In the next step of the CSA process, a combining assembler took the relevant 5× Celera reads and bactigs for a BAC entry, and produced an assembly of the combined data for that locale. These high-quality sequence reconstructions were a transient result whose utility was simply to provide more reliable information for the purposes of their tiling into sets of overlapping and adjacent scaffold sequences in the next step. In outline, the combining assembler first examines the set of matching Celera reads to determine if there are excessive pileups indicative of unscreened repetitive elements. Wherever these occur, reads in the repeat region whose mates have not been mapped to consistent positions are removed. Then all sets of mate pairs that consistently imply the same relative position of two bactigs are bundled into a link and weighted according to the number of mates in the bundle. A “greedy” strategy then attempts to order the bactigs by selecting bundles of mate-pairs in order of their weight. A selected mate-pair bundle can tie together two formative scaffolds. It is incorporated to form a single scaffold only if it is consistent with the majority of links between contigs of the scaffold. Once scaffolding is complete, gaps are filled by the “Stones” strategy described above for the WGA assembler.
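The greedy bundling step can be sketched as follows: bundles are visited in decreasing weight order, and a bundle joins two formative scaffolds only if it does not contradict the links already accepted. The union-find bookkeeping and the stubbed-out consistency test are our illustration, not the combining assembler itself.

```python
def greedy_scaffold(bundles, is_consistent=lambda a, b: True):
    """bundles: iterable of (weight, bactig_a, bactig_b) tuples.

    is_consistent: caller-supplied check that a join agrees with the
    majority of existing links (elided here; assumed available).
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    accepted = []
    for weight, a, b in sorted(bundles, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb and is_consistent(a, b):
            parent[ra] = rb                 # merge the two formative scaffolds
            accepted.append((a, b, weight))
    return accepted

print(greedy_scaffold([(5, "b1", "b2"), (3, "b2", "b3"), (2, "b1", "b3")]))
```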
The GenBank data for the Phase 1 and 2 BACs consisted of an average of 19.8 bactigs per BAC of average size 8099 bp. Application of the combining assembler resulted in individual Celera BAC assemblies being put together into an average of 1.83 scaffolds (median of 1 scaffold) consisting of an average of 8.57 contigs of average size 18,973 bp. In addition to defining the order and orientation of the sequence fragments, the combined result had 57% fewer gaps. For Phase 0 data, the average GenBank entry consisted of 91.52 reads of average length 784 bp. Application of the combining assembler resulted in an average of 54.8 scaffolds consisting of an average of 58.1 contigs of average size 873 bp. In short, some small amount of assembly took place, but not enough Celera data were matched to truly assemble the 0.5× to 1× data set represented by the typical Phase 0 BACs. The combining assembler was also applied to the Phase 3 BACs for SNP identification, confirmation of assembly, and localization of the Celera reads. The Phase 0 data suggest that a combined whole-genome shotgun data set and 1× light-shotgun of BACs will not yield good assembly of BAC regions; at least 3× light-shotgun of each BAC is needed.
The 5.89 million Celera fragments not matching the GenBank data were assembled with our whole-genome assembler. The assembly resulted in a set of scaffolds totaling 442 Mbp in span and consisting of 326 Mbp of sequence. More than 20% of the scaffolds were >5 kbp long, and these averaged 63% sequence and 37% gaps, with a total of 302 Mbp of sequence. All scaffolds >5 kbp were forwarded, along with all scaffolds produced by the combining assembler, to the subsequent tiling phase.
At this stage, we typically had one or two scaffolds for every BAC region constituting at least 95% of the relevant sequence, and a collection of disjoint Celera-unique scaffolds. The next step in developing the genome components was to determine the order and overlap tiling of these BAC and Celera-unique scaffolds across the genome. For this, we used Celera's 50-kbp mate-pair information, BAC-end pairs (18), and sequence tagged site (STS) markers (44) to provide long-range guidance and chromosome separation. Given the relatively manageable number of scaffolds, we chose not to produce this tiling in a fully automated manner, but to compute an initial tiling with a good heuristic and then use human curators to resolve discrepancies or missed join opportunities. To this end, we developed a graphical user interface that displayed the graph of tiling overlaps and the evidence for each. A human curator could then explore the implication of mapped STS data, dot-plots of sequence overlap, and a visual display of the mate-pair evidence supporting a given choice. The result of this process was a collection of “components,” where each component was a tiled set of BAC and Celera-unique scaffolds that had been curator-approved. The process resulted in 3845 components with an estimated span of 2.922 Gbp.
In order to generate the final CSA, we assembled each component with the WGA algorithm. As was done in the WGA process, the bactig data were shredded into a synthetic 2× shotgun data set in order to give the assembler the freedom to independently assemble the data. By using faux reads rather than bactigs, the assembly algorithm could correct errors in the assembly of bactigs and remove chimeric content in a PFP data entry. Chimeric or contaminating sequence (from another part of the genome) would not be incorporated into the reassembly of the component because it did not belong there. In effect, the previous steps in the CSA process served only to bring together Celera fragments and PFP data relevant to a large contiguous segment of the genome, wherein we applied the assembler used for WGA to produce an ab initio assembly of the region.
WGA assembly of the components resulted in a set of scaffolds totaling 2.906 Gbp in span and consisting of 2.654 Gbp of sequence. The chaff, or set of reads not incorporated into the assembly, numbered 6.17 million, or 22%. More than 90.0% of the genome was covered by scaffolds spanning >100 kbp, and these averaged 92.2% sequence and 7.8% gaps with a total of 2.492 Gbp of sequence. There were a total of 105,264 gaps among the 107,199 contigs that belong to the 1940 scaffolds spanning >100 kbp. The average scaffold size was 1.4 Mbp, the average contig size was 23.24 kbp, and the average gap size was 2.0 kbp, where each distribution of sizes was exponential; as such, averages tend to underrepresent the majority of the data. Figure 5 shows a histogram of the bases in scaffolds of various size ranges. Consider also that more than 49% of all gaps were <500 bp long, more than 62% of all gaps were <1 kbp, and all gaps were <100 kbp long. Similarly, more than 73% of the sequence is in contigs >30 kbp, more than 49% is in contigs >100 kbp, and the largest contig was 1.99 Mbp long. Table 3 provides summary statistics for the structure of this assembly with a direct comparison to the WGA assembly.
Figure 5
Distribution of scaffold sizes of the CSA. For each range of scaffold sizes, the percent of total sequence is indicated.
2.5 Comparison of the WGA and CSA scaffolds
Having obtained two assemblies of the human genome via independent computational processes (WGA and CSA), we compared scaffolds from the two assemblies as another means of investigating their completeness, consistency, and contiguity. From each assembly, a set of reference scaffolds containing at least 1000 fragments (Celera sequencing reads or bactig shreds) was obtained; this amounted to 2218 WGA scaffolds and 1717 CSA scaffolds, for a total of 2.087 Gbp and 2.474 Gbp. The sequence of each reference scaffold was compared to the sequence of all scaffolds from the other assembly with which it shared at least 20 fragments or at least 20% of the fragments of the smaller scaffold. For each such comparison, all matches of at least 200 bp with at most 2% mismatch were tabulated.
From this tabulation, we estimated the amount of unique sequence in each assembly in two ways. The first was to determine the number of bases of each assembly that were not covered by a matching segment in the other assembly. Some 82.5 Mbp of the WGA (3.95%) was not covered by the CSA, whereas 204.5 Mbp (8.26%) of the CSA was not covered by the WGA. This estimate did not require any consistency of the assemblies or any uniqueness of the matching segments. Thus, another analysis was conducted in which matches of less than 1 kbp between a pair of scaffolds were excluded unless they were confirmed by other matches having a consistent order and orientation. This gives some measure of consistent coverage: 1.982 Gbp (95.00%) of the WGA is covered by the CSA, and 2.169 Gbp (87.69%) of the CSA is covered by the WGA by this more stringent measure.
The comparison of WGA to CSA also permitted evaluation of scaffolds for structural inconsistencies. We looked for instances in which a large section of a scaffold from one assembly matched only one scaffold from the other assembly, but failed to match over the full length of the overlap implied by the matching segments. An initial set of candidates was identified automatically, and then each candidate was inspected by hand. From this process, we identified 31 instances in which the assemblies appear to disagree in a nonlocal fashion. These cases are being further evaluated to determine which assembly is in error and why.
In addition, we evaluated local inconsistencies of order or orientation. The following results exclude cases in which one contig in one assembly corresponds to more than one overlapping contig in the other assembly (as long as the order and orientation of the latter agree with the positions they match in the former). Most of these small rearrangements involved segments on the order of hundreds of base pairs and rarely >1 kbp. We found a total of 295 kbp (0.012%) in the CSA assembly that was locally inconsistent with the WGA assembly, whereas 2.108 Mbp (0.11%) in the WGA assembly was inconsistent with the CSA assembly.
The CSA assembly was a few percentage points better in terms of coverage and slightly more consistent than the WGA, because it was in effect performing a few thousand shotgun assemblies of megabase-sized problems, whereas the WGA was performing a shotgun assembly of a gigabase-sized problem. When one considers the increase of two-and-a-half orders of magnitude in problem size, the information loss between the two is remarkably small. Because the CSA was logistically easier to deliver and was the better of the two results available at the time when downstream analyses needed to begin, all subsequent analysis was performed on this assembly.
2.6 Mapping scaffolds to the genome
The final step in assembling the genome was to order and orient the scaffolds on the chromosomes. We first grouped scaffolds together on the basis of their order in the components from CSA. These grouped scaffolds were reordered by examining residual mate-pairing data between the scaffolds. We next mapped the scaffold groups onto the chromosome using physical mapping data. This step depends on having reliable high-resolution map information such that each scaffold will overlap multiple markers. There are two genome-wide types of map information available: high-density STS maps and fingerprint maps of BAC clones developed at Washington University (45). Among the genome-wide STS maps, GeneMap99 (GM99) has the most markers and therefore was most useful for mapping scaffolds. The two different mapping approaches are complementary to one another. The fingerprint maps should have better local order because they were built by comparison of overlapping BAC clones. On the other hand, GM99 should have a more reliable long-range order, because the framework markers were derived from well-validated genetic maps. Both types of maps were used as a reference for human curation of the components that were the input to the regional assembly, but they did not determine the order of sequences produced by the assembler.
In order to determine the effectiveness of the fingerprint maps and GM99 for mapping scaffolds, we first examined the reliability of these maps by comparison with large scaffolds. Only 1% of the STS markers on the 10 largest scaffolds (those >9 Mbp) were mapped to a different chromosome on GM99. Two percent of the STS markers disagreed in position by more than five framework bins. However, for the fingerprint maps, a 2% chromosome discrepancy was observed, and on average 23.8% of BAC locations in the scaffold sequence disagreed with fingerprint map placement by more than five BACs. When we examined the source of this discrepancy further, we found that most of it came from 4 of the 10 scaffolds, indicating that there is variation in the quality of either the map or the scaffolds. All four scaffolds, like the other six, were well supported by clone coverage analysis and showed the same low discrepancy rate with GM99, and thus we concluded that the fingerprint map global order in these cases was not reliable. Smaller scaffolds had a higher discordance rate with GM99 (4.21% of STSs were discordant by more than five framework bins), but a lower discordance rate with the fingerprint maps (11% of BACs disagreed with fingerprint maps by more than five BACs). This observation agrees with the clone coverage analysis (46) in that Celera scaffold construction was better supported by long-range mate pairs in larger scaffolds than in small scaffolds.
We created two orderings of Celera scaffolds on the basis of the markers (BAC or STS) on these maps. Where the order of scaffolds agreed between GM99 and the WashU BAC map, we had a high degree of confidence that that order was correct; these scaffolds were termed “anchor scaffolds.” Only scaffolds with a low overall discrepancy rate with both maps were considered anchor scaffolds. Scaffolds in GM99 bins were allowed to permute in their order to match WashU ordering, provided they did not violate their framework orders. Orientation of individual scaffolds was determined by the presence of multiple mapped markers with consistent order. Scaffolds with only one marker have insufficient information to assign orientation. We found 70.1% of the genome in anchored scaffolds, more than 99% of which are also oriented (Table 4). Because GM99 is of lower resolution than the WashU map, a number of scaffolds without STS matches could be ordered relative to the anchored scaffolds because they included sequence from the same or adjacent BACs on the WashU map. On the other hand, because of occasional WashU global ordering discrepancies, a number of scaffolds determined to be “unmappable” on the WashU map could be ordered relative to the anchored scaffolds with GM99. These scaffolds were termed “ordered scaffolds.” We found that 13.9% of the assembly could be ordered by these additional methods, and thus 84.0% of the genome was ordered unambiguously.
Table 4
Summary of scaffold mapping. Scaffolds were mapped to the genome with different levels of confidence (anchored scaffolds have the highest confidence; unmapped scaffolds have the lowest). Anchored scaffolds were consistently ordered by the WashU BAC map and GM99. Ordered scaffolds were consistently ordered by at least one of the following: the WashU BAC map, GM99, or component tiling path. Bounded scaffolds had order conflicts between at least two of the external maps, but their placements were adjacent to a neighboring anchored or ordered scaffold. Unmapped scaffolds had, at most, a chromosome assignment. The scaffold subcategories are given below each category.
Next, all scaffolds that could be placed, but not ordered, between anchors were assigned to the interval between the anchored scaffolds and were deemed to be “bounded” between them. For example, small scaffolds having STS hits from the same GeneMap bin or hitting the same BAC cannot be ordered relative to each other, but can be assigned a placement boundary relative to other anchored or ordered scaffolds. The remaining scaffolds either had no localization information, had conflicting information, or could only be assigned to a generic chromosome location. Using the above approaches, ∼98% of the genome was anchored, ordered, or bounded.
Finally, we assigned a location to each scaffold placed on a chromosome by spreading out the scaffolds along that chromosome. We assumed that the remaining unmapped scaffolds, constituting 2% of the genome, were distributed evenly across the genome. By dividing the sum of unmapped scaffold lengths by the number of mapped scaffolds, we arrived at an estimated interscaffold gap of 1483 bp. This gap was used to separate all the scaffolds on each chromosome and to assign each an offset on the chromosome.
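The spacing estimate is simple arithmetic, reproduced below with an assumed genome size and a hypothetical mapped-scaffold count chosen to approximate the quoted 1483-bp figure:

```python
unmapped_bp = 0.02 * 2.9e9        # ~58 Mbp of unmapped scaffold sequence
mapped_scaffolds = 39_000         # hypothetical count, for illustration only
print(unmapped_bp / mapped_scaffolds)   # ~1487 bp per interscaffold gap
```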
During the scaffold-mapping effort, we encountered many problems that resulted in additional quality assessment and validation analysis. At least 978 (3% of 33,173) BACs were believed to have sequence data from more than one location in the genome (47). This is consistent with the bactig chimerism analysis reported above in the Assembly Strategies section. These BACs could not be assigned to unique positions within the CSA assembly and thus could not be used for ordering scaffolds. Likewise, it was not always possible to assign STSs to unique locations in the assembly because of genome duplications, repetitive elements, and pseudogenes.
Because of the time required for an exhaustive search for a perfect overlap, CSA generated 21,607 intrascaffold gaps where the mate-pair data suggested that the contigs should overlap, but no overlap was found. These gaps were defined as a fixed 50 bp in length and make up 18.6% of the total 116,442 gaps in the CSA assembly.
We chose not to use the order of exons implied in cDNA or EST data as a way of ordering scaffolds. The rationale for not using these data was that doing so would have biased certain regions of the assembly by rearranging scaffolds to fit the transcript data and would have made validation of both the assembly and the gene definition processes more difficult.
2.7 Assembly and validation analysis
We analyzed the assembly of the genome from the perspectives of completeness (amount of coverage of the genome) and correctness (the structural accuracy of the order and orientation and the consensus sequence of the assembly).
Completeness. Completeness is defined as the percentage of the euchromatic sequence represented in the assembly. This cannot be known with absolute certainty until the euchromatin sequence has been completed. However, it is possible to estimate completeness on the basis of (i) the estimated sizes of intrascaffold gaps; (ii) coverage of the two published chromosomes, 21 and 22 (48, 49); and (iii) analysis of the percentage of an independent set of random sequences (STS markers) contained in the assembly. The whole-genome libraries contain heterochromatic sequence and, although no attempt has been made to assemble it, there may be instances of unique sequence embedded in regions of heterochromatin as were observed in Drosophila (50, 51).
The sequences of human chromosomes 21 and 22 have been completed to high quality and published (48, 49). Although this sequence served as input to the assembler, the finished sequence was shredded into a shotgun data set so that the assembler had the opportunity to assemble it differently from the original sequence in the case of structural polymorphisms or assembly errors in the BAC data. In particular, the assembler must be able to resolve repetitive elements at the scale of components (generally multimegabase in size), and so this comparison reveals the level to which the assembler resolves repeats. In certain areas, the assembly structure differs from the published versions of chromosomes 21 and 22 (see below). The consequence of the flexibility to assemble “finished” sequence differently on the basis of Celera data resulted in an assembly with more segments than the chromosome 21 and 22 sequences. We examined the reasons why there are more gaps in the Celera sequence than in chromosomes 21 and 22 and expect that they may be typical of gaps in other regions of the genome. In the Celera assembly, there are 25 scaffolds, each containing at least 10 kbp of sequence, that collectively span 94.3% of chromosome 21. Sixty-two scaffolds span 95.7% of chromosome 22. The total length of the gaps remaining in the Celera assembly for these two chromosomes is 3.4 Mbp. These gap sequences were analyzed by RepeatMasker and by searching against the entire genome assembly (52). About 50% of the gap sequence consisted of common repetitive elements identified by RepeatMasker; more than half of the remainder was lower copy number repeat elements.
A more global way of assessing completeness is to measure the content of an independent set of sequence data in the assembly. We compared 48,938 STS markers from GeneMap99 (51) to the scaffolds. Because these markers were not used in the assembly processes, they provided a truly independent measure of completeness. ePCR (53) and BLAST (54) were used to locate STSs on the assembled genome. We found 44,524 (91%) of the STSs in the mapped genome. An additional 2648 markers (5.4%) were found by searching the unassembled data or “chaff.” We identified 1283 STS markers (2.6%) not found in either Celera sequence or BAC data as of September 2000, raising the possibility that these markers may not be of human origin. If that were the case, the Celera assembled sequence would represent 93.4% of the human genome and the unassembled data 5.5%, for a total of 98.9% coverage. Similarly, we compared the CSA against 36,678 TNG radiation hybrid markers (55a) using the same method. We found that 32,371 markers (88%) were located in the mapped CSA scaffolds, with 2055 markers (5.6%) found in the remainder. This gave 94% coverage of the genome through another genome-wide survey.
Correctness. Correctness is defined as the structural and sequence accuracy of the assembly. Because the source sequences for the Celera data and the GenBank data are from different individuals, we could not directly compare the consensus sequence of the assembly against other finished sequence for determining sequencing accuracy at the nucleotide level, although this has been done for identifying polymorphisms as described in Section 6. The accuracy of the consensus sequence is at least 99.96% on the basis of a statistical estimate derived from the quality values of the underlying reads.
The structural consistency of the assembly can be measured by mate-pair analysis. In a correct assembly, every mated pair of sequencing reads should be located on the consensus sequence with the correct separation and orientation between the pairs. A pair is termed “valid” when the reads are in the correct orientation and the distance between them is within the mean ± 3 standard deviations of the distribution of insert sizes of the library from which the pair was sampled. A pair is termed “misoriented” when the reads are not correctly oriented, and is termed “misseparated” when the distance between the reads is not in the correct range but the reads are correctly oriented. The mean ± the standard deviation of each library used by the assembler was determined as described above. To validate these, we examined all reads mapped to the finished sequence of chromosome 21 (48) and determined how many incorrect mate pairs there were as a result of laboratory tracking errors and chimerism (two different segments of the genome cloned into the same plasmid), and how tight the distribution of insert sizes was for those that were correct (Table 5). The standard deviations for all Celera libraries were quite small, less than 15% of the insert length, with the exception of a few 50-kbp libraries. The 2- and 10-kbp libraries contained less than 2% invalid mate pairs, whereas the 50-kbp libraries were somewhat higher (∼10%). Thus, although the mate-pair information was not perfect, its accuracy was such that measuring valid, misoriented, and misseparated pairs with respect to a given assembly was deemed to be a reliable instrument for validation purposes, especially when several mate pairs confirm or deny an ordering.
Table 5
Mate-pair validation. Celera fragment sequences were mapped to the published sequence of chromosome 21. Each mate pair uniquely mapped was evaluated for correct orientation and placement (number of mate pairs tested). If the two mates had incorrect relative orientation or placement, they were considered invalid (number of invalid mate pairs).
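The classification of mate pairs into valid, misoriented, and misseparated follows directly from the definitions above and can be sketched in a few lines. The coordinate conventions (both reads mapped to the same assembly axis, mates expected to face each other) and all names are our assumptions.

```python
def classify_pair(pos_a, fwd_a, pos_b, fwd_b, mean, sd):
    """Classify one mate pair against its library's insert-size distribution."""
    (left_pos, left_fwd), (right_pos, right_fwd) = sorted(
        [(pos_a, fwd_a), (pos_b, fwd_b)])
    if not (left_fwd and not right_fwd):     # mates must point inward
        return "misoriented"
    separation = right_pos - left_pos
    if abs(separation - mean) <= 3 * sd:     # within mean +/- 3 SD
        return "valid"
    return "misseparated"

print(classify_pair(1_000, True, 11_050, False, 10_000, 500))   # valid
print(classify_pair(1_000, True, 11_050, True, 10_000, 500))    # misoriented
print(classify_pair(1_000, True, 31_000, False, 10_000, 500))   # misseparated
```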
The clone coverage of the genome was 39×, meaning that any given base pair was, on average, contained in 39 clones or, equivalently, spanned by 39 mate-paired reads. Areas of low clone coverage or areas with a high proportion of invalid mate pairs would indicate potential assembly problems. We computed the coverage of each base in the assembly by valid mate pairs (Table 6). In summary, for scaffolds >30 kbp in length, less than 1% of the Celera assembly was in regions of less than 3× clone coverage. Thus, more than 99% of the assembly, including order and orientation, is strongly supported by this measure alone.
Table 6
Genome-wide mate pair analysis of compartmentalized shotgun (CSA) and PFP assemblies.*
We examined the locations and number of all misoriented and misseparated mates. In addition to doing this analysis on the CSA assembly (as of 1 October 2000), we also performed a study of the PFP assembly as of 5 September 2000 (30, 55b). In this latter case, Celera mate pairs had to be mapped to the PFP assembly. To avoid mapping errors due to high-fidelity repeats, the only pairs mapped were those for which both reads matched at only one location with less than 6% differences. A threshold was set such that sets of five or more simultaneously invalid mate pairs indicated a potential breakpoint, where the construction of the two assemblies differed. The graphic comparison of the CSA chromosome 21 assembly with the published sequence (Fig. 6A) serves as a validation of this methodology. Blue tick marks in the panels indicate breakpoints. There was a similar (small) number of breakpoints on both chromosome sequences. The exception was 12 sets of scaffolds in the Celera assembly (a total of 3% of the chromosome length in 212 single-contig scaffolds) that were mapped to the wrong positions because they were too small to be mapped reliably. Figures 6 and 7 and Table 6 illustrate the mate-pair differences and breakpoints between the two assemblies. There was a higher percentage of misoriented and misseparated mate pairs in the large-insert libraries (50 kbp and BAC ends) than in the small-insert libraries in both assemblies (Table 6). The large-insert libraries are more likely to identify discrepancies simply because they span a larger segment of the genome. The graphic comparison between the two assemblies for chromosome 8 (Fig. 6, B and C) shows that there are many more breakpoints for the PFP assembly than for the Celera assembly. Figure 7 shows the breakpoint map (blue tick marks) for both assemblies of each chromosome in a side-by-side fashion. The order and orientation of Celera's assembly show substantially fewer breakpoints except on the two finished chromosomes. Figure 7 also depicts large gaps (>10 kbp) in both assemblies as red tick marks. In the CSA assembly, the sizes of all gaps have been estimated on the basis of the mate-pair data. Breakpoints can be caused by structural polymorphisms, because the two assemblies were derived from different human genomes. They also reflect the unfinished nature of both genome assemblies.
Figure 6
Comparison of the CSA and the PFP assembly. (A) All of chromosome 21, (B) all of chromosome 8, and (C) a 1-Mb region of chromosome 8 representing a single Celera scaffold. To generate the figure, Celera fragment sequences were mapped onto each assembly. The PFP assembly is indicated in the upper third of each panel; the Celera assembly is indicated in the lower third. In the center of the panel, green lines show Celera sequences that are in the same order and orientation in both assemblies and form the longest consistently ordered run of sequences. Yellow lines indicate sequence blocks that are in the same orientation, but out of order. Red lines indicate sequence blocks that are not in the same orientation. For clarity, in the latter two cases, lines are only drawn between segments of matching sequence that are at least 50 kbp long. The top and bottom thirds of each panel show the extent of Celera mate-pair violations (red, misoriented; yellow, incorrect distance between the mates) for each assembly grouped by library size. (Mate pairs that are within the correct distance, as expected from the mean library insert size, are omitted from the figure for clarity.) Predicted breakpoints, corresponding to stacks of violated mate pairs of the same type, are shown as blue ticks on each assembly axis. Runs of more than 10,000 Ns are shown as cyan bars. Plots of all 24 chromosomes can be seen in Web fig. 3 on Science Online at www.sciencemag.org/cgi/content/full/291/5507/1304/DC1.
Figure 7
Schematic view of the distribution of breakpoints and large gaps on all chromosomes. For each chromosome, the upper pair of lines represent the PFP assembly, and the lower pair of lines represent Celera's assembly. Blue tick marks represent breakpoints, whereas red tick marks represent a gap of larger than 10,000 bp. The number of breakpoints per chromosome is indicated in black, and the chromosome numbers in red.
3 Gene Prediction and Annotation
Summary. To enumerate the gene inventory, we developed an integrated, evidence-based approach named Otto. The evidence used to increase the likelihood of identifying genes includes regions conserved between the mouse and human genomes, similarity to ESTs or other mRNA-derived data, or similarity to other proteins. A comparison of Otto (combined Otto-RefSeq and Otto homology) with Genscan, a standard gene-prediction algorithm, showed greater sensitivity (0.78 versus 0.50) and specificity (0.93 versus 0.63) of Otto in the ability to define gene structure. Otto-predicted genes were complemented with a set of genes from three gene-prediction programs that exhibited weaker, but still significant, evidence that they may be expressed. Conservative criteria, requiring at least two lines of evidence, were used to define a set of 26,383 genes with good confidence that were used for more detailed analysis presented in the subsequent sections. Extensive manual curation to establish precise characterization of gene structure will be necessary to improve the results from this initial computational approach.
3.1 Automated gene annotation
A gene is a locus of cotranscribed exons. A single gene may give rise to multiple transcripts, and thus multiple distinct proteins with multiple functions, by means of alternative splicing and alternative transcription initiation and termination sites. Our cells are able to discern within the billions of base pairs of the genomic DNA the signals for initiating transcription and for splicing together exons separated by a few or hundreds of thousands of base pairs. The first step in characterizing the genome is to define the structure of each gene and each transcription unit.
The number of protein-coding genes in mammals has been controversial from the outset. Initial estimates based on reassociation data placed it between 30,000 and 40,000, whereas later estimates from the brain were >100,000 (56). More recent data from both the corporate and public sectors, based on EST, CpG island, and transcript density–based extrapolations, have not reduced this variance. The highest recent number of 142,634 genes emanates from a report from Incyte Pharmaceuticals, and is based on a combination of EST data and the association of ESTs with CpG islands (57). In stark contrast are three quite different, and much lower, estimates: one of ∼35,000 genes derived with genome-wide EST data and sampling procedures in conjunction with chromosome 22 data (58); another of 28,000 to 34,000 genes derived with a comparative methodology involving sequence conservation between humans and the puffer fish Tetraodon nigroviridis (59); and a figure of 35,000 genes, derived simply by extrapolating from the density of 770 known and predicted genes in the 67 Mbp of chromosomes 21 and 22 to the approximately 3-Gbp euchromatic genome.
The problem of computational identification of transcriptional units in genomic DNA sequence can be divided into two phases. The first is to partition the sequence into segments that are likely to correspond to individual genes. This is not trivial and is a weakness of most de novo gene-finding algorithms. It is also critical to determining the number of genes in the human gene inventory. The second challenge is to construct a gene model that reflects the probable structure of the transcript(s) encoded in the region. This can be done with reasonable accuracy when a full-length cDNA has been sequenced or a highly homologous protein sequence is known. De novo gene prediction, although less accurate, is the only way to find genes that are not represented by homologous proteins or ESTs. The following section describes the methods we have developed to address these problems for the prediction of protein-coding genes.
We have developed a rule-based expert system, called Otto, to identify and characterize genes in the human genome (60). Otto attempts to simulate in software the process that a human annotator uses to identify a gene and refine its structure. In the process of annotating a region of the genome, a human curator examines the evidence provided by the computational pipeline (described below) and examines how various types of evidence relate to one another. A curator puts different levels of confidence in different types of evidence and looks for certain patterns of evidence to support gene annotation. For example, a curator may examine homology to a number of ESTs and evaluate whether or not they can be connected into a longer, virtual mRNA. The curator would also evaluate the strength of the similarity and the contiguity of the match, in essence asking whether any ESTs cross splice-junctions and whether the edges of putative exons have consensus splice sites. This kind of manual annotation process was used to annotate the Drosophila genome.
The Otto system can promote observed evidence to a gene annotation in one of two ways. First, if the evidence includes a high-quality match to the sequence of a known gene [here defined as a human gene represented in a curated subset of the RefSeq database (61)], then Otto can promote this to a gene annotation. In the second method, Otto evaluates a broad spectrum of evidence and determines if this evidence is adequate to support promotion to a gene annotation. These processes are described below.
Initially, gene boundaries are predicted on the basis of examination of sets of overlapping protein and EST matches generated by a computational pipeline (62). This pipeline searches the scaffold sequences against protein, EST, and genome-sequence databases to define regions of sequence similarity and runs three de novo gene-prediction programs.
To identify likely gene boundaries, regions of the genome were partitioned by Otto on the basis of sequence matches identified by BLAST. Each of the database sequences matched in the region under analysis was compared by an algorithm that takes into account both the coordinates of the matching sequence and the sequence type (e.g., protein, EST, and so forth). The results were used to group the matches into bins of related sequences that may define a gene and to identify gene boundaries. During this process, multiple hits to the same region were collapsed into a coherent set of data by tracking the coverage of a region. For example, if a group of bases was represented by multiple overlapping ESTs, the union of the regions matched by the set of ESTs on the scaffold was marked as being supported by EST evidence. This resulted in a series of “gene bins,” each of which was believed to contain a single gene. One weakness of this initial implementation of the algorithm was in predicting gene boundaries in regions of tandemly duplicated genes. Gene clusters frequently resulted in homologous neighboring genes being joined together, producing annotations that artificially concatenated these gene models.
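For illustration, the coverage-tracking step reduces to a standard interval-union computation; the sketch below is a simplification with hypothetical names, and it shares the real algorithm's weakness of concatenating tandemly duplicated genes into one bin.

```python
# An illustrative sketch of the coverage-tracking step: overlapping EST (or
# protein) matches on a scaffold are collapsed into their union, and runs of
# merged matches become candidate "gene bins". Names are hypothetical.

def merge_matches(matches, max_gap=0):
    """Collapse overlapping/adjacent (start, end) matches into their union.
    Intervals separated by more than `max_gap` start a new bin."""
    bins = []
    for start, end in sorted(matches):
        if bins and start <= bins[-1][1] + max_gap:
            bins[-1][1] = max(bins[-1][1], end)  # extend the current bin
        else:
            bins.append([start, end])            # open a new bin
    return [tuple(b) for b in bins]

est_hits = [(120, 480), (450, 900), (880, 1400), (25000, 25600)]
print(merge_matches(est_hits))
# -> [(120, 1400), (25000, 25600)]: two gene bins, each presumed to hold one gene
```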
Next, known genes (those with exact matches of a full-length cDNA sequence to the genome) were identified, and the region corresponding to the cDNA was annotated as a predicted transcript. A subset of the curated human gene set RefSeq from the National Center for Biotechnology Information (NCBI) was included as a data set searched in the computational pipeline. If a RefSeq transcript matched the genome assembly for at least 50% of its length at >92% identity, then the SIM4 (63) alignment of the RefSeq transcript to the region of the genome under analysis was promoted to the status of an Otto annotation. Because the genome sequence has gaps and sequence errors such as frameshifts, it was not always possible to predict a transcript that agrees precisely with the experimentally determined cDNA sequence. A total of 6538 genes in our inventory were identified and transcripts predicted in this way.
Regions that have a substantial amount of sequence similarity, but do not match known genes, were analyzed by that part of the Otto system that uses the sequence similarity information to predict a transcript. Here, Otto evaluates evidence generated by the computational pipeline, corresponding to conservation between mouse and human genomic DNA, similarity to human transcripts (ESTs and cDNAs), similarity to rodent transcripts (ESTs and cDNAs), and similarity of the translation of human genomic DNA to known proteins to predict potential genes in the human genome. The sequence from the region of genomic DNA contained in a gene bin was extracted, and the subsequences supported by any homology evidence were marked (plus 100 bases flanking these regions). The other bases in the region, those not covered by any homology evidence, were replaced by N's. This sequence segment, with high confidence regions represented by the consensus genomic sequence and the remainder represented by N's, was then evaluated by Genscan to see if a consistent gene model could be generated. This procedure simplified the gene-prediction task by first establishing the boundary for the gene (not a strength of most gene-finding algorithms), and by eliminating regions with no supporting evidence. If Genscan returned a plausible gene model, it was further evaluated before being promoted to an “Otto” annotation. The final Genscan predictions were often quite different from the prediction that Genscan returned on the same region of native genomic sequence. A weakness of using Genscan to refine the gene model is the loss of valid, small exons from the final annotation.
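A minimal sketch of the masking step follows, assuming the evidence regions are available as coordinate spans; the 100-base flank follows the description above, while everything else is illustrative.

```python
# A minimal sketch of the masking step: bases not covered by homology
# evidence (plus 100-bp flanks) are replaced by N's before the gene bin is
# handed to Genscan. Interval details are illustrative.

def mask_to_evidence(sequence, evidence_spans, flank=100):
    """Keep bases inside evidence spans (widened by `flank`); replace the
    rest with 'N'."""
    keep = [False] * len(sequence)
    for start, end in evidence_spans:
        lo = max(0, start - flank)
        hi = min(len(sequence), end + flank)
        for i in range(lo, hi):
            keep[i] = True
    return "".join(c if k else "N" for c, k in zip(sequence, keep))

seq = "ACGT" * 200                     # an 800-bp toy gene bin
masked = mask_to_evidence(seq, [(300, 350)])
print(masked[150:210].count("N"))      # -> 50: bases before position 200 stay masked
```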
The next step in defining gene structures based on sequence similarity was to compare each predicted transcript with the homology-based evidence that was used in previous steps to evaluate the depth of evidence for each exon in the prediction. Internal exons were considered to be supported if they were covered by homology evidence to within ±10 bases of their edges. For first and last exons, the internal edge was required to be within 10 bases, but the external edge was allowed greater latitude to allow for 5′ and 3′ untranslated regions (UTRs). To be retained, a prediction for a multi-exon gene must either have evidence such that the total number of “hits,” as defined above, divided by the number of exons in the prediction is >0.66, or must correspond to a RefSeq sequence. A single-exon gene must be covered by at least three supporting hits (±10 bases on each side), and these must cover the complete predicted open reading frame. For a single-exon gene, we also required that the Genscan prediction include both a start and a stop codon. Gene models that did not meet these criteria were disregarded, and those that passed were promoted to Otto predictions. Homology-based Otto predictions do not contain 3′ and 5′ untranslated sequence. Although three de novo gene-finding programs [GRAIL, Genscan, and FgenesH (63)] were run as part of the computational analysis, the results of these programs were not directly used in making the Otto predictions. Otto predicted 11,226 additional genes by means of sequence similarity.
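The retention criteria can be expressed compactly; the sketch below encodes only the numeric thresholds stated above, with the edge-checking details elided and the function names hypothetical.

```python
# A hedged sketch of the retention rules described above. `supported_exons`
# counts exons whose edges are confirmed by homology evidence to within
# +/-10 bases (with looser outer edges for first and last exons); the edge
# checking itself is elided here and the names are illustrative.

def retain_multi_exon(supported_exons: int, total_exons: int,
                      is_refseq: bool) -> bool:
    """Keep a multi-exon model if >2/3 of its exons are supported,
    or if it corresponds to a RefSeq sequence."""
    return is_refseq or supported_exons / total_exons > 0.66

def retain_single_exon(supporting_hits: int, orf_fully_covered: bool,
                       has_start_and_stop: bool) -> bool:
    """A single-exon model needs >=3 supporting hits covering the whole
    predicted ORF, plus a Genscan-predicted start and stop codon."""
    return supporting_hits >= 3 and orf_fully_covered and has_start_and_stop

print(retain_multi_exon(5, 7, is_refseq=False))   # True: 5/7 ~ 0.71 > 0.66
print(retain_single_exon(2, True, True))          # False: too few hits
```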
3.2 Otto validation
To validate the Otto homology-based process and the method that Otto uses to define the structures of known genes, we compared transcripts predicted by Otto with their corresponding (and presumably correct) transcripts from a set of 4512 RefSeq transcripts for which there was a unique Sim4 alignment (Table 7). In order to evaluate the relative performance of Otto and Genscan, we made three comparisons. The first involved a determination of the accuracy of gene models predicted by Otto with only homology data other than the corresponding RefSeq sequence (Otto homology in Table 7). We measured the sensitivity (correctly predicted bases divided by the total length of the cDNA) and specificity (correctly predicted bases divided by the sum of the correctly and incorrectly predicted bases). Second, we examined the sensitivity and specificity of the Otto predictions that were made solely with the RefSeq sequence, which is the process that Otto uses to annotate known genes (Otto-RefSeq). And third, we determined the accuracy of the Genscan predictions corresponding to these RefSeq sequences. As expected, the alignment method (Otto-RefSeq) was the most accurate, and Otto homology performed better than Genscan by both criteria. Thus, 6.1% of true RefSeq nucleotides were not represented in the Otto-RefSeq annotations, and 2.7% of the nucleotides in the Otto-RefSeq transcripts were not contained in the original RefSeq transcripts. The discrepancies could come from legitimate differences between the Celera assembly and the RefSeq transcript due to polymorphisms, incomplete or incorrect data in the Celera assembly, errors introduced by Sim4 during the alignment process, or the presence of alternatively spliced forms in the data set used for the comparisons.
Table 7
Sensitivity and specificity of Otto and Genscan. Sensitivity and specificity were calculated by first aligning the prediction to the published RefSeq transcript, tallying the number (N) of uniquely aligned RefSeq bases. Sensitivity is the ratio of N to the length of the published RefSeq transcript. Specificity is the ratio of N to the length of the prediction. All differences are significant (Tukey HSD; P < 0.001).
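As a worked example of these two measures (with base counts chosen to echo the summary figures of 0.78 and 0.93 reported above):

```python
# A worked toy example of the sensitivity and specificity measures defined
# above, using base counts only (alignment details omitted).

def sensitivity(correct_bases: int, refseq_length: int) -> float:
    # correctly predicted bases / total length of the cDNA
    return correct_bases / refseq_length

def specificity(correct_bases: int, incorrect_bases: int) -> float:
    # correctly predicted bases / all predicted bases
    return correct_bases / (correct_bases + incorrect_bases)

# A prediction recovering 780 of 1000 RefSeq bases with 60 spurious bases:
print(sensitivity(780, 1000))   # 0.78
print(specificity(780, 60))     # ~0.93
```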
Because Otto uses an evidence-based approach to reconstruct genes, the absence of experimental evidence for intervening exons may inadvertently result in a set of exons that cannot be spliced together to give rise to a transcript. In such cases, Otto may “split genes” when in fact all the evidence should be combined into a single transcript. We also examined the tendency of these methods to incorrectly split gene predictions. These trends are shown in Fig. 8. Both RefSeq-based and homology-based predictions by Otto split known genes into fewer segments than did Genscan alone.
Figure 8
Analysis of split genes resulting from different annotation methods. A set of 4512 Sim4-based alignments of RefSeq transcripts to the genomic assembly was chosen (see the text for criteria), and the numbers of overlapping Genscan predictions, Otto (RefSeq only) annotations (based solely on Sim4-polished RefSeq alignments), and Otto (homology) annotations (produced by supplying all available evidence to Genscan) were tallied. These data show the degree to which multiple Genscan predictions and/or Otto annotations were associated with a single RefSeq transcript. The zero class for the Otto-homology predictions shown here indicates that the Otto-homology calls were made without recourse to the RefSeq transcript, and thus no Otto call was made because of insufficient evidence.
3.3 Gene number
Recognizing that the Otto system is quite conservative, we used a different gene-prediction strategy in regions where the homology evidence was less strong. Here the results of de novo gene predictions were used. For these genes, we insisted that a predicted transcript have at least two of the following types of evidence to be included in the gene set for further analysis: protein, human EST, rodent EST, or mouse genome fragment matches. This final class of predicted genes is a subset of the predictions made by the three gene-finding programs that were used in the computational pipeline. For these, there was not sufficient sequence similarity information for Otto to attempt to predict a gene structure. The three de novo gene-finding programs resulted in about 155,695 predictions, of which ∼76,410 were nonredundant (nonoverlapping with one another). Of these, 57,935 did not overlap known genes or predictions made by Otto. Only 21,350 of the gene predictions that did not overlap Otto predictions were partially supported by at least one type of sequence similarity evidence, and 8619 were partially supported by two types of evidence (Table 8).
Table 8
Numbers of exons and transcripts supported by various types of evidence for Otto and de novo gene prediction methods. Highlighted cells indicate the gene sets analyzed in this paper (boldface, set of genes selected for protein analysis; italic, total set of accepted de novo predictions).
The sum of this number (21,350) and the number of Otto annotations (17,764), 39,114, is near the upper limit for the human gene complement. As seen in Table 8, if the requirement for other supporting evidence is made more stringent, this number drops rapidly so that demanding two types of evidence reduces the total gene number to 26,383 and demanding three types reduces it to ∼23,000. Requiring that a prediction be supported by all four categories of evidence is too stringent because it would eliminate genes that encode novel proteins (members of currently undescribed protein families). No correction for pseudogenes has been made at this point in the analysis.
In a further attempt to identify genes that were not found by the autoannotation process or any of the de novo gene finders, we examined regions outside of gene predictions with similarity to EST sequences, where the EST matched the genomic sequence across a splice junction. After correcting for potential 3′ UTRs of predicted genes, about 2500 such regions remained. Adding a requirement for at least one of the following evidence types (homology to mouse genomic sequence fragments, rodent ESTs, or cDNAs) or similarity to a known protein reduced this number to 1010. Adding this to the numbers from the previous paragraph would give us estimates of about 40,000, 27,000, and 24,000 potential genes in the human genome, depending on the stringency of evidence considered. Table 8 illustrates the number of genes and presents the degree of confidence based on the supporting evidence. Transcripts encoded by a set of 26,383 genes were assembled for further analysis. This set includes the 6538 genes predicted by Otto on the basis of matches to known genes, 11,226 transcripts predicted by Otto based on homology evidence, and 8619 from the subset of transcripts from de novo gene-prediction programs that have two types of supporting evidence. The 26,383 genes are illustrated along chromosome diagrams in Fig. 1. These annotations are very preliminary and are subject to all the limitations of an automated process. Considerable refinement is still necessary to improve the accuracy of these transcript predictions. All the predictions and descriptions of genes and the associated evidence that we present are the product of completely computational processes, not expert curation. We have attempted to enumerate the genes in the human genome in such a way that we have different levels of confidence based on the amount of supporting evidence: known genes, genes with good protein or EST homology evidence, and de novo gene predictions confirmed by modest homology evidence.
3.4 Features of human gene transcripts
We estimate the average span for a “typical” gene in the human DNA sequence to be about 27,894 bases. This is based on the average span covered by RefSeq transcripts, used because it represents our highest confidence set.
The set of transcripts promoted to gene annotations varies in a number of ways. As can be seen from Table 8 and Fig. 9, transcripts predicted by Otto tend to be longer, having on average about 7.8 exons, whereas those promoted from gene-prediction programs average about 3.7 exons. The largest number of exons that we have identified in a transcript is 234 in the titin mRNA. Table 8 compares the amounts of evidence that support the Otto and other predicted transcripts. For example, one can see that a typical Otto transcript has 6.99 of its 7.81 exons supported by protein homology evidence. As would be expected, the Otto transcripts generally have more support than do transcripts predicted by the de novo methods.
Figure 9
Comparison of the number of exons per transcript between the 17,968 Otto transcripts and 21,350 de novo transcript predictions with at least one line of evidence that do not overlap with an Otto prediction. Both sets have the highest number of transcripts in the two-exon category, but the de novo gene predictions are skewed much more toward smaller transcripts. In the Otto set, 19.7% of the transcripts have one or two exons, and 5.7% have more than 20. In the de novo set, 49.3% of the transcripts have one or two exons, and 0.2% have more than 20.
4 Genome Structure
Summary. This section describes several of the noncoding attributes of the assembled genome sequence and their correlations with the predicted gene set. These include an analysis of G+C content and gene density in the context of cytogenetic maps of the genome, an enumerative analysis of CpG islands, and a brief description of the genome-wide repetitive elements.
4.1 Cytogenetic maps
Perhaps the most obvious, and certainly the most visible, element of the structure of the genome is the banding pattern produced by Giemsa stain. Chromosomal banding studies have revealed that about 17% to 20% of the human chromosome complement consists of C-bands, or constitutive heterochromatin (64). Much of this heterochromatin is highly polymorphic and consists of different families of alpha satellite DNAs with various higher order repeat structures (65). Many chromosomes have complex inter- and intrachromosomal duplications present in pericentromeric regions (66). About 5% of the sequence reads were identified as alpha satellite sequences; these were not included in the assembly. Examination of pericentromeric regions is ongoing.
The remaining ∼80% of the genome, the euchromatic component, is divisible into G-, R-, and T-bands (67). These cytogenetic bands have been presumed to differ in their nucleotide composition and gene density, although we have been unable to determine precise band boundaries at the molecular level. T-bands are the most G+C- and gene-rich, and G-bands are G+C-poor (68). Bernardi has also offered a description of the euchromatin at the molecular level as long stretches of DNA of differing base composition, termed isochores (denoted L, H1, H2, and H3), which are >300 kbp in length (69). Bernardi defined the L (light) isochores as G+C-poor (<43%), whereas the H (heavy) isochores fall into three G+C-rich classes representing 24, 8, and 5% of the genome. Gene concentration has been claimed to be very low in the L isochores and 20-fold more enriched in the H2 and H3 isochores (70). By examining contiguous 50-kbp windows of G+C content across the assembly, we found that regions of G+C content >48% (H3 isochores) averaged 273.9 kbp in length, those with G+C content between 43 and 48% (H1+H2 isochores) averaged 202.8 kbp in length, and the average span of regions with <43% (L isochores) was 1078.6 kbp. The correlation between G+C content and gene density was also examined in 50-kbp windows along the assembled sequence (Table 9 and Figs. 10 and 11). We found that the density of genes was greater in regions of high G+C than in regions of low G+C content, as expected. However, the correlation between G+C content and gene density was not as skewed as previously predicted (69). A higher proportion of genes were located in the G+C-poor regions than had been expected.
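For illustration, the windowed classification can be sketched as follows; the isochore thresholds are those given above, while the window handling and labels are simplifications.

```python
# An illustrative sketch of the windowed G+C analysis: 50-kbp windows are
# classified into isochore-like bands (<43%, 43-48%, >48% G+C); runs of
# same-class windows can then be merged into candidate isochore regions.

def gc_fraction(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def classify_windows(genome: str, window: int = 50_000):
    """Yield (start, isochore_class) for each full window."""
    for start in range(0, len(genome) - window + 1, window):
        gc = 100 * gc_fraction(genome[start:start + window])
        if gc < 43:
            label = "L"          # G+C-poor
        elif gc <= 48:
            label = "H1+H2"
        else:
            label = "H3"         # G+C-rich
        yield start, label

# Toy usage on a synthetic sequence:
import random
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(200_000))
print(list(classify_windows(genome)))
```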
Figure 10
Relation between G+C content and gene density. The blue bars show the percent of the genome (in 50-kbp windows) with the indicated G+C content. The percent of the total number of genes associated with each G+C bin is represented by the yellow bars. The graph shows that about 5% of the genome has a G+C content of between 50 and 55%, but that this portion contains nearly 15% of the genes.
Figure 11
Genome structural features. Relation among gene density (orange), G+C content (green), EST density (blue), and Alu density (pink) along the lengths of each of the chromosomes. Gene density was calculated in 1-Mbp windows. The percent of G+C nucleotides was calculated in 100-kbp windows. The number of ESTs and Alu elements is shown per 100-kbp window.
Table 9
Characteristics of G+C in isochores.
Chromosomes 17, 19, and 22, which have a disproportionate number of H3-containing bands, had the highest gene density (Table 10). Conversely, the chromosomes that we found to have the lowest gene density (X, 4, 18, 13, and Y) also have the fewest H3 bands. Chromosome 15, which also has few H3 bands, did not have a particularly low gene density in our analysis. In addition, chromosome 8, which we found to have a low gene density, does not appear to be unusual in its H3 banding.
Table 10
Features of the chromosomes. De novo/any refers to the union of de novo predictions that do not overlap Otto predictions and have at least one other type of supporting evidence; de novo/2x refers to the union of de novo predictions that do not overlap Otto predictions and have at least two types of evidence. Deserts are regions of sequence with no annotated genes.
How valid is Ohno's postulate (71) that mammalian genomes consist of oases of genes in otherwise essentially empty deserts? It appears that the human genome does indeed contain deserts, or large, gene-poor regions. If we define a desert as a region >500 kbp without a gene, then we see that 605 Mbp, or about 20% of the genome, is in deserts. These are not uniformly distributed over the various chromosomes. Gene-rich chromosomes 17, 19, and 22 have only about 12% of their collective 171 Mbp in deserts, whereas gene-poor chromosomes 4, 13, 18, and X have 27.5% of their 492 Mbp in deserts (Table 11). The apparent lack of predicted genes in these regions does not necessarily imply that they are devoid of biological function.
Table 11
Genome overview.
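The desert computation itself is a simple gap scan over sorted gene coordinates; a minimal sketch, with illustrative coordinates:

```python
# A minimal sketch of the desert computation: regions longer than 500 kbp
# with no annotated gene. Gene coordinates are illustrative.

def gene_deserts(gene_spans, chrom_length, min_len=500_000):
    """Return (start, end) gaps of at least `min_len` between annotated
    genes, including the chromosome ends."""
    deserts, prev_end = [], 0
    for start, end in sorted(gene_spans):
        if start - prev_end >= min_len:
            deserts.append((prev_end, start))
        prev_end = max(prev_end, end)
    if chrom_length - prev_end >= min_len:
        deserts.append((prev_end, chrom_length))
    return deserts

genes = [(100_000, 130_000), (900_000, 950_000)]
print(gene_deserts(genes, 2_000_000))
# -> [(130000, 900000), (950000, 2000000)]
```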
4.2 Linkage map
Linkage maps provide the basis for genetic analysis and are widely used in the study of the inheritance of traits and in the positional cloning of genes. The distance metric, centimorgans (cM), is based on the recombination rate between homologous chromosomes during meiosis. In general, the rate of recombination in females is greater than that in males, and this degree of map expansion is not uniform across the genome (72). One of the opportunities enabled by a nearly complete genome sequence is to produce the ultimate physical map, and to fully analyze its correspondence with two other maps that have been widely used in genome and genetic analysis: the linkage map and the cytogenetic map. This would close the loop between the mapping and sequencing phases of the genome project.
We mapped the locations of the markers that constitute the Genethon linkage map to the genome. The rate of recombination, expressed as cM per Mbp, was calculated for 3-Mbp windows as shown in Table 12. Higher rates of recombination in the telomeric regions of the chromosomes have been previously documented (73). From this mapping, there is a difference of 4.99 cM per Mbp between the lowest and highest regional rates, compared with a largest difference of 4.4 between males and females (4.99 versus 0.47 on chromosome 16). This indicates that the variability in recombination rates among regions of the genome exceeds the differences in recombination rates between males and females. The human genome has recombination hotspots, where recombination rates vary fivefold or more over a space of 1 kbp, so the picture one gets of the magnitude of variability in recombination rate will depend on the size of the window examined. Unfortunately, too few meiotic crossovers have occurred in the Centre d'Étude du Polymorphisme Humain (CEPH) and other reference families to provide a resolution finer than about 3 Mbp. The next challenge will be to determine the sequence basis of recombination at the chromosomal level. An accurate predictor of the variation in recombination rates between any pair of markers would be extremely useful in designing markers to narrow a region of linkage, such as in positional cloning projects.
Table 12
Rate of recombination per physical distance (cM/Mb) across the genome. Genethon markers were placed on CSA-mapped assemblies, and then relative physical distances and rates were calculated in 3-Mb windows for each chromosome. NA, not applicable.
4.3 Correlation between CpG islands and genes
CpG islands are stretches of unmethylated DNA with a higher frequency of CpG dinucleotides when compared with the entire genome (74). CpG islands are believed to occur preferentially at the transcriptional start of genes, and it has been observed that most housekeeping genes have CpG islands at the 5′ end of the transcript (75, 76). In addition, experimental evidence indicates that CpG island methylation is correlated with gene inactivation (77) and has been shown to be important during gene imprinting (78) and tissue-specific gene expression (79).
Experimental methods have produced an estimate of 30,000 to 45,000 CpG islands in the human genome (74, 80) and an estimate of 499 CpG islands on human chromosome 22 (81). Larsen et al. (76) and Gardiner-Garden and Frommer (75) used a computational method to identify CpG islands and defined them as regions of DNA of >200 bp that have a G+C content of >50% and a ratio of observed to expected CG dinucleotide frequency of ≥0.6.
It is difficult to make a direct comparison of experimental definitions of CpG islands with computational definitions because computational methods do not consider the methylation state of cytosine and experimental methods do not directly select regions of high G+C content. However, we can determine the correlation of CpG islands with gene starts, given a set of annotated genomic transcripts and the whole genome sequence. We have analyzed the publicly available annotation of chromosome 22, as well as the entire human genome in our assembly with its computationally annotated genes. We used a variation of the CpG island computation of Larsen et al. (76). The main differences are that we use a sliding window of 200 bp, consecutive windows are merged only if they overlap, and we recompute the CpG value upon merging, rejecting any potential island that scores less than the threshold.
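A hedged sketch of this sliding-window procedure follows; it implements the 200-bp window, overlap-only merging, and post-merge rescoring described above, but the exact scoring and boundary conventions of the production code may differ.

```python
# A hedged sketch of the sliding-window CpG island computation described
# above: 200-bp windows passing the test are merged only if they overlap,
# and each merged region is re-scored and rejected if below threshold.

def cpg_stats(seq: str):
    """Return (G+C fraction, observed/expected CpG ratio) for a sequence."""
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc = (c + g) / len(seq)
    obs_exp = (cpg * len(seq)) / (c * g) if c and g else 0.0
    return gc, obs_exp

def is_island(seq: str, min_ratio: float) -> bool:
    gc, obs_exp = cpg_stats(seq)
    return gc > 0.5 and obs_exp >= min_ratio

def find_cpg_islands(seq: str, window=200, min_ratio=0.6):
    """Slide a 200-bp window; merge overlapping passing windows, then
    re-test each merged region (method 1 uses min_ratio=0.6; method 2, 0.8)."""
    islands = []
    for start in range(0, len(seq) - window + 1):
        if not is_island(seq[start:start + window], min_ratio):
            continue
        if islands and start < islands[-1][1]:        # overlaps current island
            islands[-1][1] = start + window           # extend it
        else:
            islands.append([start, start + window])
    # recompute the score on each merged region and drop failures
    return [(s, e) for s, e in islands if is_island(seq[s:e], min_ratio)]

print(find_cpg_islands("CG" * 300 + "AT" * 500))  # -> [(0, 699)]
```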
To compute various CpG statistics, we used two different thresholds for the CG dinucleotide likelihood ratio. Besides the original threshold of 0.6 (method 1), we used a higher threshold of 0.8 (method 2), which yields a number of CpG islands on chromosome 22 close to the number of annotated genes on this chromosome. The main results are summarized in Table 13. CpG islands computed with method 1 covered only 2.6% of the CSA sequence, but 40% of the gene starts (start codons) are contained inside a CpG island. This is comparable to ratios reported by others (82). The last two rows of the table show the observed and expected average distance, respectively, of the closest CpG island from the first exon. The observed average distances to the closest CpG island are smaller than the corresponding expected distances, confirming an association between CpG islands and first exons.
Table 13
Characteristics of CpG islands identified in chromosome 22 (34-Mbp sequence length) and the whole genome (2.9-Gbp sequence length) by means of two different methods. Method 1 uses a CG likelihood ratio of ≥0.6. Method 2 uses a CG likelihood ratio of ≥0.8.
We also looked at the distribution of CpG island nucleotides among various sequence classes such as intergenic regions, introns, exons, and first exons. We computed the likelihood score for each sequence class as the ratio of the observed fraction of CpG island nucleotides in that sequence class to the expected fraction. The results of applying method 1 to the CSA were scores of 0.89 for intergenic regions, 1.2 for introns, 5.86 for exons, and 13.2 for first exons. The same trend was also found for chromosome 22 and after the application of the higher threshold (method 2) on both data sets. In sum, genome-wide analysis has extended earlier analyses and suggests a strong correlation between CpG islands and first coding exons.
4.4 Genome-wide repetitive elements
The proportion of the genome covered by various classes of repetitive DNA is presented in Table 14. We observed about 35% of the genome in these repeat classes, very similar to values reported previously (83). Repetitive sequence may be underrepresented in the Celera assembly as a result of incomplete repeat resolution, as discussed above. About 8% of the scaffold length is in gaps, and we expect that much of this is repetitive sequence. Chromosome 19 has the highest repeat density (57%), as well as the highest gene density (Table 10). Of interest, among the different classes of repeat elements, we observed a clear association between Alu elements and gene density; no such association was observed for LINEs.
Table 14
Distribution of repetitive DNA in the compartmentalized shotgun assembly sequence.
5 Genome Evolution
Summary. The dynamic nature of genome evolution can be captured at several levels. These include gene duplications mediated by RNA intermediates (retrotransposition) and segmental genomic duplications. In this section, we document the genome-wide occurrence of retrotransposition events generating functional (intronless paralogs) or inactive genes (pseudogenes). Genes involved in translational processes and nuclear regulation account for nearly 50% of all intronless paralogs and processed pseudogenes detected in our survey. We have also cataloged the extent of segmental genomic duplication and provide evidence for 1077 duplicated blocks covering 3522 distinct genes.
5.1 Retrotransposition in the human genome
Retrotransposition of processed mRNA transcripts into the genome results in functional genes, called intronless paralogs, or inactivated genes (pseudogenes). A paralog refers to a gene that appears in more than one copy in a given organism as a result of a duplication event. The existence of both intron-containing and intronless forms of genes encoding functionally similar or identical proteins has been previously described (84,85). Cataloging these evolutionary events on the genomic landscape is of value in understanding the functional consequences of such gene-duplication events in cellular biology. Identification of conserved intronless paralogs in the mouse or other mammalian genomes should provide the basis for capturing the evolutionary chronology of these transposition events and provide insights into gene loss and accretion in the mammalian radiation.
A set of proteins corresponding to all 901 Otto-predicted, single-exon genes was subjected to BLAST analysis against the proteins encoded by the remaining multiexon predicted transcripts. Using homology criteria of 70% sequence identity over 90% of the length, we identified 298 instances of single- to multi-exon correspondence. Of these 298 sequences, 97 were represented in the GenBank data set of experimentally validated full-length genes at the stringency specified and were verified by manual inspection.
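The homology filter reduces to two thresholds; a minimal sketch, with a hypothetical hit representation:

```python
# An illustrative filter corresponding to the homology criteria above:
# a single-exon protein is matched to a multi-exon protein if the BLAST
# alignment shows >=70% identity over >=90% of the protein's length.
# The hit representation is hypothetical.

def passes_paralog_criteria(identity_pct: float, aligned_len: int,
                            query_len: int) -> bool:
    coverage = aligned_len / query_len
    return identity_pct >= 70.0 and coverage >= 0.90

print(passes_paralog_criteria(82.5, 370, 400))  # True: 82.5% id over 92.5%
print(passes_paralog_criteria(95.0, 300, 400))  # False: only 75% coverage
```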
We believe that these 97 cases may represent intronless paralogs (see Web table 1 on Science Online at www.sciencemag.org/cgi/content/full/291/5507/1304/DC1) of known genes. Most of these are flanked by direct repeat sequences, although the precise nature of these repeats remains to be determined. All of the cases for which we have high confidence contain polyadenylated [poly(A)] tails characteristic of retrotransposition.
Recent publications describing the phenomenon of functional intronless paralogs speculate that retrotransposition may serve as a mechanism used to escape X-chromosomal inactivation (84, 86). We do not find a bias toward X chromosome origination of these retrotransposed genes; rather, the results show a random chromosome distribution of both the intron-containing and corresponding intronless paralogs. We also have found several cases of retrotransposition from a single source chromosome to multiple target chromosomes. Interesting examples include the retrotransposition of a five exon–containing ribosomal protein L21 gene on chromosome 13 onto chromosomes 1, 3, 4, 7, 10, and 14. The size of the source genes can also vary. The largest example is the 31-exon diacylglycerol kinase zeta gene on chromosome 11 that has an intronless paralog on chromosome 13. Regardless of mechanism, retrotransposition with subsequent changes in coding or noncoding regions that lead to different functions or expression patterns represents a key route to an enhanced functional repertoire in mammals (87).
Our preliminary set of retrotransposed intronless paralogs contains a clear overrepresentation of genes involved in translational processes (40% ribosomal proteins and 10% translation elongation factors) and nuclear regulation (HMG nonhistone proteins, 4%), as well as metabolic and regulatory enzymes. EST matches specific to a subset of intronless paralogs suggest expression of these intronless paralogs. Differences in the upstream regulatory sequences between the source genes and their intronless paralogs could account for differences in tissue-specific gene expression. Defining which, if any, of these processed genes are functionally expressed and translated will require further elucidation and experimental validation.
5.2 Pseudogenes
A pseudogene is a nonfunctional copy that is very similar to a normal gene but that has been altered slightly so that it is not expressed. We developed a method for the preliminary analysis of processed pseudogenes in the human genome as a starting point in elucidating the ongoing evolutionary forces that account for gene inactivation. The general structural characteristics of these processed pseudogenes include the complete lack of intervening sequences found in the functional counterparts, a poly(A) tract at the 3′ end, and direct repeats flanking the pseudogene sequence. Processed pseudogenes occur as a result of retrotransposition, whereas unprocessed pseudogenes arise from segmental genome duplication.
We searched the complete set of Otto-predicted transcripts against the genomic sequence by means of BLAST. Genomic regions corresponding to all Otto-predicted transcripts were excluded from this analysis. We identified 2909 regions, matching with greater than 70% identity over at least 70% of the transcript length, that likely represent processed pseudogenes. This number is probably an underestimate because specific methods to search for pseudogenes were not used.
We looked for correlations between structural elements and the propensity for retrotransposition in the human genome. GC content and transcript length were compared between the genes with processed pseudogenes (1177 source genes) and the remainder of the predicted gene set. Transcripts that give rise to processed pseudogenes have a shorter average length (1027 bp, versus 1594 bp for the Otto set) than genes for which no pseudogene was detected. The overall GC content did not show any significant difference, contrary to a recent report (88). There is a clear trend in the gene families represented among processed pseudogenes. These include ribosomal proteins (67%), lamin receptors (10%), translation elongation factor alpha (5%), and HMG nonhistone proteins (2%). The increased occurrence of retrotransposition (both intronless paralogs and processed pseudogenes) among genes involved in translation and nuclear regulation may reflect an increased transcriptional activity of these genes.
5.3 Gene duplication in the human genome
Building on a previously published procedure (27), we developed a graph-theoretic algorithm, called Lek, for grouping the predicted human protein set into protein families (89). The complete clusters that result from the Lek clustering provide one basis for comparing the role of whole-genome or chromosomal duplication in protein family expansion as opposed to other means, such as tandem duplication. Because each complete cluster represents a closed and certain island of homology, and because Lek is capable of simultaneously clustering the protein complements of several organisms, the number of proteins contributed by each organism to a complete cluster can be predicted with confidence, depending on the quality of the annotation of each genome. The variance of each organism's contribution to each cluster can then be calculated, allowing an assessment of the relative importance of large-scale duplication versus smaller-scale, organism-specific expansion and contraction of protein families, presumably as a result of natural selection operating on individual protein families within an organism. As can be seen in Fig. 12, the large variance in the relative numbers of human as compared with D. melanogaster and Caenorhabditis elegans proteins in complete clusters may be explained by multiple events of relative expansion in gene families in each of the three animal genomes. Such expansions would give rise to the observed distribution, which peaks at a 1:1 ratio for human-worm and human-fly clusters, with a spread covering both human and fly/worm predominance (Fig. 12). Furthermore, there are nearly as many clusters in which worm and fly proteins predominate, despite the larger number of proteins in the human. At face value, this analysis suggests that natural selection acting on individual protein families has been a major force driving the expansion of at least some elements of the human protein set. However, in our analysis, an ancient whole-genome duplication followed by loss cannot easily be distinguished from piecemeal duplication. In order to differentiate these scenarios, more extended analyses were performed.
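Lek's internals are not detailed here; as a loose stand-in, the sketch below groups proteins into single-linkage clusters by computing connected components over pairwise homology edges with a union-find structure, which conveys the flavor of graph-based family construction without reproducing Lek's stricter notion of complete clusters.

```python
# A loose stand-in for graph-based protein clustering (not Lek itself):
# connected components over pairwise homology edges, via union-find.
# Edge construction (e.g., from BLAST hits) is assumed to happen upstream.

def cluster_proteins(n_proteins: int, homology_edges):
    """Return a list of clusters (sets of protein indices) given
    (i, j) homology edges between proteins 0..n_proteins-1."""
    parent = list(range(n_proteins))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in homology_edges:
        parent[find(i)] = find(j)           # union the two components

    clusters = {}
    for i in range(n_proteins):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

# Proteins 0-1-2 are mutually homologous; 3 and 4 form a second family.
print(cluster_proteins(5, [(0, 1), (1, 2), (3, 4)]))
# -> [{0, 1, 2}, {3, 4}]
```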
Figure 12
Gene duplication in complete protein clusters. The predicted protein sets of human, worm, and fly were subjected to Lek clustering (27). The numbers of clusters with varying ratios (whole number) of human versus worm and human versus fly proteins per cluster were plotted.
5.4 Large-scale duplications
Using two independent methods, we searched for large-scale duplications in the human genome. First, we describe a protein family–based method that identified highly conserved blocks of duplication. We then describe our comprehensive method for identifying all interchromosomal block duplications. The latter method identified a large number of duplicated chromosomal segments covering parts of all 24 chromosomes.
The first of the methods is based on the idea of searching for blocks of highly conserved homologous proteins that occur in more than one location on the genome. For this comparison, two genes were considered equivalent if their protein products were determined to be in the same family and the same complete Lek cluster (essentially paralogous genes) (89). Initially, each chromosome was represented as a string of genes ordered by the start codons for predicted genes along the chromosome. We considered the two strands as a single string, because local inversions are relatively common events relative to large-scale duplications. Each gene was indexed according to the protein family and Lek complete cluster (89). All pairs of indexed gene strings were then aligned in both the forward and reverse directions with the Smith-Waterman algorithm (90). A match between two proteins of the same Lek complete cluster was given a score of 10 and a mismatch −10, with gap open and extend penalties of −4 and −1. With these parameters, 19 conserved interchromosomal blocks of duplication were observed, all of which were also detected and expanded by the comprehensive method described below. The detection of only a relatively small number of block duplications was a consequence of using an intrinsically conservative method grounded in the conservative constraints of the complete Lek clusters.
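For illustration, the core of this comparison is a local alignment over strings of family identifiers with the scoring given above; the sketch below uses Gotoh's affine-gap recurrences and returns only the best score, omitting traceback and the forward/reverse handling.

```python
# A hedged sketch of the gene-string comparison: chromosomes are encoded as
# strings of family IDs and locally aligned with Smith-Waterman using the
# stated scores (match +10 within the same Lek cluster, mismatch -10,
# gap open -4, gap extend -1). Affine gaps follow Gotoh's recurrences.

def sw_gene_strings(a, b, match=10, mismatch=-10, gap_open=-4, gap_extend=-1):
    """Return the best local alignment score between two lists of family IDs."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best score ending at (i, j)
    E = [[NEG] * (m + 1) for _ in range(n + 1)]   # gap in a (horizontal)
    F = [[NEG] * (m + 1) for _ in range(n + 1)]   # gap in b (vertical)
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(H[i][j - 1] + gap_open, E[i][j - 1] + gap_extend)
            F[i][j] = max(H[i - 1][j] + gap_open, F[i - 1][j] + gap_extend)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

# Two chromosome segments sharing an ordered run of families f1..f4,
# with one extra gene inserted on the second chromosome:
chr_a = ["f1", "f2", "f3", "f4"]
chr_b = ["f1", "f2", "x9", "f3", "f4"]
print(sw_gene_strings(chr_a, chr_b))  # -> 36.0: four matches (+40) minus one gap (-4)
```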
In the second, more comprehensive approach, we aligned all chromosomes directly with one another using an algorithm based on the MUMmer system (91). This alignment method uses a suffix tree data structure and a linear-time algorithm to align long sequences very rapidly; for example, two chromosomes of 100 Mbp can be aligned in less than 20 min (on a Compaq Alpha computer) with 4 gigabytes of memory. This procedure was used recently to identify numerous large-scale segmental duplications among the five chromosomes of A. thaliana (92); in that organism, the method revealed that 60% of the genome (66 Mbp) is covered by 24 very large duplicated segments. For Arabidopsis, a DNA-based alignment was sufficient to reveal the segmental duplications between chromosomes; in the human genome, DNA alignments at the whole-chromosome level are insufficiently sensitive. Therefore, a modified procedure was developed and applied, as follows. First, all 26,588 proteins (9,675,713 amino acids) were concatenated end-to-end in the order in which they occur along each of the 24 chromosomes, irrespective of strand location. The concatenated protein set was then aligned against each chromosome by the MUMmer algorithm. The resulting matches were clustered to extract all sets of three or more protein matches that occur in close proximity on two different chromosomes (93); these represent the candidate segmental duplications. A series of filters were developed and applied to remove likely false-positives from this set; for example, small blocks that were spread across many proteins were removed. To refine the filtering methods, a shuffled protein set was first created by taking the 26,588 proteins, randomizing their order, and then partitioning them into 24 shuffled chromosomes, each containing the same number of proteins as the true genome. This shuffled protein set has the identical composition to the real genome; in particular, every protein and every domain appears the same number of times. The complete algorithm was then applied to both the real and the shuffled data, with the results on the shuffled data being used to estimate the false-positive rate. After filtering, the algorithm yielded 10,310 gene pairs in 1077 duplicated blocks containing 3522 distinct genes; tandemly duplicated expansions in many of the blocks explain the excess of gene pairs relative to distinct genes. In the shuffled data, by contrast, only 370 gene pairs were found, giving a false-positive estimate of 3.6%. The most likely explanation for the 1077 block duplications is ancient segmental duplication. In many cases, the order of the proteins has been shuffled, although proximity is preserved. Of the 1077 blocks, 159 contain only three genes, 137 contain four genes, and 781 contain five or more genes.
To illustrate the extent of the detected duplications, Fig. 13 shows all 1077 block duplications indexed to each chromosome in 24 panels, in which only duplications mapped to the indexed chromosome are displayed. The figure makes it clear that the duplications are ubiquitous in the genome. One striking feature is the large number of relatively small chromosomal stretches with one-to-many duplication relationships. One such example captured by the analysis is the well-documented olfactory receptor (OR) family, which is scattered in blocks throughout the genome and which has been analyzed for genome-deployment reconstructions at several evolutionary stages (94). The figure also illustrates that some chromosomes, such as chromosome 2, contain many more detected large-scale duplications than others. Indeed, one of the largest duplicated segments is a large block of 33 proteins on chromosome 2, spread among eight smaller blocks in 2p, that aligns to a paralogous set on chromosome 14, with one rearrangement (see chromosomes 2 and 14 panels in Fig. 13). The proteins are not contiguous but span a region containing 97 proteins on chromosome 2 and 332 proteins on chromosome 14. The likelihood of observing this many duplicated proteins by chance, even over a span of this length, is 2.3 × 10−68 (93). This duplicated set spans 20 Mbp on chromosome 2 and 63 Mbp on chromosome 14, over 70% of the latter chromosome. Chromosome 2 also contains a block duplication that is nearly as large, which is shared by chromosome arm 2q and chromosome 12. This duplication incorporates two of the four known Hox gene clusters, but considerably expands the extent of the duplications proximally and distally on the pair of chromosome arms. This breadth of duplication is also seen on the two chromosomes carrying the other two Hox clusters.
Figure 13
Segmental duplications between chromosomes in the human genome. The 24 panels show the 1077 duplicated blocks of genes, containing 10,310 pairs of genes in total. Each line represents a pair of homologous genes belonging to a block; all blocks contain at least three genes on each of the chromosomes where they appear. Each panel shows all the duplications between a single chromosome and other chromosomes with shared blocks. The chromosome at the center of each panel is shown as a thick red line for emphasis. Other chromosomes are displayed from top to bottom within each panel ordered by chromosome number. The inset (bottom, center right) shows a close-up of one duplication between chromosomes 18 and 20, expanded to display the gene names of 12 of the 64 gene pairs shown.
An additional large duplication, between chromosomes 18 and 20, serves as a good example to illustrate some of the features common to many of the other observed large duplications (Fig. 13, inset). This duplication contains 64 detected ordered interchromosomal pairs of homologous genes. After discounting a 40-Mb stretch of chromosome 18 free of matches to chromosome 20, which is likely to represent a large insert (between the gene assignments “Krup rel” and “collagen rel” on chromosome 18 in Fig. 13), the full duplication segment covers 36 Mb on chromosome 18 and 28 Mb on chromosome 20. By this measure, the duplication segment spans nearly half of each chromosome's net length. The most likely scenario is that the whole span of this region was duplicated as a single very large block, followed by shuffling owing to smaller scale rearrangements. As such, at least four subsequent rearrangements would need to be invoked to explain the relative insertions and inversions seen in the duplicated segment interval. The 64 protein pairs in this alignment occur among 217 protein assignments on chromosome 18, and among 322 protein assignments on chromosome 20, for a density of involved proteins of 20 to 30%. This is consistent with an ancient large-scale duplication followed by subsequent gene loss on one or both chromosomes. Loss of just one member of a gene pair subsequent to the duplication would result in a failure to score a gene pair in the block; less than 50% gene loss on the chromosomes would lead to the duplication density observed here. As an independent verification of the significance of the alignments detected, it can be seen that a substantial number of the pairs of aligning proteins in this duplication, including some of those annotated (Fig. 13), are those populating small Lek complete clusters (see above). This indicates that they are members of very small families of paralogs; their relative scarcity within the genome validates the uniqueness and robust nature of their alignments.
Two additional qualitative features were observed among many of the large-scale duplications. First, several proteins with disease associations, with OMIM (Online Mendelian Inheritance in Man) assignments, are members of duplicated segments (see Web table 2 on Science Online at www.sciencemag.org/cgi/content/full/291/5507/1304/DC1). We have also observed a few instances where paralogs on both duplicated segments are associated with similar disease conditions. Notable among these genes are proteins involved in hemostasis (coagulation factors) that are associated with bleeding disorders, transcriptional regulators like the homeobox proteins associated with developmental disorders, and potassium channels associated with cardiovascular conduction abnormalities. For each of these disease genes, closer study of the paralogous genes in the duplicated segment may reveal new insights into disease causation, with further investigation needed to determine whether they might be involved in the same or similar genetic diseases. Second, although there is a conserved number of proteins and coding exons predicted for specific large duplicated spans within the chromosome 18 to 20 alignment, the genomic DNA of chromosome 18 in these specific spans is in some cases more than 10-fold longer than the corresponding chromosome 20 DNA. This selective accretion of noncoding DNA (or, conversely, loss of noncoding DNA) on one of a pair of duplicated chromosome regions was observed in many compared regions. Hypotheses about the mechanisms that foster these processes remain to be tested.
Evaluation of the alignment results gives some perspective on dating of the duplications. As noted above, large-scale ancient segmental duplication in fact best explains many of the blocks detected by this genome-wide analysis. The regions of human chromosomes involved in the large-scale duplications expanded upon above (chromosomes 2 to 14, 2 to 12, and 18 to 20) are each syntenic to a distinct mouse chromosomal region. The corresponding mouse chromosomal regions are much more similar in sequence conservation, and even in order, to their human synteny partners than the human duplication regions are to each other. Further, the corresponding mouse chromosomal regions each bear a significant proportion of genes orthologous to the human genes on which the human duplication assignments were made. On the basis of these factors, the corresponding mouse chromosomal spans, at coarse resolution, appear to be products of the same large-scale duplications observed in humans. Although further detailed analysis must be carried out once a more complete genome is assembled for mouse, the underlying large duplications appear to predate the two species' divergence. This dates the duplications, at the latest, before divergence of the primate and rodent lineages. This date can be further refined upon examination of the synteny between human chromosomes and those of chicken, pufferfish (Fugu rubripes), or zebrafish (95). The only substantial syntenic stretches mapped in these species corresponding to both pairs of human duplications are restricted to the Hox cluster regions. When the synteny of these regions (or others) to human chromosomes is extended with further mapping, the ages of the nearly chromosome-length duplications seen in humans are likely to be dated to the root of vertebrate divergence.
The MUMmer-based results demonstrate large block duplications that range in size from a few genes to segments covering most of a chromosome. The extent of segmental duplications raises the question of whether an ancient whole-genome duplication event is the underlying explanation for the numerous duplicated regions (96). The duplications have undergone many deletions and subsequent rearrangements; these events make it difficult to distinguish between a whole-genome duplication and multiple smaller events. Further analysis, focused especially on comparing the estimated ages of all the block duplications, derived partially from interspecies genome comparisons, will be necessary to determine which of these two hypotheses is more likely. Comparisons of genomes of different vertebrates, and even cross-phyla genome comparisons, will allow for the deconvolution of duplications to eventually reveal the stagewise history of our genome, and with it a history of the emergence of many of the key functions that distinguish us from other living things.
6 A Genome-Wide Examination of Sequence Variations
Summary. Computational methods were used to identify single-nucleotide polymorphisms (SNPs) by comparison of the Celera sequence to other SNP resources. The SNP rate between two chromosomes was ∼1 per 1200 to 1500 bp. SNPs are distributed nonrandomly throughout the genome. Based on a functional analysis of SNPs affecting predicted coding regions, only a very small proportion of all SNPs (<1%) potentially impact protein function. This leads to an estimate that only thousands, not millions, of genetic variations may contribute to the structural diversity of human proteins.
Having a complete genome sequence enables researchers to achieve a dramatic acceleration in the rate of gene discovery, but only through analysis of sequence variation in DNA can we discover the genetic basis for variation in health among human beings. Whole-genome shotgun sequencing is a particularly effective method for detecting sequence variation in tandem with whole-genome assembly. In addition, we compared the distribution and attributes of SNPs ascertained by three other methods: (i) alignment of the Celera consensus sequence to the PFP assembly, (ii) overlap of high-quality reads of genomic sequence (referred to as “Kwok”; 1,120,195 SNPs) (97), and (iii) reduced representation shotgun sequencing (referred to as “TSC”; 632,640 SNPs) (98). These data were consistent in showing an overall nucleotide diversity of ∼8 × 10−4, marked heterogeneity across the genome in SNP density, and an overwhelming preponderance of noncoding variation that produces no change in expressed proteins.
6.1 SNPs found by aligning the Celera consensus to the PFP assembly
Ideally, methods of SNP discovery make full use of sequence depth and quality at every site, and quantitatively control the rate of false-positive and false-negative calls with an explicit sampling model (99). Comparison of consensus sequences in the absence of these details necessitated a more ad hoc approach (quality scores could not readily be obtained for the PFP assembly). First, all sequence differences between the two consensus sequences were identified; these were then filtered to reduce the contribution of sequencing errors and misassembly. As a measure of the effectiveness of the filtering step, we monitored the ratio of transition to transversion substitutions, because a 2:1 ratio has been well documented as typical in mammalian evolution (100) and in human SNPs (101, 102). The filtering steps removed variants where the quality score in the Celera consensus was less than 30, as well as variants in regions where the density of variants exceeded 5 in 400 bp. These filters shifted the transition-to-transversion ratio from 1.57:1 to 1.89:1. When applied to 2.3 Gbp of alignments between the Celera and PFP consensus sequences, these filters identified 2,104,820 putative SNPs from a total of 2,778,474 substitution differences. Overlaps between this set of SNPs and those found by other methods are described below.
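A minimal sketch of this two-step filter is given below. The thresholds follow the description above, but the data layout and function names are ours, the density rule is interpreted here as a centered 400-bp window, and the code is illustrative rather than the pipeline actually used (which is not published in code form).

```python
# Sketch of the Celera-PFP SNP filtering described above.
# `variants` is a hypothetical list of (position, ref_base, alt_base,
# quality_score) tuples from one consensus-to-consensus alignment.

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ts_tv_ratio(variants):
    """Transition:transversion ratio, monitored before and after filtering."""
    ts = sum(1 for _, ref, alt, _ in variants if (ref, alt) in TRANSITIONS)
    tv = len(variants) - ts
    return ts / tv if tv else float("inf")

def filter_snps(variants, min_quality=30, max_in_window=5, window=400):
    """Keep variants with consensus quality >= 30 that do not sit in a
    cluster of more than 5 variants per 400 bp (dense clusters suggest
    misassembly or locally poor sequence)."""
    high_q = sorted((v for v in variants if v[3] >= min_quality),
                    key=lambda v: v[0])
    kept = []
    for v in high_q:
        neighbors = sum(1 for u in high_q
                        if abs(u[0] - v[0]) <= window // 2)
        if neighbors <= max_in_window:
            kept.append(v)
    return kept
```

On real data one would report ts_tv_ratio before and after filtering and look for movement toward the expected 2:1, as in the 1.57:1 to 1.89:1 shift reported above.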
6.2 Comparisons to public SNP databases
Additional SNPs, including 2,536,021 from dbSNP (www.ncbi.nlm.nih.gov/SNP) and 13,150 from HGMD (Human Gene Mutation Database, from the University of Wales, UK), were mapped on the Celera consensus sequence by a sequence similarity search with the program PowerBlast (103). The two largest data sets in dbSNP are the Kwok and TSC sets, with 47% and 25% of the dbSNP records, respectively. Low-quality alignments with partial coverage of the dbSNP sequence and alignments with less than 98% sequence identity between the Celera sequence and the dbSNP flanking sequence were eliminated. dbSNP sequences mapping to multiple locations on the Celera genome were discarded. A total of 2,336,935 dbSNP variants were mapped to 1,223,038 unique locations on the Celera sequence, implying considerable redundancy in dbSNP. SNPs in the TSC set mapped to 585,811 unique genomic locations, and SNPs in the Kwok set mapped to 438,032 unique locations. The combined count of unique SNPs used in this analysis (Celera-PFP, TSC, and Kwok) is 2,737,668. Table 15 shows that a substantial fraction of the SNPs identified by one of these methods was also found by another method. The very high overlap (36.2%) between the Kwok and Celera-PFP SNPs may be due in part to the use by Kwok of sequences that went into the PFP assembly. The unusually low overlap (16.4%) between the Kwok and TSC sets is due to their being the two smallest sets. In addition, 24.5% of the Celera-PFP SNPs overlap with SNPs derived from the Celera genome sequences (46). SNP validation in population samples is an expensive and laborious process, so confirmation across multiple data sets may provide an efficient initial validation “in silico” (by computational analysis).
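The overlap fractions in Table 15 follow a simple convention: the number of SNPs shared between two sets divided by the size of the smaller set. A sketch, assuming each resource has already been reduced to unique (chromosome, position) locations on the same assembly:

```python
def overlap_fraction(snps_a, snps_b):
    """Fraction of the smaller SNP set also present in the other set.

    snps_a, snps_b: sets of (chromosome, position) tuples after mapping
    each SNP resource to unique locations on a common assembly.
    """
    shared = len(snps_a & snps_b)
    return shared / min(len(snps_a), len(snps_b))
```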
Table 15
Overlap of SNPs from genome-wide SNP databases. Table entries are SNP counts for each pair of data sets. Numbers in parentheses are the fraction of overlap, calculated as the count of overlapping SNPs divided by the number of SNPs in the smaller of the two databases compared. Total SNP counts for the databases are: Celera-PFP, 2,104,820; TSC, 585,811; and Kwok, 438,032. Only unique SNPs in the TSC and Kwok data sets were included.
One means of assessing whether the three sets of SNPs provide the same picture of human variation is to tally the frequencies of the six possible base changes in each set of SNPs (Table 16). Previous measures of nucleotide diversity were mostly derived from small-scale analyses of candidate genes (101), and our analysis with all three data sets validates the previous observations at the whole-genome scale. There is remarkable homogeneity in this substitution pattern between the SNPs found in the Kwok set, the TSC set, and our whole-genome shotgun data (46). Celera-PFP deviates slightly from the 2:1 transition-to-transversion ratio observed in the other SNP sets. This result is not unexpected, because some fraction of the computationally identified SNPs in the Celera-PFP comparison may in fact be sequence errors. A 2:1 transition:transversion ratio for the bona fide SNPs would be obtained if one assumed that 15% of the sequence differences in the Celera-PFP set were a result of (presumably random) sequence errors.
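The 15% figure follows from a simple mixture argument: if bona fide SNPs have a transition fraction of 2/3 (a 2:1 ratio) and random errors hit all 12 substitution types uniformly (transition fraction 1/3), then the observed transition fraction is a weighted average of the two, and the error fraction can be solved for directly. A sketch under that uniform-error assumption (the observed ratio would be read off Table 16):

```python
def error_fraction(observed_ts_tv, true_ts_tv=2.0):
    """Fraction of apparent SNPs that must be random sequence errors
    for the remainder to show the expected transition:transversion
    ratio, assuming errors are uniform over the 12 substitution types
    (transition fraction 1/3)."""
    p_obs = observed_ts_tv / (observed_ts_tv + 1.0)  # observed transition fraction
    p_true = true_ts_tv / (true_ts_tv + 1.0)         # 2/3 for a 2:1 ratio
    p_err = 1.0 / 3.0
    return (p_true - p_obs) / (p_true - p_err)

# An observed ratio near 1.6:1, for example, implies an error
# fraction of roughly 15%:
print(error_fraction(1.6))  # ~0.154
```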
Table 16
Summary of nucleotide changes in different SNP data sets.
6.3 Estimation of nucleotide diversity from ascertained SNPs
The number of SNPs identified varied widely across chromosomes. In order to normalize these values to chromosome size and sequence coverage, we used π, the standard statistic for nucleotide diversity (104). Nucleotide diversity is a measure of per-site heterozygosity, quantifying the probability that a pair of chromosomes drawn from the population will differ at a nucleotide site. To calculate nucleotide diversity for each chromosome, we need to know the number of nucleotide sites that were surveyed for variation, and in methods like reduced representation shotgun sequencing, we also need to know the sequence quality and the depth of coverage at each site. These data are not readily available, so we could not estimate nucleotide diversity from the TSC effort. Estimation of nucleotide diversity from high-quality sequence overlaps should be possible, but again, more information is needed on the details of all the alignments.
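For reference, the standard estimator of π averages pairwise differences per surveyed site. A minimal form is sketched below; this is the textbook definition, not the coverage-corrected estimator applied to the shotgun data, and the symbols are ours:

```latex
% Nucleotide diversity for a sample of n aligned sequences, where
% k_{ij} is the number of sites at which sequences i and j differ
% and L is the number of sites surveyed in common:
\pi = \frac{2}{n(n-1)} \sum_{i<j} \frac{k_{ij}}{L}
```

For a comparison of two consensus sequences (n = 2), this reduces to the number of accepted SNPs divided by the aligned length.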
Estimation of nucleotide diversity from a shotgun assembly entails calculating, for each column of the multialignment, the probability that two or more distinct alleles are present, and the probability of detecting a SNP if in fact the alleles have different sequence (i.e., the probability of correct sequence calls). The greater the depth of coverage and the higher the sequence quality, the higher is the chance of successfully detecting a SNP (105). Even after correcting for variation in coverage, the nucleotide diversity appeared to vary across autosomes. The significance of this heterogeneity was tested by analysis of variance, using estimates of π for 100-kbp windows to assess variability within chromosomes (for the Celera-PFP comparison, F = 29.73, P < 0.0001).
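The dependence on depth and quality can be made concrete with a toy model: suppose the reads in a column come from two chromosomes that truly differ at the site, each read drawn from either chromosome with probability 1/2, and each base call is correct with probability q. A SNP is then detected only if both alleles appear among the k reads and all calls are correct. This is deliberately simplified; real detection models weight each call by its quality score.

```python
def p_detect(depth, q=0.999):
    """Toy probability of detecting a true heterozygous site at a
    column of `depth` reads with per-base accuracy q. Assumes a SNP
    is called only when both alleles appear and every call is correct."""
    if depth < 2:
        return 0.0
    p_both_alleles = 1.0 - 2.0 * 0.5 ** depth  # both chromosomes sampled
    p_all_correct = q ** depth                 # no miscalls in the column
    return p_both_alleles * p_all_correct

for k in (2, 4, 8):
    print(k, round(p_detect(k), 3))  # 0.499, 0.872, 0.984
```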
Average diversity for the autosomes estimated from the Celera-PFP comparison was 8.94 × 10−4. Nucleotide diversity on the X chromosome was 6.54 × 10−4. The X is expected to be less variable than autosomes, because for every four copies of autosomes in the population, there are only three X chromosomes, and this smaller effective population size means that random drift will more rapidly remove variation from the X (106).
Having ascertained nucleotide variation genome-wide, we find that previous estimates of nucleotide diversity in humans based on samples of genes were reasonably accurate (101, 102, 106, 107). Genome-wide, our estimate of nucleotide diversity was 8.98 × 10−4 for the Celera-PFP alignment, whereas a published estimate averaged over 10 densely resequenced human genes was 8.00 × 10−4 (108).
6.4 Variation in nucleotide diversity across the human genome
Such an apparently high degree of variability among chromosomes in SNP density raises the question of whether there is heterogeneity at a finer scale within chromosomes, and whether this heterogeneity is greater than expected by chance. If SNPs arose from random and independent mutations, the number of SNPs in fragments of arbitrary constant size should follow a Poisson distribution. The observed dispersion in the distribution of SNPs in 100-kbp fragments was far greater than predicted from a Poisson distribution (Fig. 14). However, this simplistic model ignores the different recombination rates and population histories that exist in different regions of the genome. Population genetics theory holds that we can account for this variation with a mathematical formulation called the neutral coalescent (109). Applying well-tested algorithms for simulating the neutral coalescent with recombination (110), and using an effective population size of 10,000 and a per-base recombination rate equal to the mutation rate (111), we generated a distribution of numbers of SNPs under this model as well (112). The observed distribution of SNPs has a much larger variance than either the Poisson model or the coalescent model, and the difference is highly significant. This implies that there is significant variability across the genome in SNP density, an observation that demands explanation.
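A direct way to exhibit the overdispersion is to compare the variance-to-mean ratio of SNP counts per 100-kbp window against a Poisson null matched to the observed mean; the coalescent null would come from simulations following (110) and is not reproduced here. A sketch, assuming per-window counts are in hand:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio; expected to be ~1.0 under a Poisson model."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

def poisson_null(counts, n_sims=1000, seed=0):
    """Dispersion indices of Poisson samples matched to the data mean,
    giving a null distribution against which to judge the observed index."""
    rng = np.random.default_rng(seed)
    lam, n = np.mean(counts), len(counts)
    return np.array([dispersion_index(rng.poisson(lam, n))
                     for _ in range(n_sims)])

# observed = SNP counts in successive 100-kbp windows (as in Fig. 14).
# A dispersion index far outside the simulated null, as reported above,
# indicates significant regional heterogeneity in SNP density.
```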
Figure 14
SNP density in each 100-kbp interval as determined with Celera-PFP SNPs. The color codes are as follows: black, Celera-PFP SNP density; blue, coalescent model; and red, Poisson distribution. The figure shows that the distribution of SNPs along the genome is nonrandom and is not entirely accounted for by a coalescent model of regional history.
Several attributes of the DNA sequence may affect the local density of SNPs, including the rate at which DNA polymerase makes errors and the efficacy of mismatch repair. One key factor that is likely to be associated with SNP density is the G+C content, in part because methylated cytosines in CpG dinucleotides tend to undergo deamination to form thymine, accounting for a nearly 10-fold increase in the mutation rate of CpGs over other dinucleotides. We tallied the G+C content and nucleotide diversities in 100-kbp windows across the entire genome and found that the correlation between them was positive (r = 0.21) and highly significant (P < 0.0001), but G+C content accounted for only a small part of the variation.
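A sketch of that windowed calculation, assuming the chromosome sequence is available as a string and the accepted SNP positions as integers (function and variable names are ours):

```python
import numpy as np

def gc_vs_snp_correlation(seq, snp_positions, window=100_000):
    """Pearson correlation between G+C fraction and SNP count in
    non-overlapping windows along one sequence."""
    positions = np.sort(np.asarray(snp_positions))
    gc, snps = [], []
    for w in range(len(seq) // window):
        chunk = seq[w * window:(w + 1) * window].upper()
        gc.append((chunk.count("G") + chunk.count("C")) / window)
        # Count SNPs falling inside this window via binary search.
        lo, hi = np.searchsorted(positions, [w * window, (w + 1) * window])
        snps.append(hi - lo)
    return np.corrcoef(gc, snps)[0, 1]
```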
6.5 SNPs by genomic class
To test homogeneity of SNP densities across functional classes, we partitioned sites into intergenic (defined as >5 kbp from any predicted transcription unit), 5′-UTR, exonic (missense and silent), intronic, and 3′-UTR for 10,239 known genes, derived from the NCBI RefSeq database and all human genes predicted from the Celera Otto annotation. In coding regions, SNPs were categorized as either silent, for those that do not change amino acid sequence, or missense, for those that change the protein product. The ratio of missense to silent coding SNPs in Celera-PFP, TSC, and Kwok sets (1.12, 0.91, and 0.78, respectively) shows a markedly reduced frequency of missense variants compared with the neutral expectation, consistent with the elimination by natural selection of a fraction of the deleterious amino acid changes (112). These ratios are comparable to the missense-to-silent ratios of 0.88 and 1.17 found by Cargill et al. (101) and by Halushka et al. (102). Similar results were observed in SNPs derived from Celera shotgun sequences (46).
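Classifying a coding SNP as silent or missense reduces to translating the reference and alternative codons. A minimal sketch, assuming Biopython is available for the codon table; nonsense (stop-gain) changes, which a full analysis would track separately, are lumped with missense here:

```python
from Bio.Seq import Seq  # assumes Biopython is installed

def classify_coding_snp(ref_codon, offset, alt_base):
    """Label a SNP inside a codon as 'silent' or 'missense'.

    ref_codon: reference codon on the coding strand, e.g. "GAT"
    offset:    0, 1, or 2 -- position of the SNP within the codon
    alt_base:  the alternative allele
    """
    alt_codon = ref_codon[:offset] + alt_base + ref_codon[offset + 1:]
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    return "silent" if ref_aa == alt_aa else "missense"

print(classify_coding_snp("GAT", 2, "C"))  # GAT->GAC, Asp->Asp: silent
print(classify_coding_snp("GAT", 0, "A"))  # GAT->AAT, Asp->Asn: missense
```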
It is striking how small a fraction of SNPs leads to potentially dysfunctional alterations in proteins. In the 10,239 RefSeq genes, missense SNPs were only about 0.12, 0.14, and 0.17% of the total SNP counts in the Celera-PFP, TSC, and Kwok sets, respectively. Nonconservative protein changes constitute an even smaller fraction of the missense SNPs (47, 41, and 40% in Celera-PFP, Kwok, and TSC, respectively). Intergenic regions have been virtually unstudied (113), and we note that 75% of the SNPs we identified were intergenic (Table 17). The SNP rate was highest in introns and lowest in exons. The SNP rate was lower in intergenic regions than in introns, providing one of the first discriminators between these two classes of DNA. These SNP rates were confirmed in the Celera SNPs, which also exhibited a lower rate in exons than in introns, and in intergenic regions than in introns (46). Many of these intergenic SNPs will provide valuable information in the form of markers for linkage and association studies, and some fraction is likely to have a regulatory function as well.
Table 17
Distribution of SNPs in classes of genomic regions.
7 An Overview of the Predicted Protein-Coding Genes in the Human Genome
Summary. This section provides an initial computational analysis of the predicted protein set with the aim of cataloging prominent differences and similarities when the human genome is compared with other fully sequenced eukaryotic genomes. Over 40% of the predicted protein set in humans cannot be ascribed a molecular function by methods that assign proteins to known families. A protein domain–based analysis provides a detailed catalog of the prominent differences in the human genome when compared with the fly and worm genomes. Prominent among these are domain expansions in proteins involved in developmental regulation and in cellular processes such as neuronal function, hemostasis, acquired immune response, and cytoskeletal complexity. The final enumeration of protein families and details of protein structure will rely on additional experimental work and comprehensive manual curation.
A preliminary analysis of the predicted human protein-coding genes was conducted. Two methods were used to analyze and classify the molecular functions of 26,588 predicted proteins that represent 26,383 gene predictions with at least two lines of evidence as described above. The first method was based on an analysis at the level of protein families, with both the publicly available Pfam database (114, 115) and Celera's Panther Classification (CPC) (Fig. 15) (116). The second method was based on an analysis at the level of protein domains, with both the Pfam and SMART databases (115, 117).
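Assignments of this kind come from scanning each predicted protein against libraries of statistical models and counting significant hits. Purely as an illustration, the sketch below tallies distinct proteins per Pfam model from the tabular output of HMMER's hmmscan, a modern descendant of the tools used here; the column layout assumed is the HMMER3 --tblout convention, not the pipeline actually used for this analysis.

```python
from collections import defaultdict

def proteins_per_model(tblout_path, max_evalue=1e-3):
    """Count distinct query proteins hit by each Pfam model in an
    hmmscan --tblout file (whitespace-delimited; field 0 = model name,
    field 2 = query protein, field 4 = full-sequence E-value)."""
    hits = defaultdict(set)
    with open(tblout_path) as fh:
        for line in fh:
            if line.startswith("#"):  # skip comment/header lines
                continue
            fields = line.split()
            model, protein, evalue = fields[0], fields[2], float(fields[4])
            if evalue <= max_evalue:  # same 0.001 cutoff as Table 18
                hits[model].add(protein)
    return {model: len(prots) for model, prots in hits.items()}
```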
Figure 15
Distribution of the molecular functions of 26,383 human genes. Each slice lists the numbers and percentages (in parentheses) of human gene functions assigned to a given category of molecular function. The outer circle shows the assignment to molecular function categories in the Gene Ontology (GO) (179), and the inner circle shows the assignment to Celera's Panther molecular function categories (116).
The results presented here are preliminary and are subject to several limitations. Both the gene predictions and functional assignments have been made by using computational tools, although the statistical models in Panther, Pfam, and SMART have been built, annotated, and reviewed by expert biologists. In the set of computationally predicted genes, we expect both false-positive predictions (some of these may in fact be inactive pseudogenes) and false-negative predictions (some human genes will not be computationally predicted). We also expect errors in delimiting the boundaries of exons and genes. Similarly, in the automatic functional assignments, we also expect both false-positive and false-negative predictions. The functional assignment protocol focuses on protein families that tend to be found across several organisms, or on families of known human genes. Therefore, we do not assign a function to many genes that are not in large families, even if the function is known. Unless otherwise specified, all enumeration of the genes in any given family or functional category was taken from the set of 26,588 predicted proteins, which were assigned functions by using statistical score cutoffs defined for models in Panther, Pfam, and SMART.
For this initial examination of the predicted human protein set, three broad questions were asked: (i) What are the likely molecular functions of the predicted gene products, and how are these proteins categorized with current classification methods? (ii) What are the core functions that appear to be common across the animals? (iii) How does the human protein complement differ from that of other sequenced eukaryotes?
7.1 Molecular functions of predicted human proteins
Figure 15 shows an overview of the putative molecular functions of the predicted 26,588 human proteins that have at least two lines of supporting evidence. About 41% (12,809) of the gene products could not be classified from this initial analysis and are termed proteins with unknown functions. Because our automatic classification methods treat only relatively large protein families, there are a number of “unclassified” sequences that do, in fact, have a known or predicted function. For the 60% of the protein set that have automatic functional predictions, the specific protein functions have been placed into broad classes. We focus here on molecular function (rather than higher order cellular processes) in order to classify as many proteins as possible. These functional predictions are based on similarity to sequences of known function.
In our analysis of the 12,731 additional low-confidence predicted genes (those with only one piece of supporting evidence), only 636 (5%) of these additional putative genes were assigned molecular functions by the automated methods. One-third of these 636 predicted genes represented endogenous retroviral proteins, further suggesting that the majority of these unknown-function genes are not real genes. Given that most of these additional 12,095 genes appear to be unique among the genomes sequenced to date, many may simply represent false-positive gene predictions.
The most common molecular functions are the transcription factors and those involved in nucleic acid metabolism (nucleic acid enzyme). Other functions that are highly represented in the human genome are the receptors, kinases, and hydrolases. Not surprisingly, most of the hydrolases are proteases. There are also many proteins that are members of proto-oncogene families, as well as families of “select regulatory molecules”: (i) proteins involved in specific steps of signal transduction such as heterotrimeric GTP-binding proteins (G proteins) and cell cycle regulators, and (ii) proteins that modulate the activity of kinases, G proteins, and phosphatases.
7.2 Evolutionary conservation of core processes
Because of the various “model organism” genome-sequencing projects that have already been completed, reasonable comparative information is available for beginning the analysis of the evolution of the human genome. The genomes of S. cerevisiae (“bakers' yeast”) (118) and two diverse invertebrates, C. elegans (a nematode worm) (119) and D. melanogaster (fly) (26), as well as the recently completed first plant genome, A. thaliana (92), provide a diverse background for genome comparisons.
We enumerated the “strict orthologs” conserved between human and fly, and between human and worm (Fig. 16), to address the question, What are the core functions that appear to be common across the animals? The concept of orthology is important because if two genes are orthologs, they can be traced by descent to the common ancestor of the two organisms (an “evolutionarily conserved protein set”), and therefore are likely to perform similar conserved functions in the different organisms. It is critical in this analysis to separate orthologs (a gene that appears in two organisms by descent from a common ancestor) from paralogs (a gene that appears in more than one copy in a given organism by a duplication event), because paralogs may subsequently diverge in function. Following the yeast-worm ortholog comparison in (120), we identified two different cases for each pairwise comparison (human-fly and human-worm). The first case was a pair of genes, one from each organism, for which there was no other close homolog in either organism. These are straightforwardly identified as orthologous, because there are no additional family members that complicate separating orthologs from paralogs. The second case is a family of genes with more than one member in either or both of the organisms being compared. Chervitz et al. (120) dealt with this case by analyzing a phylogenetic tree describing the relationships between all of the sequences in both organisms, and then looking for pairs of genes that were nearest neighbors in the tree. If the nearest-neighbor pairs were from different organisms, those genes were presumed to be orthologs. We note that these nearest neighbors can often be confidently identified from pairwise sequence comparison without having to examine a phylogenetic tree (see legend to Fig. 16). If the nearest neighbors are not from different organisms, there has been a paralogous expansion in one or both organisms after the speciation event (and/or a gene loss by one organism). When this one-to-one correspondence is lost, defining an ortholog becomes ambiguous. For our initial computational overview of the predicted human protein set, we could not resolve this ambiguity for every predicted protein. Therefore, we consider only “strict orthologs,” i.e., the proteins with unambiguous one-to-one relationships (Fig. 16). By these criteria, there are 2758 strict human-fly orthologs and 2031 strict human-worm orthologs (1523 in common between these sets). We define the evolutionarily conserved set as those 1523 human proteins that have strict orthologs in both D. melanogaster and C. elegans.
Figure 16
Functions of putative orthologs across vertebrate and invertebrate genomes. Each slice lists the number and percentages (in parentheses) of “strict orthologs” between the human, fly, and worm genomes involved in a given category of molecular function. “Strict orthologs” are defined here as bi-directional BLAST best hits (180) such that each orthologous pair (i) has a BLASTP P-value of ≤10−10 (120), and (ii) has a more significant BLASTP score than any paralogs in either organism, i.e., there has likely been no duplication subsequent to speciation that might make the orthology ambiguous. This measure is quite strict and gives a lower bound on the number of orthologs. By these criteria, there are 2758 strict human-fly orthologs, and 2031 human-worm orthologs (1523 in common between these sets).
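Operationally, the criteria in the legend amount to mutual best hits that also beat every within-genome paralog. A minimal sketch under those criteria, assuming all-against-all BLASTP results have been collapsed to per-query hit lists sorted by significance (the data structures and names are ours):

```python
def strict_orthologs(a_to_b, b_to_a, a_to_a, b_to_b, max_pvalue=1e-10):
    """Return (a, b) pairs that are bidirectional best hits and more
    significant than any within-genome paralog hit of either partner.

    Each argument maps a query id to a list of (subject id, p_value)
    sorted by increasing p_value, from all-against-all BLASTP;
    self-hits are assumed removed from the within-genome lists.
    """
    pairs = []
    for a, hits in a_to_b.items():
        if not hits or hits[0][1] > max_pvalue:
            continue
        b, p_ab = hits[0]
        back = b_to_a.get(b, [])
        if not back or back[0][0] != a or back[0][1] > max_pvalue:
            continue  # not a bidirectional best hit
        best_paralog_a = a_to_a.get(a, [(None, 1.0)])[0][1]
        best_paralog_b = b_to_b.get(b, [(None, 1.0)])[0][1]
        # Criterion (ii): the cross-genome hit must beat all paralogs.
        if p_ab < best_paralog_a and back[0][1] < best_paralog_b:
            pairs.append((a, b))
    return pairs
```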
The distribution of the functions of the conserved protein set is shown in Fig. 16. Comparison with Fig. 15 shows that, not surprisingly, the set of conserved proteins is not distributed among molecular functions in the same way as the whole human protein set. Compared with the whole human set (Fig. 15), there are several categories that are overrepresented in the conserved set by a factor of ∼2 or more. The first category is nucleic acid enzymes, primarily the transcriptional machinery (notably DNA/RNA methyltransferases, DNA/RNA polymerases, helicases, DNA ligases, DNA- and RNA-processing factors, nucleases, and ribosomal proteins). The basic transcriptional and translational machinery is well known to have been conserved over evolution, from bacteria through to the most complex eukaryotes. Many ribonucleoproteins involved in RNA splicing also appear to be conserved among the animals. Other enzyme types are also overrepresented (transferases, oxidoreductases, ligases, lyases, and isomerases). Many of these enzymes are involved in intermediary metabolism. The only exception is the hydrolase category, which is not significantly overrepresented in the shared protein set. Proteases form the largest part of this category, and several large protease families have expanded in each of these three organisms after their divergence. The category of select regulatory molecules is also overrepresented in the conserved set. The major conserved families are small guanosine triphosphatases (GTPases) (especially the Ras-related superfamily, including ADP ribosylation factor) and cell cycle regulators (particularly the cullin family, cyclin C family, and several cell division protein kinases). The last two significantly overrepresented categories are protein transport and trafficking, and chaperones. The most conserved groups in these categories are proteins involved in coated vesicle-mediated transport, and chaperones involved in protein folding and heat-shock response [particularly the DNAJ family, and heat-shock protein 60 (HSP60), HSP70, and HSP90 families]. These observations provide only a conservative estimate of the protein families in the context of specific cellular processes that were likely derived from the last common ancestor of the human, fly, and worm. As stated before, this analysis does not provide a complete estimate of conservation across the three animal genomes, as paralogous duplication makes the determination of true orthologs difficult within the members of conserved protein families.
7.3 Differences between the human genome and other sequenced eukaryotic genomes
To explore the molecular building blocks of the vertebrate taxon, we have compared the human genome with the other sequenced eukaryotic genomes at three levels: molecular functions, protein families, and protein domains.
Molecular differences can be correlated with phenotypic differences to begin to reveal the developmental and cellular processes that are unique to the vertebrates. Tables 18 and 19 display a comparison among all sequenced eukaryotic genomes, over selected protein/domain families (defined by sequence similarity, e.g., the serine-threonine protein kinases) and superfamilies (defined by shared molecular function, which may include several sequence-related families, e.g., the cytokines). In these tables we have focused on (super) families that are either very large or that differ significantly in humans compared with the other sequenced eukaryote genomes. We have found that the most prominent human expansions are in proteins involved in (i) acquired immune functions; (ii) neural development, structure, and functions; (iii) intercellular and intracellular signaling pathways in development and homeostasis; (iv) hemostasis; and (v) apoptosis.
Table 18
Domain-based comparative analysis of proteins in H. sapiens (H), D. melanogaster (F),C. elegans (W), S. cerevisiae (Y), and A. thaliana (A). The predicted protein set of each of the above eukaryotic organisms was analyzed with Pfam version 5.5 using E value cutoffs of 0.001. The number of proteins containing the specified Pfam domains as well as the total number of domains (in parentheses) are shown in each column. Domains were categorized into cellular processes for presentation. Some domains (i.e., SH2) are listed in more than one cellular process. Results of the Pfam analysis may differ from results obtained based on human curation of protein families, owing to the limitations of large-scale automatic classifications. Representative examples of domains with reduced counts owing to the stringent E value cutoff used for this analysis are marked with a double asterisk (**). Examples include short divergent and predominantly alpha-helical domains, and certain classes of cysteine-rich zinc finger proteins.
Table 19
Number of proteins assigned to selected Panther families or subfamilies in H. sapiens (H), D. melanogaster (F), C. elegans (W), S. cerevisiae (Y), and A. thaliana (A).
Acquired immunity. One of the most striking differences between the human genome and the Drosophila or C. elegans genome is the appearance of genes involved in acquired immunity (Tables 18 and 19). This is expected, because the acquired immune response is a defense system that occurs only in vertebrates. We observe 22 class I and 22 class II major histocompatibility complex (MHC) antigen genes and 114 other immunoglobulin genes in the human genome. In addition, there are 59 genes in the cognate immunoglobulin receptor family. At the domain level, this is exemplified by an expansion and recruitment of the ancient immunoglobulin fold to constitute molecules such as the MHC, and of the integrin fold to form several of the cell adhesion molecules that mediate interactions between immune effector cells and the extracellular matrix. Vertebrate-specific proteins include the paracrine immune regulators, a family of secreted four-α-helical bundle proteins comprising the cytokines and chemokines. Some cytoplasmic components associated with cytokine receptor signal transduction are also poorly represented in the fly and worm. These include protein domains found in the signal transducers and activators of transcription (STATs), the suppressors of cytokine signaling (SOCS), and the protein inhibitors of activated STATs (PIAS). In contrast, many of the animal-specific protein domains that play a role in the innate immune response, such as the Toll receptors, do not appear to be significantly expanded in the human genome.
Neural development, structure, and function. In the human genome, as compared with the worm and fly genomes, there is a marked increase in the number of members of protein families that are involved in neural development. Examples include neurotrophic factors such as ependymin and nerve growth factor, and signaling molecules such as semaphorins, as well as proteins involved directly in neural structure and function, such as myelin proteins, voltage-gated ion channels, and synaptic proteins such as synaptotagmin. These observations correlate well with the known phenotypic differences between the nervous systems of these taxa, notably (i) the increase in the number and connectivity of neurons; (ii) the increase in the number of distinct neural cell types (as many as a thousand or more in human compared with a few hundred in fly and worm) (121); (iii) the increased length of individual axons; and (iv) the significant increase in glial cell number, especially the appearance of myelinating glial cells, which are electrically inert supporting cells differentiated from the same stem cells as neurons. A number of prominent protein expansions are involved in the processes of neural development. Of the extracellular domains that mediate cell adhesion, the connexin domain–containing proteins (122) exist only in humans. These proteins, which are not present in the Drosophila or C. elegans genomes, appear to provide the constitutive subunits of intercellular channels and the structural basis for electrical coupling. Pathway finding by axons and neuronal network formation is mediated through a subset of ephrins and their cognate receptor tyrosine kinases that act as positional labels to establish topographical projections (123). The probable biological role of the semaphorins (22 in human compared with 6 in the fly and 2 in the worm) and their receptors (neuropilins and plexins) is that of axonal guidance molecules (124). Signaling molecules such as neurotrophic factors and some cytokines have been shown to regulate neuronal cell survival, proliferation, and axon guidance (125). Notch receptors and ligands play important roles in glial cell fate determination and gliogenesis (126).
Other human expanded gene families play key roles directly in neural structure and function. One example is synaptotagmin (expanded more than twofold in humans relative to the invertebrates), originally found to regulate synaptic transmission by serving as a Ca2+ sensor (or receptor) during synaptic vesicle fusion and release (127). Of interest is the increased co-occurrence in humans of PDZ and SH3 domains in neuronal-specific adaptor molecules; examples include proteins that likely modulate channel activity at synaptic junctions (128). We also noted expansions in several ion-channel families (Table 19), including the EAG subfamily (related to cyclic nucleotide gated channels), the voltage-gated calcium/sodium channel family, the inward-rectifier potassium channel family, and the voltage-gated potassium channel, alpha subunit family. Voltage-gated sodium and potassium channels are involved in the generation of action potentials in neurons. Together with voltage-gated calcium channels, they also play a key role in coupling action potentials to neurotransmitter release, in the development of neurites, and in short-term memory. The recent observation of a calcium-regulated association between sodium channels and synaptotagmin may have consequences for the establishment and regulation of neuronal excitability (129).
Myelin basic protein and myelin-associated glycoprotein are major classes of protein components in both the central and peripheral nervous systems of vertebrates. Myelin P0 is a major component of peripheral myelin, and myelin proteolipid and myelin oligodendrocyte glycoprotein are found in the central nervous system. Mutations in any of these myelin proteins result in severe demyelination, a pathological condition in which the myelin is lost and nerve conduction is severely impaired (130). Humans have at least 10 genes belonging to four different families involved in myelin production (five myelin P0, three myelin proteolipid, myelin basic protein, and myelin-oligodendrocyte glycoprotein, or MOG), and possibly additional, more remotely related members of the MOG family. Flies have only a single myelin proteolipid, and worms have none at all.
Intercellular and intracellular signaling pathways in development and homeostasis. Many protein families that have expanded in humans relative to the invertebrates are involved in signaling processes, particularly in response to development and differentiation (Tables 18 and 19). They include secreted hormones and growth factors, receptors, intracellular signaling molecules, and transcription factors.
Developmental signaling molecules that are enriched in the human genome include growth factors such as wnt, transforming growth factor–β (TGF-β), fibroblast growth factor (FGF), nerve growth factor, platelet-derived growth factor (PDGF), and ephrins. These growth factors affect tissue differentiation and a wide range of cellular processes involving actin-cytoskeletal and nuclear regulation. The corresponding receptors of these developmental ligands are also expanded in humans. For example, our analysis suggests at least 8 human ephrin genes (2 in the fly, 4 in the worm) and 12 ephrin receptors (2 in the fly, 1 in the worm). In the wnt signaling pathway, we find 18 wnt family genes (6 in the fly, 5 in the worm) and 12 frizzled receptors (6 in the fly, 5 in the worm). The Groucho family of transcriptional corepressors downstream in the wnt pathway is even more markedly expanded, with 13 predicted members in humans (2 in the fly, 1 in the worm).
Extracellular adhesion molecules involved in signaling are expanded in the human genome (Tables 18 and 19). The interactions of several of these adhesion domains with extracellular matrix proteoglycans play a critical role in host defense, morphogenesis, and tissue repair (131). Consistent with the well-defined role of heparan sulfate proteoglycans in modulating these interactions (132), we observe an expansion of the heparan sulfate sulfotransferases in the human genome relative to worm and fly. These sulfotransferases modulate tissue differentiation (133). A similar expansion in humans is noted in structural proteins that constitute the actin-cytoskeletal architecture. Compared with the fly and worm, we observe an explosive expansion of the nebulin (35 domains per protein on average), aggrecan (12 domains per protein on average), and plectin (5 domains per protein on average) repeats in humans. These repeats are present in proteins involved in modulating the actin cytoskeleton, with predominant expression in neuronal, muscle, and vascular tissues.
Comparison across the five sequenced eukaryotic organisms revealed several expanded protein families and domains involved in cytoplasmic signal transduction (Table 18). In particular, signal transduction pathways playing roles in developmental regulation and acquired immunity were substantially enriched. There is a factor of 2 or greater expansion in humans in the Ras superfamily GTPases and the GTPase activator and GTP exchange factors associated with them. Although there are about the same number of tyrosine kinases in the human and C. elegans genomes, in humans there is an increase in the SH2, PTB, and ITAM domains involved in phosphotyrosine signal transduction. Further, there is a twofold expansion of phosphodiesterases in the human genome compared with either the worm or fly genomes.
The downstream effectors of the intracellular signaling molecules include the transcription factors that transduce developmental fates. Significant expansions are noted in the ligand-binding nuclear hormone receptor class of transcription factors compared with the fly genome, although not to the extent observed in the worm (Tables 18 and 19). Perhaps the most striking expansion in humans is in the C2H2 zinc finger transcription factors. Pfam detects a total of 4500 C2H2 zinc finger domains in 564 human proteins, compared with 771 in 234 fly proteins. This means that there has been a dramatic expansion not only in the number of C2H2 transcription factors, but also in the number of these DNA-binding motifs per transcription factor (8 on average in humans, 3.3 on average in the fly, and 2.3 on average in the worm). Furthermore, many of these transcription factors contain either the KRAB or SCAN domains, which are not found in the fly or worm genomes. These domains are involved in the oligomerization of transcription factors and increase the combinatorial partnering of these factors. In general, most of the transcription factor domains are shared among the three animal genomes, but the reassortment of these domains results in organism-specific transcription factor families. The domain combinations found in the human, fly, and worm include BTB with C2H2 in the fly and humans, and homeodomains alone or in combination with Pou and LIM domains in all of the animal genomes. In plants, however, a different set of transcription factors is expanded, namely the myb family and a unique set that includes VP1 and AP2 domain–containing proteins (134). The yeast genome has a paucity of transcription factors compared with the multicellular eukaryotes, and its repertoire is limited to the expansion of the yeast-specific C6 transcription factor family involved in metabolic regulation.
While we have illustrated expansions in a subset of signal transduction molecules in the human genome compared with the other eukaryotic genomes, it should be noted that most of the protein domains are highly conserved. An interesting observation is that worms and humans have approximately the same number of both tyrosine kinases and serine/threonine kinases (Table 19). It is important to note, however, that these are merely counts of the catalytic domain; the proteins that contain these domains also display a wide repertoire of interaction domains with significant combinatorial diversity.
Hemostasis. Hemostasis is regulated primarily by plasma proteases of the coagulation pathway and by the interactions that occur between the vascular endothelium and platelets. Consistent with known anatomical and physiological differences between vertebrates and invertebrates, extracellular adhesion domains that constitute proteins integral to hemostasis are expanded in the human relative to the fly and worm (Tables 18 and 19). We note the evolution of domains such as FIMAC, FN1, FN2, and C1q that mediate surface interactions between hematopoietic cells and the vascular matrix. In addition, there has been extensive recruitment of more-ancient animal-specific domains such as VWA, VWC, VWD, kringle, and FN3 into multidomain proteins that are involved in hemostatic regulation. Although we do not find a large expansion in the total number of serine proteases, this enzymatic domain has been specifically recruited into several of these multidomain proteins for proteolytic regulation in the vascular compartment. These are represented in plasma proteins that belong to the kinin and complement pathways. There is a significant expansion in two families of matrix metalloproteases: ADAM (a disintegrin and metalloprotease) and MMPs (matrix metalloproteases) (Table 19). Proteolysis of extracellular matrix (ECM) proteins is critical for tissue development and for tissue degradation in diseases such as cancer, arthritis, Alzheimer's disease, and a variety of inflammatory conditions (135, 136). ADAMs are a family of integral membrane proteins with a pivotal role in fibrinogenolysis and in modulating interactions between hematopoietic components and the vascular matrix. These proteins have been shown to cleave matrix proteins, and even signaling molecules: ADAM-17 converts tumor necrosis factor–α, and ADAM-10 has been implicated in the Notch signaling pathway (135). We have identified 19 members of the matrix metalloprotease family, and a total of 51 members of the ADAM and ADAM-TS families.
Apoptosis. Evolutionary conservation of some of the apoptotic pathway components across eukarya is consistent with its central role in developmental regulation and as a response to pathogens and stress signals. The signal transduction pathways involved in programmed cell death, or apoptosis, are mediated by interactions between well-characterized domains that include extracellular domains, adaptor (protein-protein interaction) domains, and those found in effector and regulatory enzymes (137). We enumerated the protein counts of central adaptor and effector enzyme domains that are found only in the apoptotic pathways to provide an estimate of divergence across eukarya and relative expansion in the human genome when compared with the fly and worm (Table 18). Adaptor domains found in proteins restricted only to apoptotic regulation such as the DED domains are vertebrate-specific, whereas others like BIR, CARD, and Bcl2 are represented in the fly and worm (although the number of Bcl2 family members in humans is significantly expanded). Although plants and yeast lack the caspases, caspase-like molecules, namely the para- and meta-caspases, have been reported in these organisms (138). Compared with other animal genomes, the human genome shows an expansion in the adaptor and effector domain–containing proteins involved in apoptosis, as well as in the proteases involved in the cascade such as the caspase and calpain families.
Expansions of other protein families. Metabolic enzymes. There are fewer cytochrome P450 genes in humans than in either the fly or worm. Lipoxygenases (six in humans), on the other hand, appear to be specific to the vertebrates and plants, whereas the lipoxygenase-activating proteins (four in humans) may be vertebrate-specific. Lipoxygenases are involved in arachidonic acid metabolism, and they and their activators have been implicated in diverse human pathology ranging from allergic responses to cancers. One of the most surprising human expansions, however, is in the number of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) genes (46 in humans, 3 in the fly, and 4 in the worm). There is, however, evidence for many retrotransposed GAPDH pseudogenes (139), which may account for this apparent expansion. Still, it is interesting that GAPDH, long known as a conserved enzyme of basic metabolism found across all phyla from bacteria to humans, has recently been shown to have other functions: it has a second catalytic activity as a uracil DNA glycosylase (140), functions as a cell cycle regulator (141), and has even been implicated in apoptosis (142).
Translation. Another striking set of human expansions has occurred in certain families involved in the translational machinery. We identified 28 different ribosomal subunits that each have at least 10 copies in the genome; on average, for all ribosomal proteins there is about an 8- to 10-fold expansion in the number of genes relative to either the worm or fly. Retrotransposed pseudogenes may account for many of these expansions [see the discussion above and (143)]. Recent evidence suggests that a number of ribosomal proteins have secondary functions independent of their involvement in protein biosynthesis; for example, L13a and the related L7 subunits (36 copies in humans) have been shown to induce apoptosis (144).
There is also a four- to fivefold expansion in the elongation factor 1-alpha family (eEF1A; 56 human genes). Many of these expansions likely represent intronless paralogs that have presumably arisen from retrotransposition, and again there is evidence that many of these may be pseudogenes (145). However, a second form (eEF1A2) of this factor has been identified with tissue-specific expression in skeletal muscle and a complementary expression pattern to the ubiquitously expressed eEF1A (146).
Ribonucleoproteins. Alternative splicing results in multiple transcripts from a single gene, and can therefore generate additional diversity in an organism's protein complement. We have identified 269 genes for ribonucleoproteins. This represents over 2.5 times the number of ribonucleoprotein genes in the worm, two times that of the fly, and about the same as the 265 identified in the Arabidopsis genome. Whether the diversity of ribonucleoprotein genes in humans contributes to gene regulation at either the splicing or translational level is unknown.
Posttranslational modifications. In this set of processes, the most prominent expansion is in the transglutaminases, calcium-dependent enzymes that catalyze the cross-linking of proteins in cellular processes such as hemostasis and apoptosis (147). The vitamin K–dependent gamma carboxylase gene product acts on the GLA domain (missing in the fly and worm) found in coagulation factors, osteocalcin, and matrix GLA protein (148). Tyrosylprotein sulfotransferases participate in the posttranslational modification of proteins involved in inflammation and hemostasis, including coagulation factors and chemokine receptors (149). Although there is no significant numerical increase in the counts for domains involved in nuclear protein modification, there are a number of domain arrangements in the predicted human proteins that are not found in the other currently sequenced genomes. These include the tandem association of two histone deacetylase domains in HD6 with a ubiquitin finger domain, a feature lacking in the fly genome. An additional example is the co-occurrence of the nuclear regulatory enzyme PARP (poly-ADP ribosyl transferase) domain fused to protein-interaction domains (BRCT and VWA) in humans.
Concluding remarks. There are several possible explanations for the differences in phenotypic complexity observed in humans when compared with the fly and worm. Some of these relate to the prominent differences in immune, hemostatic, neuronal, vascular, and cytoskeletal complexity. The finding that the human genome contains fewer genes than previously predicted might be compensated for by combinatorial diversity generated at the levels of protein architecture, transcriptional and translational control, posttranslational modification of proteins, and posttranscriptional regulation. Extensive domain shuffling to increase or alter combinatorial diversity can provide an exponential increase in the ability to mediate protein-protein interactions without dramatically increasing the absolute size of the protein complement (150). Evolution of apparently new protein domains (from the perspective of sequence analysis) and increased regulatory complexity through domain accretion, both quantitative (more domains per protein) and qualitative (recruitment of novel domains alongside preexisting ones), are two features that we observe in humans. Perhaps the best illustration of this trend is the C2H2 zinc finger–containing transcription factors, where we see an expansion in the number of domains per protein, together with vertebrate-specific domains such as KRAB and SCAN. Recent reports on the prominent use of internal ribosomal entry sites in the human genome to regulate translation of specific classes of proteins suggest that this is an area needing further research to identify the full extent of the process in the human genome (151). At the posttranslational level, although we provide examples of expansions of some protein families involved in these modifications, further experimental evidence is required to evaluate whether this is correlated with increased complexity in protein processing. Posttranscriptional processing and the extent of isoform generation in the human remain to be cataloged in their entirety. Given the conserved nature of the spliceosomal machinery, further analysis will be required to dissect regulation at this level.
8 Conclusions
8.1 The whole-genome sequencing approach versus BAC by BAC
Experience in applying the whole-genome shotgun sequencing approach to a diverse group of organisms with a wide range of genome sizes and repeat content allows us to assess its strengths and weaknesses. With the success of the method for a large number of microbial genomes, Drosophila, and now the human, there can be no doubt concerning the utility of this method. The large number of microbial genomes that have been sequenced by this method (15, 80, 152) demonstrates that megabase-sized genomes can be sequenced efficiently without any input other than the de novo mate-paired sequences. With more complex genomes like those of Drosophila or human, map information, in the form of well-ordered markers, has been critical for long-range ordering of scaffolds. For joining scaffolds into chromosomes, the quality of the map (in terms of the order of the markers) is more important than the number of markers per se. Although this mapping could have been performed concurrently with sequencing, the prior existence of mapping data was beneficial. During the sequencing of the A. thaliana genome, sequencing of individual BAC clones permitted extension of the sequence well into centromeric regions and allowed high-quality resolution of complex repeat regions. Likewise, in Drosophila, the BAC physical map was most useful in regions near the highly repetitive centromeres and telomeres. WGA (whole-genome assembly) has been found to deliver excellent-quality reconstructions of the unique regions of the genome. As the genome size, and more importantly the repetitive content, increases, the WGA approach delivers less of the repetitive sequence.
The cost and overall efficiency of clone-by-clone approaches makes them difficult to justify as a stand-alone strategy for future large-scale genome-sequencing projects. Specific applications of BAC-based or other clone mapping and sequencing strategies to resolve ambiguities in sequence assembly that cannot be efficiently resolved with computational approaches alone are clearly worth exploring. Hybrid approaches to whole-genome sequencing will only work if there is sufficient coverage in both the whole-genome shotgun phase and the BAC clone sequencing phase. Our experience with human genome assembly suggests that this will require at least 3× coverage of both whole-genome and BAC shotgun sequence data.
8.2 The low gene number in humans
We have sequenced and assembled ∼95% of the euchromatic sequence of H. sapiens and used a new automated gene prediction method to produce a preliminary catalog of the human genes. This has provided a major surprise: We have found far fewer genes (26,000 to 38,000) than the earlier molecular predictions (50,000 to over 140,000). Whatever the reasons for this current disparity, only detailed annotation, comparative genomics (particularly using the Mus musculus genome), and careful molecular dissection of complex phenotypes will clarify this critical issue of the basic “parts list” of our genome. Certainly, the analysis is still incomplete and considerable refinement will occur in the years to come as the precise structure of each transcription unit is evaluated. A good place to start is to determine why the gene estimates derived from EST data are so discordant with our predictions. It is likely that the following contribute to an inflated gene number derived from ESTs: the variable lengths of 3′- and 5′-untranslated leaders and trailers; the little-understood vagaries of RNA processing that often leave intronic regions in an unspliced condition; the finding that nearly 40% of human genes are alternatively spliced (153); and finally, the unsolved technical problems in EST library construction where contamination from heterogeneous nuclear RNA and genomic DNA are not uncommon. Of course, it is possible that there are genes that remain unpredicted owing to the absence of EST or protein data to support them, although our use of mouse genome data for predicting genes should limit this number. As was true at the beginning of genome sequencing, ultimately it will be necessary to measure mRNA in specific cell types to demonstrate the presence of a gene.
J. B. S. Haldane speculated in 1937 that a population of organisms might have to pay a price for the number of genes it can possibly carry. He theorized that when the number of genes becomes too large, each zygote carries so many new deleterious mutations that the population simply cannot maintain itself. On the basis of this premise, and of available mutation rates and x-ray–induced mutations at specific loci, Muller calculated in 1967 (154) that the mammalian genome would contain a maximum of not much more than 30,000 genes (155). An estimate of 30,000 gene loci for humans was also arrived at by Crow and Kimura (156). Muller's estimate for D. melanogaster was 10,000 genes, compared with the 13,000 derived by annotation of the fly genome (26, 27). These arguments for a theoretical maximum gene number were based on simplified ideas of genetic load—that all genes have a certain low rate of mutation to a deleterious state. However, it is clear that many mouse, fly, worm, and yeast knockout mutations lead to almost no discernible phenotypic perturbation.
The modest number of human genes means that we must look elsewhere for the mechanisms that generate the complexities inherent in human development and the sophisticated signaling systems that maintain homeostasis. There are a large number of ways in which the functions of individual genes and gene products are regulated. The degree of “openness” of chromatin structure and hence transcriptional activity is regulated by protein complexes that involve histone and DNA enzymatic modifications. We enumerate many of the proteins that are likely involved in nuclear regulation in Table 19. The location, timing, and quantity of transcription are intimately linked to nuclear signal transduction events as well as by the tissue-specific expression of many of these proteins. Equally important are regulatory DNA elements that include insulators, repeats, and endogenous viruses (157); methylation of CpG islands in imprinting (158); and promoter-enhancer and intronic regions that modulate transcription. The spliceosomal machinery consists of multisubunit proteins (Table 19) as well as structural and catalytic RNA elements (159) that regulate transcript structure through alternative start and termination sites and splicing. Hence, there is a need to study different classes of RNA molecules (160) such as small nucleolar RNAs, antisense riboregulator RNA, RNA involved in X-dosage compensation, and other structural RNAs to appreciate their precise role in regulating gene expression. The phenomenon of RNA editing in which coding changes occur directly at the level of mRNA is of clinical and biological relevance (161). Finally, examples of translational control include internal ribosomal entry sites that are found in proteins involved in cell cycle regulation and apoptosis (162). At the protein level, minor alterations in the nature of protein-protein interactions, protein modifications, and localization can have dramatic effects on cellular physiology (163). This dynamic system therefore has many ways to modulate activity, which suggests that definition of complex systems by analysis of single genes is unlikely to be entirely successful.
In situ studies have shown that the human genome is asymmetrically populated with G+C content, CpG islands, and genes (68). However, the genes are not distributed quite as unequally as had been predicted (Table 9) (69). The most G+C-rich fraction of the genome, the H3 isochores, constitutes more of the genome than previously thought (about 9%) and is the most gene-dense fraction, but contains only 25% of the genes, rather than the predicted ∼40%. The low G+C L isochores make up 65% of the genome and contain 48% of the genes. This inhomogeneity, the net result of millions of years of mammalian gene duplication, has been described as the “desertification” of the vertebrate genome (71). Why are there clustered regions of high and low gene density, and are these accidents of history or driven by selection and evolution? If these deserts are dispensable, it ought to be possible to find mammalian genomes that are far smaller in size than the human genome. Indeed, many species of bats have genome sizes that are much smaller than that of humans; for example, Miniopterus, a species of Italian bat, has a genome size that is only 50% that of humans (164). Similarly, Muntiacus, a species of Asian barking deer, has a genome size that is ∼70% that of humans.
8.3 Human DNA sequence variation and its distribution across the genome
This is the first eukaryotic genome in which a nearly uniform ascertainment of polymorphism has been completed. Although we have identified and mapped more than 3 million SNPs, this by no means implies that the task of finding and cataloging SNPs is complete. These represent only a fraction of the SNPs present in the human population as a whole. Nevertheless, this first glimpse at genome-wide variation has revealed strong inhomogeneities in the distribution of SNPs across the genome. Polymorphism in DNA carries with it a snapshot of the past operation of population genetic forces, including mutation, migration, selection, and genetic drift. The availability of a dense array of SNPs will allow questions related to each of these factors to be addressed on a genome-wide basis. SNP studies can establish the range of haplotypes present in subjects of different ethnogeographic origins, providing insights into population history and migration patterns. Although such studies have suggested that modern human lineages derive from Africa, many important questions regarding human origins remain unanswered, and more analyses using detailed SNP maps will be needed to settle these controversies. In addition to providing evidence for population expansions, migration, and admixture, SNPs can serve as markers for the extent of evolutionary constraint acting on particular genes. The correlation between patterns of intraspecies and interspecies genetic variation may prove to be especially informative to identify sites of reduced genetic diversity that may mark loci where sequence variations are not tolerated.
The remarkable heterogeneity in SNP density implies that there are a variety of forces acting on polymorphism—sparse regions may have lower SNP density because the mutation rate is lower, because most of those regions have a lower fraction of mutations that are tolerated, or because recent strong selection in favor of a newly arisen allele “swept” the linked variation out of the population (165). The effect of random genetic drift also varies widely across the genome. The nonrecombining portion of the Y chromosome faces the strongest pressure from random drift because there are roughly one-quarter as many Y chromosomes in the population as there are autosomal chromosomes, and the level of polymorphism on the Y is correspondingly less. Similarly, the X chromosome has a smaller effective population size than the autosomes, and its nucleotide diversity is also reduced. But even across a single autosome, the effective population size can vary because the density of deleterious mutations may vary. Regions of high density of deleterious mutations will see a greater rate of elimination by selection, and the effective population size will be smaller (166). As a result, the density of even completely neutral SNPs will be lower in such regions. There is a large literature on the association between SNP density and local recombination rates in Drosophila, and it remains an important task to assess the strength of this association in the human genome, because of its impact on the design of local SNP densities for disease-association studies. It also remains an important task to validate SNPs on a genomic scale in order to assess the degree of heterogeneity among geographic and ethnic populations.
8.4 Genome complexity
We will soon be in a position to move away from the cataloging of individual components of the system, and beyond the simplistic notions of “this binds to that, which then docks on this, and then the complex moves there. …” (167) to the exciting area of network perturbations, nonlinear responses and thresholds, and their pivotal role in human diseases.
The enumeration of other “parts lists” reveals that in organisms with complex nervous systems, neither gene number, neuron number, nor number of cell types correlates in any meaningful manner with even simplistic measures of structural or behavioral complexity. Nor would they be expected to; this is the realm of nonlinearities and epigenesis (168). The 520 million neurons of the common octopus exceed the neuronal number in the brain of a mouse by an order of magnitude. It is apparent from a comparison of genomic data on the mouse and human, and from comparative mammalian neuroanatomy (169), that the morphological and behavioral diversity found in mammals is underpinned by a similar gene repertoire and similar neuroanatomies. For example, when one compares a pygmy marmoset (which is only 4 inches tall and weighs about 6 ounces) to a chimpanzee, the brain volume of this minute primate is found to be only about 1.5 cm³, two orders of magnitude less than that of a chimp and three orders of magnitude less than that of humans. Yet the neuroanatomies of all three brains are strikingly similar, and the behavioral characteristics of the pygmy marmoset are little different from those of chimpanzees. Between humans and chimpanzees, the gene number, gene structures and functions, chromosomal and genomic organizations, and cell types and neuroanatomies are almost indistinguishable, yet the developmental modifications that predisposed human lineages to cortical expansion and development of the larynx, giving rise to language, culminated in a massive singularity that by even the simplest of criteria made humans more complex in a behavioral sense.
Simple examination of the number of neurons, cell types, or genes or of the genome size does not alone account for the differences in complexity that we observe. Rather, it is the interactions within and among these sets that result in such great variation. In addition, it is possible that there are “special cases” of regulatory gene networks that have a disproportionate effect on the overall system. We have presented several examples of “regulatory genes” that are significantly increased in the human genome compared with the fly and worm. These include extracellular ligands and their cognate receptors (e.g., wnt, frizzled, TGF-β, ephrins, and connexins), as well as nuclear regulators (e.g., the KRAB and homeodomain transcription factor families), where a few proteins control broad developmental processes. The answers to these “complexities” perhaps lie in these expanded gene families and differences in the regulatory control of ancient genes, proteins, pathways, and cells.
8.5 Beyond single components
While few would disagree with the intuitive conclusion that Einstein's brain was more complex than that of Drosophila, closer comparisons such as whether the set of predicted human proteins is more complex than the protein set of Drosophila, and if so, to what degree, are not straightforward, since protein, protein domain, or protein-protein interaction measures do not capture context-dependent interactions that underpin the dynamics underlying phenotype.
Currently, there are more than 30 different mathematical descriptions of complexity (170). However, we have yet to understand the mathematical dependency relating the number of genes with organism complexity. One pragmatic approach to the analysis of biological systems, which are composed of nonidentical elements (proteins, protein complexes, interacting cell types, and interacting neuronal populations), is through graph theory (171). The elements of the system can be represented by the vertices of complex topographies, with the edges representing the interactions between them. Examination of large networks reveals that they can self-organize, but more important, they can be particularly robust. This robustness is not due to redundancy, but is a property of inhomogeneously wired networks. The error tolerance of such networks comes with a price; they are vulnerable to the selection or removal of a few nodes that contribute disproportionately to network stability. Gene knockouts provide an illustration. Some knockouts may have minor effects, whereas others have catastrophic effects on the system. In the case of vimentin, a supposedly critical component of the cytoplasmic intermediate filament network of mammals, the knockout of the gene in mice reveals them to be reproductively normal, with no obvious phenotypic effects (172), and yet the usually conspicuous vimentin network is completely absent. On the other hand, ∼30% of knockouts in Drosophila and mice correspond to critical nodes whose reduction in gene product, or total elimination, causes the network to crash most of the time, although even in some of these cases, phenotypic normalcy ensues, given the appropriate genetic background. Thus, there are no “good” genes or “bad” genes, but only networks that exist at various levels and at different connectivities, and at different states of sensitivity to perturbation. Sophisticated mathematical analysis needs to be constantly evaluated against hard biological data sets that specifically address network dynamics. Nowhere is this more critical than in attempts to come to grips with “complexity,” particularly because deconvoluting and correcting complex networks that have undergone perturbation, and have resulted in human diseases, is the most significant challenge now facing us.
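The robustness-versus-vulnerability property of inhomogeneously wired networks described above is easy to demonstrate computationally. The following sketch, a simplified illustration in Python using the networkx library (it is not part of the original analysis, and the node counts are arbitrary choices), grows a scale-free network by preferential attachment and compares how well its largest connected component survives random node failure versus the targeted removal of hubs.

```python
# Sketch: error tolerance vs. targeted attack in an inhomogeneously
# wired (scale-free) network. Assumes the networkx library.
import random
import networkx as nx

def largest_component_fraction(graph):
    """Fraction of nodes in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    biggest = max(nx.connected_components(graph), key=len)
    return len(biggest) / graph.number_of_nodes()

def remove_nodes(graph, nodes):
    g = graph.copy()
    g.remove_nodes_from(nodes)
    return g

n = 1000
g = nx.barabasi_albert_graph(n, m=2)   # preferential attachment -> a few hubs

# Random failure: remove 5% of nodes chosen uniformly at random.
random_victims = random.sample(list(g.nodes), n // 20)
# Targeted attack: remove the 5% most-connected nodes (the hubs).
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[: n // 20]
hub_victims = [node for node, _ in hubs]

print("after random removal :", largest_component_fraction(remove_nodes(g, random_victims)))
print("after targeted removal:", largest_component_fraction(remove_nodes(g, hub_victims)))
```

Run repeatedly, the random removals leave the giant component largely intact, while removing the same number of hubs fragments it far more severely, mirroring the vimentin-versus-critical-node contrast drawn in the text.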
It has been predicted for the last 15 years that complete sequencing of the human genome would open up new strategies for human biological research and would have a major impact on medicine, and through medicine and public health, on society. Effects on biomedical research are already being felt. This assembly of the human genome sequence is but a first, hesitant step on a long and exciting journey toward understanding the role of the genome in human biology. It has been possible only because of innovations in instrumentation and software that have allowed automation of almost every step of the process from DNA preparation to annotation. The next steps are clear: We must define the complexity that ensues when this relatively modest set of about 30,000 genes is expressed. The sequence provides the framework upon which all the genetics, biochemistry, physiology, and ultimately phenotype depend. It provides the boundaries for scientific inquiry. The sequence is only the first level of understanding of the genome. All genes and their control elements must be identified; their functions, in concert as well as in isolation, defined; their sequence variation worldwide described; and the relation between genome variation and specific phenotypic characteristics determined. Now we know what we have to explain.
Another paramount challenge awaits: public discussion of this information and its potential for improvement of personal health. Many diverse sources of data have shown that any two individuals are more than 99.9% identical in sequence, which means that all the glorious differences among individuals in our species that can be attributed to genes fall in a mere 0.1% of the sequence. There are two fallacies to be avoided: determinism, the idea that all characteristics of the person are “hard-wired” by the genome; and reductionism, the view that with complete knowledge of the human genome sequence, it is only a matter of time before our understanding of gene functions and interactions will provide a complete causal description of human variability. The real challenge of human biology, beyond the task of finding out how genes orchestrate the construction and maintenance of the miraculous mechanism of our bodies, will lie ahead as we seek to explain how our minds have come to organize thoughts sufficiently well to investigate our own existence.
* To whom correspondence should be addressed. E-mail: humangenome@celera.com
|
Collision of Asteroids: Bane or Boon
Space is full of surprises. It may seem blank and vast, but many secrets are hidden beneath those layers. For years, I have been gazing at the sky, hypnotized by the twinkling stars, shooting meteorites, the seldom-visible planets like Jupiter and Saturn, and totally mesmerized by the punctuality of the Sun and Moon.
Often, a thought reared its head: what would happen if there were no Sun or Moon? What if Earth were alone in the entire Universe? Or are we actually the only living specimens in the whole wide expanse of Space?
I had tried to read more about these little-known facts. And as I continued to search and research, many mysteries unfolded in front of my eyes. These days I am watching the documentary series How the Universe Works, and it has added many more questions and answers to my kitty.
As of now, everyone is of the view that Earth is the only habitable planet in the space explored so far. However, it is certainly not alone in its making. For starters, there are a number of galaxies and solar systems stretched across the Universe.
Each solar system is born with a central star like our Sun. Dust whirls around the star due to its heat, and the particles often collide with each other. Due to high temperatures and centrifugal forces, the particles join together to form rocks, which gain mass as more and more rocks fuse together to form planets.
In our nascent Solar System, these rocky bodies played an important part and eventually formed Mercury, Venus, Earth, and Mars. However, in the space beyond these inner planets, the giant gaseous Jupiter played havoc with its huge gravity. As a result, all the remaining dust and rocks were left to circle in a limited space between Mars and Jupiter, called the Asteroid Belt.
In a way, asteroids can be called the leftovers of our Solar System, or even termed planets that could not be. However, as they date back to its earliest period, their composition can provide rich clues about the current planets and our understanding of the Universe.
There is a huge asteroid in this famous belt called Ceres; it was one of the earliest asteroids to be discovered and is often referred to as a minor planet or planetoid. The interesting fact is that asteroids are airless worlds, covered with dust. They keep moving, or rather flowing, through space, and often get a push from the gravity of the nearest planet and proceed on a downward, dangerous journey towards the Sun.
The odyssey of an asteroid is dangerous not only for itself but for the nearby planets as well, as a collision can create huge craters, send out shock waves, and wipe out entire civilizations or life in toto. It is believed that the dinosaurs were made extinct by one such asteroid mishap. And yet it was asteroids that had initially brought water and some of the nutrients/raw materials for the creation of life to Earth.
So, we can say that asteroids are both bane and boon, or rather a pristine lab for experiments carried out on living and nonliving planets.
We are certainly smarter than the dinosaurs, though: we have installed huge telescopes and radars to detect any asteroid journeying towards Earth, and have also developed mechanisms to divert its path and prevent a collision.
Strangely, the craters we see on the Moon were most certainly made by asteroids, and yet asteroids have every reason to be planets themselves. Their world is rich with minerals, and Americans are thinking about setting up space projects to mine these asteroids.
Whether that will be feasible or not, only time will tell. But one thing is certain: every time I now see a shooting star, I feel a bit touched by the possibility of that meteor having been a full-fledged planet itself; a potential story gone wrong somewhere, at some point in time and space. Till then the odyssey continues, and the debate over whether asteroids are beneficial harbingers of life or cruel eradicators will continue…
|
Envelope generator
Sound synthesis techniques often employ an envelope generator, contour generator or transient generator that controls some parameters of a signal or control voltage at any point in its duration. When it controls a VCA (voltage-controlled amplifier), these together form an envelope shaper or loudness contour.[1][2]
The amplitude over time of an ADSR envelope. Only the positive half of the signal is shown.
ADSR envelope generators can be used for various functions. The most important use is to give timbre to a sound. Timbre is what makes one instrument sound distinct from another, even when playing the same note and at the same volume.
Most often the envelope generator is an ADSR (Attack Decay Sustain Release), which may be applied to overall amplitude, frequency, or filter. It is usually triggered by a gate signal from the keyboard.[3]
The contour of an ADSR envelope is specified using four parameters:
Attack time
The time taken for the level to rise from zero to its peak, beginning when the key is pressed.
Decay time
The time taken for the subsequent run-down from the attack peak to the designated sustain level.
Sustain level
The level during the main sequence of the sound's duration, until the key is released.
Release time
The time taken for the level to fall from the sustain level back to zero after the key is released.
A common variation of the ADSR on some synthesizers, such as the Korg MS-20, was ADSHR (attack, decay, sustain, hold, release). By adding a "hold" parameter, the system allowed notes to be held at the sustain level for a fixed length of time before decaying. The General Instrument AY-3-8910 IC included a hold time parameter only; the sustain level was not programmable. Another common variation in the same vein is the AHDSR (attack, hold, decay, sustain, release) envelope, in which the "hold" parameter controls how long the envelope stays at full volume before entering the decay phase.
Certain synthesizers also allow for a delay parameter before the attack. Modern synthesizers like the DSI Prophet 8 have DADSR (delay, attack, decay, sustain, release) envelopes. The delay setting determines the length of silence between hitting a note and the attack.
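To make the four stages concrete, here is a minimal sketch of a linear ADSR envelope in Python. It is an illustration only, not the behavior of any particular synthesizer: real envelope generators typically use exponential rather than linear segments, and all parameter names and values below are invented for the example.

```python
# Minimal sketch of a linear ADSR envelope, sampled at a fixed rate.
def adsr(attack, decay, sustain, release, held, sample_rate=44100):
    """Yield amplitude samples for a note held for `held` seconds.

    attack, decay, release are times in seconds; sustain is a level in [0, 1].
    """
    def samples(t):
        return int(t * sample_rate)

    # Attack: ramp from 0 up to the peak (1.0).
    for i in range(samples(attack)):
        yield i / max(1, samples(attack))
    # Decay: ramp from the peak down to the sustain level.
    for i in range(samples(decay)):
        yield 1.0 + (sustain - 1.0) * i / max(1, samples(decay))
    # Sustain: hold the level until the key is released.
    for _ in range(samples(max(0.0, held - attack - decay))):
        yield sustain
    # Release: ramp from the sustain level back down to 0.
    for i in range(samples(release)):
        yield sustain * (1.0 - i / max(1, samples(release)))

# Example: a 0.8-second note with a snappy attack and a slow release.
envelope = list(adsr(attack=0.01, decay=0.1, sustain=0.7, release=0.5, held=0.8))
```

An AHDSR or DADSR variant would simply insert an extra constant-level or silent loop between the corresponding stages.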
For shorter envelopes at higher pitch, as happens in acoustic instruments, a master CV is taken from the same voltage as used for VCO pitch.[4]
This page uses Creative Commons Licensed content from Wikipedia:Synthesizer (view authors).
1. ^ Synthesizers for musicians by R A Penfold, PC Publishing, 1989, ISBN 1-870775-01-5, p.21
2. ^ Beginning Synthesizer by Helen Casabona, David Frederick, Tom Darter, Alfred Publishing Company Inc, 1986, ISBN 0882843532
3. ^ Synthesizers.com Q109 Envelope Generator
4. ^ Description of the Serge Extended ADSR Envelope Generator
Further reading
• The Complete Guide to Synthesizers by Devarahi, Prentice Hall, 1982, ISBN 0-13-160630-1, pages 74–91
|
Snacks to Help Lower Cholesterol
Fun fact! Did you know that Bean Dip is a recommended snack for heart health? If you have high cholesterol, it, along with oats, nuts, and citrus fruits, is among the generally recommended foods. [Now, before you change your diet in any way, we of course strongly suggest you see your doctor, as no article or general advice makes up for YOUR doctor’s recommendations for your health.]
What is Cholesterol?
According to one source, “High cholesterol is considered the prime risk factor under human control as far as reducing the chances of cardiovascular disease, stroke, heart attack, and coronary heart disease. Cholesterol comes in two forms—HDL (or good) cholesterol and LDL (or bad) cholesterol. As it travels through the bloodstream this waxy substance made up of triglycerides is usually safely eliminated from the body. However, when levels of LDL (low density lipoproteins and triglycerides) become too high, the risk of heart disease becomes higher as well.”
See Your Doctor!
Therefore, doctors generally recommend certain foods, exercise, and medications [this combination depends on your specific diagnosis, so again SEE YOUR DOCTOR] to combat LDL buildup.
Bean Dip
One source writes, “Beans may have a gassy reputation, but these legumes are truly one of the most versatile foods on the planet. First off beans make a great snack for those trying to shed a few pounds because they contain loads of soluble fiber, which literally slows the digestive process and satiates appetite for much longer between meals. So whip up a bean dip when you’re watching the game or a bean salad the next time you have friends over for a BBQ. You can combine an array of beans (i.e., black beans, kidney beans, navy beans, chickpeas, lentils, and more).” Or save all that trouble and join us for the OG Bean Dip at one of five locations in the Valley!
|
Friday Thought #90 Romania’s dark history
To finish off our Romanian trip report, I must mention the political history of the country, as it is one of the most extraordinary stories to emerge from a country so close to home, and in the very recent past.
Nicolae Ceausescu was Romania’s communist president from 1967 to 1989, and despite seemingly starting out as exactly what the country needed and wanted, things soon turned sour and Ceausescu became increasingly brutal and repressive. He maintained tight controls over the media, sanctioned a brutal secret police force and made decisions which resulted in extreme shortages of fuel, energy, medicines, and a nationwide famine. Finally, after years of growing unrest, unlawful killings and political demonstrations, Ceausescu’s communist government was overthrown, and following a dramatic helicopter escape attempt, Ceausescu and his wife Elena were subjected to a brief show trial and sentenced to death by firing squad.
We visited these sites in Bucharest and in Târgoviște, an hour north of the capital, expecting them to be busy and full of tourists with cameras, like any European capital, especially one with such a fascinating history. But what we discovered was quite the opposite: Bucharest was like a ghost town. We were seemingly the only tourists, and the only people paying any attention to these historic landmarks. We got the distinct impression that Romanians don’t want to commemorate the revolution or celebrate their freedom; they just want to forget that this ever happened.
It took us a long time to even find the Ceausescu Museum in Târgoviște, given that it had no signs, wasn’t marked, and had the incorrect address on its website. This crumbling and unremarkable building is where the Ceausescus were held, tried and executed. Eerie doesn’t even come close. You can sit in the chair where they were sentenced, see where they spent their final days, and touch the bullet holes which remain in the wall where they fell. It should signal the end of repression for Romania, the start of a new era, yet the guest book indicated that we were the only visitors in the last 10 days.
Another stark reminder of the Ceausescus’ rule, and one which perhaps the Romanians would like to forget, is the Palace of the Parliament in Bucharest. Ceausescu built this in the 1980s to try to replicate the regime in North Korea. In order to build this narcissistic monstrosity, a hospital was demolished along with several monasteries and around 37 old factories and workshops, and 40,000 people were kicked out of their homes. The cost of building it was estimated at around €3 billion of public money. Today it is the fourth biggest building in the world, and the cost of heating and electric lighting alone exceeds €5.5 million per year. Despite housing the entire Romanian Parliament and several museums, approximately 70% of the building remains empty. Just what Romania needs: the country still has one of the lowest net average monthly wages in the EU, at just €540.
Romania was a truly fascinating place, and I am pleased that through visiting I was able to learn so much more about the dark history of this country. If you ever find yourself in Eastern Europe I thoroughly recommend a trip.
A surprising lack of pride on a national monument.
The bullet holes remain, as do the original lines drawn where the Ceausescus fell. Eerie doesn’t even come close.
|
As a small kid, I was told that fish don't feel any pain when they bite into a fishing hook. I'm not sure if this was just something made up so I didn't feel sorry for the fishies.
It appears that the controversy is quite widespread.
Slate even had a piece on it:
There is a new study out that contends fish feel pain. A professor at Purdue and his Norwegian graduate student attached small foil heaters to goldfish. Half of the goldfish were injected with morphine, half with saline, and then the researchers turned on the attached micro-toasters. After the heat was gone, the fish without painkillers "acted with defensive behaviors, indicating wariness, or fear and anxiety." They had also developed a lovely brown crust. These results echo a 2003 study by researchers from the University of Edinburgh who shot bee venom into the lips of trout. The bee-stung fish rubbed their lips in the gravel of their tank and generally seemed pissed off.
The 2003 Edinburgh study confirmed that trout have polymodal nociceptors around their face and head—i.e., they have the ability to detect painful stimuli with their nervous system. But, according to some definitions of pain, the detection of painful stimuli is not enough. The animal must have the ability to understand it is in pain to really feel pain.
WFN (World Fishing Network) has an article that contradicts the findings on Slate:
Do fish feel pain when we hook them? Well, not according to Dr. James D. Rose.
[...] “Fish have the simplest types of brains of any vertebrates,” he says, “while humans, have the most complex brains of any species. Conscious awareness of sensations, emotions and pain in humans depends on our massively developed neocortex and other specialized brain regions in the cerebral hemispheres. If the cerebral hemispheres of a human are destroyed, a comatose, vegetative state results. Fish, in contrast, have very small cerebral hemispheres that lack neocortex. If the cerebral hemispheres of a fish are destroyed, the fish’s behavior is quite normal, because the simple behaviors of which a fish is capable (including all of its reactions to nociceptive stimuli) depend mainly on the brainstem and spinal cord.”
A study has found that, even when caught on a hook and wriggling, the fish is impervious to pain because it does not have the necessary brain power.
The research, conducted by a team of seven scientists and published in the journal Fish and Fisheries*, concluded that the fish’s reaction to being hooked is in fact just an unconscious reaction, rather than a response to pain.
Fish have already been found to have “nociceptors” - sensory receptors that in humans respond to potentially damaging stimuli by sending signals to the brain, allowing them to feel pain.
However, the latest research concluded that the mere presence of the receptors did not mean the animals felt pain, but only triggered an unconscious reaction to the threat. The latest findings contradict previous research, which suggested that these nociceptors enabled the creatures to feel reflexive and cognitive pain.
*Newby, N.C. and Stevens, E.D. (2008) The effects of the acetic acid “pain” test on feeding, swimming and respiratory responses of rainbow trout (Oncorhynchus mykiss). Applied Animal Behavior Science 114, 260–269
Source: Telegraph
It might be considered a bit more complicated than that. Wikipedia reports on the controversy. – Oddthinking Mar 14 '13 at 14:11
The tl;dr might be: fish, like many "lower" animals, have nociceptors. Some say nociceptors are the source of the experience of pain, while others argue that nociceptors just trigger reflexive avoidance. – Larry OBrien Mar 15 '13 at 1:58
|
How to Prepare a Long-Range Plan
Business owners who look to the future increase the likelihood their business will not only survive but also thrive. Long-range plans covering up to 10 years bring a business plan to life and enable the business to respond rather than react to changing conditions. While understanding the significance of long-range planning is a crucial first step, knowing how to prepare long-range plans that serve as a framework for how you conduct business is even more important.
Gather and Analyze Information
Long-range plans incorporate both qualitative and quantitative data. Start by gathering information such as your business plan, mission and vision statements, annual budget and sales projections, and market and consumer research data. Analyze the information you gather to ensure everyone involved in the planning process fully understands the business’s current position and long-term business goals. A strengths, weaknesses, opportunities and threats (SWOT) analysis, for example, can help you assess where the business currently stands and evaluate your operating environment. Consumer research data can help you identify target markets and better understand purchasing behaviors. Market research can help you better understand business competition.
Identify Long-Range Planning Options
Identify long-range planning options that correspond to long-term business goals and help the business capitalize on strengths, minimize risks and create a competitive advantage. Identifying options typically requires careful thought and multiple brainstorming sessions. Assume, for example, you set a long-term market penetration goal of 90 percent; current data shows you’re presently at 30 percent, and consumer research shows your target market is more concerned about how your products fit into their lifestyles and make life easier than they are with price. Potential long-range planning options can include a focus on providing cutting-edge products, lifestyle-oriented marketing campaigns and profit-driven pricing strategies.
Narrow the List
Brainstorming sessions can create a multitude of options and directions in which your business can turn. It is time to prioritize planning options and narrow the list to create manageable and achievable goals. It’s important to remember that while long-range goals remain constant, as time progresses and business conditions change, it may become necessary to modify the steps your business takes to achieve these goals. Never discard an option that, for now at least, might not work, because you don’t know what the future will bring. Focus on arriving at a list of long-range goals and three to four options for achieving each goal.
Create an Action Plan
Everything comes together in the final step of creating a long-range plan. You know the ultimate goals. You’ve identified objectives for achieving these goals and now it’s time to think like a project manager. Start at the end and work your way back until you reach a starting point. Set milestones and then identify action steps and deliverables required to achieve each milestone in a long-range plan. Milestones serve as good stopping points for assessing whether you’re making satisfactory progress or whether you need to modify the action plan.
|
Where the wild stars are
A storm of stars is brewing in the Trifid nebula, as seen in this view from NASA’s Wide-field Infrared Survey Explorer, or WISE. The stellar nursery, where baby stars are bursting into being, is the yellow-and-orange object dominating the picture. Yellow bars in the nebula appear to cut a cavity into three sections, hence the name Trifid nebula.
Colors in this image represent different wavelengths of infrared light detected by WISE. The main green cloud is made up of hydrogen gas.
Within this cloud is the Trifid nebula, where radiation and winds from massive stars have blown a cavity into the surrounding dust and gas, and presumably triggered the birth of new generations of stars. Dust glows in infrared light, so the three lines that make up the Trifid, while appearing dark in visible-light views, are bright when seen by WISE.
The blue stars scattered around the picture are older, and they lie between Earth and the Trifid nebula. The baby stars in the Trifid will eventually look similar to those foreground stars. The red cloud at upper right is gas heated by a group of very young stars.
The Trifid nebula is located 5,400 light-years away in the constellation Sagittarius.
Blue represents light emitted at 3.4-micron wavelengths, and cyan (blue-green) represents 4.6 microns, both of which come mainly from hot stars. Relatively cooler objects, such as the dust of the nebula, appear green and red. Green represents 12-micron light and red, 22-micron light.
NASA’s Jet Propulsion Laboratory, Pasadena, Calif., manages and operates the recently activated NEOWISE asteroid-hunting mission for NASA’s Science Mission Directorate.
|
Friends and Neighbours
The Phoenicians were never a unified nation, never really a nation at all, and in fact their identity was imposed on them by the Greeks. The Levant around the beginning of the 1st millennium BC (modern Syria, Lebanon, Israel and Palestine) was inhabited by a Semitic people known as Canaanites. Some of them occupied cities along the coast - or, in the case of Tyre, an island off the coast. These cities (Beirut, Sidon and Byblos, along with Tyre) developed as trading centres, with their people (Phoenicians to the Greeks) as entrepreneurs selling the rich resources of the Levant to the Mediterranean worlds across the sea. They held a monopoly in exploiting the small mollusc which produces the purple dye - symbol of wealth and success for ruling elites - Roman senators had purple borders to their tunics, and their emperors could wear solid purple. The Greeks assumed that their name, Phoenicians, meant "The purple people": although it also relates to phoinix, meaning a date palm. They planted trading outposts all over the southern Mediterranean (unchallenged by Greeks - except in Sicily - who monopolised the northern part). Carthage, Palermo, Tangier, Algiers, Malaga and Gibraltar all reveal Phoenician/Canaanite origins.
Phoenician ship
Phoenician ship from a sarcophagus 2nd century AD. Wikimedia commons.
There had been a thriving kingdom, connected with the Hittite empire to the north, in the late bronze age based in Ugarit (near Latakia) which was destroyed around 1200 BC. Its fall is linked with the decline of the major civilisations of the eastern Mediterranean (The Hittites and the Egyptians) perhaps connected with the arrival of the mysterious "Sea Peoples", who are more likely though to have been a symptom rather than the main cause of the collapse of the Late Bronze Age civilisations. This allowed smaller players like the Phoenicians to fill the vacuum. They explored the Red Sea and the coasts of Africa; they traded for silver in Spain and tin in far-off Britain (at least according to some interpretations of Strabo), as well as copper from Cyprus and African gold. They sold wine to Egypt, and exported glass - and especially, the uniquely valuable purple dye.
From their Canaanite ancestors they inherited a system of writing which they developed into an alphabet - the ancestor of the Greek alphabet, and from there the Roman alphabet used for many of today's languages (including the one you are reading here), and several others. There were earlier writing systems (the numerous versions of cuneiform, for example) - but only the alphabetical system made literacy available to anyone, no longer just to a secret coterie of scribes or priests.
Phoenician alphabet
The Phoenician alphabet
Persian rule (from about 539 BC).
When Cyrus took Babylon in 539 BC, all of Mesopotamia and the coastal lands of the Levant became part of the new Persian empire. The Phoenicians' ships and nautical expertise were now available for Persia. Added to the navies of the Ionian Greek cities acquired after the fall of Lydia in 547 BC, the Persians now had the sea-power they needed to protect their western frontier.
We hear little of the Phoenician cities under the Achaemenids - presumably they and their trade prospered. Their neighbour, Egypt, had managed to escape from the Persian empire, and resisted several attempts to reconquer it. When the Phoenicians attempted to revolt in 343 BC, they were decisively crushed and brutally punished by King Artaxerxes III in person, before he moved on to recover Egypt at last. Diodorus is the source:
But the people of Sidon before the arrival of the King burned all their ships so that no one could save themselves by sailing off in secret. When they realised the city and the walls had been captured and was swarming with tens of thousands of soldiers, they shut themselves, their children, and their women up in their houses and set fire to them. They say that more than forty thousand, including the slaves, were burned to death in the fire. After this disaster had befallen the Sidonians and the whole city together with its inhabitants had been obliterated by the fire, the King sold that funeral pyre for many talents, for as a result of the wealth of the people a vast amount of silver and gold was found melted down by the fire.
Alexander and later
Just over 10 years later (332 BC) Alexander of Macedon, heading for Persian territory in Egypt, laid siege to Tyre. Other Phoenician cities had already surrendered their navies to him - weakened by their abortive rebellion a few years earlier. The city held out for seven months: finally Alexander built a causeway to connect the island with the mainland (which is why Tyre today is a peninsula, not an island any more). The final assault came from both the sea and the land. Arrian:
There was huge carnage... the Macedonians attacked everything in their path, enraged by the hardships of the siege, and because the Tyrians had captured some of their men. They then paraded them on their walls, in full view of the Macedonian camp, executed them and threw the corpses into the sea. 8,000 Tyrians were killed [as against 400 Macedonians], and some 30,000 sold into slavery.
After the conquest by Macedon, the Phoenician cities began to lose out on their trade to the Greeks. Their culture lived on in their colonies in the west, especially Carthage [in modern Tunisia], and in western Sicily.
Alexander's successors - the Ptolemies and Seleucids - fought several wars over the Levant. In 217 BC the Battle of Raphia (now Rafa in Egypt near the border with Gaza) was one of the largest confrontations in the ancient world, with Ptolemy IV's African elephants facing Antiochus III and his Indian elephants. Egypt won on this occasion. Ownership changed several times, but the Seleucids finally prevailed in 197 BC. After a brief period of rule by the Armenian Tigranes, Phoenicia became part of the Roman province of Syria (65 BC).
African war elephant
Between AD 614 and 619 Khusrau II recovered most of the territory once ruled by the Achaemenids in the west: but it was a brief episode before reconquest by the Byzantines, and only shortly before the entire region was taken by the Arabs.
|
Bacteria, Predictions, Insulin, and Carbon Footprints
E. coli bacteria: colorized scanning electron micrograph of Escherichia coli, grown in culture and adhered to a cover slip.
[Originally published as A Bacterium That “Eats” Carbon Dioxide…and a Creationist Prediction]
Escherichia coli (E. coli) are the workhorses of the bacterial world. They are found in every human being and most warm-blooded animals, but they are also found in laboratories all over the world.
Because they are easy to care for, reproduce quickly, and have a genome that is reasonably well-understood, they are a popular subject of study among biologists. In addition, they end up producing a lot of chemicals that we need but are unable to produce ourselves. For example, insulin is a protein that all people need, but some people don’t make enough of it (or don’t respond well enough to it) to remain healthy. That leads to diabetes, and one treatment for diabetes is regular insulin injections.
While the insulin in pigs and cattle is close to what we find in people (and was used to treat diabetes for a long time), the best insulin for most diabetics is human insulin. Unfortunately, even with the best technology available, we aren’t good enough chemists to make insulin, but simple organisms like bacteria are. As a result, scientists have learned how to insert the human gene for insulin into a bacterium, which allows the bacterium to do the chemistry for us. As a result, much of the insulin used to treat diabetics today is human insulin produced by E. coli bacteria.
Unlike some species of bacteria, however, E. coli have to eat in order to get the energy and raw materials they need to do that chemistry. This ends up producing carbon dioxide as waste. To reduce the buildup of carbon dioxide in the atmosphere, then, it would be nice to produce chemicals like insulin from an organism that does not produce carbon dioxide in order to live.
There are technical problems with that, however, so right now, diabetic insulin (and many other medically-related chemicals) adds to humanity’s “carbon footprint.” There are two ways to fix this: Either figure out how to use organisms that don’t have to eat (like organisms that make their own food through photosynthesis) or change E. coli so that it doesn’t have to eat.
In a recent study, scientists have been working on the second alternative and have experienced some success. As a byproduct, they have produced something that can be used to test creationism.
In the paper, the researchers report genetically modifying E. coli to “eat” carbon dioxide instead of the simple sugars that they prefer. This involved inserting genes for enzymes that can convert carbon dioxide to carbon-containing compounds that the E. coli can use. However, even with these additional genes, the E. coli would not use carbon dioxide the way the researchers wanted them to.
Laboratory Evolution
As a result, they decided to use laboratory evolution to get the job done. They started giving the genetically-modified E. coli a sugar that the bacteria want to eat, but they then started increasing the amount of carbon dioxide in the surroundings while decreasing the amount of sugar given to the bacteria. After about 200 days, they observed that some of the bacteria were not eating the sugar. After about 350 days, they eliminated all sugars from the experiment, and they found that a population could sustain itself. They had successfully produced a population of E. coli that no longer needed sugar as a raw material for their chemical reactions.
Now before you start thinking about the possibilities of chemicals being produced without carbon dioxide emissions, there are two major issues. The chemistry that a bacterium must do to stay alive (and to make the chemicals we want it to make) takes energy, and carbon dioxide cannot be used to produce it.
So while the bacteria can use carbon dioxide as the raw material they need to make their chemicals, they still have to get energy. They do that by taking in a one-carbon chemical called formate and burning it. That, of course, produces carbon dioxide – more than what is consumed by the chemistry that is being done. Thus, there is still a net output of carbon dioxide, but that output is less than a normal population of E. coli. The authors suggest making the formate used by the bacteria from carbon dioxide using renewable energy sources, so there is no net carbon dioxide production from the formate.
However, there are technical problems associated with scaling that up to what it needs to be for producing medically-related chemicals in the amounts needed to make this economically viable. The other problem is that right now,
These bacteria are not very efficient.
They are roughly 54 times less active than normal E. coli, so once again, they cannot be used in a viable way to make medically-related chemicals. Also, they can only exist if 10% of the atmosphere is carbon dioxide. Since the earth’s atmosphere is only 0.04% carbon dioxide, additional energy must be expended to enrich the bacteria’s surroundings with carbon dioxide.
As a result, the most interesting aspect of this research isn’t its ability to reduce the medical industry’s carbon footprint. It’s the fact that scientists could radically alter an organism so that it no longer uses what it was designed to use to do its chemistry.
To me, however, there is another very interesting aspect to all of this. Some people (like Bill Nye) claim that creationism does not make testable predictions, which is the principal characteristic of a scientific theory. When they make such statements, they are only displaying their ignorance. In fact, one reason I am a creationist is because of creationism’s success at making testable predictions that are confirmed by the data. This study allows another opportunity to do just that.
We know that part of what produced the bacteria in the study was laboratory evolution. However, exactly how the evolution occurred has not been determined. They can tell you what mutations were produced in the evolutionary part of their experiment, but they can’t say when or how they happened.
As a result, it is not known whether this is
• the kind of evolution that evolutionists require to turn flagellates into philosophers (evolution that is built on random mutations) or
• the evolution that creationists think is common in nature (evolution built on the genome’s designed ability to mutate in order to adapt to new challenges).
Fortunately, there is a way to test this:
See How Easy it is to Recreate the Experiment
There are enough mutations in this new version of E. coli to say that if they are random, the odds of reproducing the result is very, very low. It could be done, but it would take many, many attempts. However, if this is the result of mutations that were produced by the genome’s design, then it should be very easy to get the same result. It might not happen every time, but it should happen with only a few attempts.
Now I have to admit that this isn’t much of a prediction. After all, we have seen it before. In Lenski’s long-term evolution experiment, a population of E. coli was produced that could eat a chemical called “citrate” under conditions in which they normally couldn’t eat it. While this was hailed as a result of random mutations filtered by natural selection, intelligent-design scientists have shown that it is very easy to replicate, which rules out random mutations. As a result, it is an example of mutations that the genome was designed to make.
It is rather easy to predict that this current experiment is another result of the same process. Nevertheless, I look forward to seeing it confirmed.
Written by Jay Wile
|
Chow Mein vs Chop Suey: What’s the Difference?
Chop suey and chow mein are often confused despite different origins, ingredients, and cooking styles. So what exactly separates the two dishes? Find out everything you need to know about the differences between chow mein and chop suey.
chow mein noodles on bowl
Chinese food is one of the world’s most popular cuisines, and for good reason: It’s crazy delicious. While Chinese food varies greatly from region to region, it often bears the two traits I prize most in my favorite kinds of food: It’s simple to make, and it’s satisfying as heck to eat.
The popularity of Chinese cuisine has given rise to a massive number of Chinese restaurants all across the world, and this has seen the influence of the country’s cooking styles go from strength to strength. Not only do we now have easily accessible Chinese restaurants in most cities, but we now have Asian fusion and even Chinese-Western fusion appearing globally.
With the growing number of Chinese-influenced recipes available, the lines between certain dishes have become blurred, meaning that plenty of people struggle to define one dish from another.
And this brings us on to a hot topic: Chow Mein vs. Chop Suey.
Today we’re here to dispel a few myths and spell out the main differences between the two popular dishes.
beef chop suey on plate served with white rice and beef
While chow mein and chop suey are often confused, the origins of the two dishes are markedly different.
Chow Mein is unmistakably Chinese, with its origins in the North of the country. However, its simplicity saw it spread across the country quickly, with variations of it adapted by cooks of many different cuisines. Its popularity was helped by the egg-based noodles’ ability to hold in flavor, and the fact that it can be rustled up in mere minutes.
Now onto Chop Suey. It’s actually fairly disputed where the dish first came into being. Some say it has Cantonese origins in the South of China, while others say it was invented by 19th century Chinese migrants in the U.S. who wanted to create a fusion of Chinese food and Western cuisine.
Asian Chow Mein Noodles with Vegetables and Chopsticks
Rice vs Noodles
One of the biggest differences between Chop Suey and Chow Mein is obvious as soon as you look at them both: One is made with rice and the other noodles.
Chow Mein comes from the Chinese 炒面, pronounced chao mian, literally meaning ‘fried noodles’. The noodles used are typically made with wheat flour and egg, and beautifully crafted to give a satisfying taste.
On the other hand, American chop suey is typically served on rice. There are some variations of it that are made with noodles, but for the most part you’ll see it paired with rice.
Besides chow mein always containing noodles, and chop suey usually containing rice, there are actually other ingredient and flavor differences between the two that make them easy to tell apart.
Chow mein always contains a meat (usually either pork or chicken), cabbage, bok choy, a thin sauce (always either soy, garlic, or oyster sauce), and egg noodles.
Chop suey on the other hand was originally based upon throwing a bunch of leftovers together, so its rules are less strict.
In fact that’s what often makes this a confusing topic: There’s no formal definition for chop suey. Its name actually refers to a broad group of dishes. In essence, it’s a stir-fried rice or noodle dish made with leftovers, mainly comprised of meat, vegetables and a thick sauce.
This one might be slightly more subtle for the uninitiated but it’s a crucial one: The sauce.
See, chow mein tends to contain a thin sauce, usually comprised of either soy sauce or garlic. The thin nature of the sauce helps it not overpower the flavors of chow mein, meaning you get much more out of the meat and vegetables found within the dish.
Chop Suey however has a much thicker sauce. It tends to be either very sweet or very salty, and sticks to the ingredients to really pack in the flavor.
All in all the differences between the two are fairly slight, which is where so much confusion has come from. In essence though, chop suey is fairly liberal with its ingredients and is quite often sat on a bed of rice (although not always). Chow mein on the other hand is always fried noodles, and with a lighter sauce.
|
Pictures for Test Planning
In the fast-paced changing world of software product development there is a continuous challenge to document the expectations for the system and its internally and externally facing behaviours. Requirements often suffer because of the challenges of keeping up with an iterative project life-cycle, evolving product scope, and uncertain or changing GUI/Screens.
However, the need remains for all stakeholders to optimize agreement, minimize risk, and minimize rework costs. From a tester’s point of view, this translates in part into a question: how can test coverage of the system be assured and made visible to all the stakeholders?
In “Testing Without Requirements“, we suggested using checklists, matrices, and user scenarios as ways to approach testing when requirements are non-existent, not complete, or not ready at the time testing needs to start.
Even when you have minimal or out-of-date requirements, you can use different methods to help you rapidly define the application, describe its functions and derive an efficient plan to drive your testing effort.
A first step in developing these tools is to think in pictures.
“Imagery is the most fundamental language we have. Everything you do the mind processes through images,” says Dennis Gersten, M.D., a San Diego psychiatrist and publisher of Atlantis, a bi-monthly imagery newsletter.
Benefits of creating User Scenarios / Use Cases:
• Easy for the owner of the functionality to tell/draw the story about how it is supposed to work.
• System entities and user types are identified.
• Allows for easy review and ability to fill in the gaps or update as things change.
• Provides early ‘testing’ or validation of architecture, design, and working demos.
• Provides systematic step-by-step description of the systems’ services.
• Easy to expand the steps into individual test cases as time permits.
User scenarios quickly provide a clearer picture of what the customer is expecting the product to accomplish. Employing these user scenarios can reduce ambiguity and vagueness in the development process and can, in turn, be used to create very specific test cases to validate the functionality, boundaries, and error handling of a program.
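As a simplified illustration of expanding scenario steps into test cases, the sketch below represents a user scenario as plain data and derives one happy-path case plus a failure variant per step. It is written in Python, and the login scenario and its step names are invented for the example, not taken from this article.

```python
# Sketch: a user scenario as data, expanded into test case stubs.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    actor: str
    steps: list = field(default_factory=list)  # ordered system interactions

login = Scenario(
    name="User logs in",
    actor="Registered user",
    steps=["open login page", "enter credentials", "submit form", "land on dashboard"],
)

def test_case_ideas(scenario):
    """Yield one happy-path case plus a failure variant for each step."""
    yield f"[{scenario.name}] happy path: " + " -> ".join(scenario.steps)
    for i, step in enumerate(scenario.steps, start=1):
        yield f"[{scenario.name}] failure at step {i}: force an error during '{step}'"

for idea in test_case_ideas(login):
    print(idea)
```

The point is not the code itself but the discipline: once a scenario is captured as an ordered list of steps, the boundary and error-handling cases fall out mechanically.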
Every picture tells a story, and stories or scenarios form a basis for testing. Using diagrams can be a very effective way to visualize the software, not only for the tester but for the whole project team.
Creating user scenarios/use cases can be kick-started by simply drawing a flowchart of the basic and alternate flows through the system. This exercise rapidly identifies the areas for testing including outstanding questions or design issues before you start.
In her article “A Picture’s Worth a Thousand Words”, Elizabeth Hendrickson notes, “Pictures can pack a great deal of information into a small space. They help us to see connections that mere words cannot.”
The Unified Modeling Language (UML), which is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems can be employed to help provide these pictures. However, there are many less formal types of notations you can use to put together different simple diagrams, such as activity flow charts, data flow diagrams, state diagrams, and sequence diagrams that can be just as useful for meeting your project needs.
As long as you can achieve the goal to obtain enough information to help you with the task of generating comprehensive test cases ideas, it doesn’t matter what notation you use. Start with a basic diagram, depicting the main modules of the system and when, why, and how they interact; from there, you can create more detailed diagrams for each module.
What should be in the initial picture? The very basic information you have. Is it a client-server application? Is it web-based? Is there a database? What are the major tasks the system is supposed to perform?
You have to focus on how the system behaves. End users can help define user scenarios (or use cases) in a diagram format, providing details of the system that will help you understand what the client is expecting and also allow you to validate the diagrams previously drawn.
Describing the tasks and subtasks in detail will provide test scenarios and analyzing the relationships among the modules will help determine the important inputs for the overall testing strategy.
Flow Charts
A flow chart is commonly seen as a pictorial representation describing a process, defining the logical steps including decision points and activities. Flow charts are useful for defining the paths you want to verify or to force into an error condition.
Flow charts can take different forms, such as top-down flow charts, detailed flow charts, workflow diagrams, and deployment diagrams. Each type of flow chart provides a different view of, or perspective on, a process or task. Flow charts provide an excellent form of documentation for a process and are often useful when examining how various steps in a process work together.
State Diagrams
Another option to capture the software behaviour is the use of state diagrams. State diagrams are used to describe the behaviour of a system. State diagrams describe all of the possible states of an object as events occur and the conditions for transition between those states. The basic elements of a state diagram are rounded boxes representing the state of the object and arrows indicating the transition to the next state. The activity section of the state symbol depicts what activities the object will be doing while it is in that state.
All state diagrams begin with an initial state of the object. This is the state of the object when it is created. After the initial state the object begins changing states. Transition conditions based on the activities determine the next state of the object.
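To make this concrete, here is a minimal sketch of a state diagram captured as a transition table in Python. The object being modelled (a document with a review workflow), its states, and its events are all hypothetical; the point is that one positive test idea falls out per arrow in the diagram, plus negative tests for the missing arrows.

```python
# Hypothetical review workflow: (current state, event) -> next state.
TRANSITIONS = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"):  "draft",
    ("published", "retire"):  "archived",
}

def next_state(state, event):
    """Return the next state, or raise for an invalid (state, event) pair.
    The invalid pairs are themselves useful negative test cases."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

# One test idea per arrow in the diagram: drive each transition once.
for (state, event), expected in TRANSITIONS.items():
    assert next_state(state, event) == expected
print("all transitions covered")
```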
Flowcharts and state diagrams provide similar and at times complementary methods for visualizing, or picturing, the core information to be captured in a user scenario or use case. Throughout the process of creating these diagrams, test case ideas will come to the fore to be rapidly captured for later detailing.
Time spent better understanding the software to be implemented and tested not only improves your actual testing activities, but also helps improve the organization’s understanding of the product, and can thereby significantly improve the product as a whole.
|
Jeep Grand Cherokee Mass Air Flow Explained
If the engine in your Jeep Grand Cherokee experiences difficulty starting or does not run smoothly, the problem may be caused by the wrong ratio of air to fuel. Optimal fuel metering requires input from several sensors. Find out what to do if a vehicle diagnostic returns error code P0100 MAF Circuit Malfunction.
How Do You Know If You Have a Bad Mass Air Flow Sensor?
A bad mass air flow sensor provides inaccurate data or fails to provide the engine control unit with air data for controlling fuel injection. Inaccurate air flow readings can cause the fuel injection system to provide too much or too little fuel, causing an engine to run rich or lean.
Diagnostics are helpful for pinpointing sensor problems, particularly with Jeep Grand Cherokees that rely on speed density systems. Check the error codes to determine which sensors are malfunctioning. You are unlikely to get a P0100, P0101, P0102, P0103 or P0104 code because most Jeep vehicles do not have mass air flow sensors. Other Jeep Grand Cherokee auto parts, such as the intake air temperature sensor, manifold absolute pressure sensor and oxygen sensors, provide speed density data that informs the engine control unit and allows for proper adjustment of fuel levels.
Where to Find Your MAF Sensor on a Jeep Grand Cherokee
The IAT and MAP sensors in a Jeep Cherokee send digital signals that correspond to mass air flow sensor readings. MAF sensors are typically located between the air filter housing and intake manifold on vehicles that are equipped with these components. It is a good idea to disconnect the negative battery cable before removing MAF or speed density sensors for inspection, cleaning or replacement.
Speed density sensors are also located under the hood. Some of these parts are easier to access than others. The IAT sensor is located inside a hose, and it is necessary to remove a cover and detach a snorkel to gain access to the sensor for inspection or cleaning. The MAP sensor is located near the top of the firewall. Be sure to use MAF sensor cleaner or other solutions formulated for these components.
How to Quickly Test If Your MAF Sensor Is Working Properly
A scan tool is helpful for measuring engine air flow at different RPMs and comparing readings taken by current sensors and known good sensors. Use a digital multimeter to check voltage or an oscilloscope to measure frequencies. These tools and sensor cleaner or a replacement MAF sensor or speed density system components such as MAP, TPS, CPS, CTS, O2 or IAC sensors may be necessary to address performance issues. It is also important to regularly switch out Jeep Grand Cherokee oil filters and air filters.
The Jeep Grand Cherokee ZJ, WJ, WK and WK2 models do not have MAF sensors. Problems with the ratio of air to fuel in a vehicle that relies on speed-density measurements may be caused by other sensors. Reference diagnostic error codes and test relevant sensors to identify the cause of poor engine performance or problems that affect fuel metering.
|
Question: Will A Sofa Fit Through My Door?
What is a standard door size?
One of the most common front door sizes in American houses is 36 inches wide by 80 inches tall, and almost 2 inches thick.
However, not all doors will have these measurements.
Doors can be as narrow as 30 inches and as tall as 96 inches, and thickness can depend on the door material.
What size sofa will fit through a 31 inch door?
It seems the general rule of thumb is that a sofa that is 38″ deep and 34″ high, max, will fit through a 30″ doorway as long as it is a straight shot through the door. We did get a 38″ deep sofa through our doorway, and it was a tight fit.
What size sofa fits through a standard door?
FITTING THROUGH THE FRONT DOOR: Measure the width of your door frame (A) (FIG. 2). If this measurement is greater than your sofa’s packaged height (H), then your sofa will fit through just fine.
Can you return a sofa if it doesn’t fit?
If the furniture doesn’t fit, you have a big problem and will have to do some serious negotiating with the retailer, as the law doesn’t say that the goods you buy have to be fit for your purposes, just that they must be fit for the purpose for which they are commonly intended – unless you specifically request something.
How do you get a chair through a doorway?
Try Different Angles: If it is a soft couch or chair that can squeeze through, be careful not to rip the fabric. To prevent this from happening, always use protection on the door jambs or wrap the piece in fabric pads (heavy blankets) or plastic stretch wrap. Some pieces may need to be slowly fitted through on an angle.
How do you find the diagonal depth of a sofa?
*To determine the diagonal depth of a sofa, place a straight edge from the highest point of the back frame (not including pillows) to the front of the arm. Then measure from the bottom rear corner of the sofa up to the point that bisects the straight edge. This last measurement is your diagonal depth.
How do you figure out if a couch will fit through a doorway?
Compare the width of your doorframe with the height of your sofa allowing some wiggle room either side. This will determine if we can take the sofa through on its side. If the door width is greater than the height of your sofa, then it will fit through.
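Putting the measurements together, the checks described in these answers reduce to two comparisons. Here is a rough sketch in Python; the numbers are made-up inches, and you should substitute your own door and sofa measurements (including the diagonal depth measured as described above).

```python
def sofa_fits(door_width, sofa_height, diagonal_depth):
    """Apply the two rules of thumb from this Q&A, in order."""
    if door_width > sofa_height:
        return True   # take it through on its side: a straight shot
    if door_width > diagonal_depth:
        return True   # angle it through: the diagonal depth clears the frame
    return False      # consider windows, disassembly, or a smaller sofa

# Illustrative measurements only:
print(sofa_fits(door_width=31, sofa_height=34, diagonal_depth=30))  # True
print(sofa_fits(door_width=29, sofa_height=34, diagonal_depth=33))  # False
```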
How do I get a big couch through a small door?
Slide the couch toward the door, then move it through the doorway straight or in a hooking motion with either the back or the seat entering the door first. If the couch can be moved horizontally and level, simply carry the couch straight out in a level position.
What are the sizes of couches?
Here are the standard couch dimensions: Three-seat sofa: 35 inches deep by 84 inches wide. Loveseat: 35 inches deep by 60 inches wide. Armchair: 35 inches deep by 35 inches wide.
Will my DFS sofa fit through door?
DFS have a full delivery and installation service and have years of experience delivering furniture to all types of properties. Whilst our sofas generally fit through standard sized door frames, to ensure that your sofa purchase can be delivered, we advise that you carefully check the dimensions of your chosen sofa.
What size couch will fit through a 29 inch door?
What size sofa should I get?
A general rule of thumb is that your sofa should not occupy the entire length of a wall. There should be at least 18” of space on either side of the sofa. If you want a sectional with a chaise then the long chaise portion should not extend more than halfway across the room.
What do you do if your couch doesn’t fit through the door?
Step 1: Place your sofa on its end, vertically, with the seat first, and then try to twist the piece slowly into the doorway. This trick usually works like a charm! Step 2: Just squeeze it. Sofas are soft furniture pieces, so they can often be squeezed through smaller doors and narrower corridors.
Will a couch fit through a 32 inch door?
Most furniture can easily pass through doorways between 33 and 34 inches wide. Therefore, a 36″ wide doorway is more than enough to pass a regular sofa through.
Will a 36 inch couch fit through a 30 inch door?
It seems the general rule of thumb is that a sofa that is 38″ deep and 34″ high, max, will fit through a 30″ doorway as long as it is a straight shot through the door.
|
Non-Fused Switch Disconnectors
Non-fused switch disconnectors are safety devices designed to power down or isolate part of a circuit while it is being serviced or maintained. Unlike fused switch disconnectors, they do not use a fuse to break the circuit, instead using either a rotary or knife-blade disconnect. Rotary switches break the circuit by the rotation of a lever, while knife-blade switches use a hinge to lift a lever off its slot.
What are non-fused switch disconnectors used for?
Non-fused switch disconnectors are used to isolate circuit breakers, transmission lines and transformers as part of their maintenance process. They are a safety measure and are not intended for use as a regular part of the circuit.
Types of non-fused switch disconnectors
Non-fused switch disconnectors vary in the method of disconnecting that they use, whether rotary or knife-blade. Beyond that, they have differing power, voltage and switch ratings, as well as different maximum current capabilities, which will determine which non-fused switch disconnector you choose.
|
Electric Saws
Electric saws, also known as power saws, are a supply-powered or battery-powered version of one of the most popular cutting tools. Due to their increased power capabilities, electric saws are used to cut through various materials that would be difficult to cut by hand power alone, such as stones, concrete, hardwoods or metals. The cutting process is similar, in that the toothed edge is used against the material, but user input is more focused on steadying and guiding the material instead of performing the cutting action.
Our range of electric saws features both corded and cordless options, with products from industry-leading brands including Bosch, DeWALT and Makita.
What are the most common types of Electric Saws?
Due to the differences in available blade moving mechanisms, electric saws can be divided into a number of categories.
• Alligator Saws - designed to meet all heavy-duty, industrial applications. Due to two saw blades moving in the opposite direction from each other, alligator saws cut fast and clean through tough materials with minimal effort input from the user.
• Band Saws - these tools are ideal for cutting irregular shapes thanks to a unique blade movement construction. Available as standalone or mobile tools, band saws are ideal for all professional applications.
• Circular Saws - Ideal tools for quick and efficient cutting of various materials, they use a rotating blade that makes clean cross-cuts and rip-cuts. Circular saws are one of the most common types of powered saws and can accept blades that can be used to cut a wide variety of materials, from woods and plastics to metals and masonry.
• Jigsaws - Designed to cut both straight and curved lines in a horizontal position thanks to the reciprocating "push-pull" motion of the blade, jigsaws are ideal for cutting holes in large, flat elements such as kitchen worktops. The material can be guided in nearly any direction (space allowing) to create unique shapes, which is why jigsaws are often used to complete custom designs.
• Mitre Saws - This type of saw is ideal for all professional applications as they allow the user to make cuts at a variety of angles. They also allow tilting and pivoting to the right or the left to produce bevelled cuts.
• Reciprocating Saws - These ultra-versatile power tools, also known as oscillating saws or sabre saws, employ a push-pull motion of the blade that allows for cutting through various materials in the most efficient way.
Where are electric saws used?
Electric saws are extremely versatile power tools that are used for numerous applications, including furniture manufacture, home renovation, engineering and many other manufacturing industries. They are most likely to be found in any of the following industries:
• Construction
• Carpentry
• Metalworking
• Automotive
• Demolition
• DIY
|
It was on 24th October 1851 that the telegraph service was first introduced in India, on a short route between Calcutta, then the capital of British India, and Diamond Harbour.
The history of the telegraph goes back a long way. The electrical telegraph was invented in 1775. The first commercial telegraphs were introduced on the Great Western Railway in Britain in the 1830s. The telegraph was introduced in India in the 1850s along with the railways. The telephone had not yet been invented, and the fastest communication system was the telegraph. In the initial stages, only the Britishers were employed as telegraphists, both in the Railways and in the Telegraph Offices, due to the utmost importance and secrecy of the work. The Britishers used the telegraphs and railways effectively to crush the First Independence War of 1857, which they called the ‘Sepoy Mutiny’. The telegraphs grew fantastically during the second part of the 20th century, and there were telegraph offices in all important cities and towns. In small places, the services were manned by postal officials called ‘Signallers’, who kept the connection between the cities and the villages through the telegraph wires.
Telegrams were sent by the government as well as by the public to convey urgent and important news. The CTOs in the metro cities used to have about 100 or 200 telegraphists working at the same time, round the clock. Telegrams were accepted as official records in the courts etc.; they were authentic and clear. Since the charge for sending a telegram was based on the number of words, messages were constructed briefly, with a minimum number of words. The message might be one of great happiness or of sorrow, like death or disease. The unions used to organise ‘telegram campaigns’ as a method of protest, sending telegrams in large numbers to the concerned authorities.
After the growth of telephone and mobile services, the importance of the telegraph started to wane. By the second decade of the 21st century it was almost limited to certain official messages. According to the government and BSNL, the service was running at a heavy loss and could not be continued as a viable service.
When the government decided to close the telegraph offices and telegraph/telegram services from 15th July, the BSNL unions put up a strong protest and organised protest meetings. As President of the Union, I went to the Mumbai and Kolkata offices and held press conferences to win the support of the people at large for continuing the services.
The telegraph is a heritage service, accommodated in heritage buildings in the big cities. The CTO buildings in Mumbai, Delhi, Kolkata etc. are heritage buildings, which have to be maintained as such. Lakhs and lakhs of documents connected with the history of the Mughal and British rule are preserved in the old records of the telegraph offices. Just as the Western Court building in Delhi is occupied by a hotel, there are proposals to turn these heritage buildings into such posh hotels. You can see that many of the historic palaces have already been converted into hotels.
A PIL (public interest litigation) case was filed in the court, but did not get any relief. It can only be said that the BSNL management had taken an unwise, anti-people decision, taken without taking into confidence the unions, which were trying to improve the services and make the company financially viable.
Despite all efforts to ensure that the telegraph service was kept alive as a token of the past, as in the case of the trams in Kolkata, neither the government nor BSNL agreed. It was finally decided to close it on 15th July 2013, forever.
Nobody expected what happened on the day. It was a pleasant surprise. Thousands of people gathered in front of telegraph offices to send their last telegrams to their near and dear ones. Even after midnight of 15/16 July, the queue did not stop and many people had to return disappointed that they could not send the last telegram. Their spontaneous response on the last day showed their love and appreciation of the telegraph services.
Seven years have passed since the closure of the telegraph services. People have almost forgotten the telegram. The new generation may wonder what ‘telegraphs’ and ‘telegrams’ were. But those who knew the telegram, and sent or received them, still remember. For them it is nostalgia indeed!
|
Traditional Wine Cellar in Memphis, TN
Thinking about building a wine cellar, but find yourself bogged down by questions? Never fear. Below, we've compiled the questions we are asked most frequently about building wine cellars. Find the answers you've been searching for below. Have a question that's not on this list? Contact us and we'll answer it for you!
Q: Do I have to store wine in a wine cellar?
A: If you're a casual wine drinker who consumes bottles soon after you buy them, you probably don't need a wine cellar. But if you're a collector (or aspiring collector) of wine, you should protect your investment by storing it in the correct conditions. Wine stored in too-hot or too-cold conditions, at the wrong humidity, or in an environment in which temperature and humidity fluctuate can mold, evaporate away, turn rancid, or undergo chemical changes that can make it taste unpleasant.
Q: Does a wine cellar require special construction?
A: Yes. Wine requires a unique environment different from that of your home. Wine cellars must maintain a temperature of between 55 and 58° Fahrenheit and humidity between 55 and 75 percent. This is far colder and more humid than your average house. A wine cellar has to be specially constructed to maintain and control this unique environment. The most important part of this construction is a vapor barrier, which keeps the high humidity in your wine cellar from migrating to the low-humidity environment in the rest of the house. Vapor barriers are often overlooked by inexperienced wine cellar builders, leading to ruined wine and high repair costs for the owners later on.
Q: I don't have underground space. Can I still have a wine cellar?
A: Absolutely. Long ago, people used to store wine underground because conditions were usually more optimal there than above ground. But with today's technologies, we can create a wine cellar with perfect conditions in many different locations in a home. However, wine does need to be protected from light, heat and vibration, so picking a cool spot away from windows and excessive noise will save you on construction and energy costs.
Q: I don't have a lot of extra room in my house. Can I use a closet?
A: You can! Small space should never limit your wine cellar aspirations. It is possible to convert a small space like a closet into a fully functional and beautiful wine cellar. For proof, check out this 800-bottle cellar Vintage Cellars constructed in a San Diego home.
Q: Do the wine racks have to be custom-built for my space? That sounds expensive.
A: No. While custom racks are certainly an option, there are many other kinds of racking systems available on the market today. A modular system like Vintner wine racks can give you the gorgeous custom feel without the high price tag. Vintner offers a variety of wine rack sizes and styles, such as columns, bins, and diamond racks, that can all be fitted together to perfectly suit your space.
Q: I love wine but I don't have an eye for design. Can you help?
A: We'd be honored! Most of our clients know that they want a unique and beautiful space, but they don't know exactly how to achieve that. We specialize in listening to what our clients want, then working with them to create a beautiful design that suits them and fits seamlessly into the rest of their home's design. Contact us today to see what kind of wine cellar we can make for you!
|
Adventist Youth Honors Answer Book/Arts and Crafts/Model Rocketry - Advanced
Model Rocketry - Advanced
General Conference
Arts and Crafts
Skill Level 2
Year of Introduction: 1970
Instructor Required
1. Have the Model Rocketry Honor.
This Wiki has a page with instructions and tips for earning the Model Rocketry honor.
2. From a kit, build, successfully launch, and recover a boost glider.
A boost glider is a model that has a rocket-powered ascent, transitions to a glider at the apex of its flight, and then glides (usually in circles) back to the ground using aerodynamic surfaces (wings). Most boost gliders locate the rocket engines towards the front of the aircraft, as this eases many of the design challenges involved in powered flight. The rocket portions are ejected from the craft during transition (gliders that do not separate are called rocket gliders rather than boost gliders). The Space Shuttle is an example of a boost glider (though it is most decidedly not a model!). Estes makes a model Space Shuttle which is a boost glider. See here for more information about boost gliders.
Though these models are among the most challenging rocket models to build, a modeler with careful attention to detail has every reason to expect success.
3. Design, build (not from a kit), finish, and paint a single-stage rocket. Check for stability, and successfully launch and recover this rocket.
4. Do one of the following:
a. From a kit build, finish, and paint a two-stage rocket. Successfully launch and recover this rocket.
b. From a kit, build, finish, and paint a three-engine clustered single-stage rocket. Successfully launch and recover this rocket.
5. Design an electrical launch system. When this has been approved by your instructor, build this system and use it to launch rockets at least five times.
6. Describe and demonstrate single station altitude tracking. With the aid of a helper, track the same rocket three times using three different sizes of engines and compare altitudes with an altitude finder.
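For reference, single station tracking reduces to one trigonometry step: if the tracker stands a known baseline distance from the pad and reads the elevation angle at apogee, the altitude is baseline × tan(angle), under the simplifying assumption that the rocket rises straight up. A minimal sketch in Python, with invented engine designations and angles, might look like this:

```python
import math

BASELINE_M = 100.0  # tracker's distance from the launch pad, in metres

def altitude(elevation_deg, baseline=BASELINE_M):
    """Single-station altitude estimate: h = baseline * tan(elevation)."""
    return baseline * math.tan(math.radians(elevation_deg))

# Hypothetical peak angles read off an altitude finder for three engines:
for engine, angle in [("A8-3", 30.0), ("B6-4", 50.0), ("C6-5", 65.0)]:
    print(f"{engine}: {altitude(angle):6.1f} m at {angle:.0f} degrees")
```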
7. Compare the velocity and altitude of two different weights of rockets using the same size engine.
|
Hecuba Blinding Polymestor by Giuseppe Crespi
Written by: Euripides
Chorus: Captive Trojan Women
Characters: Ghost of Polydorus, Polymestor, and his children
Place premiered: Athens
Original language: Ancient Greek
Setting: Greek camp upon the shore of the Thracian Chersonese
Hecuba (Ancient Greek: Ἑκάβη, Hekabē) is a tragedy by Euripides written c. 424 BC. It takes place after the Trojan War, but before the Greeks have departed Troy (roughly the same time as The Trojan Women, another play by Euripides). The central figure is Hecuba, wife of King Priam, formerly Queen of the now-fallen city. It depicts Hecuba's grief over the death of her daughter Polyxena, and the revenge she takes for the murder of her youngest son Polydorus.
In the play's opening, the ghost of Polydorus tells how when the war threatened Troy, he was sent to King Polymestor of Thrace for safekeeping, with gifts of gold and jewelry. But when Troy lost the war, Polymestor treacherously murdered Polydorus, and seized the treasure. Polydorus has foreknowledge of many of the play's events and haunted his mother's dreams the night before.
The events take place on the coast of Thrace, as the Greek navy returns home from Troy. The Trojan queen Hecuba, now enslaved by the Greeks, mourns her great losses and worries about the portents of her nightmare. The Chorus of young slave women enters, bearing fateful news. One of Hecuba's last remaining daughters, Polyxena, is to be killed on the tomb of Achilles as a blood sacrifice to his honor (reflecting the sacrifice of Iphigenia at the start of the war).
Greek commander Odysseus enters, to escort Polyxena to an altar where Neoptolemus will shed her blood. Odysseus ignores Hecuba's impassioned pleas to spare Polyxena, and Polyxena herself says she would rather die than live as a slave. In the first Choral interlude, the Chorus lament their own doomed fate, cursing the sea breeze that will carry them on ships to the foreign lands where they will live in slavery. The Greek messenger Talthybius arrives, tells a stirring account of Polyxena's strikingly heroic death, and delivers a message from Agamemnon, chief of the Greek army, to bury Polyxena. Hecuba sends a slave girl to fetch water from the sea to bathe her daughter's corpse.
After a second Choral interlude, the body of Polydorus is brought on stage, having washed up on shore. Upon recognizing her son whom she thought safe, Hecuba reaches new heights of despair.
Hecuba rages inconsolably against the brutality of such an action and resolves to take revenge. Agamemnon enters, and Hecuba, tentatively at first and then boldly, requests that Agamemnon help her avenge her son's murder. Hecuba's daughter Cassandra is a concubine of Agamemnon, so the two have a relationship to protect, and Agamemnon listens. He reluctantly agrees, as the Greeks await a favorable wind to sail home; the Greek army considers Polymestor an ally, and Agamemnon does not wish to be observed helping Hecuba against him.
Polymestor arrives with his sons. He inquires about Hecuba's welfare, with a pretense of friendliness. Hecuba reciprocates, concealing her knowledge of the murder of Polydorus. Hecuba tells Polymestor she knows where the remaining treasures of Troy are hidden, and offers to tell him the secrets, to be passed on to Polydorus. Polymestor listens intently.
Hecuba convinces him and his sons to enter an offstage tent where she claims to have more personal treasures. Enlisting help from other slaves, Hecuba kills Polymestor's sons and stabs Polymestor's eyes. He re-enters blinded and savage, hunting as if a beast for the women who ruined him.
Agamemnon re-enters angry with the uproar and witnesses Hecuba's revenge. Polymestor argues that Hecuba's revenge was a vile act, whereas his murder of Polydorus was intended to preserve the Greek victory and dispatch a young Trojan, a potential enemy of the Greeks. The arguments take the form of a trial, and Hecuba delivers a rebuttal exposing Polymestor's speech as sophistry. Agamemnon decides justice has been served by Hecuba's revenge. Polymestor, again in a rage, foretells the deaths of Hecuba by drowning and Agamemnon by his wife Clytemnestra, who also kills Cassandra. Soon after, the wind finally rises again, the Greeks will sail, and the Chorus goes to an unknown, dark fate.
The plot falls into two clearly distinguished parts: the Greeks' sacrifice of Hecuba's daughter, Polyxena, to the shade of Achilles, and the vengeance of Hecuba on Polymestor, the Thracian king.[1]
In popular culture
A performance of Hecuba is a focus of the 2018 two-part comedy film A Bread Factory.[2]
1. ^ Conacher, D.J. (Jan 1961). "Euripides' Hecuba". The American Journal of Philology. 82 (1): 1–26.
2. ^ Ebiri, Bilge (25 October 2018). "Review: In 'A Bread Factory,' Local Artists Face Off Against the World (Published 2018)". The New York Times. Retrieved 5 February 2021.
Further reading
• Zeitlin, Froma (1996). "The body's revenge: Dionysos and tragic action in Euripides' Hekabe", in Froma Zeitlin, Playing the Other: Gender and Society in Classical Greek Literature. Chicago: University of Chicago Press. pp. 172–216.
External links
|
Andre Lignitzer
From Wiktenauer
Born: date of birth unknown; Legnica, Poland
Died: before 1452
Relative(s): Jacob Lignitzer (brother)
Occupation: Fencing master
Movement: Fellowship of Liechtenauer
Language: Early New High German
First printed English edition: Tobler, 2010
Concordance by Michael Chidester
Andre Lignitzer (Andres Liegniczer) was a late 14th or early 15th century German fencing master. His name might signify that he came from Legnica, Poland (German: Lignitz). While Lignitzer's precise lifetime is uncertain, he seems to have died some time before the creation of the Starhemberg Fechtbuch in 1452.[1] He had a brother named Jacob Lignitzer who was also a fencing master,[2] but there is no record of any treatise Jacob may have authored. The only other fact that can be determined about Lignitzer's life is that his renown as a master was sufficient for Paulus Kal to include him, along with his brother, in his list of members of the Fellowship of Liechtenauer in 1470.[2]
An Andres Juden (Andres the Jew) is mentioned as a master associated with Liechtenauer in Pol Hausbuch,[3] and Codex Speyer contains a guide to converting between sword and Messer techniques written by a "Magister Andreas",[4] but it is not currently known whether either of these masters is Lignitzer.
Andre Lignitzer is best known for his teachings on sword and buckler, and some variation on this brief treatise is included in many compilation texts in the Liechtenauer tradition. He also authored treatises on fencing with the short sword, dagger, and grappling, though these appear less frequently. Lignitzer's sword and buckler teachings are sometimes attributed to Sigmund ain Ringeck due to their unattributed inclusion in the MS Dresden C.487, but this is clearly incorrect.
Note that the Augsburg, Salzburg, and Graz versions of Lignitzer's treatise on short sword fencing are erroneously credited to Martin Huntsfeld, while Huntsfeld's own treatise on the subject is credited to Lew.[5]
The text of the Krakow version of Lignitzer frequently refers to intended illustrations that were never added to the manuscript. The appropriate blank pages are included in the illustration column for reference. It's possible (though not likely, given what we know about its origins) that this manuscript was replicating another one with a complete set of illustrations; if such a manuscript ever surfaces, the illustrations will be replaced.
Additional Resources
1. He is given the traditional blessing on the dead on folio 73r.
3. Anonymous. Untitled [manuscript]. MS 3227a. Nuremberg, Germany: Germanisches Nationalmuseum, ca.1389.
4. von Speyer, Hans. Untitled [manuscript]. MS M.I.29. Salzburg, Austria: Universitätsbibliothek Salzburg, 1491.
5. Jaquet and Walczak 2014.
6. play
7. The Rome version says: “Here begin the pieces with the buckler that the master Andre Lignitzer has written hereafter”.
8. Oberhaw could be translated as “downward cut” for ease of use and clarity in English.
9. This instruction is present in the Dresden version, but missing from the Rome version.
10. Underhaw could be translated as “upward cut”. Can be done with the back edge or false edge, and can also be directed either at the man or at the sword. In this stuck, it appears to be a rising action to meet his sword.
11. Dresden version specifies from his right shoulder, missing from Rome version.
12. The position called the schilt is one described for longsword in the Kolner Fechtbuch and some of the other gemeinfechten sources, and is somewhat similar to what Liechtenauer would call an Ochs, although the point can be upward, potentially like quite a high Pflug. With the buckler in the left hand, standing like this in “two shields” with the sword in the schilt position and the shield covering the right hand, it looks very reminiscent of the schutzen position in the MS I.33. Following this line of thinking, the instruction to turn the sword to the right (out of the schutzen) and to reach (slice) through his mouth is very reminiscent of the follow-up action that the MS I.33 recommends from the schutzen obsesseo, and is also similar to what the Liechtenauer Zedel and glosses refer to as the Alten Schnitt.
13. This instruction to wind bloß (“turn uncovered”) seems to have the sense of separating your sword and buckler while still pushing with both, keeping the hands more or less in front of the shoulders (as if sitting behind a steering wheel in a car with the hands at the “ten to two” position). The body probably has to move and turn in order to support this action, to keep the hands in front of the body rather than going out to the sides.
14. Dresden has “holds his shield up”, Rome has “lifts his shield up”. Both could mean more or less the same thing, but I prefer “lifts” as an instruction.
15. Wechselhaw could be translated as “changing cut”, because it goes up and down, side to side.
16. Streÿchen could be translated as “strikes”, but in this context are specifically those striking actions from below, sweeping up with the short edge, perhaps “streaking” up from the ground to the opponent or to his sword.
17. The same idea of separating your sword and buckler while still pushing both, keeping the hands more or less in front of the shoulders (as if sitting behind a steering wheel in a car with the hands at the “ten to two” position).
18. Probably with a thrust, but potentially with any other pushing technique.
19. Mittelhaw could be translated as “middle cut”, going across from one side to the other.
20. Zwerch could be translated as “across”, in the sense of slanting across from one side to another or slanting across from one height to another, or going diagonally across from one place to another. It also has the sense perhaps of going across something, perhaps slanting across or athwart a boat, or going across your opponent’s blade or leg as opposed to simply coming onto it in whatever fashion. The Zwer is an example of a Mittelhaw, but it is important to note that the thumb is beneath the blade and the cut is performed with hand high.
21. Schaittler could be translated as “parter”, in the sense of being something which parts another thing in two, or dividing something in two.
22. Sturtzhaw could be translated as “dropping cut”, in the sense of a ball dropping back to earth when it has been thrown upward.
23. The treatise says schilts, plural, meaning that you thrust inside both sword and shield.
24. Dresden version specifies to the body, missing from Rome version.
25. If this gloss follows the Liechtenauer method of understanding the five words Vor, Nach, Schwöch, Störck, Indes and their relationship to each other, then we should look to the Blossfechten gloss for the meaning of Indes. However, there is no guarantee that this means exactly the same thing, so the word Indes could just mean “immediately” when removed from its technical context. There does not seem to be as much Winden involved with this sword and buckler treatise as there is in the Blossfechten gloss, although it is still quite possible to perform Winden with shorter blades (look at Leckuchner’s messerfechten, for example), and Lignitzer was a member of the Gessellschaft Lichtenawers and so was probably quite well aware of Liechtenauer’s understanding of the five words and how they relate to fighting.
26. Although both the Dresden and Rome versions say bind, what they probably mean is the fastening of the hand, or the grip upon the sword.
27. The instruction to Versetz could mean “to obstruct”.
28. More correctly, both the Dresden and Rome versions say: “Thus, you have taken the shield from him.” However, the sudden change of tense seems a little abrupt and awkward, so I prefer to maintain the same tense as the rest of the instruction, for stylistic reasons.
29. There is a further piece of instruction in Goliath: “Pull your left leg far back”.
30. The instructions in Goliath are more precise: “Go through to your left side under his left armpit while holding his left arm”.
31. “his” (in Goliath)
32. “his” (in Goliath).
33. “his” (in the Glasgow Fechtbuch).
34. Goliath’s description is a bit different: “Strike out with your right hand and grab his right butt cheek”.
35. The Glasgow Fechtbuch has another suggestion: “…or into his eyes”.
36. “over” (in the Glasgow Fechtbuch).
37. The instructions in Goliath are clearer: “Step with your right leg outside behind his right leg…”.
38. Goliath goes in more detail here: “…turn to your left side and throw him over your right hip”.
39. Goliath has a further suggestion: “You can also step with your right thigh to his left thigh during the turn and throw him”.
40. “his” (in Glasgow Fechtbuch).
41. Corrected from »rechten«.
42. This play is listed twice.
|
Recording fish song to make fisheries more sustainable
What do fish sound like?
When Gulf Corvina breed, their mating calls could be likened to an immense, underwater roar. Now, a group of researchers have found a way to use the deafening din to save these fish from exploitation. Using underwater microphones, they’ve developed a method for converting sound recordings of the fish’s calls into precise population estimates. Those could inform more accurate catch limits, they say, that would ultimately make corvina fisheries—and others—more sustainable.
Fish stocks worldwide are being depleted by overfishing, which often comes down to inaccurate population surveys that lead to overly liberal catch quotas. For the Gulf Corvina especially, overfishing over the last 20 years has taken its toll; the fish now has vulnerable status.
Part of the problem for this species, the researchers explain in Scientific Reports, is that every year the entire population of two million corvina migrate to one spot in the Northern Gulf of California, Mexico, to spawn. There, males attract mates by producing their spectacular cacophony—so loud that fishers can easily locate them from the surface, and haul up more than a million over the course of three weeks. The researchers realized that if they could instead use the noise to monitor the population, there might be a solution for these fish.
Over the course of eight surveys in 2014, they used underwater hydrophones to record the corvinas' roar at the Colorado River Delta, where the fish spawn. The louder the din, the more fish there were assumed to be. But because of the way sound travels underwater, recordings can be misleading, so they alone couldn't provide a dependable estimate. The researchers therefore paired them with sonar. This method pings sound waves into the water that bounce off objects and create a detailed picture of how many objects—i.e. fish—there are beneath the surface.
Sonar would be too costly to use for every population estimate. But in this case, the researchers only used it to sample the population size at different points in the survey, adding a layer of detail to the sound recording. If it worked, this would prove whether there was a link between more noise and more fish. And it did: “When all the fish are packed into the spawning grounds and males are chorusing during the peak spawning activity, we find a tight correlation between sound and abundance,” says co-author Brad Erisman from the University of Texas Marine Science Institute. “Now you can imagine a situation, if it’s predictable, [where] you can just have the underwater microphones out there,” he says. “Because you know this sort of sound intensity and this loudness corresponds to about this many fish. Then you have a very powerful monitoring capability.”
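In outline, this calibrate-then-predict idea could be sketched as follows. The numbers below are invented for illustration and are not from the study, and the straight-line fit is only an assumption about how sound level relates to abundance.

```python
import numpy as np

# Paired surveys: hydrophone sound level (dB) and sonar-derived abundance.
sound_db  = np.array([140.0, 146.0, 151.0, 155.0, 158.0])
abundance = np.array([0.20, 0.45, 0.80, 1.10, 1.50]) * 1e6

# Calibrate once against sonar, then predict from sound alone thereafter.
slope, intercept = np.polyfit(sound_db, abundance, 1)

def estimate_abundance(db):
    """Estimate fish abundance from a hydrophone reading alone."""
    return slope * db + intercept

print(f"~{estimate_abundance(157.0):,.0f} fish at 157 dB")
```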
The researchers were thus able to determine that at peak spawning, there were between 1.53 and 1.55 million corvina in the delta. Compared to more traditional surveying tools, the advantage of this method, they showed, is its cost-effectiveness and efficiency. Hydrophones can easily and regularly be deployed to monitor the population, which could inform more accurate catch quotas and move the fishery towards sustainability.
As a tool, it also holds promise for other species. Commercial fish like pollock, cod, haddock, and grouper all produce calls during spawning. Now the researchers say they’re looking into how their method could be used to set sustainable catch limits for those species, too.
Source: Rowell TJ et al. “Estimating fish abundance at spawning aggregations from courtship sound levels.” Scientific Reports. 2017.
Water Quality Depends on Farmers' Willingness, Not Regulations
On the national level, the Federal Clean Water Act regulates pollution flowing out of pipes, known as point source pollution. But contaminants flowing off of farm fields — non-point source pollution — are exempt from regulations. With little authority to compel farmers to adopt clean water practices, state and federal agencies rely on a voluntary approach. As a result, farming practices can be dramatically different from one field to the next.
|
little Malayan Tapir
Marjorie the baby Malayan Tapir was recently born at Belfast Zoo to parents Gladys and Elmer. Image credit: Belfast Zoo
Zoo Curator Andrew Hope said, “Malayan tapirs are a beautiful but slightly unusual looking species. They are related to horses and rhinoceroses. The adults have a distinctive coat pattern and are black on the front and white on the back. However, when the calves are born they have beige spotted and striped markings, which make them look incredibly like ‘watermelons on legs’. Marjorie will begin to lose her markings after a few months. When she is six months old, she will look like a miniature adult!”
The Malayan Tapir (Tapirus indicus), also called the Asian Tapir, is the largest of the four species of tapir and the only one native to Asia. The scientific name refers to the East Indies, the species’ natural habitat. In the Malay language, the tapir is commonly referred to as “cipan”, “tenuk” or “badak tampong”.
via zooborns
source belfastzoo
|
How to write a good lab conclusion
Writing A Good Lab Report - Organic Chemistry Laboratory: At every stage of an experiment, the accurate and unbiased recording of results is essential. Your lab report should be neat and legible and only written in pen or ...
The Simple Lab Report | UNSW Current Students
What is a conclusion?
• A conclusion is what you will leave with your reader
• It "wraps up" your essay
• It demonstrates to the reader that you accomplished what you set out to do
• It shows how you have proved your thesis
• It provides the reade...
How to Write a Nursing Case Study Essay - BestEssayEdu: Crafting a nursing case study really has two major tasks. First, you select a patient, and begin to collect history. You also set up treatment plans and collect data to determine the efficacy of the plan and then determine your recommendations. Second, you actually have to write up the final piece.
If an experiment failed, should I continue to write a lab report? Maybe writing your lab report over a "failed experiment" isn't going to help other people in the way I described above (then again, maybe it will; I don't know what you are doing). However, it will probably help you understand the actual scientific writing process better than if you wrote a report over an experiment where everything went exactly as expected.
PPT: How to write a GOOD conclusion… - Westerville City Schools
Sixth grade Lesson Writing a RECALL Lab Conclusion (Part 1/2)
How to Write a Business Report Conclusion | Bizfluent: A report conclusion should summarize what the problem or goal is and offer new insights into the situation. You will link your report's contents to the conclusion in an understandable, insightful way. The conclusion will interpret and draw attention to the main points in the body of the report.
How to Write a Lab Report | Owlcation: Here is a lab report example with step-by-step instructions on writing a good lab report. When writing a lab report you are presenting scientific facts that support a hypothesis, to an audience.
How to Write a Conclusion for a Lab Report
In order to know how to write a conclusion for a lab report, you have to begin by defining what a lab report conclusion is. A lab report basically refers to a paper or report written to describe and analyze a laboratory experiment that explores a scientific phenomenon or concept.
How to Write a Lab Report Conclusion | The Classroom: Steps to Writing Your Conclusion. Start by reviewing your introduction and following that structure when wrapping up your report. Next, restate the purpose and goals of the study undertaken. Then, indicate the methods and procedures you used to conduct an experiment to test your research question.
Writing A Conclusion For A Lab Report: The choice is all yours. The article will help students understand some witty ways to write this type of paper, based on the structure of an argument essay outline and 40 great topic ideas.
FORMAL LABORATORY REPORT - Prince Edward Island: entries go in the Literature Cited section of the lab report in alphabetical order, in the format suggested in the aforementioned section of the student agenda. This section should be on a separate final page of the report. Questions: although questions are not part of a formal lab report, they should be answered on a separate ...
Writing A Conclusion For A Lab Report - Quality Writing: If for some reason we are not able to complete the task in time, we will inform you immediately and suggest a suitable deadline. Instead, try a critical workshop, an in-person or online writing class or seminar, or even a hired editor (this is one of my own secret weapons).
How to Write a Lab Proposal | Synonym: A well-written lab proposal will introduce research plans to an audience; sometimes this audience is a teacher who will be grading lab work, while at other times it is a group who will provide ...
How to Write a Conclusion for a Lab Report
How to Write a Good Conclusion Paragraph | Time4Writing: You can start your conclusion by saying, "Gym, Math, and Art are the three classes I try to never miss." If it's a longer paper, a good place to start is by looking at what each paragraph was about. For example, if you write a paper about zoo animals, each paragraph would probably be about one particular animal.
Tips on Writing Lab Reports - UCLA: III. Conclusion. The conclusion is a lot like the introduction except, instead of a summary of what you are going to do, it's a summary of what you did. The reason you have a conclusion is that your lab report might be long and the reader may not remember all the important points that you stated.
How do you write a good science lab conclusion? - answers.com: To write a good science lab conclusion you should use the RERUN procedure. The RERUN procedure means: Recall what you did during this lab. Explain why you did this lab and what you were trying to ...
The Essentials of Writing a Good Lab Report for Introductory ...
• Consult your Lab Manual… there is good information in the back that is very clear and well-organized. Well worth your time!
• A Short Guide to Writing About Biology, Third Edition, by Jan Pechenik is worth its weight in gold and great for many other writing applications. (Check the library!)
• Your TAs are always willing to help but you have ...
Conclusion Generator Tool to Finish your Essay Properly ...: Adding a good conclusion to your paper. Keep in mind that our summary generator creates the final part automatically from the analysis of your writing; that's why you have to review the text beforehand and add corrections if needed. Here are some useful hints for you to add a strong conclusion to any document:
How to Format a Biology Lab Report - ThoughtCo
|
I had never heard of Edouard Louis until March of this year when I read his featured article in New York Times Magazine. And I was incredibly intrigued. Who was this French writer who had been upending French literature for years? Why hadn’t I ever heard of him? I keep up with popular books and there are translations of his work in English. How had I literally never heard of The End of Eddy?
Honestly, I'm still not sure. The End of Eddy came out in 2017. I was out of college at that point, so it wasn't like I was buried under a pile of school work. I've read a fair amount of queer literature and a handful of French novels in translation. I can only guess that it is a combination of things. Americans thinking we're superior to other nations. And we don't want to read about poor communities, anywhere. It could be because it's blatant in its violence and poverty. It's unapologetic in its depiction of life as a queer young man.
Whatever the case, I didn’t know about Louis or his writing until roughly two months ago. But as soon as I did, I put The End of Eddy on hold at the library. It was an intense read.
Louis from the NYT Mag article
Eddy was born in Hallencourt, a village in northern France. Hallencourt, as we soon discover, is an incredibly impoverished area. The main source of work for men is a factory, and the women are pretty much limited to becoming teachers, shop clerks, or aides for elderly people. The houses are run down, the water supply is limited and many rooms inside Eddy's house don't have actual doors.
It would be a tough place to grow up for anyone, but for a boy who is deemed too feminine and sissy, it’s pure hell.
His own family is embarrassed by him, he’s harassed and bullied by boys at school, and the whole town thinks he’s ‘odd.’ Eddy doesn’t ever really make friends, he hangs out with a group of guys but he’s always on the periphery.
There are two boys at school in particular who are physical, and gross, and who make life terrible for Eddy. There were sections of the book that were very difficult to read and my heart ached for all that Louis must have endured.
A large part of The End of Eddy focuses on Louis' response to this upbringing and how it affected him psychologically. He knew his bullies so well that he could read their moods, and honestly kinda liked that they beat him up every day. He justified having anal sex with another boy from the neighborhood because they were both pretending Eddy was a girl. At one point he dated a girl for a while and everyone was so excited about it. Except him.
Honestly, my heart just broke for him. I have never had to struggle with my identity like that, never had to hide who I was or be ashamed of a fundamental part of who I am. And it must have been horrible.
One of the cover’s for End of Eddy
Which is why I feel like I’m not really in a position to judge this book. It’s non fiction, this is a retelling of his life. I’ve seen people say he exaggerated parts of it, or didn’t tell the whole truth. Or played up how rough things were and intentionally left out any good parts.
To which I say: that’s ok. This is obviously what he needed to do to write about this situation.
It's not my job to critique how a person felt in life. And unless someone is a literal judge or on a jury or something, no one owes us the entire truth, or a telling in which every fact is exactly correct and portrayed just as it happened. This is his version.
If you think Jessica Simpson and Demi Moore’s books were not written and edited to serve their purposes and put them in a certain light, have I got news for you.
Which is all to say, I think more people should read The End of Eddy. Some of the subject matter is difficult. But it goes beyond that. It's gratifying to know that Louis was able to handle the trauma he'd suffered through. To know he was able to build a life for himself. And it's important to know that we still have a lot of work to do for our communities. In relation to economic standing, in education, in basic acceptance. So read it. And I recommend you read the NYT Mag piece as well.
And if you want another interesting and intense biography to read, start with Anthony Kiedis and Scar Tissue.
|
Amsterdam Sights
the Old Jewish Quarter of Amsterdam
For more than 350 years, Amsterdam was a center of Jewish life, and its Jewish community was a major contributor to the city's vitality and prosperity. The Waterlooplein area was their neighborhood, where they held their market and built their synagogues.
Of the five synagogues built in the 17th and 18th centuries, only the Portuguese Synagogue continued to serve as a house of worship after the devastating depletion of the Jewish population in World War II. The other buildings, sold to the city in 1955, stood unused and in great need of repair for many years. During those years, the city authorities and the curators of the Jewish Historical Collection of the Amsterdam Museum were patiently reestablishing the collection of paintings, decorations, and ceremonial objects that had been confiscated during World War II.
Ashkenazi synagogue complex (nowadays Jewish Historical Museum)
Hollandsche Schouwburg (Dutch Theatre)
Hollandsche Schouwburg
The former theatre, the Hollandsche Schouwburg at Plantage Middenlaan 24, served during WWII as a gathering point for Jews before they were deported to work camps. Many were put directly on a transport to Auschwitz, never to return. Sometimes people were locked inside for a week before they were transported to one of the German camps.
Diamond industry
diamonds: when in Amsterdam, visit one of the authentic diamond factories - a reminder of Jewish Amsterdam
Jews of Amsterdam
read more about the Jewish Community of Amsterdam
recommended website
Jewish Amsterdam map of Jewish life and culture in Amsterdam
nearby Attractions
Artis the Amsterdam Zoo
Diamond factories reminder of Amsterdam's flourishing Jewish diamond industry
Hermitage Amsterdam satellite museum of the Hermitage in St Petersburg
Het Muziektheater opera and ballet theatre
Hollandsche Schouwburg National Holocaust Memorial
Joods Historisch Museum Jewish Historical Museum
Nieuwmarkt a square in the old centre
Portuguese Synagogue once the world's largest synagogue
Waterlooplein famous flea market
nearby HOTELS
more neighborhoods
|
Red Badge of Courage Literary Analysis
Red Badge of Courage: Literary Analysis

On the surface, Stephen Crane’s Red Badge of Courage appears to be about a young boy’s internal struggles when going off to war, including a lack of courage, fear of being dishonored, and, worst of all, being alone in his situation. As the book goes on, the motif of fear and courage shows Henry’s process of maturing and emotional growth, as well as his final understanding of the true meaning of courage. Crane uses Henry as a way to convey his beliefs about war and how it destroys the lives of our youth.

Toward the beginning of the story, Henry believes that being either wounded or killed in battle would be the only way to earn his “badge” and become accepted as a real soldier. Yet by the end of the novel, he matures and decides to redefine what he believes courage is, because of the traumatic and courage-demanding scenes that tell the story in the Red Badge of Courage.

Henry really shows off his immaturity when the story reads, “At times he regarded the wounded soldiers in an envious way. He conceived persons with torn bodies to be particularly happy. He wished that he, too, had a wound, a red badge of courage.” (100) This shows how simple-mindedly Henry perceives the war, and that he is still caught up in his goal of becoming wounded or worse so that he can call himself “courageous”. This evidence also displays that he is fearful about actually going out onto the battlefield, and that he is striving just to gain respect from the other soldiers.

Later on, Henry becomes cowardly, the very opposite of what he is aiming for, when he leaves the wounded soldier behind. “‘Where-where yeh goin?’ The youth pointed vaguely. ‘Over there,’ he replied... The youth went on. Turning, at a distance he saw a tattered man wandering helplessly in the field.” (114) As Henry retreats from saving the soldier, the
|
Wednesday, 5 September 2018
When you're on the hunt for a home, you'll come across all kinds of options. Single family? Co-op? Condo? Here's a guide that will help.
Types of property
• Single-family home: This is normally a stand-alone structure. And unless you're in a planned community, the owner foots the bill for maintenance, landscaping, utilities, and any other expenses incurred from owning a home. Generally, houses (and multi-floor townhouses in urban areas) are more expensive to buy than co-ops or condos.
• Multifamily units: These are sometimes referred to as “income properties" and can be in the form of duplexes, triplexes, or other structures with more than one dwelling. Before you buy, understand the condition of the units, the local rental laws, how much rent you can realistically garner, and what it takes to be a landlord.
• Undeveloped land: This is a piece of land without a permanent structure. It's critical to know if there is easy access to water (or a well), sewer, natural gas, and electricity, so you understand how much money, time and permitting it would take to bring those elements to the land so it can be developed. You can also hold land as-is and resell it at a later date. Loan options may be different for land than homes; make sure you have a survey and all your research in order before you ask for financing.
• Condominium: Also known as a “condo," this is an apartment that is owned by an individual, who typically also shares ownership of the building's common areas and pays dues to a condo association.
• Cooperative: Also known as “co-op," this is an arrangement in which the unit owners buy shares in the building or in a single-family home in a community. Usually, new residents must be approved by a board, and the acceptance process can be arduous.
• Manufactured or mobile home: You own the structure, and if it's situated in a planned community, you usually lease the plot it's on, and pay dues to the association. The structures typically depreciate faster and aren't worth as much as traditional homes; the upside is that you may be able to afford a better neighborhood, and that you'll pay less in taxes if you don't own the land.
• Properties within communities or associations: While you own your home, condo or co-op unit, if it is in a gated subdivision, community, or managed building, it probably has an HOA (homeowners' association). HOA dues can cover property management costs; amenity maintenance and fees; garbage collection; and sometimes utilities. Common areas such as hallways, courtyards, sidewalks, pools, workout areas, and laundry rooms are also maintained by the HOA and covered by dues.
Make sure you understand what is covered by the monthly fee, and what you must cover financially. Major projects, such as a roof replacement for a condo, might require an “assessment," in which each owner contributes extra funds to cover common-area repairs. Say a $300,000 roof replacement hits a building holding only $100,000 in reserves: the remaining $200,000 would be split among the owners. It helps to study the association's financials to see if there's enough in a reserve account to cover major initiatives.
Also ask to see the CCRs (covenants, conditions and restrictions) document, which is binding for owners, and make sure you can live by the rules. These may include language about pets, renting out your unit, quiet time, landscaping guidelines, and even the color you can paint your door or shutters.
|
Speak English from the heart
Are you going to take an English speaking exam like the CAE or the IELTS or perhaps you have to give a presentation in English to visiting clients or potential customers?
Whatever the situation, when you speak under stress it is easy to forget why you are speaking.
Speaking only has meaning when you have something to say that is important for you and that you can make relevant to the people listening.
The most useful link with the people listening to you is your humanity. The fact that you are deeply human is what gives you connection to any other human being, whatever their interest, occupation or intellectual standing.
Some things we all have in common: doubt, laughter, fear, joy, enthusiasm, pain and disappointment – all the emotions that every single human experiences at one time or another. So, if you want to connect with others, speak from the heart, the seat of your emotions.
What does this mean concretely? When you talk about a subject, talk about what moves you, what inspires you and what you feel strongly about. Connect to that feeling as you speak. Perhaps it is a memory, or a discovery, or simply a strong belief or value. This is what makes speaking powerful.
One way of doing this is to picture yourself in the situation that you are describing. If the examiner asks, "Do you prefer studying alone or with friends?", picture yourself in these two situations and connect with how you feel. Then simply describe what it is like for you in each situation. Speak from the heart.
If you are describing pictures in which people are being creative and you have to say how important it is to be creative in these situations, imagine yourself in their place. What does it feel like? What might be difficult? Then you will have plenty to say!
If you are presenting a project to a customer and you are proud and convinced that it will really help them, feel that as you speak. Picture the customer in the new improved situation, thanks to your project. Imagine their joy and enthusiasm as they see the results. Feel the difference that it will make for them. Speak from the heart.
|
Why mass data analytics isn’t the answer to fraud
Fraud is a regularly recurring topic on the front pages of the newspapers. The outrage among politicians and citizens is invariably great, because it is often about public money. The indignation does not surprise me. But what I find disturbing is that all kinds of data analysts claim that they can discover fraud with a few clever queries. What an optimistic view of the world.
Know the law and be ethical
Fraud detection activities are limited by law. This prevents anyone from simply crawling through data looking for abnormalities without reasonable cause. With personal data now protected under the GDPR, these limitations have become even stricter. From a legal perspective, any processing of personal data requires a legal basis and a specified, explicit, and legitimate purpose that restricts the use of that data. Combining data with other, external data sets can also be problematic. What is technically possible is not always feasible or even legal.
A fraud detection system needs to be ethical as well as legal. Data analytics is all about statistics, which means errors are inherent. How do data analysts deal with these false-positive (Type I) errors? And who is at fault when someone is wrongly accused?
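To see why those Type I errors matter so much, consider the base-rate problem: when genuine fraud is rare, even an accurate detector produces mostly false accusations. Here is a minimal Python sketch; the function name and every rate in it are hypothetical, not drawn from any real system:

```python
# Illustrative only: precision of a fraud flag, via Bayes' rule.
# All numbers below are hypothetical.

def flag_precision(prevalence: float, sensitivity: float,
                   false_positive_rate: float) -> float:
    """Share of flagged cases that are actually fraudulent."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 0.1% of cases are fraud, the detector catches 90% of them,
# and it wrongly flags 2% of honest cases (the Type I errors above).
p = flag_precision(prevalence=0.001, sensitivity=0.90,
                   false_positive_rate=0.02)
print(f"{p:.1%} of flagged cases are real fraud")  # ~4.3%
```

With these assumed numbers, more than 95 out of every 100 accusations would land on innocent people – exactly the ethical problem raised above.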
Data isn’t the full story
Your data isn’t the whole story, not even a shadow in a cave, to reference Plato. Personal data can only be collected and kept for predefined purposes, which mostly revolve around enabling business processes, not fraud detection, and analysts cannot gather more data than they strictly need.
Good fraudsters know that no process is foolproof – there are always blind spots that can be abused. Fraudsters follow the process as required, but they also know the loopholes and exceptions. In other words, the data set may look innocent even when fraud is happening.
In most cases, there's an external event that triggers fraud detection. Most commonly, it's someone who talks too much. Take care of your whistle-blowers; they may point you to far more serious fraud cases than the petty crimes data analytics usually reveals.
Be humble and reflect on your actions
In China, the social credit system is gathering pace. In the end, all Chinese citizens' behavior will be monitored. If their behavior stays within the lines drawn by the government, they'll get benefits; if not, they'll be excluded from basic services. Data analytics will make all this possible, but is this the society we want to live in?
What are the potential side effects of claims that massive data analysis is the cure for fraud? If we are monitoring everyone's personal data for anomalies – in other words, possible fraud – what kind of community are we creating? A place where exceptions are suspicious, and where you need to be mediocre to have a pleasant life.
Fraud prevention limits our freedom. An invisible eye monitoring every single transaction we make could be one of the greatest, and least expected, threats to freedom we have ever encountered. A world without fraud is a utopian concept, but also a prison I don’t want to live in.
|
Cat Fight: How to Handle Feline Aggression
Most multi-cat households will encounter this problem eventually. Here's help.
Talk to any two humans living under the same roof, and they will tell you that they occasionally squabble. Talk to any multi-cat owner, and they will tell you their cats have occasional disagreements, as well. The behavior may include hissing, spitting, swatting or chasing; and the disagreements may erupt during mealtimes, when two cats want the same comfortable chair, or even as part of play. If cats occasionally hiss and swat and don't cause injury, or if they take turns doing the chasing, it's fine, says Ellen Lindell, VMD, board certified in behavior by the American College of Veterinary Behaviorists. But if one cat is playing and the other is not, it's not play anymore.
When Play Turns Serious
Play aggression is not a cause for alarm – but if petty tiffs escalate into major altercations, the resulting skirmishes can become serious. Cat-to-cat aggression is one of the major behavior problems experienced by cat owners. Its causes may include competing for females among sexually intact male cats, or the introduction of a newcomer, or incorrectly introducing the newcomer to the resident cat.
It can also be a case of redirected aggression caused by arousal from an external stimulus such as an unneutered cat outdoors that the indoor cat may see from a window, or competing for territory or dominance. With cats indoors, aggression is usually related to status as opposed to territory or fear-based aggression where one cat is being defensive, says Dr. Lindell.
Too many cats in too small a space may also result in fur flying, but how many is too many depends on the owner and the cats. People may have 20 cats without any fighting, says Dr. Lindell. There's no magic number. It's really a [feline] personality issue. Some spats are okay, but head-to-head fighting should not be tolerated. Howling, yowling, flattened ears, dilated pupils, raised hackles, arched back and puffy hair are indications that a knock-down-drag-out battle is on the horizon, so you need to intervene – without using your hands – before they get to that stage. Watch for subtle things like a retreating cat or one cat threatening the other, says Dr. Lindell. Staring at each other with a twitching tail is a warning sign.
Cat-to-cat aggression may result in more than just a few clumps of hair on the carpet. If cats have unresolved issues with one another, they may injure each other, spray your house to mark territory, become lethargic, hide, refuse to eat and subsequently lose weight, experience stress and fear, and even become sick.
Aggression can lead to litter box problems or cats not being able to get to their food, says Dr. Lindell.
As Always, Prevention is Best
When it comes to aggression, the old adage, An ounce of prevention is worth a pound of cure, certainly applies. You can create an environment that promotes harmony among the resident cats rather than one in which the cats have to compete for resources. Give cats choices, says Dr. Lindell. Provide more than one litter box in more than one location, and let the cats decide for themselves which to use.
Providing a food dish for each cat or feeding stations at multiple locations can help reduce competition for food. Multi-level cat trees or shelves make use of vertical space within the home and give cats optional places to hang out. The more opportunity for cats to have their own spots, the better, says Dr. Lindell. If cats have their own stations, they can work out the time and space sharing for themselves.
It's impossible to know for certain whether a resident cat will like a companion. Often, a pet caregiver may project feelings onto a cat if he or she feels guilty for leaving the cat alone for blocks of time, but the cat may actually be perfectly content. In other cases, cats may enjoy the company of other cats. More and more people believe cats are better with a buddy, but it's very individual, says Dr. Lindell. I've seen many cats who are attached to their owner but have bonded with another cat, and cats who have bonded with one cat but not the other.
Consider Your Cat When Adding Another
If you want to add a cat to your household, do it because it's something you want, but make it pleasant for your resident cat(s). Choose a new cat carefully by finding out as much about him as possible, and preferably from a multi-cat situation, says Dr. Lindell. In terms of inter-cat aggression, there is no statistical difference in gender, according to Dr. Lindell. Age may be a factor, but it depends on the cats. Kittens have a better chance of accepting adult cats, but not necessarily the reverse. Some adult cats are not comfortable with very playful kittens, says Dr. Lindell. Serious aggression is often not exhibited until cats mature socially at the age of two or three.
When selecting a new cat, look for signs of fear. If a cat is fearful, it may have a hard time adjusting, and may overreact to a small threat, says Dr. Lindell.
Adopting a sexually altered cat is more likely to ensure success than bringing home one that is still intact. If you bring home a new kitten, have it spayed or neutered as soon as possible to prevent sexually related aggression. Whenever a new cat is introduced to a resident cat, proper introductions will play a major role in how well the cats get along.
If Fights Erupt, Get Involved
If your cats begin to fight, interrupt the melee. Make a startling sound like shaking a jar of pennies, snapping your fingers, or clapping your hands, says Dr. Lindell. If that doesn't work, blast them with a stream of water. Once they stop fighting, separate them and reintroduce them gradually, even if they have lived together for some time. Never try to separate fighting felines with your hands. It could result in injury to your hands, face or other parts of your body. Have a heavy-duty water gun or a blanket to throw over them, says Dr. Lindell. Make sure they are settled before putting them together again.
If you aren't able to keep the cats or people in the household safe or free from injury, separate them permanently. You don't have to give up a cat, says Dr. Lindell, but do time sharing. One gets the bedroom at one point, while the other is in the den. There are always ways to work through the aggression.
Anti-Anxiety Drugs: Can They Help?
If behavior or environmental modification alone doesn't work, discuss drug therapy with a veterinarian. Anti-anxiety drugs including anti-depressants will lower arousal level, says Dr. Lindell. Some cats are explosive and an anti-anxiety drug will help with impulsive aggression. It's important to work through environmental factors, though.
|
Summary and Analysis Chapter 6
During the following year, the animals work harder than ever before. Building the windmill is a laborious business, and Boxer proves himself a model of physical strength and dedication. Napoleon announces that Animal Farm will begin trading with neighboring farms and hires Mr. Whymper, a solicitor, to act as his agent. Other humans meet in pubs and discuss their theories that the windmill will collapse and that Animal Farm will go bankrupt. Jones gives up his attempts at retaking his farm and moves to another part of the county. The pigs move into the farmhouse and begin sleeping in beds, which Squealer excuses on the grounds that the pigs need their rest after the daily strain of running the farm.
That November, a storm topples the half-finished windmill. Napoleon tells the animals that Snowball is responsible for its ruin and offers a reward to any animal who kills Snowball or brings him back alive. Napoleon then declares that they will begin rebuilding the windmill that very morning.
With the passing of a year, all of the animals (save Benjamin) have wholly swallowed Napoleon's propaganda: Despite their working like "slaves," the animals believe that "everything they did was for the benefit of themselves" and "not for a pack of idle, thieving human beings." When Napoleon orders that animals will need to work on Sundays, he calls the work "strictly voluntary" yet adds that any animal who does not volunteer will have his rations reduced. Thus, Napoleon is able to foster a sense of unity (where animals "volunteer") using the threat of hunger. This transformation of obvious dictatorial practices (forced labor) into seemingly benevolent social programs (volunteering) is another of Napoleon's methods for keeping the animals working and docile.
The effect of Napoleon's propaganda is also seen in Boxer's unflagging devotion to the windmill. Even when warned by Clover about exerting himself, Boxer can only think, "I will work harder" and "Napoleon is always right." The fact that he can only think in slogans reflects his inability to engage in any real thought at all. Slogans such as these are powerful weapons for leaders like Napoleon, who want to keep their followers devoted, docile, and dumb.
One of the most effective ways that Napoleon strengthens his rule is his use of the politics of sacrifice. Indeed, "sacrifice" is an often-repeated word in the novel, and Napoleon uses it to excuse what he knows others will see as his blatant disregard for the Seven Commandments of Animalism. For example, when ordering that Animal Farm will engage in trade with human beings and that the hens must sell their eggs, he states that the hens "should welcome this sacrifice as their own special contribution towards the building of the windmill." After facing some objections from the animals about trading with humans, Napoleon tells them that they will not have to come into contact with any human beings, since, "He intended to take the whole burden upon his own shoulders." Like the apples and milk (which the pigs pretended not to like in the first place), Napoleon masterfully recasts himself as an animal like Boxer — when, of course, the reader sees that the pig and the horse are complete opposites in their selfishness and selflessness. Of course, if any animals ever hint at seeing through Napoleon's false humility, they will be greeted with the same combination of bleating and growls that faced Snowball in Chapter 5.
Squealer continues his work of mollifying the animals who object to Napoleon's plans. As he figuratively rewrites history when explaining that there never was a resolution against using money or trading and that the animals must have dreamed it, he literally rewrites history when he changes the Fourth Commandment from "No animal shall sleep in a bed" to "No animal shall sleep in a bed with sheets." When Clover learns of the two added words, she is naturally suspicious but has been so brainwashed by Napoleon's regime that she concludes that she was mistaken. Squealer's explanation of why the pigs sleep in beds hinges on semantics rather than common sense: "A bed merely means a place to sleep in" and "A pile of straw is a bed, properly regarded" are examples of his manipulation of language. His most powerful word, of course, is "Jones," for whenever he asks, "Surely, none of you wishes to see Jones back?" all the animals' questions are dispelled.
The destruction of the windmill marks the failure of Snowball's vision of the future. It also allows Orwell to again demonstrate Napoleon's incredible ability to seize an opportunity for his own purposes. Afraid of seeming indecisive and a failure while all the animals stare at the toppled windmill, Napoleon invokes the name of Snowball as Squealer does with Jones: "Do you know," he asks, "the enemy who has come in the night and overthrown our windmill? SNOWBALL!" For the remainder of the novel, Snowball will be used as a scapegoat for all of Napoleon's failings; his commands to begin rebuilding the windmill and shouting of slogans occur because he does not want to give the animals any time in which to consider the plausibility of his story about Snowball. Although he shouts, "Long live Animal Farm," he means, "Long live Napoleon!"
|
Some key things to know before reading: Height, weight, diet, exercise, genetics – all of these play a role in the rate at which drugs are metabolized through the body. If Person A and Person B stop smoking marijuana at the same time and take a drug test one month later, it is possible that Person A tests positive and Person B tests negative. The answers to these questions are as close to fact as possible, taking into account differences in metabolism.
Screening test: Only shows positive or negative for the presence of drugs. No quantities are provided.
Confirmatory test: Determines specifically how much of a drug is in the body. Typically performed after a screening, especially in a clinical setting, and clears up false positives by showing that the presence of a drug is insignificant.
Now let’s get to it!
Does drinking a lot of water help pass a urine drug test?
In short, yes, it does dilute the urine. BUT creatinine levels are also measured in a drug test and will be heavily distorted, so the facility testing the urine will in turn recommend a re-test.
Can the lab tell if someone else’s urine is used?
Yes. The temperature of urine at the time it's produced must be close to the body temperature range. If the sample is too cool or too hot and falls outside that small range, it will be rejected and a re-test will need to be administered.
Will eating a poppy seed muffin make a drug test positive?
One of the most popular myths in drug testing. It is highly unlikely one would test positive for opiates after eating a poppy seed muffin that day. However, eating 6 poppy seed muffins could be a concern, both for the drug test and for you as a person. If anything, this could show up in screening as a false positive, but confirmatory testing would ultimately come back negative.
Will eating CBD oil products make a drug test positive?
No, but the same caveat applies as with the poppy seed muffin. One CBD gummy isn't going to make a test positive, but if copious amounts of CBD gummies are all one eats for breakfast, lunch, and dinner, they can trigger a false positive – more so within 24 hours of a test. Additionally, these snacks all vary in THC content, so potency is truly unknown until consumed, and ultimately tested. If employers choose not to go beyond screening, that could mean trouble. A confirmatory test, however, would show as negative.
Does vaping CBD oil make a drug test positive?
No, same applies as above.
Are all drugs detectable the same amount of time?
No, all drugs metabolize at different rates with marijuana typically being detectable a month after use and propoxyphene only showing 1-2 days after use.
Can urine color change depending on what I consume?
Yes, and it’s one of the strangest things I’ve seen in the lab. Of course dehydration plays a part in making urine darker (yellow, orange, and even brown), but medication can take it even further with amitriptyline turning urine blue and rifampin turning urine red.
Hair, saliva, urine – How far back do they go?
Hair testing is often the route high-paying employers are willing to have job prospects go through, and it goes back 90 days. It is, however, the most flawed, as drugs can show up in hair even if not consumed. Say you go to a concert and people around you smoke marijuana – a hair test could show positive for the next 3 months, even though you didn't partake. Even if you did smoke and had a hair test the next day, 3 days, or 5 days later, you would show as clean, because it takes about a week for the substance to show in the follicle. This being said, hair testing for drugs is misleading for employers and not advised in a clinical setting.
Urine testing is the most reliable because of its detection window, as drugs both illegal and prescription are typically detectable between 3 and 7 days after consumption. Though this method does not go back as far as 90 days, the results are more trustworthy, especially with validity testing done to ensure the sample is authentic and confirmation testing to pinpoint how much of a drug is found in the body. This is the best possible way of testing within a clinical setting, not to mention its cost efficiency for clinics and employers.
Saliva testing is reliable, but the drawback is that the detection window for drugs is much shorter. If a patient took OxyContin 3 days ago, a urine drug test would detect it, but a saliva test for this drug only goes back 24 hours and would show as negative. It is effective and relatively inexpensive, though it costs slightly more than urine testing and gives only a small piece of the picture.
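For a rough side-by-side of the three methods, here is a minimal Python sketch that encodes the approximate windows quoted above. The table and helper function are illustrative only – the numbers are this article's generalizations, not clinical reference values, and real windows vary by drug, dose, and metabolism:

```python
# Approximate detection windows, in days, as quoted in this article.
DETECTION_WINDOW_DAYS = {
    "hair":   (7, 90),  # ~a week before drugs reach the follicle, then up to 90 days
    "urine":  (0, 7),   # most drugs ~3-7 days; marijuana can stretch to ~30
    "saliva": (0, 1),   # roughly 24 hours for many drugs
}

def could_detect(test_type: str, days_since_use: float) -> bool:
    """Crude check: does days_since_use fall inside the quoted window?"""
    start, end = DETECTION_WINDOW_DAYS[test_type]
    return start <= days_since_use <= end

# The OxyContin example above: taken 3 days before the test.
print(could_detect("urine", 3))   # True  - inside the 3-7 day window
print(could_detect("saliva", 3))  # False - saliva only goes back ~1 day
```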
The Bottom Line
The best way to pass a drug test, whether for an employer or your doctor, is to abstain from what isn’t prescribed and adhere to the regimen that is prescribed. Buzzkill – I get it. No matter what, you should be as honest as possible especially when meeting with your doctor, whether you think what you took will show up in a drug test or not.
|
Vaping was invented to help hard-core smokers stop smoking, and vaping products such as e-liquids were presumed to be safer than regular cigarettes. In 2019, research scientists at the University of North Carolina School of Medicine divulged astonishing scientific evidence that using e-cigarettes promotes the same cellular responses found in the bodies of smokers afflicted with the lung disease emphysema.
Their study, published in the American Journal of Respiratory and Critical Care Medicine, revealed that the lungs of vapers have the same …
How Your DNA Can Be Altered By Vaping
The puffing of e-cigarettes, also known as vaping, has been made popular and merrier by juice flavors; a good example is the common blueberry cheesecake flavor. Considered safer than smoking cigarettes, vaping has gained popularity through vigorous marketing. However, astonishing research has revealed that it can interfere with your immune system.
Genes lodged within the epithelial cells of the nasal area get suppressed over time. This threatens healthy well-being because many cells…
|
Place 10,000 µ-LEDs in less than Two Minutes
High-Precision Inspection and Metrology Ensures High Yield
By Dr. Subodh Kulkarni, President and CEO CyberOptics and Justin Wendt, Chief Technology Officer, Rohinni. As published in Global SMT and Packaging.
Mini (m-) and micro (µ-) LEDs are poised to usher in a new generation of display and specialty lighting technologies. Both offer many advantages over current technologies, including higher brightness, blacker blacks, wider color gamut, greater energy efficiency, resistance to moisture and oxygen, and absence of image burn-in. The primary barrier to widespread adoption of m- and µ-LED based products is manufacturability. Both are based on conventional LED fabrication processes, but at greatly reduced sizes. Conventional LEDs measure a millimeter or more on a side. Mini LEDs are between 1mm and 100µm (a grain of table salt), and micro LEDs between 100µm and 1µm (the width of a human hair).

Compared to other integrated circuits, LED structures are relatively simple, and the fabrication process is well understood. One of the greatest challenges arises because, in most applications, m- and µ-LEDs must be separated and placed individually on the final product substrate, with placement accuracies that are a small fraction of the LED size and with enough speed to be economically viable. In addition, manufacturers need measurement and inspection capability that allows them to control the process, ensure quality and hit yield targets. One solution already in use in high-volume manufacturing combines a novel placement technology (Pixalux/Rohinni®), capable of placing mini LEDs at rates greater than 50/sec, with a 3D inspection and measurement technology (MRS™/CyberOptics®), widely used to monitor the placement of electronic components using surface mount technology (SMT).
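As a quick sanity check on the headline figure, placement rate and total placement time are related by simple arithmetic. A minimal sketch (the helper function is hypothetical; the 50/sec rate is the figure quoted above, and the 10,000-LED target comes from the title):

```python
# Throughput arithmetic for high-speed LED placement.

def placement_time_minutes(n_leds: int, rate_per_sec: float) -> float:
    """Wall-clock minutes to place n_leds at a sustained rate."""
    return n_leds / rate_per_sec / 60.0

print(placement_time_minutes(10_000, 50))   # 3.3 min at the quoted 50/sec
print(placement_time_minutes(10_000, 100))  # 1.7 min; the "<2 minutes"
                                            # headline implies >~84 LEDs/sec
```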
All LEDs (light emitting diodes) use similar technology to create light. When current flows through a forward biased diode of appropriate materials, electrons and holes recombining near the junction between the p- and n-type semiconductors that comprise the diode emit light. The wavelength of the light can range from infrared to ultraviolet, encompassing the entire visible range. The light is not coherent or monochromatic (like a laser), but it does have a relatively narrow wavelength distribution. The wavelength is determined by the difference in the energy levels of the conduction and valence bands of the semiconductor materials. White LEDs are made by applying a phosphorescent coating to a blue LED, such that the yellow emissions of the coating combine with the blue light of the LED to create a broad-spectrum white light. Color displays combine red, green, and blue LEDs to generate colors across a wide color gamut. An important difference between conventional LEDs and m-/µ-LEDs is the absence of packaging in the latter. Packaging provides protective encapsulation and electrical connections for conventional LEDs.
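The band-gap point in the paragraph above can be made concrete with the standard photon-energy relation λ = hc/E. A short sketch using textbook constants; the example band-gap values are typical of blue and red emitters, not figures from this article:

```python
# Emission wavelength from band-gap energy: lambda = h*c / E_g.
PLANCK = 6.626e-34       # Planck constant, J*s
LIGHT_SPEED = 2.998e8    # speed of light, m/s
JOULES_PER_EV = 1.602e-19

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Peak emission wavelength (nm) for a given band gap (eV)."""
    return PLANCK * LIGHT_SPEED / (band_gap_ev * JOULES_PER_EV) * 1e9

print(round(emission_wavelength_nm(2.76)))  # ~449 nm: blue
print(round(emission_wavelength_nm(1.90)))  # ~653 nm: red
```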
|
Your Dog’s Health Depends On Adequate Reserves of Minerals
Urine pH tells you how your dog's body handles the food he or she eats. If the body gets enough organic minerals from the foods the dog is eating, then the urine pH should be 5.5 to 5.8 (highly acid). This is a favorable, or ideal, physiological response if the dog is eating a well-balanced diet.
If the urine readings register in the ranges of 6.0 to 6.6, or 6.8 to 8.0 (high alkalinity), this means that the body has used up its mineral supplies and is not getting these minerals from the dog's daily meals (alkaline foods are high-mineral foods such as fruits, green veggies and sea veggies). Over time, the pH numbers keep going up. Alkaline urine after feeding your dog an acid diet (meats, dairy, grains and cooked or dry foods) is the result of the body adapting to protect itself.
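To keep those bands straight at a glance, here is a minimal Python sketch that simply encodes the author's thresholds. It is illustrative only – the labels paraphrase the article, readings that fall between the bands are left unclassified, and none of this is veterinary guidance:

```python
# The urine-pH bands described above, encoded directly.

def classify_urine_ph(ph: float) -> str:
    if 5.5 <= ph <= 5.8:
        return "ideal: acidic; mineral reserves adequate"
    if 6.0 <= ph <= 6.6:
        return "warning: mineral reserves being used up"
    if 6.8 <= ph <= 8.0:
        return "alkaline: reserves depleted; ammonia buffering likely"
    return "outside the bands discussed in this article"

print(classify_urine_ph(5.6))  # ideal
print(classify_urine_ph(7.4))  # alkaline
```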
Here is more explanation:
The acid residue of acid ash-producing foods (meat, grains, processed and cooked foods, medical drugs, etc.) is strong and dangerous to the urinary tract. Strong acids must be neutralized or weakened. Acid urine is neutralized in one of two ways: either alkaline minerals are added through the diet, or, if appropriate minerals are not available, the body will use ammonia. Ammonia as a urine neutralizer is an emergency backup system. Ammonia is more highly alkaline than the alkaline reserve minerals; it has a pH of about 9.25. Does your dog's urine smell like ammonia? Does your dog's urine burn the grass when he or she pees? All these symptoms indicate ammonia in the urine.
So we have a strong acid that is going to be eliminated in the urine and two methods of neutralizing it are: (1) the body’s alkaline or minerals reserve, if any, and (2) the emergency backup system, ammonia.
When alkaline minerals are taken from the alkaline diet and added to the strong acid foods, the strong acid is made weaker. It is still acid, but weak enough not to irritate delicate tissue in the bladder.
If the alkaline (mineral) reserve has been depleted – which will happen in most dogs fed commercially produced foods considered acid-producing, even if some of the ingredients were from organic sources – then minerals will not be available to buffer the acid. Yet the body is intent upon survival. The body will neutralize the strong acid even if it must alter its normal way of functioning. One of the alterations it makes is to use ammonia produced by the body. Ammonia is used to neutralize the strong acid. Again, ammonia is a strong alkali. It overpowers the acid, and the urine will register highly alkaline, around pH 8.0 or higher.
The only reason the urine is alkaline when acid type foods and medical drugs are given to dogs is because the body has adapted its function to take care of an emergency. Alkaline urine following acid producing foods or medical drugs is a sure sign that the alkaline or minerals reserve is depleted and that the body’s resistance is faltering.
If a dog has high alkaline urine pH readings, then their body does not have enough minerals especially sodium to buffer the acidity from the diet. Does your dog’s urine have a strong ammonia odor? The odor of ammonia is considered by many to be normal for urine; however, it is a telltale sign that your dog’s body is using an emergency system to keep going.
When you add pulverized veggies or veggie juices that are high in organic sodium and other organic minerals – such as organically grown zucchini and celery – to your dog's diet, the pH numbers will begin to decline. This is not only to be expected, it's what you are trying to accomplish. As your dog's mineral reserves are replenished, enough sodium will be available for the urine and it will start registering around 5.5. This will not happen if your dog is on any type of medical drug, but after you stop giving your dog the medical drugs and take care to feed your dog a healthy diet full of fresh veggies and fruits, the urine pH will start dropping and will become acidic.
In general, I always recommend giving your dog wild-crafted micro algae because its organic mineral content is very high, since this algae feeds on volcanic soil. Feeding this type of algae to your dog is the fastest way to alkalinize the fluids in your dog's body.
The above information came from Dr. Ted Morter’s book: Your Health, Your Choice.
|
One Man’s Meat
Ecological medicine has always been concerned about the environment and its impact on our health.
So I was curious to hear about a recent report by 37 international experts from the Wellcome Trust. These included Dr Walter Willett, professor of epidemiology and nutrition at Harvard. The report was the culmination of three years' work, and it recommended changes that the authors considered not only beneficial for our health but essential to help save the planet.
This report received plenty of publicity in the media, and it may be the first time that any major study has made the link between our health and the health of the planet. They recommended we should cut back on sugar, meat and dairy and increase fruit and vegetables.
Of course, few would disagree with their suggestions to reduce sugar and few would disagree with their suggestion to increase fruit and vegetables. But I suspect many would throw up their hands in horror at the thought of reducing meat and dairy. Do we really need to cut down on these? Are meat and dairy production really threatening the planet? Is this another example of the nanny state? What can we make of it?
We know that meat is not bad in itself. Some Native Americans, with minimal access to fruit and vegetables, lived on meat alone, largely moose. Their health was excellent. True, unlike most of us, they ate organ meat which has a high nutrient concentration. And we can be sure they had no harmful effect on the planet.
But meat production has changed massively and I believe it is not so much meat that is the problem as the way it is being produced. For there is little doubt that meat and dairy production is causing health problems and that it is causing widespread environmental destruction.
The Nurses' Health Study and the Health Professionals Follow-up Study first noted an association between meat consumption and increased mortality from cancer and heart disease, as well as lower life expectancy. The much bigger NIH-AARP study, which followed 545,000 people for 10 years, confirmed that meat consumption was associated with lower life expectancy and with increased mortality from cancer and heart disease.
But we need to look at the bigger picture. There are 70 billion farm animals kept worldwide and two-thirds are factory-farmed (80% in Europe).
These changes started several decades ago when we had what seemed like a good idea. The idea was to breed animals selectively to produce more meat or milk. Perhaps we should have realised the inherent stupidity of this idea but we went ahead anyway and the inevitable happened: the quality of the meat and milk took a nosedive.
We now have to eat 4 factory-farmed chickens to obtain the same nutrients that a 1970s chicken would give. The mineral content of meat, milk and eggs has been steadily declining. The amount of essential fats in pasture-fed cattle can be 300 times greater than that of factory-fed animals, and the beta-carotene levels 700 times greater. Of equal relevance, factory-farmed cattle have been found to carry 143 residues of drugs and pesticides, 40 of these being carcinogenic. Milk is now largely produced by pregnant cows, greatly increasing concentrations of IGF-1, a growth promoter and carcinogen.
For the animals themselves, the change has been a disaster. Factory-farmed animals live in horrendous condition, confined to darkened sheds, hardly able to move. The cruelty is so appalling that these sheds are kept in isolation, well away from prying eyes. Pigs, thought to be as intelligent as dogs, perhaps suffer the most from this forced confinement.
Animals are pushed to their absolute limits, with the aim of producing the maximum quantity of meat and milk. In the past a dairy cow might be expected to produce 1,000 litres a year, but now a high-yielding cow can produce nearly 10,000 litres a year (ten times what her calf would need), greatly reducing the quality of the milk, and also the lifespan of the cow (from 20 years to 2-3 years). Hens that would normally produce 5-6 eggs a year can now produce 300 a year, reducing their lifespan from 5-8 years to 1-2 years. Broiler chickens grow to a grotesque size by 7 weeks, at which time they are ready for slaughter, and over half cannot support their own weight.
But what effect does this have on the environment? By far the most worrying issue is the staggering amount of land needed to produce food for these animals. A third of all agricultural land, an area the size of the EU, is now used for animal feed – an amount of land that could feed 4 billion people.
This is now the major reason for deforestation and the major driver of extinction for every other species on the planet, which are now disappearing at 1,000 times the background rate. All animal species (mammals, birds, reptiles, fish) have halved in number in 40 years. An area of forest the size of New Zealand disappears each year, primarily for land to feed factory-farmed animals.
This land is used to grow wheat, corn and soya. One third of wheat, 50% of corn and 80-90% of soya is being used to feed factory-farmed animals. One third of pelagic fish (such as anchovies and sardines), some 90 billion fish, are sucked out of the oceans each year, ground into fishmeal, and fed to pigs, chicken and farmed fish, destroying whole ecosystems and throwing most seabird populations into sharp decline.
In both in the EU and the USA, sugar, wheat, corn and soy are heavily subsidised by the taxpayer, promoting a system that is damaging to animals, ourselves and the environment and on a scale never seen before.
And there is another major concern. About 80% of the soya is genetically-modified (GM) as is the majority of corn. Great swathes of land in the USA and South America are used for monocultures, with increasingly intense pesticide use (ten-fold increase in ten years), where nothing else can survive, producing a wildlife desert.
And yet we have evidence that rodents fed GM food were unable to reproduce within three generations and had increased rates of breast tumours, kidney and liver disease. Interestingly, in view of the obesity epidemic, rats grew fatter on GM food than those fed non-GM food. We are being fed unlabelled and potentially hazardous GM food, by the back door, through meat and dairy.
It is not just the sheer scale of land being used to feed factory-farmed animals that threatens the planet. Factory farming is also a major contributor to the increasing global water shortage (using one quarter of all fresh water), produces more greenhouse gases than cars, planes and trains put together and squanders 50% of the world’s antibiotics.
And yet chicken and hog farms have been a major source of new infections, including swine flu and bird flu, bringing us perilously close to an age where antibiotic-resistant infections become the norm. A large hog farm or poultry plant can produce as much manure as a major city, and hog manure is ten times more polluting. The slurry from feedlots produces high levels of nitrogen and phosphorus, polluting waterways, leading to algal overgrowth and dead zones downstream where nothing can survive.
And yet it doesn’t need to be like this. It is not meat itself that it is the problem but the insane way it is being produced. Factory-farming is damaging to our health, heartbreakingly cruel for billions of animals and is taking us to the brink of environmental catastrophe. There has to be a better way.
A fascinating study in 2008 showed that animals allowed to forage freely had the highest levels of nutrients, followed by animals on grass, while animals fed on grain had the lowest levels of nutrients.
This is telling us something of immense importance. There is also an irony here. Animals know not only what is good for them, but indirectly, what is good for us, if we only let them choose.
Feeding cattle on grass not only benefits the animals and ourselves, but also turns something we cannot use into food.
Cattle fed on grass need forty times less water and produce none of the environmental degradation caused by large concentrations of manure, the inevitable result of factory farms. If cows could be fed on grass, and chickens and pigs could forage and use some of the staggering amount of food waste (30-50% of all food), it would be a win-win situation for us, the animals and the environment.
Where does this leave us all? I believe that for our own health we need to think carefully about both the quantity and more particularly the quality of the meat we eat. Wherever possible, we should look for labels like grass-fed, pasture fed, outdoor-reared (not outdoor-bred) and organic. Every time we make this choice we not only boost our own health, we vote against animal cruelty and we help protect our fragile environment for future generations.
|
by Eli Moore, Traffic Attorney
Red Light Camera Tickets
Red light camera tickets are becoming increasingly common. The cameras are installed, usually at busy intersections, as a way to spot drivers who disregard a red traffic signal. These cameras, triggered by sensors when a vehicle enters an intersection while the traffic light is red, take a photo of the front license plate and driver. Typically, you won't realize that the camera has caught you running a red light until you receive a ticket with a photo of the license plate. In New York, a red light camera ticket is the responsibility of the car's owner.
Compared to a red light ticket issued by a police officer, which carries heavier fines as well as points, a red light camera ticket may seem less serious. Red light camera tickets are treated by traffic court like parking tickets, with a low fine and no points attached. However, if you believe your red light camera ticket is unjustified, it is worth fighting. You have 30 days to challenge a red light camera ticket, and there are ways that, with the help of an experienced traffic attorney, you can be successful:
1. Use the photo as evidence. Poor image quality may be a reason for arguing against the validity of the picture.
2. Prove that you were not driving at the time the photo was taken. Ideally, you should be able to provide evidence of where you were during the time the ticket was given.
3. Show that the traffic light was defective. A traffic lawyer may be able to do this by observing the length of time the yellow light is illuminated and compare that to the standards of the jurisdiction. If the length of time that the yellow light is lit does not meet the traffic requirements of the local area, then you can use that to support your defense.
4. Argue that you acted out of necessity. If you passed through an intersection to avoid harm or a serious accident, the judge may find that you are not guilty of the red light camera ticket.
5. No one from the red light company appears in court to authenticate the photo. If you've requested this court appearance and there was a "no show," you may ask the judge to throw out the red light photo evidence.
In general, it is difficult to successfully fight a red light camera ticket so seek the advice of a traffic lawyer to see if your case is worth fighting.
|
What are Wasps?
Wasps (order Hymenoptera), by name alone, can instantly invoke fear of being swarmed and stung. In fact, often an encounter with a lone flying, buzzing, black and yellow insect can cause panic.
What many do not know is that most of the 30,000 identified wasp species worldwide are solitary and do not sting. They are also beneficial, helping to control pests that humans find a nuisance in their homes or businesses, or that negatively impact the agriculture industry by damaging crops.
Wasps are categorized into 2 groups - social and solitary. Of the 30,000 species, about 1,000 (just 3%) are social wasps. Social wasp colonies are started from scratch each spring by a queen who was fertilized the previous year and survived the winter by hibernating in a warm place. These wasps have a worker caste system and rely on the queen to build the population. These wasps are mostly aggressive and will swarm and sting to defend their nest. Solitary wasps do not form colonies and tend to be far less aggressive. These wasps often use their stinger to hunt and paralyze their prey with their venom.
Here in Canada, we have 500+ species, most of which are social, and live in colonies with thousands of members. Wasps are often observed more in summer and early fall, which is when their colonies are large, and they are out seeking food.
The most common nuisance wasps in Ontario are yellow jackets, paper wasps, mud daubers and bald-faced hornets. Many use “wasp” and “hornet” interchangeably, and at times confuse the two. Hornets are wasps, and they share many similarities with yellow jackets. Only one true hornet species occurs here, and it is not the bald-faced hornet; that species is far closer to a yellow jacket wasp than to a true hornet.
What do Wasps look like?
Wasps have a head, thorax and abdomen. A narrow petiole connects their thorax and abdomen and gives the appearance of a narrow waist. Most wasps are hairless, have 6 legs and 2 pairs of wings. They are smaller than hornets, averaging 1 inch or less in size, however some species can grow up to 1.5 inches long.
They also come in a wide variety of colours, from yellow to brown to metallic blue and even bright red. Yellow jackets are bulky with vibrant yellow, black and white markings, while paper wasps are long and thin, with long legs, and yellow-reddish and black markings. Mud dauber wasps also have long bodies, but most types are dull in colour, with black with faint yellow markings.
What do Wasps Eat?
Most wasps eat honeydew secretion from plants, nectar or fruit juices. Some species also prey on insects, especially spiders and flies.
Wasps are also attracted to human food, especially high fructose corn syrup-based treats (which contain glucose, sucralose or sucrose) and exposed/decomposing meats.
Wasp Nests
All wasps construct nests, which can vary in location, size and shape based on the species. In comparison to bees, who secrete a wax-like substance to build their nest, most wasps create a papery material by chewing wood fibers into a pulp. Mud dauber wasps, as per their name, make these nests from mud.
Nests, for example, can be aerial or ground-based. They can resemble multi-layered paper balls the size of soccer balls or basketballs (yellow jackets and bald-faced hornets), fist-sized mud nests (mud daubers), or single-layer comb-like nests with no paper enclosure (paper wasps).
Wasps typically build their nests within or hanging from:
• Trees, shrubs or bushes
• Fences, soffits, exterior joists, door or window frames
• Porches or decks
• Sheds, garages and barns
• Hollow trees
• Underground burrows
• Sandy or bare soils (away from vegetation)
• Wall voids, attics, basements or crawl spaces
• Abandoned vehicles
• Some species will re-use abandoned nests from other wasps or bees or small mammals.
If you come across a nest whose location does not pose a threat to your home or business, leave it be. Do not approach a nest that appears to be active. Active nests often have wasps swarming around their exterior. If the wasps are social and feel threatened, they may swarm and sting in defense. Some wasps, like the bald-faced hornet, are very aggressive and will even chase you for long distances in an attempt to sting you.
The Lifecycle of a Wasp
In early spring, fertilized females (from the previous generation) emerge from overwintering to become new queens. Each will find a suitable place for a new nest and start to build it. She will lay her fertilized eggs in the individual cells within the nest; these will develop into the first generation of non-fertile female worker wasps. They will take on the duties of expanding the nest, seeking food and caring for the queen and her young. The queen will remain in the nest and concentrate on egg laying to continue to build up the colony's numbers.
In late summer to early fall, the queen will lay eggs which will develop into adult males and fertile females. These wasps will leave the nest to mate, and shortly afterwards the males die. The newly fertilized female wasps will be the new queens the following spring. They permanently leave the colony to find their own protective shelter to overwinter. The rest of the colony will die in winter, leaving these overwintering future queens as the only survivors.
Do Wasps Sting or Bite?
Social wasps will sting, and will sting multiple times if provoked or threatened. This is because these types of wasps are often territorial, very aggressive, and a danger in large numbers.
Social wasps under distress emit a pheromone that attracts any nearby colony members and sparks a swarming, stinging attack. Non-social wasps, like the mud dauber, can sting if touched, but they typically do not swarm, as they are not aggressive and do not defend their nest.
Wasp stings are painful and can cause swelling and redness around the sting site, as well as severe allergic reactions or even death. If you have any concerns after being stung by a wasp, seek professional medical assistance.
Damage Caused by Wasps
Wasps cause superficial damage to a home or building, and they are an eyesore. They are mostly obnoxious, uninvited guests to outdoor activities or events. They are bothersome at BBQs, picnics, outdoor parties, during gardening, and even during a simple meal on a back deck or patio.
Wasps are a serious concern for people and pets because social wasp species will sting, and can cause severe allergic reactions.
Despite the above, wasps do play an important role in our ecosystem. They are effective at helping to control common nuisance insect populations (like flies and spiders) that we do not want in our homes or businesses. They are also beneficial in helping to control populations of crop-damaging pests.
How Does a Wasp Infestation Happen?
In general, queens overwinter in warm, protected locations to survive the winter so that they can awaken in the spring to start the next generation of wasps. In seeking shelter in late summer and early fall, they enter homes or structures through cracks and crevices. In the spring, when they become active, they often construct their new nests close to their overwintering location. As summer passes, the nests grow, as does the population of wasps.
Signs of a Wasp Infestation
Obvious signs of a wasp infestation are the presence of an active nest and wasps swarming around.
How to Get Rid of Wasps
As new nests are typically built in locations close to where the queens overwintered, wasp-proofing your home and business helps to reduce the chances of fertilized female wasps gaining access, and therefore decreases the chances of an infestation the following year.
We advise against DIY wasp pest control, especially when colonies are large. Social wasps will fiercely protect their nest if threatened or provoked, and they can sting repeatedly.
Here are some helpful tips to prevent a wasp infestation:
• Inspect your home or building's exterior to identify cracks, crevices or openings that can admit wasps and other insects (which provide a food supply for wasps). Check:
• Windows, doors, siding, eaves and fascia boards.
• Roof joints and behind chimneys.
• Places where utility pipes, plumbing, wires or cables enter the building.
• Seal all exterior cracks, crevices, gaps and holes with quality silicone or silicone-latex caulk.
• Repair or replace torn window and door screens or weather stripping.
• Install or repair screens in roof and soffit vents.
• Trim back trees, shrubs and bushes.
• Remove any non-active wasp or hornet nests from previous seasons.
• Store garbage, green bins and recycling with tight-fitting covers away from your home or business.
• Regularly clean garbage cans, green bins and recycling receptacles to remove sugars and proteins, which can attract wasps.
• In the summer, help keep wasps away (especially yellow jackets) from outdoor entertaining areas by keeping food sealed or covered, and cleaning up spills promptly.
• Some have found relief from nuisance yellow jackets by placing a cup of fruit juice or pop away from outdoor eating areas, which draws them away.
If you have a wasp infestation in your home or business, or concerns about an active wasp nest, contact a licensed pest control professional to help you safely get rid of wasps.
|
Chapter 1-2-1: Noun Gender
Grammar > Parts of Speech > Nouns > Noun Gender
Unlike French, English usually does not use gender-specific nouns. Instead, gender is based on the actual physical gender of the noun under discussion. For example, Anglophones would refer to a spider as "it", not "she."
Many common nouns, like "engineer" or "teacher," can refer to men or women. Once, many English nouns would change form depending on their gender -- for example, a man was called an "author" while a woman was called an "authoress" -- but this use of gender-specific nouns is very rare today. Those that are still used occasionally tend to refer to occupational categories, as in the following sentences:
David Garrick was a very prominent eighteenth-century actor.
Sarah Siddons was at the height of her career as an actress in the 1780s.
The manager was trying to write a want ad, but he couldn't decide whether he was advertising for a "waiter" or a "waitress."
Attribution information for this page: written by Chris Berry and Allen Brizee; edited by Jamie Bridge.
|
Coronavirus: Can Chinese products get you sick?
Can products shipped to you from China carry the coronavirus?
PHOENIX — The Wuhan coronavirus is making tens of thousands of people sick and has killed hundreds. So it's no surprise that people are scared of catching it from any source.
A viewer named Samantha sent us an email asking about products she had recently ordered that were being shipped from China.
The email said the products were unable to be shipped because of the coronavirus and ended with, "in consideration of your health and safety issues, we cancel [sic] your order."
Samantha took that to mean the shipper was concerned that the virus might hitch a ride on the cashmere sweater she bought and come to the United States.
We went to Kevin Stephan, an infectious disease expert with eInfectionMD.com, to verify.
"At room temperature, on surfaces like metal and plastic," Stephan said, "it lasts maybe up to nine days."
Stephan said tests of other types of coronaviruses have shown they could potentially survive longer in the right conditions, like cold temperatures and humidity, but those conditions are not likely to be found in shipping.
The Centers for Disease Control and Prevention also says there's no evidence the virus can travel on imported products, and points out that none of the confirmed cases in the US have been linked to an imported Chinese product.
So we can verify that you're more likely to catch the coronavirus from a person than a Chinese product.
|
Stealing Offences (NT)
Written by Fernanda Dahlstrom
In the Northern Territory there are various stealing offences, most of which are governed by the Criminal Code 1983. While the offence of stealing is a property offence, some offences involving theft are composite offences, meaning they involve both property and violence. This article will outline the main offences relating to stealing in the Northern Territory.
What is stealing?
Stealing is the dishonest appropriation of property belonging to another with the intention to permanently deprive the other person of it. However, it does not include the appropriation of property by a person who reasonably believes that the property is lost and that its lawful owner cannot be found.
Under Section 210 of the Criminal Code, a person who is found guilty of stealing is liable to a maximum penalty of seven years imprisonment, or 14 years imprisonment if the thing stolen is a testamentary instrument (a will) of a person who is living or dead or an item that has a value of more than $100,000.
Receiving stolen property
Under Section 229 of the Criminal Code, it is an offence to receive property that has been obtained by way of an indictable offence (such as stealing or robbery). This offence is punishable by a maximum penalty of seven years imprisonment, or 14 years if the value of the thing is more than $100,000.
Assault with intent to steal
Under Section 212 of the Criminal Code, a person who assaults another person with intent to steal from them is guilty of an offence and liable to a maximum penalty of seven years imprisonment. If the offender is armed, they are liable to a maximum of 14 years imprisonment and if they are armed with a firearm and injure a person by discharging it, they are liable to imprisonment for life.
An assault where an act of theft is actually carried out constitutes the more serious offence of robbery.
Stealing domestic animals
Under Section 54 of the Summary Offences Act, it is an offence to steal a domestic animal such as a dog or any other animal usually kept in a state of confinement. This offence carries a maximum penalty of a fine of $200 plus the value of the animal.
Which court will deal with the matter?
Stealing is an indictable offence, which can be heard summarily with the consent of both defence and prosecution. In the majority of cases, charges of stealing are finalised in the Local Court, where the maximum penalty that can be imposed for a single charge is imprisonment for two years. If either the defence or the prosecution elects to have the matter heard on indictment, it must be committed to the Supreme Court and finalised there.
When a minor is charged with stealing, the matter will generally be heard in the Children’s Court.
Defences
A person can be found not guilty of stealing on any of the following bases:
Factual defence
A factual defence is where the accused denies doing the acts alleged.
Reasonable doubt
If the prosecution cannot prove all the elements of the offence beyond a reasonable doubt, the accused will be found not guilty.
Honest and reasonable belief of lawful right
If the accused had an honest and reasonable belief that they were the owner of the property taken, they have a full defence to a charge of stealing. Whether a mistaken belief was reasonable will be assessed based on the circumstances of the alleged offence.
Duress
If the accused can establish that they acted under duress, they must be found not guilty. A person acts under duress if they are essentially ‘forced’ to carry out the acts by someone else. If a person can establish that they were subjected to serious threats (of death or serious harm) if they did not carry out the acts, they have a full defence.
If you require legal advice or representation in relation to a theft matter or in any other legal matter, please contact Go To Court Lawyers.
|
Blog at Grand Villa of Delray West in Florida
Alzheimer's and Dementia: Signs to Watch
How do we know if our aging loved one has dementia or Alzheimer's disease? Just because mom or dad is forgetting things doesn't necessarily mean they're suffering from cognitive decline.
Every person with a disease that affects the memory experiences different symptoms with varying levels of severity. However, there are signs to watch for. According to Aging Care, early indicators of dementia and Alzheimer's disease include:
• Confusion and lack of concentration - Being unsure or confused a lot can be a common indicator of memory loss and the early onset of dementia or Alzheimer's disease. Also watch for your aging loved one having difficulty doing things they did before, or having a hard time concentrating. Some individuals even forget where they are or how they got there.
• Difficulty with language and speaking - Dementia and Alzheimer's disease can have an effect on how someone constructs sentences while speaking or writing. An example of this could be your aging loved one saying "hand-clock" instead of "watch." The confusion can also cause an abrupt stop in the middle of a sentence or conversation.
• Changes in personal hygiene or grooming - If a sudden decline in self care emerges, for instance, irregular bathing, oral hygiene, or if the individual is wearing the same clothes over and over again, this could indeed be an indicator of serious cognitive decline.
Discover more signs of Alzheimer's disease and dementia by reading this Aging Care article here.
Grand Villa of Delray West
5859 Heritage Park Way Delray Beach, FL 33484
|
When you decide that you want something to change, it can be very challenging to even decide how you would like it to be. What would work for you? What would work for you and everyone involved? What is the best way to shape the future? Why are some things difficult to face and some easy? Why do we feel fear? Can we be more empowered in our attitude to change?
Energy to me is the natural flow of life; it is all information that we are deciphering, interpreting, creating and shaping in every moment. Where there are difficulties, the energy is not flowing easily; it could be trapped, corrupted, misunderstood or even be in reverse or inside-out. Self-development is about unravelling and evolving these restricting patterns and developing ways to support ourselves when we feel frightened or overwhelmed, when we really want to move forward, put the fears to rest and grow into something more fulfilling and positive!
So What is Fear? Fear is a natural response to a situation where we feel threatened, whether physically or psychologically. Physical threats are of course of primary importance in a given moment, but the psychological effects of continued worry about physical security, or all the other ‘what-ifs’ we think about these days, can be far more debilitating and limiting on who we are as people. When we have one familiar situation or routine to deal with, we are OK, but can we cope with new challenges or changes in circumstances (whether sudden or progressive)… such as the continuously growing complexity of everyday life… communications? media? family challenges and career paths? How do we live our lives with the myriad of choices and opportunities available? How do we not only survive but also thrive?
Evolution is a Natural Process… it cannot be stopped; it is part of Nature. We are continuously getting new ideas based on what we know and how we want the world to be… we are continuously assessing possibilities… both consciously and sub-consciously. We will always be challenged to create and adapt to something new. So…
Think of Fear in a New Way… It is only a response to something beyond your current understanding… something that you can learn about and begin to face. Ask yourself why you feel scared, intimidated or angry. Do you really want this to keep happening? Holistic Self-Development work helps identify and transform disempowering life patterns. When these change, your outer world updates and begins to be more supportive and guided… keeping you not only safe but with a growing idea of what you want out of your life… what will fulfill you… 🙂
|
The difference between Iris recognition and fingerprint recognition
Author: huifan Time: 2017-08-17
Iris recognition identifies a person based on the iris of the eye. It is mainly used in security equipment and in settings with a high degree of confidentiality requirements.
The eye is made up of the sclera, iris, pupil, lens, retina and other components. The iris is the annular portion between the black pupil and the white sclera, and it contains many interlaced spots, filaments, coronae, stripes, crypts and other detailed features. The iris is formed during fetal development and remains unchanged throughout life. These characteristics make each iris unique, and that uniqueness is what makes reliable identification possible. Therefore, the iris of each person's eye can be used as an object of identification.
Iris recognition requires four steps: iris image acquisition, image preprocessing, feature extraction and feature matching.
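To make the final matching step concrete, here is a minimal Python sketch of how two iris templates might be compared. It is a sketch under stated assumptions, not any vendor's actual method: it assumes the first three steps have already produced a binary, Daugman-style "iris code" plus a validity mask for each eye, and the helper names, the 2,048-bit code length and the 0.32 threshold are all illustrative.

```python
# Minimal sketch of iris feature matching via fractional Hamming distance.
# Assumes acquisition, preprocessing and feature extraction have already
# produced a binary iris code and a validity mask per eye (assumed names).
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both masks are
    valid (masks exclude eyelids, eyelashes and specular reflections)."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0  # no usable bits: treat as a non-match
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

def is_same_iris(code_a, code_b, mask_a, mask_b, threshold=0.32):
    # Same-iris comparisons cluster at low distances; different irises
    # cluster near 0.5, so a single threshold can separate the groups.
    # (The 0.32 value here is an illustrative assumption.)
    return hamming_distance(code_a, code_b, mask_a, mask_b) < threshold

# Toy usage with random bits standing in for real Gabor-phase codes:
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048).astype(bool)
probe = enrolled.copy()
probe[rng.choice(2048, 100, replace=False)] ^= True  # ~5% bit noise
mask = np.ones(2048, dtype=bool)
print(is_same_iris(enrolled, probe, mask, mask))      # True
```

One reason binary codes are attractive here is speed: comparing two templates is just bitwise operations and a count, which keeps matching fast even against large databases.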
Fingerprint recognition
Of all biometrics, fingerprint recognition is one of the most common. Like other pattern recognition systems, it includes modules for fingerprint image acquisition, processing, feature extraction and matching. Fingerprints are commonly used wherever personal identification is needed, such as access control systems, time and attendance systems, notebook PCs, banks' internal processing and bank payments.
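As a rough illustration of the matching module, the sketch below pairs up minutiae (ridge endings and bifurcations) from a probe print against an enrolled template. It is a simplified sketch under assumptions: feature extraction is taken to yield (x, y, ridge angle) minutiae, the prints are pre-aligned, and the tolerances and expected score are invented for illustration; real matchers also handle rotation, translation and skin distortion.

```python
# Simplified sketch of minutiae-based fingerprint matching (assumed,
# pre-aligned minutiae as (x, y, angle-in-radians) tuples).
import math

def match_score(probe, enrolled, dist_tol=10.0, angle_tol=0.26):
    """Fraction of probe minutiae that pair with a compatible enrolled
    minutia (close in position and in ridge direction)."""
    used = set()
    paired = 0
    for (px, py, pa) in probe:
        for i, (ex, ey, ea) in enumerate(enrolled):
            if i in used:
                continue
            close = math.hypot(px - ex, py - ey) <= dist_tol
            # Wrap the angle difference into [-pi, pi] before comparing.
            diff = math.atan2(math.sin(pa - ea), math.cos(pa - ea))
            if close and abs(diff) <= angle_tol:
                used.add(i)
                paired += 1
                break
    return paired / max(len(probe), 1)

probe = [(10.0, 12.0, 0.50), (40.0, 44.0, 1.20), (70.0, 20.0, 2.00)]
enrolled = [(11.0, 13.0, 0.52), (41.0, 43.0, 1.25), (200.0, 200.0, 0.10)]
print(match_score(probe, enrolled))  # 2 of 3 minutiae pair up -> ~0.67
```

In practice the resulting score would be compared against a threshold tuned to balance false accepts against false rejects for the application at hand.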
Iris recognition is more specialized and offers higher security than fingerprint recognition.
|
Detox Water Can Help Cause Tooth Decay
Detox water (also known as skinny water) is promoted as a great all natural way to cleanse the body and lose weight. These do-it-yourself fruit and herb infused water concoctions are supposed to be great for your overall health, but there’s one problem: detox water can be really bad for your teeth!
Perhaps the most common ingredient in detox water recipes is lemons, though other citrus fruits such as limes, grapefruit, and oranges also make an appearance. Citrus fruits are acidic: they contain citric acid. However, what you might not know is that lots of other fruits are highly acidic too, including pineapples, mangoes, peaches, pomegranates and even blueberries. Some recipes even call for apple cider vinegar, which is also acidic.
Acid is one of your smile’s greatest enemies. Acids can eat through the hard outer enamel layer of your teeth, causing spots, cavities, and a great place for tooth decay-causing bacteria to start an infection. (Fun fact: It’s actually acid that links sugar to tooth decay. The existing bacteria in your mouth consume the sugar and excrete acid as a byproduct, right onto your teeth. Lovely, right?)
So, it turns out, depending on the ingredients, detox water is a nice tasty erosion-causing acid bath for your teeth. Okay, that may be a little dramatic, but detox water certainly puts your teeth at higher risk for tooth decay than plain clean water.
The truth is, detox water (like most flavored drinks) is fine in moderation. Unfortunately, moderation is not what a lot of detox water lovers recommend. A lot of instructions for detox water suggest sipping it all day long. That means repeatedly subjecting your teeth to an acidic environment!
Drinking detox water is okay, and will probably benefit your health by keeping you better hydrated (other health claims are dubious, but that's another story). Your dentist just asks you to be sensible about it. Just like we recommend not snacking between meals, we also suggest not drinking detox water between meals. This will give your teeth "time off" from being covered in acids, sugars, etc. Most dentists will tell you that the only thing you should be sipping on all day is water. Consider drinking a detox water with breakfast, then brushing your teeth and going about your day with a fun, well-designed bottle of fruit-free water instead.
If you have a detox water habit you just can't shake, there are some steps you can take to reduce its impact on your teeth. One way is to use a straw, which helps keep the liquid from hitting your teeth directly. You can also flush your mouth with plain water every time you drink the detox water, to help wash away the acid and any sugars. However, remember that the primary way detox water "draws out toxins" and improves your health is by encouraging you to consume more water. When in doubt, regular fluoridated tap water is your smile's best friend.
|