For people who have never worked in shipping, the concept and the logistics of buying a vessel sound mind-boggling. How does one buy a piece of property located in some foreign jurisdiction, which likely flies another jurisdiction’s flag and possibly could fly yet another flag once the sale is concluded, when the holding companies of the buyers and sellers may reside in still other jurisdictions, and the vessel is likely mortgaged to a creditor in a different jurisdiction whose mortgage has to be released so that free title can pass to the buyers? How can one ensure that all details are attended to, that the sale is smooth, and that the whole transaction does not go up in smoke once the buyers’ funds have reached the sellers’ account (the buyers’ funds, along with the release of the escrow deposit and other funds due, have to reach the sellers’ account before the sellers release the title)? Buying real estate or a car may be a mini-version of a shipping transaction, and things can go wrong there too, but at least with real estate the property is stationary and, in the event of a dispute, the jurisdiction is abundantly clear.

For the purposes of this discussion, in our series of discussions on sale & purchase, we assume that this transaction concerns the sale and purchase of a vessel in the secondary market, meaning that the vessel is already on the water and trading, and that the buyer intends to buy her as a trading vessel and immediately put her to work transporting cargo for profit. Transactions for newbuilding contracts with shipbuilders or ‘resales’ (usually novation agreements for the sale of newbuilding contracts to a new buyer while the vessel is still on order or under construction) and transactions for the sale of vessels for scrapping (demolition sales), whether directly to the end buyer (a scrap yard) or to an intermediary ‘cash buyer’, are not considered under this scenario; such transactions are still handled by shipbrokers, but there are nuances in such transactions and their contracts that for now fall outside the scope of this essay. Also, for the purposes of this discussion, we will not endeavor to analyze the logic of the seller and the buyer behind the transaction, only note that these two parties, acting of their own will, without compulsion and in full knowledge of facts and market conditions, are willing and able to enter into such a transaction.

A typical S&P transaction starts when an operating shipowner (someone who operates / manages their own vessels, as compared to a financial shipowner) contacts their shipbroker (or ‘S&P broker’) to inquire about a certain type of vessel, market conditions in terms of freight, market and employment prospects, recent transactions and prices, the parties involved in those transactions, and the nature of the transactions (such as a ‘charter free’ sale, or with ‘charter back’, with ‘survey due’, with ‘prompt delivery’, ‘subject to tender’, ‘subject to financing’, etc.) Likewise, a potential seller will place a similar phone call with the shipbroker of their choice inquiring about similar information. Once our friendly owner decides to make a purchase, they will inquire about available tonnage for sale, that is, vessels whose owners have been openly advertising them for sale through the S&P broker market, or sometimes through trade publications, the internet, etc. For such openly available tonnage, usually there is no exclusive broker handling a vessel, and her owner depends on competitive brokers to find buyers.
In certain cases of openly available tonnage, the seller may have exclusively engaged a third-party, competitive broker to source buyers, but usually the exclusive broker is an ‘in-house’ broker or works for a brokerage office affiliated with the sellers’ group of companies. The buyers’ broker will also send a blast email (a ‘P/E’ or ‘purchase inquiry’ in brokerage parlance) to corresponding brokers worldwide looking for additional similar tonnage for sale, fitting the buyers’ guidelines for potential candidates in terms of size, age, builder, specification, etc. In the age of the internet, of instant and extremely cheap telecommunications, and under the mode of ‘information wants to be free’, most sales candidates are already well known to players with decent market access and a decent set of skills. Therefore, the main purpose of any purchase inquiry is to find vessels that are not actively in the market for sale: vessels whose owners may need some convincing to consider selling them, vessels whose ownership has certain ‘buttons’ that, if pressed, could entice their owners to sell. ‘Off market candidates’ are usually the holy grail of shipbrokerage, as these are the vessels that can be sold under certain circumstances but that the market doesn’t know about yet.

Once a certain sales candidate has been located and there is a general understanding of overall ‘price ideas’ between buyer and seller, the buyer, through the designated ‘broker channel’, will request certain technical information about the vessel for their evaluation, such as ‘class printouts’ (copies of the vessel’s maintenance and condition records with her classification society), permission to ‘sight class records’, the ‘TC description’ (short for timecharter description, a two-page description of the vessel that owners usually present to charterers with the vessel’s pertinent descriptive and commercial information, such as cargo capacity, fuel consumption, etc.), ‘GA Plans’ (short for General Arrangement plans, an overall arrangement drawing of the vessel), ‘Capacity Plans’ (showing cargo holds or cargo tanks, their arrangement including loading / discharging mechanisms, and above all their capacity), the ‘Mid-section Plan’ (showing a vertical cutoff at the vessel’s mid-section), the ‘Q88’ (for tankers, short for Questionnaire 88) and the OCIMF report (for tankers, short for Oil Companies International Marine Forum). Once the information provided meets with the buyers’ approval, the buyers will ask for ‘permission to inspect’, that is, they will engage an inspector to visit and inspect the vessel. Usually there is an LOI involved (a ‘Letter of Indemnity’ whereby the inspector / inspecting party indemnifies and holds the owners harmless for any damage or accidents), and the inspector, in the course of several hours, will undertake a superficial, pre-purchase inspection of the vessel. This means that, without interfering with vessel operations and without having the right to open up the engine or any other part of the vessel for inspection, the inspector is allowed to sight, inspect, copy and take pictures of the vessel’s certificates onboard, the engine room, the hotel accommodations (‘superstructure’), cargo holds / tanks, hull, ballast tanks, etc., observe cargo operations and take notes.
Inspecting a huge structure like a ship, usually while she undertakes cargo operations (loading / discharging) under tight time constraints, sometimes without natural light and often in dangerous or heavily confined spaces, is an art and a skill that often goes under-appreciated; not to mention the stakes that are tied to an inspector’s report. About a week after the inspection, once the inspector has had the opportunity to return to their office and put all the information they collected into a multi-page inspection report, the buyer has a fairly accurate knowledge of the condition of the vessel, upon which they will base their exact opening ‘firm offer’ to the sellers. The opening offer contains not only a pecuniary offer for the vessel, but also the ‘main terms and conditions’ for the sale, such as contemplated time and place of delivery, items to be included in the sale, expected protocol, etc., to which the sellers will provide a counter offer on the price and the terms, including items that have to be excluded from the sale (usually rented items, such as nitrogen bottles onboard, are not the sellers’ property, so they have to be excluded from the sale, though the buyer may opt to take over the rental contract to keep them onboard). After a few rounds of offer and counteroffer (a fascinating ‘horse trading’ ritual, as it has been called by parties typically associated with shipping), there is hopefully an agreement on all terms, and then a ‘main recap’: a clean email copy of the main terms of the proposed sale is prepared and the agreement is confirmed once again by both buyer and seller, as this agreement now has the full force of a legal contract.

Once the main terms / recap have been confirmed, it’s the buyers’ broker’s job to incorporate those terms into a contract to be signed by both parties. For the contract, there are certain boilerplate templates that have been used before; usually, the original ‘firm offer’ is based on such a template, and one of the main terms of the recap specifically mentions the template to be used for the contract. The most established form of template or MOA (‘Memorandum of Agreement’) is the ‘NSF 93’ form, short for the Norwegian Sale Form of 1993, which was prepared by the Norwegian Shipbrokers’ Association and is maintained by the Baltic and International Maritime Council (BIMCO). There is the updated ‘NSF 2012’ form, but NSF 93 has remained the most popular template, as people are more familiar with it, and there is also a sizeable body of case law based on the older template. Alternatives to the NSF have been the ‘Nippon Sale Form’, which is preferred by Japanese sellers and usually considered a seller-friendlier template (as Japanese owners usually only sell in the second-hand market the vessels they had built brand-new at Japanese shipyards), and the ‘Singapore Ship Sale Form 2011’, which is more commercially flexible, accommodating more legal jurisdictions (besides English law) as well as the concerns of non-operating shipowners (like leasing companies, etc.) who are getting an ever-increasing slice of the shipowning pie. Once again, NSF 93 is the industry standard despite her age… Once the MOA is prepared by the buyers’ broker, and reviewed once again thoroughly for any errors or omissions against the ‘recap’ and its main terms, the buyers’ authorized representative first signs an original, to be exchanged by email or facsimile and countersigned by the sellers’ authorized representative.
Once the MOA has been signed by both parties, there is still a lot of work to be done until the delivery of the vessel. However, the most difficult parts of the ‘deal’ are now behind the parties, and it’s time for the buyers to ‘lodge deposit’ for the sale, usually within three banking days, and look forward to the day of the closing. © 2013 Basil M Karatzas & Karatzas Marine Advisors & Co. All Rights Reserved.
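Purely as an illustration (this sketch is not part of the original post), the strictly sequential workflow narrated above lends itself to a simple ordered state machine. The stage names below are a hypothetical paraphrase of the narrative, not an industry-standard schema:

```python
from enum import Enum, auto
from typing import Optional

class DealStage(Enum):
    """Hypothetical stages of a secondary-market S&P deal, paraphrasing the post."""
    PURCHASE_INQUIRY = auto()   # buyers' broker circulates a 'P/E' to corresponding brokers
    TECHNICAL_REVIEW = auto()   # class printouts, TC description, GA/Capacity plans, Q88
    INSPECTION = auto()         # superficial pre-purchase inspection under an LOI
    NEGOTIATION = auto()        # firm offer, then rounds of counteroffers on price and terms
    MAIN_RECAP = auto()         # clean recap of main terms; now carries contractual force
    MOA_SIGNED = auto()         # e.g. an NSF 93 template incorporating the recap terms
    DEPOSIT_LODGED = auto()     # typically within three banking days of signing
    CLOSING = auto()            # funds reach the sellers' account; title released to buyers

# The narrative is strictly sequential: each stage completes before the next begins.
WORKFLOW = list(DealStage)

def next_stage(current: DealStage) -> Optional[DealStage]:
    """Return the stage following `current`, or None once the deal has closed."""
    i = WORKFLOW.index(current)
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else None

if __name__ == "__main__":
    stage: Optional[DealStage] = DealStage.PURCHASE_INQUIRY
    while stage is not None:
        print(stage.name)
        stage = next_stage(stage)
```

Real transactions can, of course, loop back (a failed inspection, a renegotiated price) or collapse entirely; the linear model simply mirrors the happy path the post walks through.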
https://karatzas.mobi/2013/10/22/how-to-buy-a-ship-a-typical-sp-transaction/
A franchise is defined in Article 142 of the IPL, which establishes that a franchise exists when, together with a license to use a trademark granted in writing, technical knowledge is transmitted or technical assistance is provided so that the licensee can produce or sell goods or render services in a uniform manner, with the operating, commercial and administrative methods established by the owner of the trademark, in order to maintain the quality, reputation and image of the products or services distinguished by the trademark. When entering the Mexican market with a franchise system, certain legal aspects must be considered and complied with: for example, legal requirements under the Industrial Property Law, such as providing a Franchise Disclosure Document 30 business days before executing the Franchise Agreement and observing the minimum provisions in the Franchise Agreement, as well as obligations under the Data Protection Law. As in many countries, Data Protection has gained importance in Mexico in recent years, and this trend is expected to continue.

Data Protection in Mexico

Following Constitutional amendments that included the right to data protection as a basic human right of individuals, the Federal Law on Protection of Personal Data Held by Private Parties was enacted in 2010, followed in 2011 by its Regulations and in 2013 by the Guidelines of the Privacy Notice (hereafter, together, the “Data Protection Law”). These pieces of legislation, which are complemented by guides issued by the Data Protection Authority (“DPA”) (the National Institute of Transparency, Access to Information and Protection of Personal Data), apply at the Federal level and make up the Mexican data protection legal framework for the private sector. The Data Protection Law applies to all processing of personal data by private entities or individuals, except when data is processed for personal or domestic use or by credit bureaus, which are governed by a special law. Under this law, all personal data must be processed in accordance with the data protection principles of consent, information, proportionality, purpose limitation, legality, loyalty, and accountability.

Franchisors and Franchisees will be data controllers with respect to certain personal data they process; for example, Franchisees will be controllers of their employees’ personal data. They may also be data processors with respect to other personal data in specific situations, depending on the degree of power or influence the Franchisor exercises over the Franchisee, such as where the Franchisor is the data controller of customer data. In this sense, and since “personal data” means any information concerning an identified or identifiable individual, both Franchisors and Franchisees need to be aware of their obligations and responsibilities when processing such data, which are more cumbersome for data controllers. It is of the utmost importance to establish the roles of the parties, that is, who is the data controller and who the data processor in the various scenarios, or whether responsibilities are shared, as this will be a determining factor for liability in the event of an infringement of the Data Protection Law.
Some of the general obligations of data controllers are: (i) to maintain appropriate physical, technical and organizational security measures; (ii) to provide a privacy notice to all data subjects whose personal data is processed; (iii) to collect consent from data subjects, where necessary; (iv) to appoint a Person or Department of Data Protection; (v) to allow data subjects to exercise their rights (access, rectification, cancellation, objection, etc.); and (vi) to notify security breaches when material. Due to the nature of a franchise, transfers of personal data between Franchisor and Franchisee are of the essence. It is therefore worth mentioning that personal data can be transferred to third countries regardless of the level of protection a country provides, as long as the transfers are covered by an agreement that complies with the Data Protection Law and with the privacy notice that was made available to data subjects. Explicit and/or written consent from data subjects for the transfer of their personal data is sometimes required. There are no localization laws and no need to register with or request authorization from the DPA.

Third parties outside the franchise normally also play an important role in the daily activities of both Franchisor and Franchisee, as well as in the processing of personal data. When certain processing activities are carried out by these third parties, the Franchisor or Franchisee, as the case may be, will remain accountable for the processing, which is why there should be agreements in place containing robust data protection obligations for those third parties. Failure to comply with the provisions of the Data Protection Law may result in hefty fines, and, if personal data is processed deceitfully or for profit, penalties of imprisonment may be imposed. To date, there have been no known cases of fines imposed on franchisees or franchisors in Mexico for failing to comply with the Data Protection Law.

Security Measures

Data controllers and data processors have the obligation to establish and maintain appropriate physical, organizational and technological security measures to protect personal data from unauthorized processing, access or disclosure. When determining which security measures are appropriate, the risk involved in the processing, the sensitivity of the personal data, the number of data subjects, and technological development shall be considered. Franchisors must be aware that, in the event of a security breach or an investigation, the DPA, when analyzing whether appropriate security measures were in place, will tend to consider the degree of control the Franchisor exercised over the Franchisee in selecting the security measures to be implemented in order to allocate responsibility, the logical conclusion being that the more control a Franchisor exercises over a Franchisee, the more accountable it will be for the processing of personal data.

Cybercrime

Although Mexico intends to ratify the Budapest Convention on Cybercrime, it has not done so, and it is not clear when it will. Notwithstanding this, under the Federal Criminal Code, illicit access to systems, as well as destroying or causing the loss of information, are considered crimes, along with the disclosure of trade secrets and confidential information. Additionally, data protection legislation establishes that it is a crime to cause a security breach involving personal data in order to deceitfully process such data for profit.
Some other cybercrimes, due to the lack of specific provisions, are treated by the police as fraud. Recently there were amendments to the Criminal Codes to include higher penalties for cyber-related crimes; however, the main problem remains having criminal conducts specifically established in the codes that apply to activities occurring in cyberspace, so these can be pursued and enforced accordingly.

Social Media and E-Commerce

There are no specific regulations or provisions applying to franchises and social media as such, nor codes of conduct in this regard; however, social media activity is mainly governed by the Data Protection Law and legislation on consumer protection. In this sense, it should be clear whether Franchisees are allowed to create social media profiles and, if so, how these should be handled, i.e. what activities or posts are to be avoided and to what extent the Franchisee may be active on social media. As social media’s main asset is personal data, special attention should be paid to the collection and further processing of personal data via social media, as the general data protection rules will apply to this processing even if individuals’ profiles are public. E-commerce is regulated in Mexico by several laws, including the Code of Commerce and the Federal Law on Consumer Protection, which protects consumers in Mexico regardless of the location of providers, meaning that companies offering services or goods in Mexico must comply with Mexican regulations. Advertising is also regulated by the mentioned law and complementary regulations. The Franchise Agreement must be clear as to what Franchisees are authorized to do, whether they may sell or offer goods or services online and, if so, under what conditions and in what territory. Attention must also be paid to the domain names that are registered and used for such purposes.

Enforcement Activity

Even though the DPA has been actively enforcing the Data Protection Law since it came into force, there are no reported cases involving a franchise or where issues inherent to a franchise have been considered, such as, for example, the control exercised by a Franchisor over a Franchisee. Nevertheless, there have been cases related to other issues that could also directly impact a franchise. It is common to see disgruntled employees file complaints before the DPA for various reasons, from claiming not to have received a privacy notice to not having consented to the processing of their personal data, where the main objective is to cause damage to the company or to force it to settle a labor case. Lack of transparency in the processing, or not providing a privacy notice in the terms of the Data Protection Law, are issues constantly raised by data subjects and penalized by the DPA. Not collecting appropriate consent for the processing and for the transfer of personal data has been a recurring issue in the DPA’s resolutions, as there are specific requirements for the collection of consent and three types of consent that can be collected depending on the categories of personal data being processed, which complicates compliance with this obligation. In a franchise, the Franchisor should pay special attention to this issue, because if the Franchisee does not appropriately collect consent from individuals, the whole processing of personal data, including any transfers of such data to the Franchisor, could be considered illegal.
It should be mentioned that the burden of proof regarding compliance with all data protection obligations always rests on the data controller. As mentioned, fines tend to be hefty, ranging from approximately USD 4.00 to USD 1,300,000.00, with these amounts doubling if sensitive personal data is involved in an infringement of the law or in cases of recidivism. The DPA has discretion regarding the amount of the fines, and so far most of the fines have been in the millions of Mexican pesos. Although it is possible to seek damages for the unlawful processing of personal data, there are no reported cases where damages have been obtained in connection with the processing of personal data, probably because this needs to be done via the civil courts, where the standard of proof is set very high.

Recommendations and Conclusions

It is important that both parties, Franchisor and Franchisee, are aware of their responsibilities when entering into a Franchise Agreement in Mexico, even when the choice of law is elsewhere. Taking this into consideration, it is highly advisable to have local counsel guide and protect both parties, so as to avoid any type of infringement that could cause irrevocable damage to the image of the franchise. It is customary in Mexico, as part of providing Technical Assistance, for the Franchisor to provide the Franchisee with a list of legal requirements to satisfy before opening and operating a franchise. Such a list should include data protection aspects, since reputational damage is one of the main risks associated with non-compliance with the Mexican Data Protection Law, and it affects both parties. Therefore, ensuring and helping Franchisees to comply with the law will benefit all parties involved.
https://www.idiproject.com/news/mexico-franchising-and-data-protection/
Everything you ever wanted to know about the quotes about madness in Hamlet, written by experts just for you.

The storyline of Hamlet follows a vein of madness that begins with Claudius’ murder of King Hamlet and ends with the tragic killing of almost every main character. Many reasons have been proposed for the ultimate tragedy which occurs at the conclusion of the play; it will be argued in this essay that madness is the…

Hamlet madness essay: In William Shakespeare’s tragic play Hamlet, Laertes, Fortinbras and Hamlet find themselves in similar situations. While Hamlet waits for the perfect time to avenge his father’s death by murdering Claudius, Laertes learns of his father’s death and instantly seeks vengeance.

Hamlet contradicts himself throughout the play. He endorses both the virtue of acting a role and that of being true to one’s self, and he supports both of these conflicting endorsements with his actions. This ambiguity is demonstrated by his alleged madness, for he does behave madly, only to become perfectly calm and…

Free essay: The ghost of the late King Hamlet appeared to Hamlet and requested him to avenge his death. It is only after having seen the ghost that Hamlet…

‘Hamlet’ is a play with so many different important themes that students can focus on; this lesson offers ideas for essays students can write.

One of the most analyzed plays in existence is Shakespeare’s tragedy Hamlet, with its recurring question: “Is Hamlet’s madness feigned or real?” This question can only be answered through the portrayal of his character when he is associated with the other characters in the play.

I’ve made some changes to the essay. I used “antic disposition” in the essay, because that is what Hamlet calls his madness. Can you proofread my Hamlet essay and offer suggestions? I am having trouble with the analysis part. I tried my best to expand the analysis, but I could only expand the analysis up to…

Lidz complicates the issue by contending that Hamlet, though he suffers from certain real forms of madness, nevertheless retains his keen intellect and at times only pretends to be insane in order to thwart and baffle those who would prevent him in his quest for revenge. P. J. Aldus has observed Hamlet’s madness from…

Read this full essay on Hamlet’s madness. Hamlet is one of the greatest works of Shakespeare; of all the themes in this play, madness is perhaps one of the mo…

Free Hamlet madness papers, essays, and research papers.

Hamlet’s antic disposition, from Hamlet, an Ideal Prince, and Other Essays in Shakesperean Interpretation: Hamlet; Merchant of Venice; Othello; King Lear, by Alexander W. Crawford: there is much evidence in the play that Hamlet deliberately feigned fits of madness in order to confuse and disconcert the king and his…

Hamlet study guide contains a biography of William Shakespeare, literature essays, a complete e-text, quiz questions, major themes, characters, and a full… After Rosencrantz and Guildenstern leave the royal presence, Polonius rushes in, announcing that he has found the reason for Hamlet’s madness.

Hamlet term papers (paper 14544) on analysis of Hamlet’s madness: an analysis of Hamlet’s antic disposition. Is Hamlet mad? A close analysis of the play… Disclaimer: free essays on Hamlet posted on this site were donated by anonymous users and are provided for informational use only.

Free essays from Bartleby: Shakespeare could have intended for his character, Hamlet, to appear to be engulfing himself in convincing everyone that he is in…

Free essay: Hamlet, madness or sanity. Hamlet, by William Shakespeare, is about a young prince who wants revenge when he learns about the murder of his…

Free essay: He proclaims, “Though this be madness, yet there is method in’t” (II.ii.200-201). Although it is clear that Hamlet originally intends…
http://wtessayqolk.n2g.us/hamlets-madness-essay.html
Essay: Madness of Hamlet by William Shakespeare. Madness is defined as “a mental delusion or the eccentric behavior arising from it.” In the play Hamlet, the tragic hero Hamlet went mad after his dead father’s ghost appeared and told him that his uncle, King Claudius, was responsible for his murder. Hamlet believed that if he became mad, people would become comfortable and bold around him, hoping that eventually King Claudius would reveal that he was the murderer of King Hamlet. By continually revealing Hamlet’s madness, Shakespeare proves that madness, born of revenge for the family’s sake and the lust for revenge, caused Hamlet’s madness to be indeed real and authentic…

Essay: The Soliloquy of Shakespeare’s ‘Hamlet’. In this essay I examine the soliloquy-approach which the hero uses. Harry Levin comments on Hamlet’s penchant for soliloquies in the General Introduction to The Riverside Shakespeare: comparably, Hamlet has been taken to task, or perhaps more often defended, for an alleged inability to make up his mind. Actually, both the testimony about him and his ultimate heroism show that his hesitations are uncharacteristic. It is a measure of the baffling pre… “the native hue of resolution / Is sicklied o’er with the pale cast of thought”…

Essay: The Madness of Hamlet by William Shakespeare. Madness is a condition that is often difficult to identify, especially when trying to analyze the behavior of a fictional character in a play that was published in 1603. In the play, Hamlet is asked to avenge his father’s death and, to accomplish this task in a less apparent manner, decides to put on an antic disposition. The madness of Hamlet is often disputed, for good reason, as his behavior is frequently baffling throughout the play. Shakespeare, the author of this tragic play, leaves the audience to decide whether Hamlet is truly mad or not…

Essay: Madness and Insanity in Shakespeare’s Hamlet (Insanity in Hamlet). A consideration of the madness of the hero Hamlet within the Shakespearean drama of the same name shows that his feigned madness sometimes borders on real madness, but probably only coincidentally. Hamlet’s conversation with Claudius is insane to the latter. Lawrence Danson in “Tragic Alphabet” describes how Hamlet’s use of the syllogism is pure madness to the king: what Hamlet shows by his use of the syllogism is that nothing secure can rest on the falsehood that masquerades as the royal order of Denmark…

Essay: Madness and Insanity in Shakespeare’s Hamlet (Insanity within Hamlet). Let us explore in this essay the real or feigned madness of the hero in William Shakespeare’s dramatic tragedy Hamlet. Critical opinion is divided on this question. A. C. Bradley in Shakespearean Tragedy staunchly adheres to the belief that Hamlet would cease to be a tragic character if he were really mad at any time in the play (30). On the other hand, W. Thomas MacCary in Hamlet: A Guide to the Play maintains that the prince not only feigns insanity but also shows signs of true madness after his father’s death and his mother’s overhasty remarriage; Ophelia actually does go mad after he…

Essay: Madness and Insanity in Shakespeare’s Hamlet (Hamlet and Insanity). William Shakespeare’s supreme tragic drama Hamlet does not answer fully, for many in the audience, the pivotal question concerning the sanity of Hamlet: whether it is totally feigned or not. Let us treat this topic in detail, along with critical comment. George Lyman Kittredge in the Introduction to The Tragedy of Hamlet, Prince of Denmark, explains the prince’s rationale behind the entirely pretended insanity: in Shakespeare’s drama, however, Hamlet’s motive for acting the madman is obvious…

Essay: Madness and Insanity in Hamlet. The play Hamlet by William Shakespeare depicts the story of young Hamlet trying to avenge his deceased father, who was killed by his uncle Claudius, who went on to claim the throne as his own and marry Hamlet’s mother, Gertrude. This situation supposedly causes young Hamlet to succumb to madness and insanity. With this insanity, Hamlet and the rest of the royal family go through a journey that causes Denmark to go into chaos. The play ends with the death of all the royal family and the prince of Norway, Fortinbras, taking over the throne and ruling Denmark…

Essay: The Soliloquies of Shakespeare’s Hamlet. Are themes always mentioned in the soliloquies of Shakespeare’s plays? In William Shakespeare’s dark and symbolic play Hamlet, he reveals the major themes of revenge, clarity and death through the soliloquies in order to clarify the plot of the story. Firstly, Shakespeare demonstrates the theme of revenge in the play’s soliloquies. The first soliloquy where Hamlet seeks revenge occurs when he discovers from his father’s ghost that Claudius murdered his father. This information triggers Hamlet to devise a plan to get vengeance…

Essay: Madness and Insanity in Shakespeare’s Hamlet (The Melancholy Hamlet). William Shakespeare’s tragic play Hamlet is an exercise in the study of melancholy. Let’s explore the ins and outs of this aspect of the drama in this essay. Gunnar Boklund gives a reason for the highlighting of the melancholy aspect of the protagonist in Shakespeare’s Hamlet in his essay “Judgment in Hamlet”: in the tragedy of Hamlet, Shakespeare does not concern himself with the question whether blood-revenge is justified or not; it is raised only once and very late by the protagonist (V.ii.63-70) and never seriously considered…

Essay: Hamlet’s Madness. The tragedy of Hamlet by William Shakespeare is about Hamlet going insane and reveals his madness through his actions and dialogue. Hamlet remains one of the most discussed literary characters of all time. This is most likely due to the complex nature of Hamlet as a character. In one scene Hamlet appears happy, and then he is angry in another and melancholy in the next. Hamlet’s madness is a result of his father’s death, which was supposedly at the hands of his uncle, Claudius. He has also discovered that this same uncle is marrying his mom…
https://www.123helpme.com/soliloquy-madness-in-hamlet-preview.asp?id=315822
Question: Throughout the play Hamlet, Hamlet claims to be feigning madness. Do you think this is true, or is Hamlet actually insane?

In Shakespeare’s Hamlet, Hamlet himself often feigns madness in order to help carry out his plot for revenge. In act 3, he tells his mother he is not mad “but mad in craft,” and after learning of his father’s murder, tells Horatio he will put on an “antic disposition” every now and then. Hamlet is not mad in every sense of the word; however, he is certainly full of grief and self-doubt.

Expert answer: Hamlet isn’t really mad at all; he’s simply putting on what Polonius describes as his “antic disposition.” In other words, he’s faking it. Though Hamlet is a most unusual young man, with more than a few psychological hang-ups, he’s not actually insane. Nonetheless, his fraught psychological condition in the wake of his father’s death does allow him to make his mad act convincing, so much so that Claudius, Gertrude, and Polonius are all worried about what it might mean. Hamlet’s feigned madness comes about as a direct result of his notorious procrastination. Despite having vowed revenge for the murder of his father, Hamlet has actually done nothing about it, which pains him deeply. Lacking the necessary resolve to destroy Claudius, he settles for unsettling him instead. Rather than just run Claudius through with a sword, Hamlet’s going to make his wicked uncle feel uneasy about the stability of the throne he so treacherously usurped from his brother, old King Hamlet. Hence the mad act. And for a…
https://www.enotes.com/homework-help/throughout-the-play-hamlet-hamlet-claims-to-be-652386?en_action=hh-question_click&en_label=topics_related_hh_questions&en_category=internal_campaign
Hamlet’s Heightening Insanity

In Hamlet by William Shakespeare, it is clear that Hamlet was once sane, but the tragic events of his life led him to become insane. Grieving over the loss of a loved one, let alone a parent, is extremely difficult, and these hardships can cause a lot of problems in one’s life. In Hamlet, Shakespeare incorporates a theme of madness to serve a motive. In fact, Hamlet is not initially crazy, but plans to use the appearance of insanity as a trick to achieve what he wanted: revenge.

The major problem with Hamlet is that he spends enormous time planning instead of taking action. Thirdly, Hamlet’s feigned madness was another cause of delay in avenging his father’s death; he feigned madness to buy time to make a decision and plan how to kill Claudius. Although people like Grenadier…

Hamlet was not only obsessed with his own conscience but with the conscience of others as well. “The play’s the thing, wherein I’ll catch the conscience of the king.” (2.2.617) Hamlet wants to know what King Claudius is thinking, in terms of his conscience, before Hamlet acts. Here, Hamlet is thinking with his conscience: instead of just killing Claudius, as he wanted to do from the beginning, he needs to confirm the conscience of Claudius to convince his own conscience that it is the right thing to do. Hamlet was constantly overthinking because he wanted a clean conscience; however, this brought several internal conflicts Hamlet had to battle with, and it inevitably led to his…

The big question is: are Hamlet’s actions justified? Well, Hamlet was both justified and not justified. Some things he did were for a reason; others were possibly just because he was pretending to have gone insane. Examples of this are the way Hamlet treated his own mother, Gertrude, and the way he treated his love, Ophelia. One thing he is not justified in is delaying the murder of his uncle and his mother’s new husband, Claudius; but the thing that is justified is actually killing Claudius.

None of the men in the play ever acknowledge the emotion of sadness. Even after Laertes’ father is murdered by Hamlet, he shows anger rather than sorrow by impulsively threatening to kill the king. On the other side of the spectrum, Hamlet immediately expresses weakness and grief at the beginning of the play, due to his own father’s death. He finds himself talking about the pain he has been hiding because of this: “But break, my heart, for I must hold my tongue.” (Pg. …)

Specifically, his uncertainty is shown when he is given the opportunity to kill his uncle, but he ends up postponing his revenge because he believes that Claudius is praying. Although one might argue that a character’s obsession may lead to happiness, an analysis of Prince Hamlet in William Shakespeare’s Hamlet, and Guy Montag in Ray Bradbury’s Fahrenheit 451, depicts the theme of uncertainty, where a character’s obsession leads to downfall…

Hamlet’s tragic flaw is his indecisiveness in making decisions. This trait is demonstrated throughout the entire play and leads Hamlet to his own demise. When Hamlet has immediate suspicions of his father’s murder, and later proof, he delays the murder, which is puzzling because the play is about revenge, and one would expect him to have done it earlier, as he had ample opportunities to do so. His indecisiveness has puzzled many.

If there is any true madness, the madness comes from this: Hamlet is caught between the proverbial rock and a hard place. His life is dammed and doomed no matter what he does. He eventually quits trying to choose and simply acts according to the cultural example Fortinbras sets before him. All die as a result of Hamlet’s reaction to Fortinbras’ example. This seems to condemn the cultural requirement for revenge, even though Fortinbras carries it off with such aplomb and with such honor.

Macbeth’s hamartia is his excessive ambition to become King, which leads to paranoia and then to his death. The fatal flaw in Shakespearean tragedies is what classifies a play under that genre: whilst there is death and sadness in his other plays, to be sorted with his tragedies a play must end in the main character’s death, brought upon them by their own faults.

Hamlet putting on the play and seeing Claudius’s reaction is a big clue that Claudius is the murderer. Instead, Hamlet thinks that…

To Be or Not to Be (Three Messages from Hamlet’s Soliloquy): Life is never easy, and people every day are struggling against different circumstances. Some fight depression, addiction, divorce, or the death of a family member or friend. In Hamlet, written by William Shakespeare, Hamlet is having a very hard time dealing with and coping with his father’s death. Along with that burden, his mother also re-married quickly after, to his father’s brother, or in other words Hamlet’s uncle. Throughout the play Hamlet is depressed, in a state of consuming sadness, and without hope.

Hamlet feels inadequate and frustrated with his own lack of action. The Player is able to generate and convey passion and emotion in his speech about Hecuba’s grief over the death of Priam, yet this situation is not a real one; the Player is just acting. Hamlet, on the other hand, has real cause to feel grief and to act, yet he has done nothing. He asks what the Player would do “Had he the motive and the cue for passion / That I have?” So he questions himself: “Am I a coward?”

Camree Rogers: Has your heart ever been torn between the loss of a loved one and anger against the one who caused it? Hamlet has felt both of those strong emotions, between mourning his father’s death and knowing how he was murdered by his new uncle/father, Claudius. After he figured out who killed his father, Hamlet decided he couldn’t trust anyone until his father’s death had seen justice. Furthermore, Hamlet learns that his mother, Gertrude, had been having an affair with Claudius, and then begins his plan to take revenge for his father. Shakespeare uses mood, tone, and irony to develop the themes of anger and betrayal.
https://www.ipl.org/essay/Courage-In-Hamlet-FKE7922FJED6
Hamlet’s madness played an important role in the play because he later became truly insane after he had feigned his insanity. It is obvious that Hamlet had a troubled mind in the play, because he was torn about what to do in order to avenge his father’s death: torn between whether he was following his father’s wish or the wish of the ghost that appeared to him. This made him insane because he spent his time worrying about avenging his father’s death, and this affected his state of mind. Hamlet became insane because…

In the play Hamlet, we are introduced to Hamlet’s character, who stumbles upon the ghost of his father and swears to avenge his father’s murder. Shakespeare uses the character Hamlet to illustrate the theme of madness. The chain of events that has occurred in Denmark drives Hamlet towards insanity. As the play progresses, Hamlet starts transitioning into a mad person through his act of madness; by the end of the play, Hamlet’s state of mind has gone out of control.

Romeo calls himself “Fortune’s fool” and realizes that he is going to have to face a punishment for his actions, which are of course caused by fate (3.1.142). Later, when Romeo hears of Juliet’s death, he blames fate and tries to kill himself: “Is it e’en so? Then I deny you, stars!” (5.1.25). In this example Romeo is taking responsibility for his past actions by defying fate and taking things into his own hands. Juliet is also a naïve and impulsive girl who…

The play Hamlet by William Shakespeare is full of many acts of betrayal. One such act is when Hamlet goes against the wishes of his father’s ghost and debates whether or not he should kill Claudius. Not only this, but he is also extremely cruel to his mother and hurts her feelings, which was also against the wishes of the ghost: the ghost wanted Hamlet to avenge his death without hurting others along the way, and almost everything Hamlet did in the play went against that. Hamlet’s first act of betrayal against the ghost of his father is one that stretches throughout the entire book.

Tragic flaw is also shown in Tybalt, who is very stubborn and reckless, leading to his end. Juliet also portrays a tragic flaw, as she is very impetuous and impulsive when making decisions, leading to her demise. Therefore, an influential theme found in Shakespeare’s Romeo and Juliet is that a simple mistake caused by a tragic flaw can lead to tragedy. Romeo, the protagonist of Romeo and Juliet, faces death because of a simple flaw. Romeo is a Montague, enemy of Juliet’s family, the Capulets. Romeo thinks that his blurred sense of reality, due to romanticism, has let Mercutio die to Tybalt. Romeo furiously states, “[His] very friend, hath got this mortal hurt / In [his] behalf. [His] reputation stained / With Tybalt’s slander…” (III.1.115-117). This shows how complicated Romeo is, from being dramatic about being romantic to immediately becoming very serious and furious at Tybalt over the death of his friend Mercutio. The drama in Romeo and Juliet mainly comes from the complexity of all the different characters in the play.

The Sanity of Hamlet: Voltaire once stated that “Madness is to think of too many things in succession too fast, or of one thing too exclusively.” This statement draws many parallels with William Shakespeare’s Hamlet. The main character, Hamlet, is faced with numerous struggles, including the battles he fights in his own head daily. Not only does Hamlet fall apart externally, but his internal mental health fails all of Denmark. Throughout the course of the play, Hamlet’s poor mental health is shown through his soliloquies, his quirky habits, and his inability to handle his issues. The issue of Hamlet’s “madness” brings itself to light most overtly through his “To be or not to be” soliloquy.

Madness is described as the state of being mentally ill, or the state of frenzied, chaotic activity, and also as a state in which one cannot be trusted, which is shown multiple times throughout the play. While Hamlet and Ophelia both display madness, Hamlet uses his madness in order to find the truth, whereas Ophelia is a victim of the madness happening in her life. The kingdom of Elsinore, where Hamlet and his family live, is a good example of Shakespeare demonstrating how madness affects all characters. Elsinore is filled with untrustworthy characters such as King Claudius and Polonius. Polonius is considered untrustworthy in the play because he sends people to go spy on his son, Laertes.

Hamlet states, “this is most brave, that he, the son of a dear father murder’d, prompted to his revenge by heaven and hell, must like a whore unpack his heart with words and fall a-cursing like a very drab, a scullion!” (Act 2 Scene 2, Lines 569-575). Hamlet is tormented by his inability to physically confront Claudius and by the fact that he resorts only to words. Hamlet shortly after contemplates whether “’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them.” (Act 3 Scene 1, Lines 57-60) Hamlet questions whether his revenge is worth the agony of his sanity, or whether he should take a stand against Claudius. This question is manifested in the popular phrase “to be or not to be, that is the question.” (Act 3 Scene 1, Line 56)

How Hamlet’s revenge is affecting the interactions between individuals is clearly indicated by the conversations Polonius has with Claudius. Polonius spews all of his suspicions concerning Hamlet, such as his stealing of Ophelia’s heart and his alleged “madness”, to Claudius. Polonius falsely believes that “the origin and commencement of [Hamlet’s] grief sprung from neglected love.” (Act 3 Scene 1, Lines 177-178) Claudius believes the lies Polonius speaks, which explains the varied perceptions each character has of Hamlet’s behaviour: Gertrude doesn’t want to believe that Hamlet is mad, Claudius is legitimately concerned for Hamlet, and Polonius is enraged by Hamlet’s advances towards Ophelia.

The play The Tragedy of Hamlet, Prince of Denmark, is full of revenge. Many things happen throughout this play, but there is one person or thing that makes the bad things happen. King Hamlet died and his brother took over, even marrying his wife, Gertrude. Prince Hamlet does not approve of this relationship because it happened so quickly. Hamlet grieves through this whole play because of the death of his father, and starts to go crazy.
https://www.ipl.org/essay/Madness-And-Madness-In-Hamlet-P332Z7HESCP6
Claudius hastily married King Hamlet’s widow, Gertrude, Hamlet’s mother, and took the throne for himself. Denmark has a long-standing feud with neighbouring Norway, in which King Hamlet slew King Fortinbras of Norway in a battle some years ago. Although Denmark defeated Norway, and the Norwegian throne fell to King Fortinbras’s infirm brother, Denmark fears that an invasion led by the dead Norwegian king’s son, Prince Fortinbras, is imminent.

The acting was phenomenal, although the video quality was not as good. The news pushes him over the edge into madness… or does it? In what was originally a TV movie, this classic tale is transported to modern times by the Royal Shakespeare Company. Here, the complicated revenge plots and counterplots employ swords and daggers as well as pistols. These modern touches make the scenes look familiar, but the language remains the same. Tennant is especially fun to watch as he mugs for the camera and alarms other characters while seemingly in the throes of madness. If viewers start getting distracted, feel free to take a break, then come back for the exciting conclusion.

Talk to your kids about it: families can talk about madness. Was Hamlet truly crazy, or was he faking it as part of a scheme to expose the king? What do you think about this production? Does it hold up after being transported into modern times? Why do these classic stories continue to be reworked?

The ghost, Hamlet’s feigned madness, Ophelia’s death and burial, the play within a play, the “closet scene” in which Hamlet accuses his mother of complicity in murder, and breathtaking swordplay are just some of the elements that make Hamlet an enduring masterpiece… Hamlet is not only one of Shakespeare’s greatest plays, but also the most fascinatingly problematical tragedy in world literature. First performed around…, this is a gripping and exuberant drama of revenge, rich in contrasts… Or at least one great tragedy: “Hamlet,” William Shakespeare’s classic tale of wishy-washy waffling about maybe getting around to vengeance one… Love, lust and hatred, those indispensable elements of every story, are discovered anew in this timeless classic. The drama pulls the reader into an intimate engagement, and Hamlet’s tragedy becomes the tragedy of every individual. Of course, most people have their own favourite film versions of “Hamlet,” William Shakespeare’s classic tragedy about love, murder, revenge and family loyalty, first performed at the Globe Theater in…
https://gerevij.initiativeblog.com/a-review-of-william-shakespeare-classic-tragedy-hamlet-49336hk.html
Hamlet, character development: character development by Megan Dalland, 20 October 2010.

HSC English Hamlet essay, used for my HSC in 2012: Shakespeare’s adept development of the secondary contrast through skilled manipulation of language and…

Family relationships in Shakespeare’s Hamlet: in the tragedy of Hamlet by William Shakespeare, the relationships between parents and their offspring play a crucial role in the development of the plot.

Hamlet’s characteristic uncertainty and indecisiveness essay: we can write a custom essay on Hamlet’s…

Largest free essays database: over 180,000 essays, term papers, research papers, book reports; 183,565 essays, term and research papers available for unlimited access.

Assignment: write an essay in which you discuss Hamlet’s antic disposition (1.5.192). Is his madness real or is it feigned? How does Hamlet’s mental development connect to the meaning of the play?

Prompt: “In Hamlet, a character who appears briefly, or does not appear at all, is a significant presence.” Write an essay in which you show how such a character functions in the work.

In this essay we will discuss the historical, mythical, and religious content of Shakespeare’s Hamlet, and briefly its relationship to the political and social setting of its time and its influence on Western literature.

Hamlet does not completely lose it after the death of his father but keeps his cool and his mind focused on his plan. Also, Hamlet constantly talks about suicide throughout the play: “to be or not to be, that is the question” (3.1.56).

‘Hamlet’ is a play with so many different important themes that students can focus on; this lesson offers ideas for essays students can write.

Online literary criticism sites about Hamlet by William Shakespeare: this essay takes the view that Shakespeare linked the principal events…

Hamlet essay features Samuel Taylor Coleridge’s famous critique, based on his influential Shakespeare notes and lectures.

Horatio’s steadfastness and loyalty contrast with Hamlet’s variability and excitability, though both share a love of learning, reason, and thought.

Free essay: in the English play Hamlet, Shakespeare incorporates deep analytical thought in his writing through the use of character, symbolism, and motifs.

Even as a minor character in the play Hamlet, the character Ophelia plays a vital part in the development of both the plot and thematic ideas.

Read this essay on Hamlet, a cause and effect essay: come browse our large digital warehouse of free sample essays; get the knowledge you need in order to pass your classes and more.

Free Hamlet character papers and essays: Hamlet proves to be a very complex character, and functions as the key element in the development of the play.

— Prince Hamlet, in William Shakespeare, Hamlet, 5.2.4-8. After Hamlet, Shakespeare varied his poetic style further, particularly in the more emotional passages of the late tragedies.

Hamlet is a tragedy written by William Shakespeare. It is a story about revenge and the growing pains of life. Learn more about the story.

Hamlet essay on his character: the audience understands when he says “I tax not you, you elements, with unkindness; I never gave you kingdom, called you children… two pernicious daughters join…” Here Shakespeare shows the audience that King Lear has finally realised that his two eldest daughters are evil.

Hamlet: essay on Act I. Act One of Hamlet is an excellent introductory act. Shakespeare establishes atmosphere by introducing the major characters, the role of the supernatural, the revenge plot, the love plot, and the contrast of the Fortinbras plot, as well as Hamlet’s feigned madness.

Writing portfolio (child development essays, English essays): Hamlet is a tragedy as well as a revenge play, however…
http://xhassignmenthsnz.njdata.info/hamlet-development-essay.html
question why Horatio has been the only one confronted on this issue. Why is there so much secrecy? For a country on the brink of war, is this a bad omen for Denmark? We are already involved in this scene, and it is important that Shakespeare uses a dramatic first scene to catch the audience's attention straight away. At the point where the Ghost enters, the drama heightens, and Horatio questions the Ghost when he sees it has taken the form of the late King Hamlet. "It is offended," Marcellus says, as the Ghost disappears. Now we wonder what the Ghost wants, and what its purpose is. It does not seem to be speaking to the person it wants, as we realise when the Ghost appears once more and Horatio questions it again: "If thou art privy to thy country's fate, Which, happily, foreknowing may avoid, O speak;" Here suspicion and secrecy are linked: Horatio is suspicious of the Ghost's nature, and asks that if it secretly knows the country's fate, it should speak. But once again the Ghost disappears as th... ... middle of paper ... ...huge part in this play in different ways. The suspicion involves the audience, the secrecy leads to suspense, and the deception has also proved it can be humorous at times. The deception contributes to the success of the play, as it affects our feelings towards the different characters. Polonius is made a mockery of through deception when Hamlet has the upper hand on him, and deception also leads to his death. We feel pity towards Ophelia, as her death is a result of Hamlet's feigned madness, and we are critical of the lengths he had to go to. As for Claudius, his deception leads the audience to hate him, not only for the terrible deed of murdering his brother, but for the skill with which he deceives and uses people. It is his skill in deception which leads to many deaths, affects Hamlet's character, and makes us judge other people's characters.
https://www.123helpme.com/preview.asp?id=144504
While Sherlock fans eagerly await the Christmas Special and Series Four, they can entertain themselves by checking out the cast's other projects (some of which you can read about here). Well, today (August 5th) marks the start of one of the more famous of these – it's the beginning of Benedict Cumberbatch's 12-week run in Shakespeare's Hamlet at London's Barbican Centre. This marks the second time in two years that one of Sherlock's leads has taken on the Bard, after Martin Freeman gave a critically-acclaimed turn as Richard III last year. Though Hamlet's theatrical run sold out within minutes, thankfully it will be shown at the Barbican's cinemas in October. We can only hope the filmed version will be made available in other locations as well!

Alas, poor Yorick! I knew him, Horatio… I mean, John.

So the big question we're asking today is: can we draw any parallels between the Prince of Denmark and the sleuth of Baker Street? At first glance, there may be some similarities in the characters' mental states. Sherlock is a self-described "high-functioning sociopath," and John mentions Sherlock's Asperger's Syndrome in "The Hounds of Baskerville." Either condition could influence Sherlock to act in a socially awkward manner. In fact, he often behaves rudely to others—whether it is intentional or not—simply because he doesn't have the social filters that others possess, or feels that blunt honesty is the kinder approach.

Similarly, Hamlet speaks bluntly, and at times cruelly, to others, and his behavior is often erratic. His motives, however, are different from Sherlock's. Hamlet describes his madness as feigned to mislead the other characters, particularly his uncle, King Claudius, saying, "I am but mad north-northwest. When the wind is southerly, I know a hawk from a handsaw." However, some readers of the play believe Hamlet truly was mentally ill—citing instances where he behaves violently toward his mother, Gertrude; his contemplation of suicide ("To be or not to be, that is the question"); and his seeing his father's ghost when others present cannot.

If you are like most of us who sadly do not have tickets to arguably one of the most brilliant interpretations of Hamlet ever to be brought to the stage, there are other excellent filmed versions of the play that you might enjoy. These include Laurence Olivier's, Nicol Williamson's (who has also played Sherlock Holmes), David Tennant's, Kenneth Branagh's, Ethan Hawke's and Mel Gibson's interpretations of the "sweet prince." Most are available on the popular streaming services. Get thee to the internet and check them out!

Do you think the Dane and the Detective are similar characters? Tell us your thoughts in the comments!
https://sherlockshome.net/2015/08/05/221b-or-not-221b-a-comparison-of-sherlock-and-hamlet/
Hamlet and His Many Roles

In the Shakespearean play Hamlet, the title character portrays many roles, and all of these roles intersect in one scene in the play, Act III, scene ii. This scene takes place at the exact center of the play, and if it is broken up into sections, one can see a different aspect of Hamlet's personality in each one. The play-within-a-play scene suggests that Hamlet is putting on his own play and reminds us that, in real life, a person can play many roles. Hamlet plays a different role with each character in the play, such as Polonius, Claudius, Ophelia, Horatio, and the players. In the play scene, these characters are in the same place at the same time. Bert States calls Hamlet "a succession of responses to rapidly changing stimuli" (17). As he reacts with each character, he must move from role to role very quickly. It can be asked which roles are parts of Hamlet's true self and which are feigned.

Shakespeare uses references to plays and acting throughout the play to keep in mind the theme of appearance vs. reality. Hamlet says, "Our indiscretion sometimes serves us well, when our deep plots do pall, and that should learn us / There's a divinity that shapes our ends, rough-hew them how we will" (V, ii, lns 8-11). He is referring to the plot, the plan to alter the Murder of Gonzago, that he had earlier used to catch the conscience of the king. Hamlet also refers to a play when speaking of his voyage with Rosencrantz and Guildenstern: "being thus benetted round with villainies—or I could make a prologue to my brains, they had begun the play" (V, ii, lns 29-31). Here, Hamlet is claiming that his brain is working independently of his will and that a play is being, in a sense, written for him. He is just acting out a role (Fisch 163). And once again, Hamlet states: "You that look pale and tremble at this chance, that are but mutes or audiences to this act" (V, ii, lns 339-340). In this quote, upon dying, Hamlet acknowledges that they have all been taking part in a play.

In the study of sociology, there is a theory that everyone has a number of roles that they perform in their lives. Within the play, Hamlet's most obvious roles are the grieving son who must avenge his father's death; Ophelia's lover and later, arguably, her damnation; the beloved prince of a proud heritage; the well-educated, sensitive philosopher; and, most obviously within the play, the madman. During the play scene, his less obvious roles emerge. It can be argued whether these roles give depth to the layers of Hamlet's personality, or show how serious his madness has become. These less obvious roles, which will be discussed more fully, are Hamlet as manipulator, critic, good friend, comic, jubilant boy, mocking satirist, and revenger.

Hamlet's role as a manipulator is the most interesting in this scene. It is through his manipulations that all the other roles emerge. The whole purpose of the play scene is for Hamlet to judge whether Claudius is guilty or the ghost is lying. Therefore, Hamlet must manipulate the events of the scene. Ruth Nevo states: "In the play scene, Hamlet states his grand exposure of these inquiries" (50).
Hamlet's instructions to the players reflect his intentions: "For in the very torrent, tempest, and as I may say, the whirlwind of passion, you must acquire and beget a temperance that may give it smoothness. Oh, it offends me to the soul, to hear a robustious periwig-pated fellow tear a passion to tatters, to very rags, to split the ears of the groundlings" (III, ii, lns 5-11). Hamlet is saying that mastery of passion is essential to initiate action, something he has had much trouble with throughout the play. It is with control of his passion that Hamlet begins the play scene, so that he may "hold the mirror up to nature" (III, ii, ln 22). He will reveal the inner truth of the guilty. Thus, through Hamlet's imposed fiction, the Murder of Gonzago, he will reflect the truth. This is the irony of Hamlet's manipulations. Another ironic twist is that while Hamlet will reflect the truth of others, he will mask his own truth with riddling statements and jokes, Hamlet's truth being his knowledge of Claudius' crime. In the end, the king breaks down and Hamlet's ploy is successful. It is his text that the players, the court, the king, and the queen all play. He is the master of reality, making his will prevail (Nevo 52).

As the scene is broken down, the first role that is seen, besides the manipulator, is Hamlet the critic. This role emerges as Hamlet is speaking to the players. One will notice that his feigned or unfeigned madness (depending on opinion) in earlier scenes has been replaced by calmness and rationality. He speaks as a well-educated nobleman who strives for classical balance in life. Hamlet wants the players to be moderate and natural in their depictions of life, not exaggerated, yet not dull. The speech that Hamlet gives the players can also show that Hamlet can only find the balance and the ordered universe he seeks in fiction. This is one of the three sections of this scene where it can be argued that Hamlet's true self peeks out. This may be the case because, with the players, who are not involved in his real life, Hamlet can be at ease and at his best.

After the players leave, Hamlet is left alone with Horatio. His remarks to Horatio reveal his feelings for him. This is the second time Hamlet's true self emerges, and he admits that he is truly Horatio's good friend: "…and blest are those whose blood and judgment are so well commingled that they are not a pipe for fortune's finger to sound what stop she please. Give me that man that is not passion's slave, and I will wear him in my heart's core, aye, in my heart of heart, as I do thee" (III, ii, lns 68-74).
https://therichesof.com/hamlet-and-his-many-roles-paper/
Something is rotten in the state of Denmark. Two night-watchmen at the castle at Elsinore have seen a ghost they believe to be the former king of Denmark, the father of Prince Hamlet. The soldiers entreat Horatio, Hamlet's confidant, to wait with them for the ghost's appearance during the night watch. Horatio is horrified by its resemblance to the dead king. The men ask Hamlet to join the watch, and when the ghost appears, it reveals to Hamlet that it is, indeed, the spirit of his father. Then his father's ghost informs Hamlet that he was murdered by Claudius, the current king of Denmark. Claudius, Hamlet's uncle and brother of the former king, has not only usurped the throne of Denmark through foul murder, but has also taken Gertrude, Hamlet's mother, as his wife. Hamlet vows to avenge the death of his father and says he will put on an "antic disposition" to distract others from his genuine purpose. Hamlet's indecision and his madness—feigned or real—will result in tragedy for himself and all those around him.

Through William Shakespeare's tragic tale of death, revenge, and madness, students will learn:

Author Biography: Learn more about William Shakespeare.
Background Information: Learn about the Anglo-Saxons, the Christian roots of the Viking chieftain Canute, and Denmark's control of Northern Europe in the early 1000s. Find out about Kronborg Castle and the inspiration behind the story.
Before-You-Read Activities: European geography, an informative report and research on depression, ghosts, travel videos, and a Shakespeare biography.
Vocabulary: Words used throughout the play, utilizing a variety of activities to stimulate retention and growth.
Literary Techniques: In-context character study, couplet, aside, soliloquy, foreshadowing, summarizing, synonyms, paradox, dialogue analysis, dumb show, end-rhymes, chorus, compare & contrast, comic relief, main & minor characters, conflict, complications, protagonist/antagonist, tragic flaw, hamartia, theme, tragic play structure, revenge tragedies.
Moral Lessons and Character Values: Weakness and sin, revenge vs. justice, man's place in the world, fear & indecision, death, prayer, guilt, insanity, reason & honor, spiritual fate, suicide, friendship.
Activities and Writing Assignments: Biblical ghosts & witches, Purgatory research, harbingers & moral decay, research on the symbolic meanings of plants and flowers, well-known lines & phrases, drama performance, a time line, multiple essay ideas, and multiple creative writing ideas.
Suggestions for Further Reading: We include an in-depth reading list of more plays by the same author(s) and other books that tie in with, or are similar to, Hamlet by William Shakespeare. Movie suggestions included.

All of the unit lessons are written from a Christian worldview!
https://stores.progenypress.com/hamlet-e-guide/?setCurrencyId=5
Drama Analytical Response Essay

This excerpt aims to provide a clear outline of the analysis of the two plays already discussed: Oedipus Rex by Sophocles and Hamlet by Shakespeare. In a bid to explore the two plays, the discussion in the following paragraphs highlights the themes, form, structure, language, and character in the two plays. In addition, a comparison and contrast, together with the plot of the two plays, forms the basis for the analysis.

Comparison between the Two Plays

Hamlet and Oedipus Rex, the chosen plays, have a lot in common in their themes and in the shape of their plots. In both plays, themes of truth and misconception are drawn out. The two major characters undergo many tribulations and tests that later force them to define their identity and face reality. Their initial innocence leads them into traps that later enlighten them to the truth. In the long run, their positions come under threat, and they eventually lose their power.

In Hamlet's case, the prince experiences a misconception surrounding his father's murder. He fails to figure out the reality of the world in the absence of his dear father. An idea in his mind confirms to him that the murderer of his father was his uncle, Claudius, who married his mother immediately after the murder. His innocence and grief make him behave madly and angrily, so as to fool those around him and assert more authority. However, Polonius learns of the trick but still believes that Hamlet's crazy behavior stems from the murder of his dear father. As the play advances, the mad behavior develops more and more until Hamlet loses his bearing on reality. The emotional rage he experiences exceeds his ability to cope. Later, he resorts to physical violence that ultimately brings about his downfall.

In Oedipus' case, the king's faults and illusions led to his downfall. He was caught up in the confusion of separating the real truth from misconceptions. In the play, Oedipus is prophesied to kill his father, King Laius, and then marry his mother, Queen Jocasta, in fulfillment of the oracle's prophecy. In keeping with the prophecy, Oedipus hears rumors that the king who raised him was not his biological father. Misguided by the rumor, he fails to find the truth. As the play proceeds, Oedipus kills the king, who according to him was an impostor father, and then marries the queen. He is later rewarded with the kingship by liberating the people of Thebes from the curse of his father's rule. During the reign of Oedipus, questions arose about his actions in killing the father and marrying the mother. The act of marrying his mother was against the customs, as it was considered incest. The queen, who happened to be his mother, continued to pursue the real truth of what had prompted the killing of her husband. She later learns of the misconceived oracle prophecy that prompted her son's action, and, in fear, she reveals the real truth to him. On knowing the truth, Oedipus becomes remorseful and curses his untimely and emotional actions. The mother then goes to hang herself, and Oedipus finds her body in one of the rooms of the palace. The death of his mother brings him great fury, and he bursts into a cry.
He acts in anger and orders the servants to remove the slain body. He blinds himself and begs his subjects to send him into exile.

As the two plays show, the kings' untimely and misconceived actions land them in a series of unbearable circumstances. The theme of reality versus illusion is well pictured in both plays. For example, in Oedipus' context, the character is deceived by Apollo's prophecy, which stated that he would grow up, murder his father, later succeed him as the king of Thebes, and marry the queen, who happened to be his biological mother. Later the reality sets in, and the king is blinded. Hamlet presents a limbo situation: he firmly believes that his father was murdered but does not at first suspect the actual culprit. The murderer of his father is a close relation, which helps explain his failure to see the reality. On realizing that his uncle is the chief culprit in the murder of his father, their relationship strains.

In the two plays, the two major characters choose to avenge their illusions. Oedipus opts to kill the father and marry the mother; Hamlet decides to wage war on his uncle with the intention of killing him. Later on, the concept of the hunter becoming the hunted sets in: the two protagonists become the hunted in the whole ordeal of their illusions and unthoughtful actions. In the long run, both lose their power, authority, and virtues. Hamlet loses his life, while Oedipus is blinded. Despair, betrayal, and the disease of the mind give these two characters their similarity. Moreover, both fall victim to a shattered world and failed experiences.

The literary device of a play within a play is common to both plays. It is demonstrated by the key actors in their quest for authority and power. This device develops the plot and makes the audience take more interest. In the Oedipus play, the play within a play helps the audience draw a conclusion about the character of the king.

Contrast in the Two Plays

In contrasting the two characters, Oedipus is portrayed as a hero who makes rash decisions and is highly emotional. He is pictured as a proud man full of ego, and the play presents him as a man of action because of his untimely move to kill his impostor father. Hamlet, on the other hand, is a hero who is sensitive and very moody. He also demonstrates passion, as he intends to kill his uncle out of love for his father, and he remains devoted to his father despite the murder. However, compared to Oedipus, Hamlet is less given to action and more to anger, as he takes his time in killing Claudius, his uncle. Oedipus is more decisive than Hamlet: he makes quick decisions to assert his displeasure. Hamlet, though, can be viewed as a wise leader, since he takes time to identify the real murderer of his father. Yet Hamlet is also seen as a selfish leader, as he lets his feelings reign despite the plight of others: displeased by the betrayal of his mother and uncle, he feigned a madness that badly affected those around him.

The play also depicts Oedipus as a very religious man. Despite his move to marry his mother, he was aware that he had committed the religious offense of incest; he was even ready to go back to the womb of his slain mother. Moreover, he was aware of committing a religious sacrilege by murdering his impostor father. Oedipus is well aware of the divine implications of following a false prophecy.
As for language, Hamlet uses medieval literary conventions in contextualizing the plot, which also highlights the themes and makes reading or watching the play very interesting and entertaining. From the audience's perspective, the two characters show major distinctions. Hamlet's turmoil and rage force the audience to dwell on guessing his next move; those watching the scene cannot determine his next course of action. By contrast, the audience of Oedipus has a clear picture of the moves the king will take: he makes rash decisions that mark him as an impatient leader.

From the discussion above, various themes in the plays can be drawn. In Oedipus, the themes of fate and free will, guilt and shame, sight and blindness, and the search for truth are manifested, together with the themes of religion, illusion and reality, and action versus reflection. In contrast with Oedipus, Hamlet portrays the themes of madness, death, and betrayal within the family, as well as conflict. All these themes shape the plot and, in turn, the audience's response.

Conclusion

The two plays, as discussed in the previous paragraphs, share some similarities and differences in plot, structure, and themes. Both dwell on the themes of the tragic hero, and of illusion and reality. They also offer a clear reflection of present-day leadership: as the leaders in the plays were misled by their own thinking, their actions led to their failure. The current leadership of many countries is mirrored in the plays, especially in the concept of the betrayal of subjects.
https://best-writing-service.net/essays/analysis/drama-analytical-response-essay.html
Why does Gertrude disobey Claudius in drinking the poison?
It can be argued that Gertrude was suspicious of the acts taking place, which is why she took the drink anyway; doing so is very out of character. She also says, in essence, that she will drink for Hamlet, which suggests she could be trying to protect him.

Does "To be or not to be" have to be cited?
If you come across the phrase "to be or not to be" and use it in your paper, you have to cite it.

What convinces Gertrude that Hamlet is mad?
Hamlet speaks to the apparition, but Gertrude is unable to see it and believes him to be mad. The ghost intones that it has come to remind Hamlet of his purpose: Hamlet has not yet killed Claudius and must achieve his revenge.

What is Hamlet's tragic flaw?
The term "tragic flaw" is taken from the Greek concept of hamartia, used by the Greek philosopher Aristotle in his Poetics. Shakespeare's tragic hero Hamlet's fatal flaw is his failure to act immediately to kill Claudius, his uncle and the murderer of his father. His tragic flaw is procrastination.

Is Gertrude innocent?
Whether Gertrude is guilty of or complicit in the death of her husband is not conclusive, because the text does not say explicitly. But there are a few indicators that suggest she is not completely innocent. First, when the ghost speaks, it says that incest and adultery have taken place.

Who is Hamlet talking to in "To be or not to be"?
Polonius hears Hamlet coming, and he and the king hide. Hamlet enters, speaking thoughtfully and agonizingly to himself about the question of whether to commit suicide to end the pain of experience: "To be, or not to be: that is the question" (III.i.58).

Did Gertrude drink the poison on purpose?
In Laurence Olivier's film adaptation of Hamlet, Gertrude drinks knowingly, presumably to save her son from certain death. If she drinks on purpose, then she's the self-sacrificing mother Hamlet has always wanted her to be.

Is Hamlet faking his madness?
It can be seen as both fake and real. Hamlet uses "madness" as a disguise, allowing him to get the information he needs about Claudius' actions. He also uses it as an excuse for his actions, mainly Polonius' murder.

Is Hamlet really crazy or just pretending?
The fact that Hamlet's biggest emotional outbursts are directed against the sexual feelings of the women in his life suggests that his mad behavior is not just a ploy to disguise his revenge plans. Despite the evidence that Hamlet is actually mad, we also see substantial evidence that he is just pretending.

Did Claudius really love Gertrude?
Claudius's love for Gertrude may be sincere, but it also seems likely that he married her as a strategic move, to help him win the throne away from Hamlet after the death of the king.

What are the four things that need to be cited?
What information should be cited, and why? Cite whenever you:
- Discuss, summarize, or paraphrase the ideas of an author.
- Provide a direct quotation.
- Use statistical or other data.
- Use images, graphics, videos, and other media.

Why does Ophelia kill herself?
Ophelia kills herself because the fate of Denmark is placed on her shoulders when she is asked to more or less spy on Hamlet, because her father has been murdered (by her former lover, no less), and because of the confusion created by her father and brother with regard to the meaning of love; her suicide is even an act of revenge.

How does Horatio die?
At the end of the season 6 finale, Horatio is shot.
It appears as though he is dead, but in a behind-the-scenes video it is said that Horatio's death was faked so he can go after Ron.

Why can't Gertrude see the ghost?
Only Hamlet seems able to hear it. But that doesn't explain why Gertrude can't see the Ghost. Why is that? It can't be because the Ghost can only appear to the one through whom he wants to avenge his murder, since the guards see him.

What is the moral of Hamlet?
The truth is that everyone in Hamlet acts shamelessly, and for us the moral of the play is the production of shame in its audience: not too much, just enough. "Stay, illusion!" Illusion is the only means to action.

Who says "the drink, the drink! I am poisoned"?
From Hamlet:

| Original Text | Modern Text |
| GERTRUDE: No, no, the drink, the drink!—O my dear Hamlet! The drink, the drink! I am poisoned. (dies) | GERTRUDE: No, no, the drink, the drink! Oh, my dear Hamlet! The drink, the drink! I've been poisoned. (she dies) |

Where does Hamlet say he is pretending to be mad?
Hamlet appears to act mad when he hears of his father's murder. At the time he speaks "wild and whirling words" that appear senseless to Horatio and Marcellus (Act I, Scene v, lines 127-134).

Why did Gertrude marry Claudius so quickly?
As for Gertrude, she gets to stay queen, which has to be better (certainly to someone as shallow as Gertrude) than becoming an ordinary noblewoman. There is other conjecture that Claudius and Gertrude had been involved for a long time, and that's why they married so quickly after King Hamlet's death.

Does Gertrude think Hamlet is mad?
Later, Gertrude does what Hamlet asks and tells Claudius that Hamlet is mad. Since she goes along with this, we can assume that it is more likely that, by the end of this scene, Gertrude does not think Hamlet is mad. Gertrude is a difficult character.

Did Gertrude kill herself by knowingly drinking from the poisoned cup?
She knowingly drinks from the poisoned cup and then offers it to Hamlet so that Claudius's plan cannot come to fruition: he, who has killed her husband, would not also be responsible for the death of her son.

What is Hamlet's soliloquy "To be or not to be" about?
The soliloquy is essentially about life and death: "To be or not to be" means "To live or not to live" (or "To live or to die"). Hamlet discusses how painful and miserable human life is, and how death (specifically suicide) would be preferable, were it not for the fearful uncertainty of what comes after death.

Why is Hamlet faking his madness?
There is much evidence in the play that Hamlet deliberately feigned fits of madness in order to confuse and disconcert the king and his attendants. His avowed intention to act "strange or odd" and to "put an antic disposition on" (I.v.170, 172) is not the only indication.

Who is the tragic hero in Hamlet?
Hamlet, the Prince of Denmark, violates the law by killing different people, such as Polonius, Laertes, Claudius, and Rosencrantz and Guildenstern, making him a tragic hero. Hamlet's madness leads him down this path of destruction, in which he harms and kills many people.

What are 5 things that need to be cited?
When sources must be cited (checklist):
- Quotations, opinions, and predictions, whether directly quoted or paraphrased.
- Statistics derived by the original author.
- Visuals in the original.
- Another author's theories.
- Case studies.
- Another author's direct experimental methods or results.
- Another author's specialized research procedures or findings.

Who poisoned Gertrude?
Claudius.

What is wrong with Hamlet?
Hamlet has the problem of procrastination and cannot act on his emotions, due to a lack of self-discipline. He is a man of reason who denies emotion, so that his search for the truth of whether Claudius killed his father can be satisfied.

What things do not need to be cited?
To cite or not to cite? There are certain things that do not need documentation or credit, including:
- Writing your own lived experiences, your own observations and insights, your own thoughts, and your own conclusions about a subject.
- Writing up your own results obtained through lab or field experiments.

Does Horatio think Hamlet is mad?
Hamlet has no obvious reason to fake insanity, and Horatio, at least, seems to think that Hamlet is already behaving strangely: he describes Hamlet's words as "wild and whirling" (I.v.132). Hamlet refuses to make straightforward distinctions between madness and sanity, or between reality and pretense.

What mental illness does Hamlet have?
The interpretation which best fits the evidence is that Hamlet was suffering from an acute depressive illness, with some obsessional features. He could not make a firm resolve to act. In Shakespeare's time there was no concept of acute depressive illness, although melancholy was well known.

Did Gertrude deserve to die?
Q: "They all deserve to die." Discuss. One can argue that Claudius deserves death if he indeed did commit treason and kill King Hamlet. Gertrude, however, truly did not deserve death.
https://www.annalsofamericus.com/why-does-gertrude-disobey-claudius-in-drinking-the-poison/
Hamlet's death is ultimately caused when he is stabbed with a poisoned sword; however, he finally attains his goal of killing King Claudius. By comparing the plotlines of Ophelia and Hamlet, the similarities become clearer. Both characters are young individuals who want to follow the wishes of their fathers, and by doing this they are indirectly led to their own deaths. For most of the play, Hamlet has contemplated suicide and questioned his actions. He decided not to kill Claudius as he was praying, and this makes him feel as though he has failed in his quest for revenge. Hamlet wants to show Queen Gertrude the error in her decision to marry Claudius and tells his mother: "Come, come, and sit you down; you shall not budge; / You go not till I set you up a glass / Where you may see the inmost part of you." Hamlet eventually kills Claudius as his father told him to, but only after his mother, Gertrude, drank the poison that Claudius meant for Hamlet. This is a result of external action from all the sorrows that were building up in Hamlet's life. This brings us to our next character: Gertrude, Claudius's wife and Hamlet's mother.

So how does King Hamlet affect the theme of the play as a whole, even while being so briefly present? It all stems from when he told Hamlet what really happened. Hamlet was overwhelmed with madness, and as a result it affected everyone else, because he was acting out his madness, making them wonder why he stayed so angry for so long; over time they didn't think he should still be that upset about his father's death. When King Hamlet told his son that he had been killed by Hamlet's uncle Claudius, who was now sleeping with his mother, it drove Hamlet to instantly seek vengeance for his father throughout the entire play. This set the theme for the play: vengeance.

Hamlet's madness is started by love but is infused with jealousy. Hamlet comes up with a plan to see if King Claudius really did kill his father, so he gets actors to re-enact how King Claudius killed him. Hamlet turns out to be very jealous of the actors, because they are showing fake emotions when he really is feeling depressed and very emotional: "O, reform it altogether! / And let those that play your clowns speak no more than is set down for them."

Near the end of the play, Juliet fakes her death to be free with Romeo. "Peace, ho, for shame! Confusion's cure lives not in these confusions" (Friar, Act 4, Scene 5, Line 65). This quote shows revenge in Romeo and Juliet by proving that Juliet's family is hurt by the revenge that they caused. Therefore, Juliet's family causes grief to themselves by seeking revenge on the Montagues.

After meeting his father's ghost, Hamlet had to investigate further and see if his uncle was guilty. In Gladiator, Commodus kills his father, Emperor Marcus, because Marcus was going to give the throne to Maximus. Maximus is very upset by this, because he thought of Marcus as a father, and he decides not to honor Commodus as emperor. Since both Hamlet and Maximus were noble, they have ...

The new king was even too lazy to cover his own lies and protect himself, which causes much heartbreak in the end. The king kills his new wife because he was too lazy to kill young Hamlet himself (Act 5, scene 2, line 316). The king tried to get Laertes to kill Hamlet to protect himself. As a backup plan, he mixed a poison in hopes that Hamlet would drink it.
Hamlet says (3.3.72-73), as he debates whether or not to kill the king at prayer, that if he kills him now, the king will just go to heaven because he is praying. Because he overthinks the murder of Claudius and does not take action when he is able to, he creates a domino effect of events. Hamlet finally follows through with his plan after a long time of thinking, but he kills Polonius instead. Polonius' murder leads to Ophelia committing suicide and to Laertes getting involved and wanting revenge on Hamlet for killing his father.

Lastly, Claudius, the man responsible for the death of Hamlet's father, is eventually killed because Hamlet was flushed with rage by the murder and by the lack of remorse from Claudius. After being betrayed, Hamlet seeks a kind of revenge against these individuals, whether it results in their death or in verbal abuse.

In Shakespeare's Hamlet, revenge plays a large role in some of the characters' actions. Hamlet was trying to get revenge on Claudius for almost the whole play. Laertes wants revenge on Hamlet because Hamlet killed his father. Young Fortinbras wants revenge for King Hamlet killing his father. Although all of these characters were trying to get revenge, they all had different outcomes.

In Act 1 Hamlet says, "Frailty, thy name is woman" (I.ii.146). He is demonstrating his contempt for his mother Gertrude's actions in marrying Claudius shortly after the death of the king, her husband. Throughout the play we see that Hamlet is disgusted with his mother for her lack of character and strength. Shakespeare uses good imagery throughout the play while describing Hamlet and Gertrude's odd relationship, which makes the interactions between the two much more interesting. Another relationship is the one between Ophelia and Hamlet.

"Haste me to know 't, that I, with wings as swift / As meditation or the thoughts of love, / May sweep to my revenge" (1.5, pg 23). At this point, Hamlet is eager to avenge his father's death, even though he does not yet know who the culprit is. The Ghost proceeds to tell Hamlet who killed him: "Thus was I, sleeping, by a brother's hand / Of life, of crown, of queen, at once dispatch'd, / Cut off even in the blossoms of my sin, / Unhousel'd, disappointed, unaneled, / No reckoning made, but sent to my account / With all my imperfections on my head."

Act three, scene one, also known as the nunnery scene, is a very important scene in the play. In this scene, Claudius and Polonius listen in on Hamlet and Ophelia's conversation to try to find out the cause of Hamlet's madness. Hamlet enters Ophelia's room and begins his most famous soliloquy, "To be or not to be." In this soliloquy, he questions whether suicide is the answer. This soliloquy is very important to the rest of the play because it shows Hamlet's deeper thinking.

The Tragedy of Hamlet is written as substantial, yet subtle. Shakespeare creates this drama with twists and turns in each scene, which can keep readers on the edge of their seats. Shakespeare uses soliloquies, dramatic dialogues, and revenge tragedy to unfold a tremendous amount of detail in Hamlet, creating a dramatically ironic approach. Hamlet and Ophelia's love for one another is played quite differently in Laurence Olivier's version than in Franco Zeffirelli's version of this tragic play. Ophelia and Hamlet's love for one another was thwarted in part by Ophelia's oppression in the play.
https://www.ipl.org/essay/Hamlet-Role-Play-Analysis-FCRSNS2NSU
Hamlet starts to act as a madman to avenge the death of his father by his uncle. Ophelia, on the other hand, goes mad after the death of her father. Shakespeare uses both of these characters, and their relationships with other characters, to affect the main plot of the play. Many people debate whether Hamlet's madness is real or fake. Shakespeare incorporated the theme of madness to serve as a motive for Hamlet to deceive others.

Revenge in Hamlet: Hamlet is a play written by William Shakespeare, and it is based on revenge and how the acts of certain individuals can lead to tragedy and affect everyone. Hamlet's father has just died and, as a ghost, visits Hamlet and secretly tells him the truth of what happened. He tells Hamlet that he did not die of natural causes but was poisoned by Claudius, Hamlet's uncle and now stepfather. This encounter begins a challenge and an obligation to seek revenge for his father. Hamlet is speaking to the mysterious ghost, whose message is that if Hamlet ever loved his father, he will "Revenge his foul and most unnatural murder" (1.5.25). Hamlet was already greatly affected by his father's death and was in deep mourning. After the ghost came into contact with Hamlet, he embodied anger and found a deep addiction to revenge.

The ghost of Hamlet's father revealed something to the young Hamlet about how the ghost, Hamlet's father, had died. From there, it set the course for the rest of the play. The ghost informed Hamlet that he had been killed by King Claudius and that Claudius was, in fact, Hamlet's uncle. He only obtained the throne after killing his brother and marrying his brother's widow. King Claudius appears to the audience as a civil, diplomatic ruler, and it is only when the ghost first appears to Hamlet that it is revealed that Claudius killed his brother with poison. The betrayal of his mother marrying King Claudius within a month of King Hamlet's death leaves Prince Hamlet feeling angry, bitter, and vengeful towards his mother. The fact that she married her brother-in-law was also considered incestuous and sinful in that era. When Prince Hamlet hires travelling actors to perform before King Claudius and Queen Gertrude, they notice that the plot is similar to the murder of King Hamlet.

Hamlet shows Gertrude that she has lowered her standards by marrying Claudius when he refers to old Hamlet as "A combination and a form indeed / Where every god did seem to set his seal" (3.4.55-61). This quotation shows what Hamlet saw in his father and how bitterly disappointed he is in his mother's choice of lord. Hamlet's frustration is heightened by Claudius' unsympathetic remarks. Earlier in the play, King Claudius comments on the irrationality of Hamlet's grief by saying, "That thus hath put him / So much from th' understanding of himself, I cannot dream of" (2.2.8-10). The intensity of Hamlet's grief may encourage others besides Claudius to be prejudiced towards treating him as insane. In the wake of his father's death, Hamlet takes actions that other characters perceive as insane.

This is apparent through the appearance of his father. The apparition claims, "I am thy [Hamlet's] father's spirit" (I.v.14). This shows that the king's physical body is dead, but not his soul. But the king admits that he has done some bad things in his life, and therefore he is "doomed for a certain term to walk the night" (I.v.15). As Hamlet figures out that his mother's husband, his uncle Claudius, is a murderer, he comes to see his mother as also being at fault.
Sarcastically, Hamlet states, "What should a man do but be merry? For look you how cheerfully my mother looks, and my father died within's two hours." Unlike Hamlet, Laertes develops a different kind of madness, a madness that is controlled by revenge. When Laertes is talking to Claudius, so much desire for revenge against Hamlet builds up inside him that he now wants to cut Hamlet's throat.

Soon after, the young prince is visited by a ghost that resembles his dead father. To add to the confusion of Hamlet's situation, the ghost gives details about the truth of King Hamlet's death: the King was murdered by Claudius while asleep. Because of this and other similar factors, like betrayal, Hamlet begins to fall into a kind of insanity. Throughout William Shakespeare's play The Tragedy of Hamlet, Prince of Denmark, indications of Prince Hamlet's true madness are seen in his feelings of abandonment and betrayal within his relationships with family and friends, in his unstable emotions and thoughts of avenging his father's "unnatural" murder, and in the unbelievable appearance of what is presumed to be the ghost of his father, the former king of Denmark.

In the soliloquy, Hamlet is at first upset with himself for finding ways to avoid avenging his father's murder, as his father's spirit told him to do. This complaining turns into self-hatred, and then Hamlet is insulting himself outright. The main reason for this is that he has agreed to get revenge on Claudius so his father's spirit can be at peace, but he hasn't done it yet. The fact that the Player seems more able to get into the mindset of revenge than he can further discourages him. This, on top of the fact that Hamlet's father is dead and his mother has married the man he hates most in the world, makes for a pretty melancholy fellow.

In Hamlet by William Shakespeare, Hamlet experiences acts of betrayal by individuals in his inner circle and reciprocates with acts of revenge, which ultimately result in his death. In the play, the main character is portrayed as an intelligent university student who has returned home to attend his father's funeral. The first act of betrayal Hamlet experiences occurred when Claudius, Hamlet's uncle, stepfather, and king, killed his father and took control, effectively robbing Hamlet of the crown and the chance to be king. Hamlet adored his father and was devastated when his mother, Gertrude, entered an incestuous relationship with his uncle, whom she married so quickly after his father's death that "the funeral baked meats / Did coldly furnish forth the marriage tables" (1.2.184-185).
https://www.antiessays.com/free-essays/The-Foils-Of-Hamlet-755240.html
A slash chord is a type of chord symbol in music that indicates a chord played with a specific root/bass note on the bottom. In this article, we'll discuss the practical applications of slash chords in our writing and the scales that go with each of the slash chords!

Building a Slash Chord

There are 2 parts that make up a slash chord:
- A chord.
- A specific bass note.

A chord could be any chord, but most often with slash chord harmony, it will be either a major or minor triad. A specific bass note could be any note of the chromatic scale. It doesn't have to be a chord tone. The bass note will always have some sort of relationship to the chord above it, which we'll get into later when we discuss slash chord-scale relationships!

The slash chord is written as: [chord]/[root note]

So, for example, if we have a C Major triad over an F♯ root, we'd write it as: C/F♯

So we know how to build and write slash chords. Let's look at some practical applications.

Practical Application

It's always important to ask "how can I use this?" when learning music theory (or anything, for that matter). So how can we use slash chords in our compositions? Well, there are 3 main compositional techniques that have to do with slash chords. They are:
- Writing with inversions of chords
- Walking a bass line
- Holding a pedal

Let's discuss each of these in a bit more detail.

Inversions of a Chord

This is when the bass note is a chord tone. By explicitly stating which chord tone we want in the bass, we give a better sense of how the chord should be voiced. For example, if we have Am/C ("A Minor over C"): A Minor has the notes A, C, and E. By putting C in the bass, we are putting the third at the bottom and giving a stronger sense of how the chord should be voiced. Note that this doesn't necessarily tell us that we're in first inversion, since we don't know exactly how the A Minor chord is voiced above the root note of C. However, the bass is a very important part of the chord, making this specific type of slash chord important in writing and composition!

Walking a Bass Line

This is where slash chords get interesting in composition. By writing with slash chords, we have the opportunity to show a specific bass line under chords. To keep this example tidy, I won't change the chord part; I'll only change the bass notes. For example, let's take our C Major triad and walk a bass line of A, A♭, G, G♭, F under it. The slash chords would be written like this:

C/A C/A♭ C/G C/G♭ C/F

And that explicitly tells us we have a bass line moving downward through the chromatic scale. This is a quick way to combine chords and bass lines in one sheet, and it's a cool way of thinking about combined harmony.

Holding a Pedal

My favourite use of slash chords is to modulate over a pedal point. For example, we could hold F as the bass note and change chords over top of it:

E♭/F D/F D♭/F C/F B/F

In fact, I use these exact chords in the first half of the chord progression in my track Salt Lamp, from the album Fine Dining With An Octopus. I personally love pedal points. Try modulating through different slash chords and see what you come up with!
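To make the notation concrete, here is a minimal sketch in plain Python of the [chord]/[bass note] spelling for the walking-bass and pedal-point examples above. The helper name slash_chords is my own invention, not something from the article, and ASCII "b" stands in for the flat sign:

```python
# A minimal sketch (hypothetical helper, plain Python) that spells out slash
# chord symbols for a fixed chord over a moving bass line: [chord]/[bass note].

def slash_chords(chord, bass_notes):
    """Pair one chord symbol with each bass note in turn."""
    return [f"{chord}/{bass}" for bass in bass_notes]

# Walking a chromatic bass line down under a static C Major triad:
print(" ".join(slash_chords("C", ["A", "Ab", "G", "Gb", "F"])))
# -> C/A C/Ab C/G C/Gb C/F

# Holding an F pedal while the chords above it change:
print(" ".join(f"{chord}/F" for chord in ["Eb", "D", "Db", "C", "B"]))
# -> Eb/F D/F Db/F C/F B/F
```

Either way, the chord symbol stays readable while the bass motion is written out explicitly, which is the whole appeal of the notation.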
Next, let's take a look at what scales we could potentially use to write melodies over the chords above. Let's dive into it!

Slash Chord/Scale Relationships

In our study of chord-scale relationships, our aim is to find which scales and modes fit over which chords in order to consciously build stronger melodies and improvisations. Slash chords present a different way of writing and thinking about chords and harmony. They give us a defined root, which is helpful in naming a compatible scale. And sometimes, a slash chord is the neatest way of writing and thinking about a certain chord.

Let's first look at an example. We'll take our previous example of C/F♯: a C Major triad (C, E, and G) over an F♯. Let's look at our intervals and build compatible scales around them:
- F♯ is our root
- G is a ♭9 (or ♭2) above F♯
- C is a ♭5 above F♯
- E is a ♭7 above F♯

So, based on F♯, we have the scale degrees of: 1, ♭2, ♭5, ♭7.

Now we ask ourselves: which scales have those intervals in them? Well, to name a few (other than the chromatic scale), we have, based on F♯: …

All of the above scales (starting on F♯) play well over a C/F♯ chord, since they contain the intervals of the chord C/F♯! Another way of looking at this chord is as C Lydian with the characteristic ♯4 in the bass. But since the F♯ is in the bass, it gives more of a Locrian sound within diatonic harmony. If the F♯ were sounded above the C Major triad, I would definitely call it C Lydian.

With that being said, I will present some tables here with C Major and C Minor triads over each of the 12 possible chromatic bass notes, complete with the scales that can be played over them. The scales and modes in the tables are based on the bass notes of the chords and are sourced from the following scales:
- Octatonic scale
- Major scale
- Melodic minor scale

As an exercise, I'd encourage you to find all the scales that would fit each slash chord (similar to what I did with C/F♯). This is not necessary for understanding slash chords and their relationships with scales, but it could be an interesting theory practice.

Without further ado, here are the tables:

List of C Major Slash Chords With Scales
List of C Minor Slash Chords With Scales

Have some fun with these slash chords and their corresponding scales!

In closing, slash chords are an interesting concept to think about when composing music. I encourage you to think about the possibilities of slash chords in your writing! Are there any compositional techniques you enjoy implementing that involve slash chords? Please leave a comment, I'd love to hear from you 🙂 As always, thanks for reading and for your support,
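As a rough companion to the interval reasoning above, here is a small Python sketch of the arithmetic behind matching scales to a slash chord. It is my own illustration, not code from the article, and the three candidate scales are a hand-picked, non-exhaustive assumption (the article's full scale lists and tables did not survive extraction):

```python
# A sketch of slash chord-scale matching via semitone arithmetic.
# The SCALES dict is an illustrative assumption, not an exhaustive list.

PITCH_CLASS = {"C": 0, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
               "F#": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

# Candidate scales as sets of semitones above their root.
SCALES = {
    "Locrian": {0, 1, 3, 5, 6, 8, 10},
    "Half-whole octatonic": {0, 1, 3, 4, 6, 7, 9, 10},
    "Altered (7th mode of melodic minor)": {0, 1, 3, 4, 6, 8, 10},
}

def intervals_above_bass(chord_tones, bass):
    """Semitone intervals of the chord tones, measured up from the bass note."""
    b = PITCH_CLASS[bass]
    return {0} | {(PITCH_CLASS[t] - b) % 12 for t in chord_tones}

def compatible_scales(chord_tones, bass):
    """Candidate scales (built on the bass note) containing every chord interval."""
    needed = intervals_above_bass(chord_tones, bass)
    return [name for name, degrees in SCALES.items() if needed <= degrees]

# C/F#: a C Major triad (C, E, G) over F# -> 1, b2, b5, b7 above F#.
print(sorted(intervals_above_bass(["C", "E", "G"], "F#")))  # [0, 1, 6, 10]
print(compatible_scales(["C", "E", "G"], "F#"))  # all three candidates match
```

The subset test mirrors the article's reasoning: a scale built on the bass note "plays well" when it contains every interval the slash chord demands.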
https://arthurfoxmusic.com/slash-chords-scales/
Questions tagged [melody]
A succession of notes which comprise the principal part in harmonized music. 234 questions.

- Why is the tonic so important for melodies? I made a simple melody in FL Studio that was in C major. The melody consisted of two phrases, both of which started on D. The only difference between the two is the second phrase ended on C (the tonic ...
- Question from Exercises in Melody (Percy Goetschius). I've been studying said book for learning melody writing and I find it to be very methodical. It is completely free to download for anyone interested (google). I found a line I can't seem to ...
- Can this Arabic scale be described in Western terms? Does anyone know what scale this is? I know it may be a scale that's not used in Western countries and also uses quarter tones... but can it be described in Western music terms? It sounds similar to ...
- Adding interest to arpeggios. Recently I've been more confident in the progressions I choose for my songs, but the guitar arrangements are still very uninspired. I am particularly bad at creating cool arpeggios or riffs that ...
- Is it correct to call this figure a "turn figure" in Haydn's Surprise Symphony? In the IB Music Revision Guide, the author says that a cadence in the second movement of Haydn's Surprise Symphony (end of the first statement of the theme) is "decorated with a turn figure" (see ...
- Roman numeral analysis for melody? This is mainly a notation question. But when analyzing a melody, how do you notate melody notes? Is it 1, 2, 3, 4, 5, 6, 7 or I, II, III, IV, V, VI, VII (or maybe lower case), or do, re, mi, fa, sol, ...
- What kind of melodic inversion is this? My understanding of melodic inversion (strict and diatonic) is taking a melody/motif, beginning on the same note and moving in the opposite direction. However, in the IB Music Revision Guide's ...
- Question regarding bass line. I'm very new to music theory, so pardon me if it's a very infantile question, but I would like to learn. I've been playing the piano, and while just playing around I started playing E D# C# B A G# F# B# E ...
- Applying schemata to chords other than I, IV, V. I have been reading up on galant schemata (stock melodic phrases) recently and have run into a question regarding their universality. All of the various schemas (Prinner, Pastorale, Fenaroli, etc.) ...
- Understanding schematic analysis notation. What do the numbers above and below the staffs represent? I was assuming that the white-circled numbers under the bass parts were the thoroughbass and that the black-circled numbers above were the ...
- Standard or fundamental melodic patterns. Between systems like the circle of fifths, useful tools like the "chord board" (a chart that, if you follow the "rules of the game", allows you to create strong progressions), "standard" progressions, ...
- Is it possible to stick too closely to Mozart in an arrangement? One of the many musical things I do is arrange pieces by other composers. As far as composers go, the easiest for me to arrange is Mozart (probably because of a combination of early and sustained ...
- Do the diatonic scales D major, A major, E major and B minor contain the same notes used by the pentatonic F-sharp minor scale? The F# minor pentatonic scale notes are: F# A B C# E. Now I want to know if the D major, A major, E major and B minor scales (all diatonic) contain all the notes used by the F# pentatonic ...
- What are the differences between natural major, harmonic major and melodic major scales? The natural major scale on C has the notes C, D, E, F, G, A, B. What is the difference between natural major, harmonic major, and melodic major scales? What notes of the natural major are changed in ...
- Is this melody too complicated for a first variation? I asked in a separate question whether Theme and Variations would be the best fitting form for my "Dance of Nature" movement. I decided that the multi-movement work this is part of would be a symphony, ...
- How to determine a song's title with only the melody [closed]. How do music professionals identify the title of a song when all they know is the melody of that song? For example, I'm having great difficulty identifying the title of the melody played during this ...
- I need help understanding how to assign Roman numerals to a key as well as how to do a melodic analysis [closed]. Here is the link to the questions: https://scontent.fybz2-1.fna.fbcdn.net/v/t1.15752-9/s2048x2048/76767498_631590154044645_5429312693389492224_n.jpg?_nc_cat=108&_nc_ohc=...
- Antecedent and consequent phrases in a minor scale. My question is: what are the rules for melodic phrasing in minor scales? For example, I know that in a major scale the vii wants to go up to the I, and the V can go wherever, etc. What are the rules ...
- How to make an instrument's melody sound right with a vocal melody? Every time I add a melody with a distinct rhythm (from, say, the piano) to my vocal melody, I notice that the music starts sounding confusing and irritating. Some songs have riffs (i.e. a melody of an ...
- Head voice and chest voice [duplicate]. When I hear something (a song, for instance) I want to be able to tell whether I am hearing a head-voice rendition or that of a chest voice.
- What are the characteristics that define a "good" melody? When composing a song, I think it might help if I understood what some of the characteristics of a good melody might be, and whether those characteristics apply generally across the board or ...
- How many notes from all world music are in an octave? [closed] Considering all the music from all the different cultures on earth, how many notes do we get in an octave? I'd guess something around 24 instead of the 12 of Western music?
- Difference between Indian and English lyricism besides language? Suppose we sing or rap about dark themes (depression, alcohol, drugs, violence, sex) similar to modern American music from 1970 to the present day in Hindi, Punjabi, Tamil, or Malayalam over Indian classical ...
- Unintentional plagiarism while composing. I have recently started composing using some knowledge that I got after a few months of studying counterpoint and harmony (well, it is more like trying to compose). And I have noticed that quite ...
- Guitar riff comparisons and similarities. Consider the bass guitar line of a song called "Schism", and another bass guitar line of a song called "Pneuma". When Pneuma was ...
- How to convert what I'm singing to notes. I've been trying for months to get the songs in my head down on piano. But whenever I try, none of the notes seem to fit what I'm singing. I'm guessing this is because I'm either (a) out of tune or (b) ...
- Composing on guitar: fingerstyle vs classical. I'm wondering about the difference in the process of composing a song for fingerstyle guitar vs what would be considered a classical guitar piece. Note: I'm an amateur hobbyist, so please excuse my ...
- All chords for one melody note [duplicate]. Sorry, folks, for any confusion. For example, let's say that you have a G. It could belong to the chords G, Gm, C, Cm, Em, Dsus4, Eb, A7, Am7, and a bunch more. Is there an online tool where I can find ...
- Notes between chords. I am just a beginner playing the piano and have been studying the scales, progressions, triads, just the basic theory. So when I play a song I can play the chords, but I don't know how to find notes to play ...
- Why is a block consisting of "re" and "fa" notes classified as chord V? I am reading this paper: Toward a Musical Analysis of World Music. On page 4 there is this statement: All of the songs are syllabic and are subject to clear harmony (prototype degrees [I, V, IV] ...
- Can chords be inferred from melody alone? [duplicate] Some vids about arranging on fingerstyle guitar say to figure out the chords first, either by ear or through tabs, then to play the melody on top of them. But that seems like too much work. For one, ...
- How to differentiate a countermelody from arpeggiated accompaniment? I wonder what the main difference would be between arpeggiated accompaniment and countermelody. Could it be the rule that if it's an arpeggiated accompaniment then it would be more repetitive (because ...
- Are fretless stringed instruments used mainly for melody? Just wondering about fretless stringed instruments like a violin or oud: does being fretless imply that they are used mainly for melody? In contrast to an instrument like a guitar, which has frets, does ...
- Comparing variable-length pitch contours. Robert Morris defined a contour segment of a set of n pitches as a list of numbers from 0 to n, where each number corresponds to the relative height of its corresponding pitch (for instance, <...
- Best method to convert MIDI-style data to a proper sheet with correct note names? [closed] What are the best general algorithms to determine the correct notes (with sharp/flat) as they appear on a sheet, with the only input being MIDI-style data that has only the designation of a white or black key ...
- How should a melody be treated when switched from the right to the left hand? For example, in Chopin's Etude Op. 10, No. 4 (Torrent), the melody is initially in the right hand and it's accompanied by the left hand. A few bars after, the right takes over the same accompaniment ...
- Intervals when making a tone row: what to keep in mind? For writing 12-tone or serial music, one generally makes a tone row, and then uses that for constructing a melody. I want to know what would be the maximum range of intervals between any 2 consecutive ...
- Aural mnemonics: can we compile a list? [duplicate] When I do ear training, I like to use mnemonics, in the form of a short snippet from a well-known song, to help me memorize certain aural patterns. I can share some of the ones I have so far, but I ...
- Chords behaving as a melody. I've been learning some Christmas carols on the guitar and I noticed that the chord structure they have is very different from what I can usually see in popular/rock music. I noticed that the ...
- Singing along to guitar chords (harmony). Recently I started writing songs and I started to wonder: let's say I sing middle C (while playing an open C chord on guitar) for a bar and then go to singing E, a third higher in pitch. Should I follow ...
- Is there a catalog of different rhythms? [closed] The melodies I've been writing have been good in terms of notes and chord progressions, but they lack interesting rhythms. Some examples of good rhythms that are interesting are DROELOE's Kintsugi ...
- Can the melody determine the key of an ambiguous chord progression? I am writing a song in A major. The chords in the verse are A, C#m, F#m, F#m. Now it is clear to me the key of this part of the song is A/F#m, but then in the chorus the C#m chord is switched with the ...
- Are modes in jazz primarily a melody thing? I was wondering: when jazz cats talk about how they'd use a Lydian scale to improvise over a major chord, or a Dorian scale to improvise over a minor chord, are they mainly talking about the melody? So ...
- How to write a 12-bar blues melody. I'm learning to write a 12-bar blues for a music theory class, and I'm a little confused about one point. I understand that the harmony is made of certain typical chord progressions (e.g. I-I-I-I, IV- ...
- I can't produce songs. I'm a guitarist, playing with 3-4 bands 4-5 days a week. I also study sound engineering and do recording, arranging and mixing. My literature is not bad, and I have a good music education. ...
- Does this song use melodic minor or harmonic minor? The song below was on a past IB Music Diploma Programme listening paper. The markscheme says that it uses melodic minor. I hear a raised 7th, particularly in the vocal, but I don't hear a raised 6th ...
- How can whole-tone melodies sound more interesting? How can one make melodies made from whole-tone scales sound more interesting and appealing? Unlike melodies based in a certain mode or key, whole-tone scales use only tones as the intervals. This ...
- Tips on writing 12-bar or 16-bar melodies. I need to write 12-bar or 16-bar melodies for my Grade 8 music theory exam. I know how to write melodies which don't require any modulation. I am new to the concept of using modulation in melody ...
- What are the key features to keep in mind when writing melodies in the Mixolydian mode? When writing a melody in the Mixolydian mode, you don't want it to sound as if it were in a major key. So how do you go about writing melodies in the Mixolydian mode, while maintaining the essence of ...
- If a melody is above the octave, does that give reason to use extended chords? I'm wondering: if the melody, when you play it above an octave, has notes that then become 9ths, 11ths, and 13ths, does that give a reason to use extended chords to harmonize them? Or is there no ...
https://music.stackexchange.com/questions/tagged/melody
Ophis — core music theory and notation library.
Warning: This is in very early development. Almost nothing works. The stuff that does work will probably change.
Ophis is a core music notation, theory, and analysis tool which can be used as the basis for other music applications. It supports contemporary/conventional notation and Gregorian notation.
Components:
- Tonus — Pitch classes, pitches, octave designations.
- Mensura — Time signatures, rhythms, note duration.
- Nota — Notes, note expressions.
- Melodia — Horizontal note structures (melodies, lines, voices, scales).
- Chorda — Vertical and two-dimensional note structures (chords, harmonies, polyphony).
- Signum — Clefs, staff and score expressions, additional markings.
- Armonica — Music theory and analysis.
- Scribo — Notation/printing via Lilypond, Gregorio, and other formats (MusicXML, ABC).
- Medio — MIDI control.
https://pypi.org/project/ophis/
When you start learning to play the piano, the first things you will be taught are scales and chords. Some might think it's torture, or that it's really hard to understand. I was of the same mindset in the past, full of doubt. This is one of the reasons I thought about giving up on music in my first two months of learning piano. I'm glad that I didn't. As time passed, I came to understand that music theory is the backbone of any instrument we play. This gave me a lot of perspective into how different instruments work based on music theory. To put it in one sentence: if you know music theory, you will need roughly half the time to learn an instrument compared to someone with zero knowledge of it. Without music theory, there will always be doubts about whether you are playing the right thing or not. Music theory helps you solidify your foundation in music and build your confidence while performing or recording in a studio. It also helps your creative ability to play according to the mood. None of this is possible if you are not well versed in music theory. People who don't know music theory struggle to grasp other instruments. The biggest issue you will face without it is playing with a band: you are likely to run into many challenges because you won't have a clear understanding of what is being played. Learning music theory starts with knowing the fundamentals of music, such as tempo, time signature, scale, harmony, and chords. Learning these fundamentals first gives you the basics for playing any instrument. Music theory can be learned from a variety of sources, including YouTube, Udemy, and Skillshare. Knowing that you need to learn music theory is one thing, but understanding what you need to learn, and where to devote your time and effort, are the vital aspects of learning it. In this article, I will walk you through the details you need to know to properly learn music theory and put it to good use. Let's get started, shall we?

What is music theory?

Music theory is a combination of concepts that need to be learned in order to perform with instruments on stage, as well as in recording studios. These concepts include tempo, time signature, scale, harmony, and chords. Learning them and applying them to the instrument you are studying is crucial in determining how well you understand music. Music is all about feel and mood. If you don't know music theory, it will be hard for you to evoke a particular mood with chords. This is why you find some music producers making the same type of music over and over: sooner or later they run out of ideas. Some make it far into their careers without theory, but eventually their limited creativity exposes how much they lack. Music theory will allow you to create compositions that are more emotional and have depth. You will better understand which chords work together and which don't. The creative side of a producer, or even a stage performer, shines when they are well versed in music theory. The ability to compose and work with complex chord structures is what makes songs catchy. If you know music theory, you can also simplify chords and make them sound the way you want. These skills are not possible if you aren't well versed in music theory.
Music theory is all about understanding the instrument and learning how well you can use it within its musical language. If you look behind music theory, it is largely numbers and positions; these describe how the sounds an instrument produces relate to one another. Once you see that, you can figure out how to combine one sound with the other sounds the same instrument produces, which opens the way to more complex rhythms and chord structures. Music theory is one of the reasons many classical musicians are still celebrated long after they are gone: their music speaks to how deep their understanding of it was. This is one of the main reasons anyone interested in music, or in learning an instrument, should invest time in learning music theory. It shapes the way you think about sounds. Music theory also makes tracks interesting through changes of time signature and rhythm. One of the common mistakes in music production and performance is over-reliance on certain types of rhythms. If you look at the history of music, every rhythm ties itself to a certain culture and brings emotions out of people. The biggest takeaway with music theory is that you don't just learn chords and numbering; you also learn about the heart of music, which is emotion: how sounds carry emotion, how a minor chord tends toward a sad, melancholy sound, and how a major chord brings joy to people's faces when they hear it. The first time I saw the difference in my playing style after learning music theory, it made complete sense why someone should learn it much earlier in their career.

Why is music theory important?

In this section I will walk you through more of the reasons why you should be learning music theory. When it comes to performing music, the most important thing is being spontaneous on stage. If the artist sings a song that's outside of the set and hasn't been practiced, you should still be able to play it and transition smoothly into it. This is not an easy task without music theory. When you hear the song, you should be able to pick up its scale, find a related chord, and move toward it; you might even figure out the scale entirely by ear. If you are not well versed in music theory, the transition to the next chord of the song won't be smooth. This can only be achieved by applying music theory. The second place this can be used is DJing. If you are a DJ and a performer, knowing music theory helps a great deal with arranging songs based on many factors and correlations, rather than just the tempo and key of the song. When you look at the mixes of producers who are well versed in music theory, there is a significant difference in how they structure their songs. The ability to understand chord shapes and structures will help you manage them properly in your DJ set, rather than cutting them off in the middle. This is a major contributor to why certain DJs are famous for their mixes. If you are a drummer, playing by ear might seem cool at first, but when you start playing at events, the first thing you'll need in order to work with the keyboardists is music theory. The advantage is that you can perfectly sync your time signature with all the other players in the band. All in all, if you are playing in a band, you need music theory to shine through.
Music theory can cover up a lot of mistakes if you are knowledgeable in it. You'll be able to sweep through tracks and analyze songs faster too. If you are improvising, music theory is your best friend. You can learn as many songs as you want, but if you want to get better at music, you need improvisation, and you cannot improvise without really understanding music theory. If you don't know anything about it, you will be circling through the same five chords over and over again. The other important place you will need music theory is music production. Unlike other areas of music, here you need to do everything to make a song: you will start with instruments and also add drums. You cannot take any shortcuts when it comes to music production. If you aren't knowledgeable in music theory, it will be exposed very easily in your production, and you will spend a lot of time looking for things and getting stuck. The best music producers are people who are well versed in music theory and can handle composing and differentiating between the intricate details in a song. This is where good producers separate themselves from ordinary ones. There is a radical difference between a song produced by someone who knows music theory and someone who doesn't. This is why I would encourage you to learn music theory as early as possible, at your own pace. If you try to take it all in at once, you will get confused; take it steadily and you will get better at it.

Components of music theory

When it comes to music theory, there are so many intricate details that you can't put them all together in a single post. The depths of music are vast and have been explained by people in different ways according to their understanding. In this article, I have compiled them into five important aspects of music theory that you should pay attention to while learning. These will also determine how well you are learning music theory. Sometimes you might get started with something, lose focus completely, and end up learning things that aren't that important. Every component explained below is vital in structuring your learning of music theory.

Tempo

One of the important components of music theory is tempo. You cannot have a song without it. Tempo is also referred to as 'beats per minute' or BPM: a number that represents how many beats occur in one minute. Tempo has great significance in modern-day music production because it dictates how music is evolving and how a song should be paced. We have songs today whose BPM varies from start to finish, and these tempo changes make the song interesting to listen to and add flair to the overall composition. Of all the elements in music theory, this is the easiest for most people to grasp: the higher the tempo, the faster the song; the lower the tempo, the slower the song. The energy of the song is also conveyed by the choice of tempo. An old-school hip-hop song might sit at a tempo of 87 BPM, while a new electronic dance music song might run at 127 BPM. Tempo thus shapes how a song fits its genre and how strongly its energy comes across. Some songs deviate from the norm significantly: their BPM is unusual, but they convey more energy through the lyrics and other components.
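Because tempo is simple arithmetic, a few lines of code make it concrete. Below is a minimal Python sketch, my own illustration rather than anything from this article; the function names are made up.

```python
# Convert a tempo in BPM into the duration of one beat and of one 4/4 bar.
def beat_seconds(bpm: float) -> float:
    return 60.0 / bpm  # 60 seconds in a minute, divided by beats per minute

def bar_seconds(bpm: float, beats_per_bar: int = 4) -> float:
    return beat_seconds(bpm) * beats_per_bar

print(round(beat_seconds(87), 3))  # 0.69  -> one beat at an old-school hip-hop tempo
print(round(bar_seconds(127), 3))  # 1.89  -> one 4/4 bar at a typical EDM tempo
```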
Learning about tempo is the first, most basic step in music theory. You have to understand how tempo works in different genres of music.

Time signature

If tempo is the speed of the vehicle, then the time signature is the model of the vehicle: just as the model shapes how the speed feels, the type of time signature shapes how the tempo feels. The time signature tells you how many beats there are in each measure of a song, and which note value counts as one beat. This varies a lot between genres. When you get started in music production, you should learn time signatures from the get-go; this gives you more time to understand how the system of different time signatures works. Modern-day time signatures tend to be simple, because the goal is to make songs easily understandable; having a viral song is valued more than having a musically complex one. From a musician's standpoint, though, you should learn the more complex time signatures from the start and get good at them. Differentiating 4/4 from 3/4 is a great place to begin. Once you develop that skill you can move on to 7/8 and 5/4, and after that, 6/8 and 2/4. Learning these six time signatures at the start will kick-start your understanding of music theory, as you will use them in every song you produce.

Scale

If I had to highlight certain elements as vitally important in music theory, they would be scales and chords. You can adjust most other things, but getting scales and chords wrong is the rookie mistake every newbie producer makes. Learning these will keep you safe in all aspects of music production, from melodies to the bass of a song. A scale is a set of notes (eight, counting the repeated octave) that dictates the path of the song sonically. There are 12 major scales and 12 minor scales, and countless subcategories of scales as well. When you are starting out, being proficient in the 12 major and 12 minor scales is essential to writing a song in any key. Some tools make the process of learning scales easier, such as taking piano lessons on the side. When you focus on scales, the cohesiveness of the song is always on point. Scales determine the mood of a song or audio track: a minor scale will sound more dramatic and sad, while a major scale will sound fun and happy.

Harmony

The next thing you'll have to learn in music theory is harmony. Harmony is what makes all the elements of the music sound good together. Without harmony a song would sound like noise; even if all the elements are put together properly, it won't sound good. This is the next important aspect of music theory after the time signature. Harmony is created when the frequencies of simultaneous sounds sit at well-matched distances from each other. This is why, when you press three or four piano keys at regular intervals, the result sounds pleasant and in harmony. Getting everything in a song, from the time signature down to the smallest element, to work in harmony is the most important lesson you will take away from music theory. Harmony acts like glue between the rhythm and the musical notes played in the song; it binds all the elements together and makes the song more effective. If there is one thing you should focus on most in music theory, it is harmony.
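One way to see the "well-matched distances" idea is through frequency ratios. The sketch below is my own illustration, not the author's, and it uses just-intonation ratios for simplicity.

```python
# Consonant intervals correspond to simple frequency ratios, which is one
# way of seeing why certain notes "agree" when sounded together.
A4 = 440.0  # reference pitch in Hz

RATIOS = {
    "unison": 1 / 1,
    "major third (just)": 5 / 4,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,
}

for name, ratio in RATIOS.items():
    print(f"{name:>20}: {A4 * ratio:.1f} Hz relative to A440")
```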
Chords

In simple terms, chords are groups of notes played together, built at regular intervals from the scale of the song. If you look at the chords in a song, there are usually about three to five that are used repeatedly; this is the same whether the song is in a major or a minor key. Which chords they are depends on the scale the song is written in and how the melody is structured throughout. Every major scale has three basic chords that tend to be used more often than the others: the chords built on the first, fifth, and sixth degrees of the scale. The same holds for minor scales. Learning these first and then moving on to more complex chords will boost your application of music theory. You can learn the chords for one major and one minor scale and then transfer that knowledge to all the other scales, because chords are built from intervals, which makes the learning process much smoother.

Best places to learn music theory

Even though there are many avenues for learning music theory, the best place to start in today's fast-paced world is online courses. They make learning easier because you can study in your leisure time without worrying about anything else. The two best places to learn are Udemy and Skillshare. A lot of people will suggest YouTube, but the problem with it is that the videos won't be cohesive, and you can't learn in peace because you are likely to get distracted very often.

Skillshare

The beauty of Skillshare is that you don't have to pay for individual courses. It's like a Netflix subscription: you pay once and get access to all the content. This makes it easier to see how the courses are structured and to browse for the course that suits you best, which is not the case with a lot of other platforms. The low price, as well as the community of people taking part in discussions, will help you learn music theory faster.

Udemy

More people use Udemy than Skillshare. The difference is that on Udemy you pay for each course separately, rather than paying a set price for access to everything as on Skillshare. The advantage is that you can choose your course and learn without spending more money on all the other courses, and paying the single course fee gets you lifetime access to that course.

How do I start learning music theory?

Learning the fundamentals of music, such as chords, time signatures, and harmony, is the place to start as a beginner. The different major and minor scales are a great starting point as well. If you are frustrated with choosing where to begin, start with time signatures; they are a fun topic for a beginner.

Can I teach myself music theory?

Yes, you can teach yourself music theory. The only problem with teaching yourself is that you have to depend on a proper source for your self-learning resources; a mediocre course or product won't do. This is one of the reasons many people who want to study music in depth take music degrees at university.

How long does it take to learn music theory?

Music theory is vast. You cannot set a date and say you will have learned everything by then.
A better way to put it: music theory is an ocean, and it depends on how far you can swim. The deeper you go, the more interesting it gets. Music is so vast because of the influence of cultures and people; modern music theory has been shaped by thousands of years of music carried through generations.

Is it easy to learn music theory?

Yes, it is easy to learn music theory. The best attitude when learning music is to take it one step at a time rather than getting overwhelmed by how much there is to learn. Always be sure about what you want to do with music theory, and you can make the learning process much simpler and easier.

What are the 12 keys of music?

The 12 keys of music are made up of three sets. The first set is the seven white keys: C, D, E, F, G, A, B. Next are the flats (Eb, Ab, Bb) and the sharps (C# and F#); these last two sets are the five black keys. Add them all up and you get 12 keys. There are 12 major scales and 12 minor scales in music.

How can I learn music theory fast?

Music theory is vast; you cannot expect to learn it fast. The better approach is to find out what is required in each section of music you are working on and learn it step by step. This will make your learning much faster than worrying about how to cover all of music theory in one go.

Conclusion

When it comes to music theory, the important thing is to work toward precision and absolute clarity about whatever you have learned. If those two things are missing, you cannot fully understand and apply the concepts going forward. Even if your pace of learning is slow, a solid understanding will help you progress faster.
https://audioaural.com/how-to-learn-music-theory/
Bass Music Theory For Beginners – The Complete Guide

There's no doubt that natural musical ability is one thing that makes you a great bass player. However, there's a lot more to it than that, and understanding the technical side of bass and music theory is just as important as having a natural talent for music. Understanding bass music theory will help you read and write music, as well as improvise with much more ease. Music theory, as the name implies, is the theory of how music works and why different techniques create different sounds. It tries to explain why things like harmonies, melodies, and rhythms sound so pleasing to the ear. Someone may have a natural gift for basketball, but if they take the time to study the techniques and science behind it, that will only make them a better basketball player. The same goes for music: you may naturally have a talent for playing the bass, but studying the music theory behind it will only make you even better. In this article, I will go over the basics of bass music theory for beginners and give you a complete guide with all the tools you need to become a great bass player.

How to Read Standard Notation for the Bass Clef Staff

When playing the bass, any standard-notation sheet music you read will always be written on the bass clef staff. Because of this, one of the first things you should do when you decide to learn the bass is to learn and memorize the notes of the bass clef staff. The staff consists of five black lines with four white spaces in between them, and these lines and spaces are where the notes sit. The order of the notes on a bass clef staff, starting at the lowest note (the one sitting on the bottom line), is: G, A, B, C, D, E, F, G, A. As you can see, the notes go in alphabetical order, starting over at A after G. Because of this, it is fairly easy to figure out which note goes where, as long as you know that the bottom-line note is a G. There are also some helpful acronyms for remembering the order of the notes on the lines and on the spaces; they will help you memorize the notes more easily and identify a note quicker just by looking at it. The notes that sit on the lines of the bass clef are G, B, D, F, A; the acronym to remember them is "Great Big Dogs Fight Animals." The notes that sit on the spaces are A, C, E, G; the acronym is "All Cows Eat Grass."

How to Read Tab for the Bass

Like classical guitars, bass guitars often use a type of sheet music called tablature, otherwise known as tab. Unlike standard notation, tab uses numbers instead of notes. A bass tab has four horizontal lines stacked on top of each other, representing the four strings of your bass. Beginning with the lowest-pitched string at the bottom, the order of the lines in tab is E, A, D, G. Luckily, there's a helpful acronym you can use to memorize this too: "Every Angry Dog Growls." The numbers placed on the lines represent the frets of the bass and range from 0-20 or 0-24, depending on how many frets your bass has. So if you see a number 2 on the bottom (E) line, you should play the second fret on the E string.
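To show how a tab position maps to an actual pitch, here is a small Python sketch. It is my own illustration (the article doesn't include code) and assumes standard four-string bass tuning.

```python
# Turn a tab position (open string, fret number) into a note name.
# Pitch classes are listed starting from E so open-string lookups are simple.
CHROMATIC = ["E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#", "D", "D#"]
OPEN_STRINGS = ["E", "A", "D", "G"]  # "Every Angry Dog Growls", low to high

def tab_note(string: str, fret: int) -> str:
    start = CHROMATIC.index(string)        # open-string pitch class
    return CHROMATIC[(start + fret) % 12]  # each fret raises the pitch a half step

print(tab_note("E", 2))  # F#  (the "2 on the E line" example from the text)
print(tab_note("A", 3))  # C
```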
Tablature is slightly easier to read and follow than standard notation, but both are important to know in order to understand the bass and its music theory better.

How to Read a Time Signature

Now you know how to read both standard notation and tablature for bass, but there is still another important part of a music staff you must know how to read in order to play the notes correctly: the time signature. The time signature tells you how many beats are in a measure and what the value of each note is. It appears as two numbers stacked on top of each other at the far left side of a staff. The top number tells you how many beats are in a measure, while the bottom number tells you the value of each beat. Four over four is one of the most common time signatures you'll see in sheet music, so we'll use it as an example. The top four means there are four beats per measure; the bottom four means the quarter note gets a single beat. So each measure can hold four quarter notes. If you were counting out the beats in a four-over-four time signature, it would go: 1, 2, 3, 4; 2, 2, 3, 4; 3, 2, 3, 4; and so on.

Understanding Intervals

Now that we've covered the basics of the music staff, it's time to learn about intervals, which are the distances between two notes. The bass only has four strings, but those four strings let you cover all types of intervals. There are major intervals, minor intervals, augmented intervals, and diminished intervals. Major and minor intervals are the two most common ones and the ones you will probably find yourself using the most. In a major second, you move up one whole step: in C major, for example, you would move from C to D. A minor second, on the other hand, moves up only a half step: rather than going from C to D, you'd move from C to C sharp, also called D flat. C sharp and D flat are the exact same note on the bass, so what you choose to call the note is completely up to you. Augmented and diminished intervals are much less common, but it is still useful to know what they are in case you come across them. In an augmented interval, you move up a half step more than the corresponding major interval. In a diminished interval, you move up a half step less than the corresponding minor (or perfect) interval. For example, a diminished second starting at C is written as D double flat (Dbb), which sounds the same as C itself.
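Counting intervals in half steps makes the quality names easy to check. Here is a small Python sketch of the seconds above C (my own example, not the article's; it uses sharp spellings only, so the enharmonic names appear in the comments).

```python
# Seconds above a root, by quality, measured in half steps.
# Diminished = one half step below minor; augmented = one above major.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
SECOND = {"diminished": 0, "minor": 1, "major": 2, "augmented": 3}

def second_above(root: str, quality: str) -> str:
    return NOTES[(NOTES.index(root) + SECOND[quality]) % 12]

for quality in SECOND:
    print(quality, "second above C:", second_above("C", quality))
# diminished second above C: C   (spelled Dbb; sounds the same as C)
# minor second above C: C#       (also spelled Db)
# major second above C: D
# augmented second above C: D#   (also spelled Eb)
```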
Understanding Scales

Scales are an essential part of music theory for every type of musician to know and understand, not just bass players. A scale is a set of notes arranged in a particular pattern of intervals. If the notes go up in pitch, it is an ascending scale; if they go down in pitch, it is a descending scale. Major scales follow the major pattern of intervals. For example, a C major scale looks like this: C, D, E, F, G, A, B, C. C major is a good example of a simpler major scale, but if you take a scale like G major, you will find that one note is a sharp: G, A, B, C, D, E, F#, G. The reason is that major scales follow this sequence: tone, tone, semitone, tone, tone, tone, semitone. Major scales are not the only type out there, though. Just as there are different types of intervals, there are different types of scales. In addition to major scales, there are minor scales, which follow the minor pattern of intervals. Do you see how all of these terms begin to tie together? A natural minor scale follows this sequence: tone, semitone, tone, tone, semitone, tone, tone. So if we were to create a C minor scale, it would look like this: C, D, D#, F, G, G#, A#, C. Keep in mind that D# is the same note as Eb, G# the same as Ab, and A# the same as Bb, so you could also write the C minor scale in this format: C, D, Eb, F, G, Ab, Bb, C. Both spellings mean exactly the same thing, so it's about preference (though the flat spelling is more conventional for C minor). Just remember that all major scales follow the major interval pattern as they go along, and all minor scales follow the minor interval pattern.

Understanding Root Notes

Before I explain what a chord is, it is important for you first to understand what a root note is, because a root note is an essential part of every chord. Every single chord has one note that it is derived from, or that it starts on, and this note is called the root note. So if we play a C major chord, where the chord is derived from the C note, then C is the root note. It is generally very easy to determine the root note of a chord, because it's right there in the chord's name: the root note of a C major chord is C, and so is the root of a C minor chord, just as the root note of a G major chord is G. Whatever note the chord is built on is the root note.

Understanding Chords

Now that we understand root notes and how they tie into chords, let's talk about what exactly a chord is and how to play one. Playing a chord means playing multiple notes at once, creating a new sound from the combination. There are lots of common chords; C major is one of them, and one you will probably see pretty frequently in bass music. Chords are constructed by stacking thirds. I'll give an example. To build a C major chord, we look at the C major scale (C, D, E, F, G, A, B, C) and start at the very beginning with the C note. We then skip over D to E, and skip over F to G, leaving us with C, E, G: a C major chord. The combination of thirds used is what determines which chord you are playing. Say we wanted to create an E minor chord: we would start at E, skip over F to G, and skip over A to B, leaving us with E, G, B. This is an E minor chord.
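Stacking thirds is easy to express in code. This is a minimal sketch of my own (the function name is made up), which takes every other note of the scale starting from the root.

```python
# Build a triad by stacking thirds: take the root, then skip one scale
# note for the third, then skip another for the fifth.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def triad(scale: list[str], root: str) -> list[str]:
    i = scale.index(root)
    return [scale[(i + step) % 7] for step in (0, 2, 4)]  # root, third, fifth

print(triad(C_MAJOR, "C"))  # ['C', 'E', 'G'] -- the C major chord from the text
print(triad(C_MAJOR, "E"))  # ['E', 'G', 'B'] -- the E minor chord from the text
```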
Conclusion

As you can see, a lot goes into the music theory of playing the bass. Taking the time to learn bass music theory and understand the technicalities of chords, scales, and intervals, as well as learning to read both standard notation and bass tablature, will make you a much better bass player in the long run. Some bass players may argue that knowing the technicalities of music theory hinders your creativity and ability to express yourself, but it is quite the opposite: knowing the basics of music theory opens the door for you to build upon them and continue to evolve as a musician. Remember, music theory is not a strict set of rules that you must follow when learning to play the bass, but a set of tools you can use to build upon the musical talent you already have. I hope that after reading this article you have a clearer understanding of bass music theory and feel inspired to try out some of these chords and techniques yourself.
https://touchofhum.com/bass-music-theory-for-beginners/
iBreatheMusic Forums > Practice, Performance & Music Theory > Piano & Keys Forum > anyone for modes?

Hi, leegordo here. While following a post on modes, where the bulk of the comments were generally in favour of modal stuff, what amazed me most was the lengths some of our members have gone to in the study and use of modes. What puzzles me is which came first, and why choose to study two entirely different genres of music? However, I still say that modern-day harmony evolved as a simpler and more comprehensive method of learning harmonies and chords than the more difficult and less comprehensive study of modes. That's why modes became outmoded (pardon the pun!).

First of all, I think you are confusing the counterpoint used in J.S. Bach's fugues and other classical composition with modes. The modern interest in modes began with the release of Miles Davis' "Kind of Blue" in 1959, in which he had grown tired of the chordal system and wished to approach music from a different direction, based on George Russell's system of 1953. The idea is not that modes are a far more efficient system, or a substitute for knowing basic chordal theory; Miles probably knew more of that than most people around here. But it offered a different way of visualizing the relationship between the melody and the backing, and took the focus away from the ridiculously complex chords of 40s bebop and put it instead on the melody. I agree, modes are perhaps over-emphasized in importance, and it's sad to see a young guitarist thinking they are playing modally simply by shifting their hand position, but used properly, they can make music more interesting.

Well, being 16, I'd say I'm one of the "younger" musicians, and I agree that a lot of people in my generation are distorting their own studies by focusing on one aspect, such as modes, far more than the others. It's all fundamental and all part of the same picture. The thing with modes is that they are another way to expand your palette, because each is unique and special, and they present endless possibilities, as do chords, interval recognition, and even single notes, which have their own distinct ring. The problem is that these people focus on modes almost exclusively and throw chords to the wayside. They've got all this knowledge of modes, but when it comes to chords, they're stumped. And vice versa: there are people who spend all their time working with chords and never touch modes, so while they may have some good chord changes in a piece of music, they don't have the slightest idea how to highlight them with a melody. Even with melodies, though, chords are fundamental. I find that the best-sounding melodies are built around chords, even if the chord itself is never played or arpeggiated; such melodies can still convey the feeling of a chord without it ever actually being played.

Yes, Miles Davis is the first name that comes to mind when we think of modal music, but modes have been around for hundreds of years. I started a very controversial thread a few years back that sparked a lot of discussion about modes. I suggest a system of using modes on a chord-by-chord basis as a tool for thoroughly learning chord harmony. I am not into modal music myself, but I use this chord-by-chord modal thinking, and all my students who have devoted a little time to learning it have completely blossomed into better musicians. Take a look at it and try the exercises.
"I suggest a system of using modes on a chord-by-chord basis as a tool for thoroughly learning chord harmony."

I understand the use of that kind of tool, but I also see the huge potential for confusion. Chord harmony is one thing, but modes are for the most part another. Pairing key harmony and modes seems to be the most typical recipe for confusion on these forums!

Actually not: modes and chord harmony have a lot to do with each other. The more familiar you become with chord scales, the easier chord harmony becomes. Modes are chord scales. When I am playing and improvising, I think of modes by the Roman numeral associated with the chord being played. Instead of Aeolian I think i for a minor key and vi for a major key; instead of Dorian, I think iv for a minor key and ii for a major key. Either way, when a given chord is being played, it's not enough to just know the notes of the chord; you need to know how all the notes of the scale relate to the chord. You don't have to be a genius to train your brain to think this way, but with practice and mental exercises you could be on your way to never getting lost in a solo.

This reminds me of why I hate talking to guitarists, lol. Why does everything with online guitarists have to be either this way or that way? Why not do both, and then some? Why are guitarists so narrow-minded? Do whatever it takes for YOU to be able to express yourself the way you want, and then let others do the same. If you have something against modes, that is just like someone who has something against the blues: keep it to yourself. Do your thing and let others do theirs. Or strive for more diversity and a broader knowledge and understanding as common ground. But don't preach your narrow crap. I'm getting a little tired of that bag from guitarists. Modes came back from being outMODEd quite a while ago, and they've been with us ever since. Like them or not, they are part of the modern musical language and of modern musical diversity. Creativity should be a broad gig, not a narrow one. Let people do their own thing creatively and see what pans out. Don't tell them or lecture them not to learn or not to experiment; let them learn and experiment as much as they want. Maybe if people have more knowledge and broader tastes, there's a chance they will actually create something really interesting and original instead of being just another narrow-minded hack. I really mean no offense, and if you are not a guitarist then I do apologize. I would just like to see more experimentation leading to a better and more diversified music scene, driven by people who take music seriously enough to appreciate a broad spectrum of genres without being extremists, avant-garde, weird, or mired in controversy. Music should be broad but palatable. It should be pleasing to hear, and it shouldn't just PANDER to one particular segment of society's particular fetish all the time. At least mix things up a bit so as not to be a panderer. Music stopped being music at some point, and now it's just fans with sick fetishes and musicians pandering too hard to one particular crowd.

Hey Chim Chim, wtf? Why do you have to be so offensive? This is a place to share knowledge, not vent your frustrations. Next time why don't you try posting something we can learn from, and not a bunch of wacky accusations.

I am sharing my knowledge. This site is all about sharing knowledge.
Sharing knowledge about music, music theory, chords, MODES, scales, keys, thoughts on music, and so on. So saying that you don't like modes, or chords, or arpeggios, or blues scales, or the harmonic minor scale just sounds like a lot of "waaa waaa waaa." I think I'll go search the internet for some really informative sites about scales and modes and chords and then tell the people there how much I dislike scales, chords, and modes.

OK, here is some stuff you can learn from. No. 1: guitarists soon found out that they couldn't get quite a few chords on their instruments, so they cast around for help; however, they avoided keyboards like the plague for various reasons. No. 2: because the keyboard IS almost essential to studying music properly, and the guitar guys dismissed keyboards out of hand, it follows that, not wishing to give up on chords and harmonies (because they loved their guitars), they took the next easiest and cheapest solution to their dilemma and started reading up about guitars. It didn't take long for them to get the message that modes were the answer to all their prayers. So they misguidedly took the trouble to learn as much as they could, alla modes, and, of course, to justify their choice of genre they decided to swear by modal music to the exclusion of all other genres, including chord-based music. But hold on! They then found out that they still had to learn about chords if they wanted to study harmony, so they ended up having to study two different genres of music, which, done properly, is twice as difficult as studying chords alone would have been in the first place, ignoring modes altogether. Despite all the hype about modes, nobody has proved that they are in any way essential to studying ALL the facets of practical music making.

"Instead of Aeolian I think i for a minor key and vi for a major key. Instead of Dorian, I think iv for a minor key and ii for a major key."

As you say, there is no need to think of modes from within a key. They are automatic! In other words, the single key scale is "altered" as the chords change. Thinking of chord tones and extensions is fair enough, but there's no real need to muddy it with a load of mode names (or, even worse, positions). It's over-exertion for more advanced players, and beginners tend to think this is what modes are about: that they are playing modally and hearing modal effects in a key. All that's really happening is either Ionian or Aeolian harmony.

Let's say a progression has two chords only, both found in the C major scale. If you choose to think C Ionian (C major), then the notes of the Dm chord are your 2, 4, 6 counting from C, and the notes of your G are 5, 7, 9. If you say the whole thing has a Dorian feel, as many people probably do, then the Dm is easy because its notes are now 1, 3, 5, but the notes of G become 4, 6, 8. If you can train your mind to switch between Dm and G, thinking D Dorian and G Mixolydian without changing your hand position, then the only thing that changes is your reference point. During Dm your arpeggio notes are 1, 3, 5 and your tension notes are 2, 4, 6; during the G your arpeggio notes are again 1, 3, 5 and your tension notes 2, 4, 6. Although this seems difficult to memorize, and I know a lot of you would rather just pick a scale and fly with it, this type of organizational thinking can really make a big difference in your total knowledge of where you are at all times. The problem is how to begin learning it.
I posted a few exercises in my thread "Modes: why are they so hard?", but I think a good starting point is to learn how to play the same lick in each mode. It's not that difficult to figure out, and the more you do it on paper, the easier it gets to do in your head. You can learn that same riff in D Dorian: degrees 5, 1, 2, 3 would be A, D, E, F. In G Mixolydian, 5, 1, 2, 3 would be D, G, A, B. You can learn it over all the modes, and the real trick is to learn them all within a single hand position, then move to another hand position and learn them again.
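To make the degree arithmetic in these last posts concrete, here is a minimal Python sketch (the helper names are invented for illustration, not from the thread):

```python
# Number the notes of a mode from its root, then read a riff as scale degrees.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def mode_notes(parent: list[str], root: str) -> list[str]:
    i = parent.index(root)
    return parent[i:] + parent[:i]  # rotate the parent scale to start on the root

def riff(degrees: list[int], mode: list[str]) -> list[str]:
    return [mode[(d - 1) % 7] for d in degrees]

d_dorian = mode_notes(C_MAJOR, "D")      # D E F G A B C
g_mixolydian = mode_notes(C_MAJOR, "G")  # G A B C D E F
print(riff([5, 1, 2, 3], d_dorian))      # ['A', 'D', 'E', 'F']
print(riff([5, 1, 2, 3], g_mixolydian))  # ['D', 'G', 'A', 'B']
```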
https://ibreathemusic.com/forums/archive/index.php/t-14824.html?s=1363f3538389f5d1edf375bc0316a16c
If you are looking to learn more about music or get into it, you should definitely look into music theory. Theory isn't the most exciting part of music, but it lays the foundation for understanding and creating your own. Music theory is like the set of rules that dictates how music can be created; if you don't understand the rules, you can't go about creating the best possible thing. Here are some things that everyone should know about music theory.

The Major Scales

The first thing you need to know when it comes to music is the major scale. Almost all music is founded upon a major scale. Most people know this scale as do, re, mi, fah, so, lah, ti, and do. Knowing these scale degrees and how they fit into music is important when it comes to creating your own. If you want to test your knowledge of scales, you should check out some printable PDF worksheets. When it comes to major scales, there is not just one for you to learn; there are in fact 12 unique major scales. The sooner you learn all of them and how they are used in music, the better a writer you will be.

What Key Are You In

Remember how there are 12 unique major scales? For most music, you are only ever going to use one major scale for the whole song. This means you are going to have access to seven notes that you can use in any fashion. If you are listening to a song, how can you go about finding the key? The easiest way is to find the tonal center of the song, or the tonic. The tonic is the note that the song always resolves to; if you find the music always coming back to one note, that is most likely the key. What about determining a key for you to write in? That comes down to the instruments you are playing with, and the singer as well. The most restrictive instrument is generally the voice: people can only sing within certain ranges, so your key should always sit where the singer's voice is comfortable. If there are no vocals in your song, find the most restricted instrument and base the key around that. This is important when it comes to composing music.
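One crude way to automate the "find the tonal center" idea is to score each of the 12 major keys by how many of a melody's notes it contains. A rough Python sketch (mine, not the author's):

```python
# Score each major key by how many of the melody's pitch classes it contains.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]

def major_scale(root: str) -> set[str]:
    i = NOTES.index(root)
    return {NOTES[(i + s) % 12] for s in MAJOR_STEPS}

def candidate_keys(melody: list[str]) -> list[str]:
    scores = {root: sum(note in major_scale(root) for note in melody) for root in NOTES}
    best = max(scores.values())
    return [root for root, score in scores.items() if score == best]

print(candidate_keys(["G", "A", "B", "D", "F#", "G"]))
# ['D', 'G'] -- neighboring keys share notes, which is exactly why you still
# listen for the note the song resolves to (the tonic) to break the tie.
```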
Chords

While a melody or song might be full of different notes, for the most part only a few chords are used throughout a song. Understanding how these chords are made and how they affect the song is another important thing you need to know about theory. This once again relates back to the major scale, as every major scale has chords that work well within it. Take the time to learn how to build these chords from the notes within the scale, and also learn how they function. The tonic chord, for example, is the center of the song, just like the tonic note. A dominant chord is generally an unstable chord that leads right back to the tonic. The more you work with these chords and understand their functions, the better a composer you will be.

You Can Break Music Theory Rules

When it comes to making music, music theory isn't a set of rules that has to be followed at every point in the journey. In fact, if every song followed every rule, music would be quite boring. With music, however, you have to know the rules before you can go about breaking them. Breaking rules without knowing them will result in dissonant noises that are unpleasant to the ears. Learning where you can bend and break the rules will result in interesting passages that surprise your listener and draw them in. How often you break the rules within your song also matters: if you constantly break music theory rules, your song will struggle to be grounded in anything, while finding the right moments will make those parts of your music that much better. These are all important things that everyone should know about music theory. Once again, it can be a boring thing to learn, but once you know the rules, your knowledge of music will increase exponentially. You will begin to think outside the box and apply techniques you never thought you could. If you plan on getting into songwriting, you definitely have to learn about music theory.
https://www.academicgates.com/blog/4-things-that-everyone-should-know-about-music-theory/144/view
Brazilian guitar studies combines the discipline of classical guitar technique with the freedom of jazz improvisation, through the sensuous melodies, exciting rhythms, and lush harmonies of the Brazilian repertoire. Students who have little or no experience playing the guitar are introduced to the basic chords, barre chords, Brazilian drumming patterns, classical guitar technique, note reading, bossa nova accompaniment patterns, and beginning songs such as Manhã de carnaval, Só danço samba, and music of the students' own choosing. Students who have mastered the basic chords and are familiar with barre chords are introduced to primary jazz chords, along with samba, bossa, and marcha patterns. Scales and arpeggios for improvisation, as well as note reading, are covered and practiced in the dynamic ways permitted by group classes, and students choose their own repertoire from the Brazilian songbook. Students who have mastered the primary jazz chords learn more advanced chords, inversions, and beginning choro bass lines. Accompaniment patterns for choro, baião, maxixe, and tango brasileiro are learned and performed, along with two-octave jazz arpeggios for improvisation with a focus on right-hand fingerstyle technique, melodic phrasing, advanced techniques, and choro and baião melodies.
http://www.balancedguitar.com/brazilian-guitar-classes.html
3 Ways Music Theory Makes You A Better Songwriter

You may be a good songwriter. But imagine if you knew some basic music theory. People say it can give your songwriting a serious boost. As a songwriter myself, I was curious, so I did some digging to see what music theory could actually do for me. Here are the three most compelling ways music theory can make you and me better songwriters.

What Is Music Theory?

Music theory is basically the study of musical elements like rhythm, melody, harmony, notation, and many other things. For example, keys tell you what notes to play, those notes make up chords, and the chords can guide you in writing a melody and harmony. We won't get too technical here; at the end of this post, there are resources for learning music theory on your own. Right now, we'll cover how theory can help you write better songs.

#1: Music Theory Can Improve Your Chord Progressions

It's easy to write a boring chord progression. You hear it all the time in pop music: the same four chords, just in different keys and tempos. But if you know about the Circle of Fifths and chord building (two basic music theory principles), you have the knowledge to create more interesting chord progressions. Knowing which chords work within the key you're in will help you tremendously. For example, as you sing your melody, you'll be able to come up with a more creative chord progression quicker. Once you get good at writing chord progressions, you might even start hearing them in your head as you sing the melody.

#2: Music Theory Can Help You Write Stronger Melodies

Melodies are one of the most important aspects of a great song. I love crafting a melody and doing unexpected things with it. I usually sing until I find my melody; it seems like the most natural way to do it. But sometimes I get stuck. In those moments, it helps to know the different scales, so I can pinpoint which notes to hit and when. A scale is just a sequence of notes that sound good together, and there are a bunch of them. In each key, these are the different types of scales:
- Chromatic / dodecatonic (12 notes per octave)
- Octatonic (8 notes per octave)
- Heptatonic (7 notes per octave)
- Hexatonic (6 notes per octave)
- Pentatonic (5 notes per octave)
- Tetratonic (4 notes)
- Tritonic (3 notes)
- Ditonic (2 notes)
- Monotonic (one note, obviously)
I'm not trying to overwhelm you; I'm trying to show you how you can expand your horizons. If you know these scales, the number of melodies you could come up with is countless, and it becomes easier to get unstuck when searching for a melody. For example, most of us naturally sing a melody made up of notes from the first chord of our progression. But if you know your scales, you can write a melody using the notes in the other chords of your progression, and that would be interesting.
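To make one of these scale types concrete, here is a minimal Python sketch (my own, not from the post) that derives a major pentatonic, one of the five-note scales listed above, by keeping degrees 1, 2, 3, 5, and 6 of the major scale:

```python
# Derive a major pentatonic scale from its parent major scale.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # half steps from the root

def major_pentatonic(root: str) -> list[str]:
    i = NOTES.index(root)
    major = [NOTES[(i + s) % 12] for s in MAJOR_STEPS]
    return [major[d - 1] for d in (1, 2, 3, 5, 6)]  # drop the 4th and 7th degrees

print(major_pentatonic("C"))  # ['C', 'D', 'E', 'G', 'A']
print(major_pentatonic("G"))  # ['G', 'A', 'B', 'D', 'E']
```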
It helps you get ideas across that would otherwise be very difficult to express. If you really want to up your co-songwriting game (and impress other songwriters), learn some basic music theory.

Where Can You Learn Music Theory?

This post is more about why you should learn basic theory, not so much how or where to learn it. But to get you started, I would recommend heading over to YouTube for some basic music theory videos. And if you don't already know the chromatic scale, you can learn it here. Once you get that down, you can take a look at the other scales. Whatever you do, start learning the basics of music theory. Your songwriting will thank you.

Caleb J. Murphy is a singer-songwriter and music producer based in Austin, Tx., and the founder of Musician With A Day Job, a blog that helps part-time musicians succeed.
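As a postscript to the scale list in #2 above: if it helps to see the idea as code, here is a minimal Python sketch (my own illustration, not part of the article) that spells a few of those scale types from their standard semitone step patterns. The function and variable names are just assumptions for the demo.

```python
# Spell scales of different sizes from semitone step patterns.
# Illustrative sketch; the step patterns are the standard textbook ones.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

SCALE_STEPS = {
    "major (heptatonic)":      [2, 2, 1, 2, 2, 2, 1],
    "major pentatonic":        [2, 2, 3, 2, 3],
    "minor pentatonic":        [3, 2, 2, 3, 2],
    "whole tone (hexatonic)":  [2, 2, 2, 2, 2, 2],
    "chromatic (dodecatonic)": [1] * 12,
}

def spell(root, steps):
    """Walk the step pattern from the root, wrapping around the octave."""
    idx = NOTES.index(root)
    out = [root]
    for step in steps[:-1]:          # the last step just returns to the octave
        idx = (idx + step) % 12
        out.append(NOTES[idx])
    return out

if __name__ == "__main__":
    for name, steps in SCALE_STEPS.items():
        print(f"{name:>24}: {' '.join(spell('C', steps))}")
```

Running it prints, for example, the C major pentatonic as C D E G A, the five-note scale the article mentions.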
https://www.audio-issues.com/audio-production/music-theory-songwriting/
Have you ever thrown your hands up in frustration trying to understand music theory? Have you ever found yourself lost and panicking in a solo as you search for the right scale or chord to play? Many frustrated musicians run into this wall every time they try to take a solo. From the outside, improvising looks easy. You just pick up an instrument, call a tune, and play the music you're hearing in your head… However, the moment you try to create a solo yourself or improvise in a difficult key, you quickly realize it's a little more complicated. So you look in textbooks, you take lessons, and you sign up for classes. Before you know it, your head is overflowing with music theory information, but for some annoying reason it's not coming out in your solos. So let's stop and think about all of this in more practical terms… How exactly do you turn that music theory in your mind into music on your instrument?

Learning practical music theory

There are two sides to music theory. On one side is the music theory you learn about in books and school: the construction and building blocks of music, the theory behind scales, chords and tunes, and the flood of musical terminology. And then there's the theory that you actually use when you're performing: the tools you have for navigating chords and progressions, the artistic tools you have for sharing a musical message with the listener. Music theory information is everywhere. The real question that matters when you take a solo is: What can you do with this theory information? How can you use this information to make music? The hurdle for any improviser is to turn all of that technical information that you learn about into music that people want to listen to.

Remember, information isn't knowledge

There are some specific elements that you need to have down to navigate a musical situation… but not as many as you might think. You don't need to know hundreds of scales to improvise. You don't need to memorize an entire glossary of musical terminology and you don't need to spend years in school. All of that information can be useful, but remember, the goal of a soloist isn't to memorize the terms and rules of music, it's to perform them. You memorize the factual information of music theory in school and private lessons – the names of scales, Neapolitan chords, secondary dominants, voice leading rules, deceptive cadences, parallel minor… However, music theory knowledge consists of the skills that you've learned and ingrained through practice. It's the stuff that you can apply to your solos in a split-second when you're put on the spot. So take a look at your own playing. Is music theory something that only exists in your mind, or is it knowledge that you've transferred to your instrument? Is it a tool for musical expression or a barrier to your creativity? Here are 3 steps every improviser needs to take to turn music theory into music:

Step 1: Avoid this common mistake

The biggest mistake many players make with music theory is the same cardinal sin committed by beginners in any field – skipping over the fundamentals. "Yeah I know my scales, I get what a major chord is… let's get to the advanced stuff!" But it's not that easy. You can easily understand the basics of a skill in your mind, however playing those same devices on your instrument is an entirely different story. This is where you'll make your biggest gains in making music out of music theory.
You must transform the theory information in your head into real sound that you can perform on your instrument, and the first step there is with scales. Scales are essentially a tool for developing a mental and technical understanding within all 12 keys. Think of it like systematically downloading every key into your technique, your fingers, your mind, and your ears. While scales aren't the secret to creating a great solo, they are the first step on the path to playing the music you're hearing in your head. Here's how to practice scales:

- Start by ingraining your Major scales, minor scales, and the modes of the major scale (ascending and descending)
- Practice each scale in different intervals (in 3rds, 4ths, 5ths, etc.)
- Practice each scale in triads (1, 3, 5 > 2, 4, 6, etc.)
- Practice each scale in 7th arpeggios (1, 3, 5, 7)
- And do all of this in 4 directions

Practice each with a metronome, starting slowly and gradually increasing the tempo. Your goal is to have freedom in all 12 keys: to access any key without having to think about it, rather hearing it and having the technique in your fingers. Every musician knows a couple of scales, but few have spent the time to ingrain them on their instrument. And it's these musicians that can freely use this technique to create music in the moment.

Step 2: Focus on the Four Key Chords

When you begin, music theory is just that – a theory. Definitions, diagrams, and musical examples written on a piece of paper. But you don't want to just read about music, you want to perform it! This means focusing on the sound of music. As an improviser, the sounds that you're going to hear as you create solos are chords, and despite what you may believe, there aren't hundreds of chord types that you need to learn – there are four:

- Major 7
- minor 7
- Dominant (V7)
- Half-diminished 7

That's right, thousands of tunes, but only four key chord types. So learn them well. Everything else (b9's, #9's, altered chords, #11's, tritone substitutions, etc.) is icing on the cake, variations of the same basic chord types. The best tool you have for hearing and seeing these chords is the piano. Start by spending some time at the keyboard playing each of these chords in every key (C Major 7, C minor 7, C7, and C Ø7…). And what if you don't have piano chops? Not to worry, check out this article for some easy voicings. Another great exercise is to play the diatonic chords in every key on piano to visually see the construction and hear the sonority of each chord.

Step 3: Connect the dots

After you've ingrained your scales and you've learned the four key chords in every key, it's time to focus on the relationship between these four sounds. Every standard tune that you'll encounter is composed of chord progressions, and many of these tunes utilize the same common chord relationships. To get started:

- Learn these 6 common chord relationships
- Isolate two chords at a time (e.g. V7 to I or I to VI7), ingraining the sound of each chord movement
- Visualize common chord progressions and the chord changes to standard tunes (blues, rhythm changes, All the Things You Are, etc.)
- Spend some extra time focusing on the unique sound and function of the Half Diminished chord

Understanding the theory behind these chord relationships is only the beginning. Get to the next level by focusing on the sound of these moving chords while visualizing them in your mind.
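If it helps to see Step 1's practice patterns written out, here is a small Python sketch of my own (the lesson itself contains no code) that generates a major scale in diatonic 3rds and in 7th-chord arpeggios. The helper names and the flat-based note spelling are simplifying assumptions.

```python
# Generate scale-practice patterns from Step 1: a major scale played
# in diatonic 3rds and in 7th-chord arpeggios (1-3-5-7 on each degree).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def major_scale(root):
    idx = NOTES.index(root)
    scale = []
    for step in [0] + MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

def in_thirds(scale):
    # 1-3, 2-4, 3-5, ... wrapping around the octave
    return [(scale[i], scale[(i + 2) % 7]) for i in range(7)]

def seventh_arpeggios(scale):
    # 1-3-5-7 built on every scale degree
    return [tuple(scale[(i + k) % 7] for k in (0, 2, 4, 6)) for i in range(7)]

scale = major_scale("F")
print("F major:", scale)                          # F G A Bb C D E
print("in 3rds:", in_thirds(scale))
print("7th arpeggios:", seventh_arpeggios(scale))
```

Looping this over all 12 roots is one way to "download every key", as the article puts it, before taking the patterns to your instrument.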
Let's review… Schools, instructors, textbooks and classes will teach you the information of music theory, but it's your job to transfer this knowledge to your instrument. Your goal is to have the sound of theory in your ear and the technique of theory ingrained on your instrument. To do this you'll need to focus on these three areas in the practice room:

- Scales – master them on your instrument in every key and in all variations!
- 4 Key Chords – Major 7, minor 7, Dominant (V7), and half-diminished Ø7. Play each of these chords in every key on the piano to hear the unique sound of each structure.
- Chord Progressions – Study the relationships between the four key chords and work on them in all 12 keys. Use the piano as a starting point and visualize the most common progressions in your mind.

In the heat of performance you don't want to be stuck thinking about how to play a scale or the notes of a chord; you want to focus on the sound of music in your own creative way. These three steps will provide a great foundation for turning the facts and figures of music theory into music, but to start creating the music you're hearing in your head you'll need three more essential tools… And that's coming in the next installments of The 5 Skills You Won't Learn in School!
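As a companion to the review above, here is a brief Python sketch (my illustration, not from the article) that spells the four key chord types in any key from their standard semitone formulas; the dictionary keys are hypothetical labels.

```python
# Spell the four key chord types in any key.
# Interval formulas are in semitones from the root.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

CHORD_FORMULAS = {
    "maj7": [0, 4, 7, 11],   # Major 7
    "m7":   [0, 3, 7, 10],   # minor 7
    "7":    [0, 4, 7, 10],   # Dominant (V7)
    "m7b5": [0, 3, 6, 10],   # Half-diminished 7
}

def spell_chord(root, quality):
    base = NOTES.index(root)
    return [NOTES[(base + iv) % 12] for iv in CHORD_FORMULAS[quality]]

for quality in CHORD_FORMULAS:
    print(f"C{quality}: {spell_chord('C', quality)}")
```

Playing each printed set at the keyboard, in every key, is exactly the kind of drill Step 2 recommends.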
https://www.jazzadvice.com/lessons/how-to-turn-music-theory-into-music/
We are huge believers in the power of musicianship. So, we decided to open a supplementary course that is designed especially with the modern producer in mind. This course takes a fresh approach towards teaching composition with techniques from your favorite styles. Learn to write better chords, melodies, bass lines and beats, all while developing songwriting skills for the studio. The concepts covered in this course are applicable to any genre or style, and will prepare you for collaboration with other producers and musicians. Harmony for Electronic Composition is designed for students from beginner to advanced levels.

ABOUT OUR PROGRAM

LESSON OVERVIEW

LESSON I
This three-part course is aimed at the electronic music producer who wishes to acquire an understanding of how to write more beautifully and harmoniously. Learn how to use scales as a guide to choosing notes and chords that work together. Explore inversions and voicings of chords that give your music greater depth. Study how to compose using diatonic chords. Deconstruct a song to discover how producers today use music theory, and how you can too.

LESSON II
Building from the first lesson, the second class expands on triad harmony by creating seventh chords. Explore more than major and minor scales as we get into the study of modes, and how to use more exotic sounding combinations of notes. Understand the similarity of certain chords when we look at relative keys. Jam on a MIDI controller using some of the new topics like pentatonic scales.

LESSON III
In the final lesson, we look at combining chords in clever and unexpected ways, and adding extra notes to the chords to make them sound richer and more colorful. We'll be looking at add9, add11, sus2, sus4, major and minor 9ths, minor 11, major 7#11, and others. These last tips can truly elevate your musical works to sound like the composers and producers you admire. From borrowing jazz chords to exploring chromaticism, we're putting everything we've learned together in the final class demo.

DETAILS
DURATION: 3 Weeks
ADMISSION: $350
DATES: April 28, 2019
SCHEDULING: Sundays 11AM-1PM

FAQ
This course is for music producers who want to improve their music theory knowledge by creating better chords, melodies, basslines and beats. Fits beginners and advanced producers alike.
Classes meet Sundays, 11AM-1PM.
Yes absolutely. You can book your class visit HERE. Call: 323-999-7815 or Email: [email protected]
You only need a laptop. We can provide the software, headphones, speakers, keyboard, and accessories while you are in school. It is recommended to have a MIDI keyboard at home. If you are not sure which MIDI keyboard to get, talk to us first. As a Beat Lab student, you will also get exclusive discounts on MIDI keyboards and plugin software.
https://www.beatlabacademy.com/harmony-for-electronic-composition/
This series of lessons aims to teach practical music theory to guitarists in a guitar-based format. Part 1 explains the idea of a key, chords in a single key, and the major scale.

Scales – what are they?

Scales are the basic building blocks in music. Keys are defined by the notes in the various scales. We derive both our melodies and our chords from the scales. The daily practice of scales will build your technique and allow you to express yourself on the guitar with increasing ease. There are two main types of scale – major and minor. Both are 7-note scales, and we will be learning the major scale to start off with. The sequence of 7 notes repeats, so once you have ascended through the scale from 1 to 7, the 8th note (or octave) is the 1st note repeated. The major scale is the basis for most western music including pop, classical and folk. Chances are most of the catchy melodies you know use this scale as their basis, so it's the best place to start.

There are five places you can play the major scale on the guitar fretboard, and they fit together, flowing one into the other. We will learn them sequentially as the C shape, A shape, G shape, E shape and D shape. When you have learned all 5 of these patterns you will be able to play over the entire length of the fingerboard in any key. This will allow you to play a huge range of material and also to understand the organization of music on the guitar fretboard. The best way to learn the 5 patterns is one at a time, noting along the way how they fit together. We'll start with the first shape, the C shape, in this lesson. When playing the scale, it's important to start on what is called the root of the scale. The root is the note with the same name as the key. For example, in the key of A, the root note is the note A; in the key of Bb, the root is the note Bb; and so on. We will first learn the pattern as an open shape, in the key of C. Fingerings are shown as numbers inside the circles. Root notes are shown as black circles. Remember to play the open strings, which are written as zeros in a circle.

A chord progression is a series of chords one after the other. A song usually consists of a chord progression and a melody (or melodies) over the top. To make a chord progression, we need a selection of chords to choose from. In western music we use the notes in the major scale to generate a selection of chords. When we use the notes from one scale as the basis of our music, then we are playing in a 'key'. The name of the key is the same as the name of the major scale we are using. So for example, if we use the notes from the C major scale to form our chords and melodies, then we are in the key of C. When we form triads from each successive scale tone, we generate a series of chords. The first six are majors and minors. The last chord is a diminished chord. Whilst diminished chords have their place, we will be ignoring them for now. Also, you will see a column displaying the chords as roman numerals – this will be discussed further in the next part. These chords will be used in the study. This study, "Psychedelic folk rock", is in the key of C, and uses the first five notes of the C major scale in open position. Notice how we are using some chords as punctuation – C major and F maj7. Remember to use the scale fingerings given earlier in this lesson and pick every note.
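To illustrate the lesson's point about forming triads from each successive scale tone, here is a short Python sketch (mine, purely illustrative, not part of the lesson) that stacks scale thirds on every degree of C major and names each triad's quality from its interval sizes.

```python
# Build a triad on each degree of the C major scale by stacking
# scale thirds, then name its quality from the semitone intervals.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def triad(degree):  # degree: 0..6
    return [C_MAJOR[(degree + k) % 7] for k in (0, 2, 4)]

def quality(chord):
    root, third, fifth = (NOTES.index(n) for n in chord)
    lower = (third - root) % 12   # root -> third, in semitones
    upper = (fifth - third) % 12  # third -> fifth, in semitones
    return {(4, 3): "major", (3, 4): "minor", (3, 3): "diminished"}[(lower, upper)]

for d in range(7):
    c = triad(d)
    print(f"{d + 1}: {'-'.join(c)} -> {quality(c)}")
```

The output shows exactly what the lesson states: six majors and minors, with a single diminished triad on the 7th degree.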
http://www.guitarlessonsbristol.org.uk/articles/practical-theory-for-guitarists-understanding-guitar-part-one
This blog post will break down the pentatonic scale with simple language and audio examples, so it's easy to understand. There's no need for a music theory background to stick around and learn! You'll discover:

- The difference between the major and minor pentatonic scales
- How to use pentatonic scales on the piano
- How to use pentatonic scales on the guitar
- How to spice up pentatonic scales with modes

By the end of this article, you'll be able to take your songwriting to the next level. Let's jump in! The topic of music theory is vast and complicated. But it doesn't have to be complicated, and I've created a resource that goes through everything you need to know to be a competent musician, songwriter, and producer. I would highly recommend checking out that article as a primer to the rest of this article and other theory posts I have on this site. It's titled "The Ultimate Guide on Music Theory for Musicians Who Dislike Theory."

What is the Pentatonic Scale?

A pentatonic scale is a five-note scale. The word "pente" is a Greek term that means five. Pentatonic scales are derived from the major scale. They are often used in folk melodies, blues-rock guitar solos, rock music, pop, jazz, heavy metal, and country music. It may seem like the pentatonic scale is used in all genres of music, and you'd be right to think that! The pentatonic scale is easy to sing and easy to make great melodies with. It's tough to make it sound bad over most chord progressions. This makes it an excellent scale choice for a variety of musical arrangements.

How Old is the Pentatonic Scale?

The pentatonic scale is ancient. It is often traced back to the Ancient Greek philosopher Pythagoras and his "Music of the Spheres," the idea that the movement of celestial bodies – the sun, moon, and planets – is itself a kind of music. Sounds crazy, right? It's almost like a rock opera. However, don't worry: learning and implementing the pentatonic scale in your music is much easier to understand than ancient philosophy.

What Makes the Pentatonic Scale Sound So Good?

The major and minor pentatonic scales leave out the fourth and seventh scale degrees (relative to the major key), the two notes that form a tritone with each other. Tritones create tension within a melody. This absence of tension in pentatonic scales creates an "easy on the ears" sound.

Why Should You Learn the Pentatonic Scale?

The pentatonic scale is easier to play than the major and minor scales and is easier to improvise with. The uncomplicated nature of the pentatonic scale makes it a great starting scale for beginner songwriters. It's easy to learn and gives you a solid foundation for writing great melodies.

Why is the Pentatonic Scale Great for Songwriters?

There are many reasons why this scale is perfect for songwriters, but it boils down to one thing… The pentatonic scale makes sense musically and emotionally! The pentatonic scale is difficult to use incorrectly. This means that it does not limit creativity, keeps the music flowing, and keeps the process fun.

Major Pentatonic Scale

The major pentatonic scale is built from the first, second, third, fifth, and sixth scale degrees (notes) of the major scale. Since pentatonic scales are easy to play, they are often the first type of scale a musician will learn. This is especially true for guitar players. The C major pentatonic scale is built using C, D, E, G, and A, omitting the F and B notes of the C major scale.
In a C major pentatonic, the distance from any note to its next highest or lowest pitch is either a whole step (two half-steps) or one and a half steps (three half-steps).

Minor Pentatonic Scale

The best part about pentatonic scales is that the minor pentatonic scale and major pentatonic scale share the same notes. The only difference is the note you start on. What makes the scale sound minor is the sequence in which the intervals (the distances between the notes) are played. The minor pentatonic scale intervals are: Root – flat 3rd (1 1/2 steps) – 4th (whole step) – 5th (whole step) – flat 7th (1 1/2 steps) – octave (whole step).

We'll use the G major scale as an example. Each major scale has a relative minor scale that also shares the same notes. The relative minor scale always starts on the sixth scale degree. For the G major scale, this would be the note E. The notes in E minor pentatonic would be: E, G, A, B, D. These notes are also found in the G major scale and are the same notes as in the G major pentatonic scale, just in a different order.

Playing Pentatonic on Piano

It's easiest to see the intervals (the distance between notes) on the piano. Since pentatonic scales comprise whole steps and 1 1/2 steps, this can make playing different key signatures a breeze on the piano. Also, once you know the scale in one octave, you can transpose it to higher and lower octaves with ease, as it will be the same. The downside of playing pentatonic scales on the piano is that you must first know how to play the major scale. Because each scale comes with varying numbers of sharp notes (black keys on a piano), remembering each scale pattern can become a daunting task. However, there isn't a better instrument to learn on when it comes to music theory and learning the notes in the scale.

Playing Pentatonic on Guitar

The beauty of playing the pentatonic scale on guitar is that the pentatonic scale shapes don't change no matter what key you play in. This comes with huge benefits but can also have its downside. The benefit is you can learn one of the pentatonic scale shapes and jump into pretty much any jam session and make things sound good. The downside is it's easy for guitarists to focus on shapes alone and neglect the music theory and notes behind what they are playing. This can hold you back from getting more sophisticated with your note selection. However, the ease of learning and playing pentatonic scales on the guitar is why this may be the first (and in some instances, only) scale some beginning guitar players ever learn.

The CAGED Pentatonic Patterns

Now let's learn some of these pentatonic scale shapes so you can start practicing and applying them to your music today. The CAGED system is an easy way to unlock the fretboard with five easy patterns or chord shapes. These patterns are based on the open guitar chords C, A, G, E, and D. Not only can these chord shapes be moved throughout the fretboard, but their associated major scales can be "copied and pasted" as well. Since the pentatonic scale is just the major scale with the 4th and 7th removed, we can apply pentatonic scales to the CAGED system. Out of all of the pentatonic scale shapes, the G form is the most popular. This is because the root note (the first note of the scale) starts on the low E string of the guitar. The best part is that alternating between the major and minor pentatonic scale from the G pentatonic form couldn't be more straightforward.
Simply drop three frets from the root note and play the same shape, and you will have the minor pentatonic in the E minor form. However, as you progress, utilizing different CAGED forms on the fretboard will open you up to tones you can't access with just the G form. So learning the other patterns and incorporating them into your playing is crucial.

Pattern Variation

The CAGED system is a great way to understand how to play across your fretboard. However, to take this concept a step further, you should know how to transition between the positions smoothly. To accomplish this, we are going to take a look at diagonal shapes. Diagonal shapes allow you to smoothly transition through different octaves on the guitar and utilize all of the CAGED forms in your playing. When you utilize diagonal shapes, you'll start turning heads for sure! (Diagrams: Diagonal Major, Diagonal Minor.)

Singing Pentatonic Melodies

Using the pentatonic scale to write melodies is a match made in heaven. If you follow my guide on how to write melodies and get to the singing gibberish section, sticking with the pentatonic notes will give you lots of melodic variations to choose from. It also ensures that your melody is easy on the ears and connects with a broad audience.

Adding Variety to the Pentatonic Scale

As you become more confident writing music with the pentatonic scale, you can start experimenting with other related notes outside the scale. This can add more color, variation, and intrigue to your song.

Blues Scale

The blues scale is a minor pentatonic scale that adds in a "flat 5." This technically turns what you are playing into a hexatonic scale. Still, we'll save that for another lesson in music theory.

Modes

Modes are also derived from the major scale, and each has its own distinct flavor. Modes get their color from their intervals (the spaces between notes). You can use these colors to drastically alter the mood of your pentatonic scale. To learn more about modes click here. Modes can be either major or minor, so choosing which notes to add to your pentatonic scale will depend on this.

Major Modes

Adding Ionian
The Ionian mode is the major scale. So if you add back the 4th and the 7th scale degrees, you'll have the Ionian mode, or major scale, flavor.

Adding Lydian
If you add a sharp 4th to your major pentatonic scale, you will create a Lydian flavor. For a full article on the dark and mysterious power of Lydian, please refer to my article on the Lydian Mode here.

Adding Mixolydian
If you add a minor 7th to your major pentatonic scale, you will create a Mixolydian flavor. For a full article on the dark and mysterious power of Mixolydian, please refer to my article on the Mixolydian Mode here.

Minor Modes

Adding Dorian
If you add a major 6th to the minor pentatonic scale, you will create a Dorian flavor. For a full article on the dark and mysterious power of Dorian, please refer to my article on the Dorian Mode here.

Adding Phrygian
If you add a flat 2nd to the minor pentatonic scale, you will create a Phrygian flavor (I call dibs on this band name). For a full article on the dark and mysterious power of Phrygian, please refer to my article on the Phrygian Mode here.

Adding Aeolian
If you add a 2nd and a flat 6th to the minor pentatonic scale, you will create an Aeolian flavor. For a more in-depth breakdown of the Aeolian mode and the other minor scales, please reference my article "Aeolian Scale vs.
The Minor Scales | How to Leverage These Sad and Beautiful Sounds."

Best Practices for Playing the Pentatonic Scale

The pentatonic scale is popular because it just sounds good. However, it is still possible to choose "bad" notes. Here are some general guidelines to follow when using these scales:

- If the underlying chord is major, then play the major pentatonic scale.
- If the underlying chord is minor, then play the minor pentatonic scale.
- If adding modes, make sure the chords underneath contain the modal note to prevent clashing.

What to Do Next?

The pentatonic scale is an excellent starting point for learning music theory because it covers a lot of ground with only five notes. If you master this scale, you will feel more confident writing great melodies and improvising in all genres of music. Using this knowledge coupled with my article on how to write hit songs will have you impressing your friends and neighbors in no time. So what are you waiting for? Go write a song and change the world!
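To tie the major/minor pentatonic relationship together, here is a small Python sketch (my own illustration, not from the post) that drops the 4th and 7th from a major scale and then rotates the result to start on the relative minor, exactly as described above.

```python
# Derive major and minor pentatonics the way the article describes:
# drop the 4th and 7th degrees of a major scale, then start the same
# notes from the relative minor (the 6th degree).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def major_scale(root):
    idx = NOTES.index(root)
    out = []
    for step in [0] + MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        out.append(NOTES[idx])
    return out

def major_pentatonic(root):
    scale = major_scale(root)
    return [scale[i] for i in (0, 1, 2, 4, 5)]   # drop the 4th and 7th

def relative_minor_pentatonic(root):
    scale = major_scale(root)
    pent = major_pentatonic(root)
    start = pent.index(scale[5])                  # relative minor = 6th degree
    return pent[start:] + pent[:start]            # same notes, new start

print(major_pentatonic("G"))            # ['G', 'A', 'B', 'D', 'E']
print(relative_minor_pentatonic("G"))   # ['E', 'G', 'A', 'B', 'D']
```

The printed output matches the article's G major / E minor pentatonic example: the same five notes, just in a different order.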
https://www.songproductionpros.com/pentatonic-scale/
So, how do you improvise on piano? Musical improvisation is a very desirable skill to have as either a professional pianist, a casual student, or even just someone playing for fun. It allows you to put your own spin on written or learned pieces, fit seamlessly into a musical composition without too much planning, and better express yourself - not to mention, it's a lot of fun! The first thing we'll address in this article is going to be the cornerstone of improvisation - and music in general: music theory. We'll give a breakdown on why it's so important, and a handful of ways to start learning it.

Music Theory

Music theory, put simply, is the study of not any singular piece of music or performance, but the fundamental materials from which they are created - studying music theory gives you the "building blocks" of music. We won't go too much into the importance of it here, since it's been fairly thoroughly covered in the piece "8 Benefits of Learning Music Theory", which will give you a great grasp on why exactly learning music theory is worth your time, and how it can help improve your musical chops.

Expand Your Repertoire

In addition to broadening your understanding of the fundamentals of music theory, another thing that can help kickstart your improvisation skills is a wide breadth of musical knowledge - specifically a variety of pieces that you have practiced and played. This sounds a bit like "improvisation skills will come naturally with time", but you can make a conscious effort to tackle pieces outside of your comfort zone, or learn pieces with interesting melodies that break convention or catch your eye (or in our case, your ear). This isn't necessarily limited to classical pieces, either - there are a lot of dynamic pieces being written today that you can pull inspiration from - there's no need to limit your scope, and that's something to encourage when you want to tread off the beaten path. Coupling muscle memory with practice and a solid grasp of theory gives you a "feel" for what good improvisation sounds like, a skill that a lot of competent pianists aren't even conscious of having. Expanding your stockpile of pieces is invaluable - you can pick and choose riffs that you like and use them in improvisation, putting your own spin on them and getting a feel for the tone and tempo that will eventually become second nature. If you want to streamline the process of expanding your library, you'll definitely want to check out our article How to Practice Piano Efficiently – Insanely Actionable Tips and Advice to kickstart your efforts.

Train Your Hands To Play Independently

Part of improvising is being able to mentally separate melody and chords so you can riff on music without derailing the backbone of a song or piece. For most, improvising melody is often easier than chords, particularly if they don't have a strong foundation in music theory. Chords aren't as imposing as they may seem at first, and improvising doesn't mean you're stuck in the key of the piece you're playing (although that may be a safer place to start!) Practicing independent hand movement is a great way to keep your chords and melody from derailing during improvisation - if you're looking for a good way to practice, check out this article for some solid tips on fostering hand independence. Once you get a good working technique, improvisation becomes a lot less daunting from a mechanical point of view.
Learn Some Common Chord Progressions

Although it may seem a bit like cheating, certain chord progressions are used heavily in music for a pretty simple reason - they sound good! Chord progressions are simply sequences of chords that develop musical themes in a harmonic way, and taking inspiration from popular ones is a good way to recognize what works and what doesn't. That of course doesn't mean you can't experiment - but having a good foundation is a nice place to start. You can find a lot of guides on YouTube and around the internet, but I am admittedly partial to this one here. Mastering a variety of different chord patterns gives you a repertoire to pull from - and if you're well-practiced or take to it particularly well, it usually becomes second nature rather quickly. You'll be able to riff on pieces in the same key without too much trouble - what a lot of new pianists tend to overlook is that a good bulk of improvisation simply comes from practice beforehand.

Study Musical Scales

Studying scales may not be the most exciting part of music, but it is definitely one of the most important. Make sure to set aside at least a little of your practice for the fundamentals! If you have a good grasp of scales, it enables you to take a melody past the sheet music without the risk of inadvertently breaking out of the piece's key, which can sound dissonant. Once you have a comfortable muscle memory of the scales, you hardly even need to make the conscious effort - practice will make it easier for you to focus on the more exciting aspects of music.

Shift The Key Around

This method is a bit more advanced, but don't fret just yet! If you are a pianist with a strong knowledge of scales and key changes, then transposing a piece on-the-fly is well within your wheelhouse! If not, it's something that you can easily practice. A tone change is mechanically simple enough, and can even be as straightforward as transposing melody and chords from one octave to another - and it sounds incredibly skillful to a listener or audience. Changing up the key is a great way to layer complexity on a piece while improvising, and it can be combined with the other techniques discussed in this article to really make a musical riff stand out.

Slow It Down - Or, Speed It Up!

This trick is much easier if you're playing alone or on a solo gig. Slowing down a piece on the fly can add a ponderous, emotional weight to the performance, even if you don't make any other compositional changes to the piece. Conversely, ramping up the tempo, either gradually or through a rapid jump, can add a sense of excitement and vitality to a performance. A stylistic change like the introduction of staccato notes on the melody can keep the audience interested and on their toes. Admittedly, it's also a lot of fun! If you're playing with a partner or in a group, then it's a bit trickier to pull tempo changes on the fly - but that doesn't mean it's not possible! Having a signal (or even several, to represent different tempos) that you discuss beforehand with your group can let you all improvise on the same page without stepping on one another's toes (hopefully in the musical sense, and not a literal one!).

What To Take Away

The biggest takeaway point that this article hopes to illustrate is this: Music has rules, and those rules were made to be broken. You don't have to stick to convention when improvising - as an artist you are your own boss. You can play what sounds good to your ear and what you enjoy!
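Here is a small Python sketch of the "common progressions" idea above (my own illustration; the article contains no code): it realizes a degree-based progression such as 1-5-6-4 (I-V-vi-IV) in any major key, so you can practice the same pattern everywhere on the keyboard. The quality labels are a simplifying assumption.

```python
# Realize a common chord progression (e.g. I-V-vi-IV) in any major key.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]
QUALITIES = ["", "m", "m", "", "", "m", "dim"]   # triad quality per degree

def major_scale(root):
    idx = NOTES.index(root)
    out = []
    for step in [0] + MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        out.append(NOTES[idx])
    return out

def realize(progression, key):
    scale = major_scale(key)
    return [scale[d - 1] + QUALITIES[d - 1] for d in progression]

print(realize([1, 5, 6, 4], "C"))  # ['C', 'G', 'Am', 'F']
print(realize([1, 5, 6, 4], "E"))  # ['E', 'B', 'C#m', 'A']
```

Working through the printed chords in several keys is one concrete way to build the "repertoire to pull from" that this section recommends.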
https://www.thrivepiano.com/how-to-improvise-on-piano/
Course Description: Level: 1 – 2 (see Playing Experience Level descriptions at the bottom of the page)

Whether you are new to mandolin or have been playing for a while, this Level 1-2 course will set the stage for your mandolin journey. To that end, the course will cover a fair bit of ground, including:

- Orienting to the mandolin - checking set ups, straps, holding position, picks, tuning
- Developing a relaxed wrist for picking
- Working on rounded fret hand fingers for noting
- Developing smooth coordination between hands
- Learning basic major and minor chords
- Reading tab and notation to initially learn a melody
- Constructing and playing major and minor scales
- Basic improvisation
- Understanding harmony and playing double stops
- Learning how mandolin fits in a band setting
- Learning jamming etiquette

You will be offered some pre-course study materials to get a jump on the class. While each student will be at a slightly different stage in their mandolin development, the overall goal is learning to produce a beautiful sound using the most effective technique. And did I mention that there will be some fun and more than a few laughs!!

Rick Moore

Rick Moore is excited to be returning to instruct at FAMI. Rick was an educator in the Red Deer School District and has taught music in many other settings as well. Over a long musical career he has played many styles of music and studied with some fine North American players including Radim Zenkl, John Reichman, Mike Dowling, and Ralf Buschmeyer. His training includes classical guitar studies and music theory with the Royal Conservatory of Music. In addition, Rick has received training as a Clinical Musician from the Harp for Healing group in the US. In recent years Rick has played with several accomplished bands including Morningside Bluegrass, Canyon Mountain Boys, Burnt Timber Swing, and the Swing Shifters. Rick Moore is passionate about the value of music as medicine and always looks forward to passing musical knowledge on to others!

Playing Experience Levels

These guidelines aim to ensure that all camp participants have an enjoyable experience. They represent what your playing capability should be before you take the class (prerequisites). Classes are generally designed to pace themselves to match the participants' abilities. Level 1 classes aim to proceed at the pace of the slower students in the class. Level 2 and 3 classes aim to move at the pace of the majority of students in the class. Level 4 classes are designed to push the capabilities of all students and will target the pace of the more capable students. Class descriptions that show a range (Levels 2-3) mean the material presented is broadly applicable across that range.

LEVEL 1: You are new or relatively new to your instrument. You may be able to play basic chords or scales slowly. You want to learn the basics of the instrument. You have very little experience playing with others.

LEVEL 2: You are competent with basic chords and/or basic scales. You can keep rhythm and/or play basic melodies and/or sing and play at the same time if the song is familiar. You generally need the chords or melody to be written out in order to play along. You have some experience playing with others.

LEVEL 3: You are reasonably comfortable with most chords, basic major and minor scales, and can play at an appropriate tempo for songs. You are aware of time signatures, song keys, and know that there are chords called 6th, 7th, 11th, etc., even if you can't play them all.
You are comfortable maintaining good rhythm and are willing to take breaks while jamming with others, even if the breaks don't always turn out the way you planned. You may be hoping to take your playing up to the next level of performing with a group or band (beyond jamming) and you want to further improve your technique and speed.

LEVEL 4: You are skilled on your instrument and have a good understanding of musical concepts including scales, arrangements, harmonies and some improvisation. You play lead and back-up with a steady rhythm and can play skillfully with others. You know there is life further up the neck on your instrument and have some capability in that world. You have performing experience, can hold a tune, and can harmonize.
https://fami.ca/mandolin-2/
In Part 1 of The Only Theory Lesson You'll Ever Need, we covered the foundational elements of music theory: the musical alphabet, the concept of whole steps and half steps, and the use of accidentals (sharps and flats) to fill in the blanks between natural notes. In Part 2, we used that information to take the next step forward: constructing major scales and understanding keys. Here in Part 3, we'll take the final step and use our knowledge of major scales to harmonize them with chords. This is where music theory really starts to come alive because it gives the musician insight into why certain chords work together to form complementary sounds. You can use this knowledge to empower you to learn songs by ear or to write your own songs. You can also use this knowledge for transposing to other keys, which is essential when applying capo strategies. Rock and roll!

Happy Together

Understanding major scale construction is critical to your understanding of all music theory, but by itself, it's not very exciting. However, harmonizing the major scale – otherwise known as "building chords" – is much more exciting, because it clues us into what chords are in what keys, and why they sound good in certain combinations. This is a major hurdle to get over for anyone who wants to write their own songs! It's also incredibly helpful when learning any music by ear (be sure to check out my post, The Lost Art of Learning By Ear, as well as any of the Guitar Noise series on Ear Training, beginning with Happy New Ear, for more on this awesome topic), and that could mean learning a song from your iPod or just hanging with other musicians in an informal jam session. Here's the scenario: You love your classic rock and so you're learning the Bob Dylan song, "Like a Rolling Stone." You see from your trusty music book that it seems to have no sharps or flats, which would indicate that we're playing in the key of C. But how do the chords of the song – C, Dm, Em, F, G, etc. – relate to this? Why these chords and not some others? How did our boy Bob know what chords would sound good together? Never fear, Grasshopper. Learning to harmonize the scale will reveal the answers!

Stacking Thirds

Take another look at our C scale: C-D-E-F-G-A-B-C. Now let's follow a process called "stacking thirds" to build a three-note chord, or triad, from each note of the scale. To "stack thirds", we'll just pick a starting note, leap-frog over the next note to land on our next target note, and again leap-frog over the next note to land on our final target note. This gives us the three notes of our triad, and we "stack" these notes, figuratively speaking, on top of each other. C… leap-frog over D to land on E… leap-frog over F to land on G. Our C chord, then, is comprised of the starting note plus the two targets: C-E-G.

Quick Music Lingo Note: "Stacking thirds" refers to two different concepts. We "stack" them by figuratively sitting the higher notes of the scale on top of the lower notes. So in our C chord, C would be the lowest note, E would sit on top of it, and G would be the highest note. "Thirds" refers to the span of three notes. Counting from C (as "1") to E (as "3") encompasses three notes: C, D and E. Likewise, E (as "3") to G (as "5") also encompasses three notes: E, F and G. Taken together, every triad is regarded as having a root note ("1"), a 3rd and a 5th. If you want even more information on this, check out Guitar Noise's article, The Power of Three.
Using the leap-frog method of "stacking thirds", we can finish harmonizing the C scale by building triads on each of the scale tones:

D yields… D-F-A
E yields… E-G-B
F yields… F-A-C
G yields… G-B-D
A yields… A-C-E
B yields… B-D-F

Done. Now what does it all mean?

Let's Get Diatonic

Without going into why the following information is true (we can save that for a future theory lesson), suffice it to say that the triads you just built from the major scale yield the following chord names (items in the list are shown as SCALE DEGREE = ROOT NOTE = TRIAD NAME):

1 = C = C major
2 = D = D minor
3 = E = E minor
4 = F = F major
5 = G = G major
6 = A = A minor
7 = B = B diminished

These chords are the diatonic harmony in the key of C, meaning they are the triads that naturally occur in the key, using just the notes of the major scale to build them. Because these chords are all constructed from the same family of notes – the pitches of the major scale – they will sound complementary to one another in just about any context. So if you're wondering why Bob Dylan chose C, Dm, Em, F and G for "Like a Rolling Stone," it's because he knew – either technically or instinctively – that those chords are all from the same family of notes and sound good together (and he'd do the same sort of thing with "I Shall Be Released," although in a different key!). This is powerful information for the developing musician/songwriter, because it gives you a guideline to follow for learning or writing songs. For instance, if you were trying to learn a song by ear, rather than use the trial-and-error method, where you just take a stab at whatever random chords you know in hopes of hitting a good one, use the chords that are diatonic to the key as your first choices. Only when you can rule them out should you look to non-diatonic chords for your answers. This is a much more efficient way to go about your musical business, and ultimately much more professional. It also takes away some of the mystery of song construction and makes you feel more empowered as a musician!

Primary Chords

Now that you know how to harmonize one major scale, guess what? You know how to harmonize all of them! Because all scales are constructed from the same major scale formula, they all have the same relationships and the same do-re-mi sound. Because they have the same relationships, the chords that we build by "stacking thirds" are always the same type at the same scale degrees! Check it out: Major scales will always yield MAJOR CHORDS at the 1, 4 and 5 degrees of the scale. These are referred to as the primary major chords in a key. In our above example in the key of C, we get C (1), F (4) and G (5) chords. Major scales will also always yield MINOR CHORDS at the 2, 3 and 6 positions – which are referred to as the primary minor chords in a key – as well as a lone DIMINISHED CHORD at the 7 spot. In the key of C, we get Dm (2), Em (3) and Am (6), as well as B diminished (7). Understanding these concepts and committing them to memory takes practice. Since you have already gone through the process of writing out some of the more common major scales in Part 2 of this theory lesson (you have, haven't you?), you should take it a step further now and harmonize those scales with chords. Go ahead and stack the thirds, and then write out the name of the triad that each scale tone yields. You may be interested to see that the combinations of chords you've been playing in your songs are there for a reason!
I've prepared a handy-dandy worksheet to help you out:

Changing Keys

Knowing what chords fall at what scale degrees in a key is the secret to transposing songs from one key to another. It's as simple as using the scale degrees to help you substitute one chord for another. For example, you know that every major key has major chords at the 1, 4 and 5 positions in the scale. If the song you're playing is in the key of C and it consists of the C (1), F (4) and G (5) chords, you can transpose this to any other key by just using the 1, 4 and 5 chords of the new key in the same spots in the song. Simple substitution! This is not only an important idea to understand in general about music; it's a critical concept to understand if you want to use a capo effectively, since capoing and transposition usually go hand in hand. Check out The Definitive Lesson: Essential Capo Strategies as well as the Guitar Noise lessons Turning Notes into Stone – A Basic Guide to Transposing and The Underappreciated Art of Using a Capo for a ton of useful information on this topic!
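The scale-degree substitution described above is mechanical enough to show in code. Here is a short Python sketch (my own, purely illustrative) that maps each chord to its degree in the old key and reads off the same degree in the new key; the crude root/quality split is a simplifying assumption.

```python
# Transpose a progression between keys by scale-degree substitution:
# find each chord's degree in the old key, then read the same degree
# in the new key, keeping the chord quality.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def major_scale(root):
    idx = NOTES.index(root)
    out = []
    for step in [0] + MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        out.append(NOTES[idx])
    return out

def transpose(chords, old_key, new_key):
    old, new = major_scale(old_key), major_scale(new_key)
    result = []
    for chord in chords:
        root = chord.rstrip("m")            # crude split of root + quality
        suffix = chord[len(root):]
        degree = old.index(root)            # which scale degree is it?
        result.append(new[degree] + suffix) # same degree in the new key
    return result

# "Like a Rolling Stone" chords moved from C to G:
print(transpose(["C", "Dm", "Em", "F", "G"], "C", "G"))
# ['G', 'Am', 'Bm', 'C', 'D']
```

The output illustrates the "simple substitution" idea: the 1, 2m, 3m, 4 and 5 chords of C become the 1, 2m, 3m, 4 and 5 chords of G.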
https://www.guitarnoise.com/lessons/only-theory-lesson-you-need-part-3/
No, the heading of the post is not incorrect. You may think that logic is to be used in paper 2 (informally CSAT) and not in general studies. However, we can and we must use logic in the general studies prelims paper as well. Given the vastness of the UPSC syllabus, it's just not possible to cover everything. Nevertheless, the basics can be covered, and then we can apply logic to answer what may seem a difficult or advanced question. Let's try to understand these techniques by taking examples from actual UPSC papers.

When you come across a question that may seem difficult or is from a topic/area that you have not covered, NEVER EVER think that you won't be able to answer it. Most people, however, do exactly this. If they had left any topic and a question comes up from it, they have already formed an opinion that they won't be able to answer it, and in most cases they skip the question without reading/trying it.

Which of the above are naturally found in India? Now, you haven't even heard of animals like the black necked crane or the flying squirrel. So, you thought you don't have enough knowledge about these and it would be better to skip the question. But read the options properly. The cheetah is not found naturally in India, and this is something you might know even if you have not read about animals in detail. So, if you just eliminate cheetah – statement 2 – you get only one option remaining, and that has to be the answer. Actually, UPSC just wanted to test your basic knowledge of whether you know that the cheetah is not found naturally in India. It just made the question a bit complex to scare you. So, the advanced and difficult-looking question was in fact just a basic and easy one.

Lead, ingested or inhaled, is a health hazard. After the addition of lead to petrol has been banned, what still are the sources of lead poisoning? Now, you are no chemistry expert and have no idea where all lead is used, and therefore decide to skip the question. But wait, UPSC is not looking for a chemistry expert either. The question itself states that lead has been banned from use in petrol as it's toxic. So now just think: if that is the case, will lead be allowed in pens and pencils, which are used by children, most of whom put them in their mouths? So, eliminate statement 2 about pens and pencils and there you get your answer.

Due to improper/indiscriminate disposal of old and used computers or their parts, which of the following are released into the environment as e-waste? The technique here again is eliminating options on the basic knowledge we have.

Which of the statements given above are correct? Read statement 2 again. Now, even if capillarity were non-existent, we could still use a straw to drink, isn't it? So, statement 2 has to be incorrect, and that also helps us get our answer. Remember, most of the time, if a question is complex, UPSC itself will give you enough hints to get to the answer. The point is: are you smart enough to deduce that hint?

UPSC seeks a candidate with balanced thought, right? Many times, UPSC gives extreme statements in its questions which cannot be true. You need to spot these, which can immensely help you in answering the question. Statements with words like all, only, completely etc. are in most cases incorrect. I am saying most cases, not always.

At present, scientists can determine the arrangement or relative positions of genes or DNA sequences on a chromosome. How does this knowledge benefit us?
Read statement 2 again – It is possible to understand the causes of all human diseases. ALL human diseases? Is that possible? Does your rational mind allow you to say that this statement can be correct? It's not possible, so the statement is incorrect; eliminate 2 and you get your answer. So, you see, again UPSC gave you enough hints for a complex question.

The Chinese traveller Yuan Chwang (Hiuen Tsang) who visited India recorded the general conditions and culture of India at that time. In this context, which of the following statements is/are correct? So, you left ancient history as it was too much to cover, and UPSC asked you an in-depth question from it. You thought of skipping it. But for a moment just read statement 1 – The roads and river-routes were completely immune from robbery. COMPLETELY IMMUNE? Again, is that possible? As a UPSC aspirant you need to avoid extremities, so words like completely, fully and all do not hold good for you. Even the most peaceful countries of Europe do not have places completely immune from crime. So this is incorrect; eliminate statement 1 and get your answer.

This is just a further step beyond techniques 1 and 2. Many times you won't be able to eliminate 3 options, but only 2. You may then need to make a calculated guess.

What are the reasons for the people's resistance to the introduction of BT brinjal in India? seeds before every season from the seed companies. 4. There is some concern that the introduction of BT brinjal may have an adverse effect on biodiversity. BT is related to bacteria, which must be known to you. So you can eliminate statement 1. But you still have two options left. Option c has more chances of being correct, as you might have read about GM crops causing loss of biodiversity. So, as you can see, you did not get the exact answer, but you came closer. You now have only two options, and can go on to make a calculated guess.

Let's take another example. This is from the UPSC 2013 prelims. With reference to the usefulness of the by-products of the sugar industry, which of the following statements is/are correct?

1. Bagasse can be used as biomass fuel for the generation of energy.
2. Molasses can be used as one of the feedstocks for the production of synthetic chemical fertilizers.
3. Molasses can be used for the production of ethanol.

Now you may not have much information about these, but you can sense that the topic is related to biofuels and organic products. So, where would synthetic fertilizer crop up here in statement 2? Therefore, it is certain that statement 2 is false. This eliminates two options. You can then take a guess, and your knowledge will tell you to guess c) 1 and 3, which is the correct answer.

With reference to 'stem cells', frequently in the news, which of the following statements is/are correct? Again, following the technique of avoiding extremities, you can eliminate statement 1 – Stem cells can be derived from mammals only. You still have two options left and can put up a guess. Most people will mark b, and that is the correct answer.

Moderator is used in which of the following types of nuclear reactors? However, you need to be careful: please do not depend overly and exclusively on these techniques. They are to be used only as a supplement and not as the main tool to answer questions. Read more, broaden your knowledge base, develop conceptual clarity as much as you can. These techniques will then become more and more useful.
The main point of this post was to give you hope that even though the syllabus seems huge, you can cover it by just covering the basic aspects, developing conceptual clarity and then using a few techniques like these. A word of caution, though, from the stem cell question above: statement 1 read "Stem cells are found only in multi-cellular organisms." You may eliminate statement 1 because of the word "only", since you are avoiding extremities, but the fact is that statement 1 is correct. As I always say, there are no shortcuts. You need to work regularly and work hard enough. However, things are not as difficult as they may seem. The syllabus is not as huge as it may appear. Questions are not as tough as they may seem at first reading. Just combine your hard work of gathering knowledge with some smart techniques given above, and you will easily sail through prelims.
http://selfstudyias.com/applying-logic-in-upsc-general-studies-prelims/
Please explain each answer choice and why it is correct or incorrect.

Mehran on May 7, 2018: Hi @a42, thanks for your post. Question 6 asks you to identify the way that Passage B differs from Passage A.

(A) is incorrect because there is no textual support for classifying Passage B as "optimistic regarding the ability of science to answer certain fundamental questions."

(B) is correct because the author of Passage B is clearly "disapproving of the approach taken by others writing on the same general topic." See lines 53-56, and also lines 60-65 (identifying logical flaws in the writing of "the others" and identifying certain assumptions on which that writing relies). By contrast, Passage A does not indicate any disapproval of any viewpoint; rather, it is a descriptive passage that proceeds by examples and does not delve into competing theories.

(C) is incorrect because, as noted above, the author of Passage B is critical of others, rather than "open-minded" as to "apparently conflicting positions."

(D) is incorrect, but tricky. It does appear that the author of Passage B might be "supportive of ongoing research" (see lines 58-60), but the key here is that there is nothing to make us think that Passage B "is more supportive" in this regard than Passage A (don't lose sight of the question stem).

(E) is incorrect because the author of Passage B is not "circumspect" - she takes a strong position (see last paragraph of Passage B).

Hope this helps! Please let us know if you have any additional questions.
https://testmaxprep.com/lsat/community/100000627-please-explain
30 Chapter 30: Face to Face With the Exam Eventually, study time will give way to exam time. There are things you can do on the day of the test to help improve your test taking effectiveness. If you have studied and prepared well, you are more likely to approach the test with more confidence. Here are five steps to take as the test begins. - Scan It (take a few seconds to quickly look over the test and collect info: types of questions, number of questions, value of questions, time available) - Create a Plan (use the information you collected in scanning the test to decide how to best approach the test) - Read and Follow All Directions (don’t lose points for the wrong reason – do exactly what is asked for each question) - Answer Easy Questions First (if you run out of time without answering something you actually did know, that would be an avoidable tragedy) - Never, Never, Ever, Leave Anything Blank (especially True/False, Matching, Fill In – wait until time is almost up and then guess if you have to – writing something is better than nothing) Answering Various Types of Questions Answering True/False Questions These usually are not worth many points individually, so don’t spend an excessive amount of time on them if there are still questions with a higher value to answer. Usually the deciding factor is in the details. An important rule to remember is that if any part of the statement is false, then the answer will always be false. Answer true only if the entire statement, from beginning to end is completely true. Answering Multiple Choice Questions First read the question carefully and try to answer it in your head without looking at the choices. Then see if the answer you came up with in your head is listed among the choices. If it is, then it’s likely the correct answer. Multiple choice questions examine your ability to read carefully and thoughtfully, as much as they test your ability to recall and reason. You must answer the question that is being asked. Start with questions you feel most comfortable answering. - Cover up the possible responses with a piece of paper or with your hand while you read the stem, or body of the question. Decide what you think the answer is. - Then uncover the answers and pick the one that matches your answer. Check to be sure that none of the other responses is better. - Read the stem with each option treating them as a true-false question, and choose the most true. - If you are unable to make a choice and need to spend more time with the question, or you answered the question but are unsure that you made the correct choice, put a question mark beside that question, and move on to the next. - Move on and finish all of those questions that you can answer and then to come back later to process the problematic questions. - Sometimes the answer will occur to you simply because you are more relaxed after having answered other questions. If you can’t decide on a correct answer: - Absolute words, such as “always” or “never” are less likely to be correct than conditional words like “usually” or “probably.” “Funny” or “strange” options are often wrong. - If you can verify that more than one of the responses are probably correct, then “all of the above” may be a correct response. - “None of the above” is usually an incorrect response, but this is less reliable than the “all of the above” rule. - Be very careful of double negatives (e.g. 
“There are not insignificant numbers of salmon in British Columbia waters” = “There are significant numbers of salmon in British Columbia waters”. Create the equivalent positive statement.
- Eliminate options you know to be incorrect.
- If all else fails… take your best educated guess.
Finally: take the time to check your work before you hand it in.

Matching Questions
First check to see if there are the same number of items on each side. Always work from the side that has the longer explanations first, rather than from the side with the shorter words or phrases. This will avoid having to unnecessarily reread all of the longer explanations each time you choose an answer. Be sure to always cross off each item as you use it so that you will be aware of which items have not been selected when time runs short.

Fill In Questions
If there is a word list to choose from, read the entire list first. This will make for faster answering when you go through the questions. Again, cross out each item as you use it so unused items remain easier to spot as time runs short.

Short Answer Questions
Your instructor is looking for a brief and descriptive answer.
- Allocate your time according to the proportion of marks each question is worth.
- If a question asks you to "explain," imagine you are telling a friend about the topic.
- If you have questions which are a mix of short and essay answers, check the rubric carefully so you don't miss answering part of the question.

Essay Questions
Essay questions ask you to discuss and expand on a topic and are usually several paragraphs long.
- Think about what the question is actually asking. What are you expected to include in your answer? What material will be relevant? A common complaint from instructors is that the student didn't answer the question.
- If a question asks you to "briefly comment," treat it as a mini-essay – have a sentence or two to introduce your topic; select a few points to discuss with a sentence or two about each; add a concluding sentence that sums up your overall view.

Make a Plan! Take a few minutes to think and plan:
- Underline the key words in the question.
- Identify the main topic and discussion areas.
- Choose a few points/arguments about which you can write.
- Make a mini-plan which puts them in order before you start writing. You can cross it through afterwards.
- Demonstrate that you are answering the question – in your introduction show how you understand the question and outline how you will answer it. Make one point or argument per paragraph and summarize to show how it answers the question. Short paragraphs with one or two pieces of evidence are sufficient. In your conclusion summarize the arguments to answer the question.

What to do if your mind goes blank?
- Put your pen down, take a deep breath, sit back and relax for a moment. If you're in the middle of an answer, read through what you have written so far – what happens next? If you have to remember formulae, try associating them with pictures or music while revising. If you really can't progress with this answer, leave a gap. It will probably come back to you once you are less anxious.

Try it! An excellent way to prepare for exams is to spend time doing practice tests.
- If your instructor has prepared a practice test/exam for your course, take the time to complete it. This will allow you to practice the types of questions that will be asked on your exam. Take the test as if it were the real exam.
Close your books, and allow yourself the same amount of time as you will have for the exam. After you finish, check your work. Monitor what you successfully completed, and what you will need to spend additional time studying.
- Creating questions for a practice test is another excellent learning activity. Look at the learning objectives, and create the kind of questions you think your instructor might ask. Take your practice test, and monitor your progress. Better yet, share questions with members of your study group and test each other.

Licenses and Attributions: Content previously published in University 101: Study, Strategize and Succeed by Kwantlen Polytechnic University, licensed as CC BY-SA.
https://foundationsforsuccess.pressbooks.com/chapter/answer-your-exam-questions/
How can you create multiple choice questions in your lesson in InsertLearning? Click the question tool on the toolbar and then click on a paragraph to add a question below it. Type in the question, assign the number of points, and then click "Save."

Can I copy a whole Google Classroom? Currently, you can only copy a class using the web version of Classroom. Go to classroom.google.com and click Sign In.

Can I copy assignments in Google Classroom? You can reuse an announcement, assignment, or question from a class. When you reuse a post, you can use it in the original class or in a different class, and you can make copies of any attachments, including rubrics, or add new ones.

How do you create a multiple choice test? How to create great multiple choice questions in 3 simple steps:
- Write the stem first. Your questions should present a single problem related to significant content from the lesson.
- Identify and write the correct answer. Make it brief and clear.
- Now write the incorrect answers, or the distractors.

What are multiple-choice questions? A multiple-choice question (MCQ) is composed of two parts: a stem that identifies the question or problem, and a set of alternatives or possible answers that contain a key (the best answer to the question) and a number of distractors (plausible but incorrect answers to the question).

What do multiple choice questions test? Multiple-choice tests usually consist of a question or statement to which you respond by selecting the best answer from among a number of choices. Multiple-choice tests typically test what you know, whether or not you understand it (comprehension), and your ability to apply what you have learned (application).
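To make the stem/key/distractor anatomy described above concrete, here is a minimal sketch of how a quiz tool might represent an MCQ. This is illustrative only: the class and field names are hypothetical and are not part of InsertLearning's or Google Classroom's actual APIs.

```python
import random
from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    stem: str          # the question or problem being posed
    key: str           # the single best (correct) answer
    distractors: list  # plausible but incorrect alternatives
    points: int = 1

    def alternatives(self):
        # Shuffle so the key does not always appear in the same position
        options = [self.key] + self.distractors
        random.shuffle(options)
        return options

q = MultipleChoiceQuestion(
    stem="Which part of an MCQ states the problem to be solved?",
    key="The stem",
    distractors=["A distractor", "The rubric", "The answer sheet"],
)
print(q.alternatives())
```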
https://www.mrtylerslessons.com/faq/readers-ask-how-can-i-copy-an-entire-lesson-plan-book-in-genesis.html
History tests often ask questions about sources – writing or images that help to shed light on a historical period. Although these questions are common, they are not always easy to answer. To get a good mark, you'll have to understand exactly what the question is asking, know how to evaluate the historical source, and give a solid, well-crafted answer.

Steps

Part 1: Reading the Question
1. Note the caption. One big mistake that people make with tests is to ignore the instructions given before a question, usually in a caption. Make sure that you read this carefully. It will tell you what you need to do and how to answer the question.
- The caption will give helpful information. It may advise you on how long your answer should be. For example, is the question a short-answer or a longer source evaluation? This will affect how much you need to write.
- Pay attention to any information about how long your answer should be, as well. You may need to give a few sentences or several paragraphs, i.e. a short essay.
- The instructions might also suggest how you use your time. For example, they may suggest you spend 5 to 10 minutes reading the sources and planning, and another 20 to 30 minutes answering the question.
- Be aware of how many questions the test contains and of any time limits.
2. Read the test question. Once you are sure how to answer, take your first look at the question. Read it through. This sounds simple, but you need to understand exactly what it is asking to give a solid response.
- What is your task? The question might want you to identify a source or put it in historical context. Or, it might ask you to answer one or more questions on the basis of the source.
- Think of the question as a second set of instructions. It is telling you what kind of info to look for when you read the source.
- Read it a second or even third time – it can't hurt! Make sure that you understand the question.
3. Think and plan. Keep the question in mind. If it helps, jot down brief notes or underline parts of the question before you turn to the source. The question should guide you and may even contain hints.
- For instance, a question that asks, "Read and identify the following passage," wants you to use your background knowledge to link the source to a certain time period, place, and maybe author.
- One that asks, "Evaluate source A as evidence for the rise of Communism," is asking about usefulness and reliability. Here you will have to identify context and any biases in the source, as well as its limits as historical evidence.
- A question that asks, "What does this source tell us about the effect of the American Civil War on the abolition movement?" is asking something else. You'll need to evaluate the source, but also understand how it fits into arguments about the abolition of slavery during the Civil War.

Part 2: Evaluating the Source
1. Read and annotate. Take a first stab at the source, reading it through carefully, slowly, and thoroughly. What are your impressions? Look for any clues based on the question.
- Consider annotating the source while you read. Make sure to note any points that can help you. Does the source mention events? People? Dates? Places? These are important.
- Your first impressions can turn out to be right. Even if something seems obvious, or minor, write it down anyway.
2. Re-read the source with "W" questions. The next step is the meat of your task and should help to generate your answer for the test.
Read the source again, this time asking yourself five specific "W" questions: who, what, when, where, and why?
- Ask: who wrote the source? This is important because it can tell you about the author's place in society, her concerns, and possible biases. Race, class, age, and sex are all important. If not obvious, you may have to guess this from clues in the text.
- What is the source? It might be a diary entry, a letter, a newspaper column, or a government memo. Try to figure this out – it can tell you what message the author was trying to get across and who the intended audience was.
- You may or may not have an idea of the "when." Dates can help you. Otherwise, what sort of events or ideas does the source mention? Can you identify a time period with this context?
- Like "when," "where" may or may not be obvious. Pay attention to any events, arguments, or ideas that the source mentions. Does the language sound current or older? This may help, too.
- Why was the source written? This question may be the hardest to unpack and is just as important as factual information. A source might have a clear message. It might not. However, every author has her own point of view. Does she have an "axe to grind" or a stake in the issue?
3. Alternatively, use "PAPER." Other than asking "W" questions, you can also try the "PAPER" method. PAPER is an acronym that will guide your evaluation of the sources, and covers much of the same ground as before.
- "P" stands for purpose: what is the purpose of the author in creating the source? Who was she and what was her place in society? Does she make a claim? What is at stake for her?
- "A" stands for argument. What is the author's argument, or the strategy that she uses to reach her purpose? Who is her intended audience? Is she reliable?
- "P" stands for presuppositions and values. What are the values in the source? Are they different from or similar to our own? Is there anything that we might not agree with, but that the source's audience would have accepted?
- "E" stands for epistemology. This word means a way of knowing something. Try to evaluate the source's "truth content." What information does the author reveal? Does she make a claim that is her own interpretation? How does she support her arguments?
- "R" stands for relate. Lastly, relate the source to what you know about the bigger context. How does it fit into what you know about the period and its history?
4. Review the source's usefulness. All sources have uses and, apart from facts, can tell us about the perspectives of a person or group of people. That said, they also have blind spots, agendas, and limitations. The final thing you'll want to do is assess the source for its uses and limitations.
- You've identified the author, her context, her motive, and her message. Now you have to bring these to bear on a bigger question: "So what?" What is the greater significance of the source?
- Ask yourself what the source says about its context. Does it confirm or contradict what you know about the period? Does it engage with an important political debate, for example? Does it show the perspective of a certain group of people?
- Say, for example, the source is a newspaper article about slavery. What does it illustrate about abolition and debates over slavery at the time of the Civil War?
- Or, say the source is a government memo from the 1960s. Does it help us understand what was going on then, maybe about the Vietnam or Cold Wars?
Part 3: Giving a Solid Answer
1. Be comprehensive. A good short answer or essay, including a historical source evaluation, means more than just facts. Your teacher wants to see that you can show your grasp of the facts but at the same time put them into a larger picture.
- Think of it this way. Who, what, where, when, and why are important. But the most important thing is to address the "So what?" Explain why and how the facts matter. Show how the source in question matters.
- For example, how does the source highlight major historical debates or events? Does it add to our knowledge of these developments? Does it change them? How?
2. Be direct. Another key to writing a good test answer is to be direct. Don't waste time on words that are off-topic. Start with a point that gets to the heart of the question (one mark gained, well done!).
- Begin with a sentence that addresses the prompt. If you are supposed to identify the source, you might start by writing "This source was produced by…"
- If prompted to evaluate a source's usefulness, you might start with something like "This source shows us that…" or "This source is useful because it demonstrates that…"
- Keep the answer focused! Adding as much material as you can will not always get you a better mark. In fact, unrelated or off-topic facts may earn you fewer points.
3. Structure your answer. Try to plan your answer in order of importance – that is, start with the most important material. This is usually your main point or thesis statement, which is the thrust of your answer to the question. Lesser, supporting points follow after that.
- A good structure for simple IDs is to state (in two or three sentences) who, what, when, where, and why. End your answer with the source's bigger significance, i.e. "It is important because it shows us that…"
- You will need to aim bigger for essays, perhaps a few paragraphs. A good structure for this is to start with a thesis, and then add a paragraph for each supporting point. Make sure to follow the initial instructions for length.
4. Document your points. Always be ready to support your points with proof from the source, either a direct quotation, a fact or description, or part of the image if the source is visual.
- Why do you have to document? Because your teacher is not just looking for a correct answer but also to see that you understand the answer. This is what documentation shows.
- To substantiate a point, you might say, "To show this, the source depicts…" or "This is clear because the source says that…"
- Be as specific as you can when offering documentation. Point to specific facts, arguments, and ideas.
- After offering one or two examples, you can move on to your next point, i.e. "This source also suggests that…"
5. Make good use of time. Keep in mind that you may only have a limited period to take the test. You will need to watch the clock. Try not to spend too much effort on a single source question, or even a single part of a question.
- You might decide on a time limit for yourself for each question. Stick to it. Otherwise, you might be unable to finish other questions or the test itself.
- Don't write more than you need to or be afraid to move on. Again, manage your time and effort so that you can finish the rest of the exam.
- Try not to worry too much about style. Teachers usually don't hold grammar and style against you during a test. Don't labor over your word choice and only rewrite passages if you have leftover time.
Community Q&A
- How can I make sure my answer is correct? Go through the source again and check your work.
- What should I do if I am not sure of the answer? Do not answer! Do the research and confidently answer the question only when you know it is the right one.

Tips
- Read the instructions to see how much you are expected to write.
- Keep an eye on the clock!
https://www.wikihow.com/Answer-a-Source-Question-in-History
Multiple Choice Exams

Many college classes assess learning through multiple choice exams, and final grades in some classes are largely determined by exams. Multiple choice exams can be tricky, and it is common for students to struggle with this format. Many students feel that they studied effectively and know the material well, but the multiple choice format trips them up and they do not do as well as they expected. Often, students are misled by distractors – choices that can look, sound, or mean about the same thing as the correct answer, but are incorrect because they are either too specific or too general. This handout discusses things to know about multiple choice exams and effective strategies to improve your performance on these types of exams.

Preparing for multiple choice exams

Self-test with practice example problems in multiple choice format. It's important to practice the same kind of problems as the ones you will see on the exam. To do this, look for study guides, end-of-chapter practice problems, and practice exams. Once you have found practice problems, answer or solve as many as possible. Don't look at the answers or solutions until you have already answered the question on your own. Only look at the answers to check and see if you were right.

Use the practice test as a study tool. Take the incorrect answer choices from each question and make them correct. Ask yourself, "What would make this answer right?" or "This answer would be correct if it said…" This will give you more practice with the material and give you some insight into how the test writer is organizing the questions.

Use effective study strategies as you prepare for your exam. Good techniques include concept maps, self-testing, and higher-order questions. Check out the resources and videos from the Learning Center to learn more about effective study and note-taking strategies.

Space out your studying over a longer period of time and break it into smaller study sessions. Limit your study sessions to no more than one hour at a time and give yourself breaks in between. Check out the Learning Center's resources on planning out study times and breaks.

During multiple choice exams

Strategies for answering the question

Analyze the stem (the question or statement), noting how the meaning changes with:
- Qualifiers (e.g. usually, sometimes)
- Modifiers (e.g. always, never)
- Negatives (e.g. not, none, un__, dis__, etc.)

Cover up the answers and read only the stem. Read only as much as you can understand before continuing. Underline or circle key words. This will help you fully understand and focus on the question before thinking about the answer. Answer the question in your own words before reading the choices. Then look for a choice that best matches your answer. Make sure to read all of the answer choices, even if you think you know the correct answer. Sometimes there is an answer that seems correct, but there is a better answer.

Use process of elimination. Actually cross out answers that you know are wrong to help narrow down your choices. Identify and eliminate distractors to help narrow your choices:
- Note similar answers
- Note grammatically incorrect choices
- Be wary of extreme modifiers (e.g. always, never)
- Plug each remaining answer into the stem and see how it sounds and feels as a complete statement. Incorrect answers may sound awkward when they are plugged into the question.

Stuck? Try these strategies:
- Skip the question and come back to it.
Other test questions may offer clues and information that might help.
- Make notes in the margin to help you recall content.
- If all else fails, guess! Don't just leave it blank.

Strategies for self-management during the exam

Decide if you want to do the easier questions first to boost your confidence or tackle the more difficult ones first to identify roadblocks. Manage your time: pay attention to how much time you have remaining and budget your time so you are able to get through the whole exam.

Strategies for bubbling

If using a scantron, bubble in your answers as you go, instead of waiting until the end and bubbling them all in at once. Bubbling as you go can help ensure you don't accidentally skip one or make an error in bubbling. Check out this Learning Center video for further details about taking multiple choice exams.

After multiple choice exams

Take a deep breath. No matter what happened or how it went, be kind to yourself, stay positive, and keep trying new strategies. Reflect on the exam; don't just look at the grade and forget about it. Think about some of these questions to reflect and set goals going forward:
- How did I do on this exam? Why do I think I got this grade?
- How did I feel going into this exam? Prepared? Unprepared? Nervous? How did these feelings affect the exam?
- What factors most impacted my score on this exam? (The multiple choice format, time management during the test, my study habits, test anxiety, not understanding the content, or something else?)
- What strategies did I use to prepare for this exam? Are there other strategies I could try?
- How much time did I spend studying and over what length of time? Do I need to start studying earlier and space it out more?
- What can I do now to improve and grow?

Attend office hours to review your exam and any missed questions or points of confusion with your professor. Also ask about ways to better prepare for the next exam. Mark any questions you got wrong, then analyze each one and why you missed it. You can use the Learning Center's test analyzer to help you think through each missed question and what went wrong. Meet with an academic coach to talk about study strategies, test prep, test anxiety, or anything else. Attend peer tutoring for additional help with the content.

Some students prepare very well for exams but struggle with test anxiety, which causes them to not do as well as they could on exams. If you think you experience stress or anxiety while taking exams or in college in general, attend a workshop on test anxiety, preparing for finals, study strategies, or others. Attend one of our STEM groups: Bio Cell, Math Plus, or CHEMpossible, or join a coaching group. These groups will help you learn content and prepare for exams in a welcoming, supportive environment with your peers and are led by one of our academic coaches.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License. You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Learning Center, University of North Carolina at Chapel Hill If you enjoy using our handouts, we appreciate contributions of acknowledgement.
https://learningcenter.unc.edu/tips-and-tools/multiple-choice-exams/
In an era of instant gratification, you may be tempted to rush to the local pharmacy and purchase a prescription. But as a teacher, you'll need to consider how you can ensure your students are given the opportunity to understand the science behind the exam. Here are five tips to help you understand the exam and the real meaning of your grade.

1. Understand the 'e' in the question

The English version of the Physical Science exam asks students to demonstrate their ability to work in groups. The exam includes a question to 'e-mail', which is intended to test students' ability to identify the 'big picture' of a problem. For a physical science test, it is common for a student to be asked to write down what he or she has learnt. This 'e-mail' will help you know how well the student has written down what they have learnt, and will help in later tests. The question for an 'e-mail' question is 'which of the following could be the result of your action?' This question will be answered by a person called 'the e-mailer', who is usually a student in a classroom or lab. When the e-mails are sent to students, students must choose which of the e-mails they wish to send back. The e-message must be sent at least 15 minutes before the end of the exam, and must be accompanied by a 'satisfactory' grade. The e-mailer then has 30 minutes to answer any questions students may have about the material they have received. Students who fail the e-mail question will receive an 'e', indicating that they have failed the question. Students may receive a 'D' for 'difficult' if they answer the e-mail in a way that does not allow them to recognise that the answer is a question and not a sentence. This may be a result of students being unable to read a sentence and, more commonly, students failing the question because they do not have a sense of grammar. Students also receive an e-mail when they have not completed all questions, and can receive an e-mail when they fail the question but then correctly answer the question again.

2. Know the difference between 'difficulty' and 'differences'

A student can receive a D- or a D+ for a question that 'differs' from what they were taught in the classroom, or a C- or an E for questions that are 'differences'. The difference can be minor, or it can be very significant. This is because the 'difference' is based on a test question, which is usually one of the easiest or most important questions in the exam (although students may receive multiple errors, which will be dealt with in a future post). Students who receive a difference can receive the grade of 'D+' or 'D', whichever is lower, based on the 'quality' of the difference. If they received a difference, they can receive either an E or an A, depending on how much they did not understand what was being said. Students who receive an A can receive grade A for an error that was not caused by the test question.

3. Make sure you ask questions that aren't about your exam

In a perfect world, every student will know exactly what questions they will get on the exam; this is what the 'test is about'. However, this is not the case in reality. For some students, the exam is just one part of a wider course, or one that students may take in their own time. The English version has a question called 'how to improve your understanding of the material', which asks students 'how you can learn more about this subject'.
In the test, the 'what is knowledge' question asks students if they understand that the material is about 'knowledge', or 'how do you know that?' This 'knowledge' question may be asked before or after the question to determine if students understand that it is about knowledge. A student who has not completed the 'knowledge question' may receive an F for 'failure to grasp the concept of knowledge'.

4. Avoid 'bad questions'

It's easy to get bogged down in questions that you are not prepared for, but it is also easy to make mistakes on the way to the exam! For example, you might be tempted by the question 'Which of the two images in this diagram are you seeing?' This is a tricky question to answer correctly because there are two different images in the image. You can only answer this question by knowing the difference: you can only understand the difference by using the image to answer the 'how' question. The same question may also be asked as 'Which image in this picture are you viewing?' This may lead to a student struggling to understand what the difference is between a picture of a man and a woman. This question can be a great way to get stuck in.
https://jramniwas.com/archives/68
Don't be intimidated by the TOEFL reading tasks! They're straightforward if you know what you're doing. This article will prepare you for the most common TOEFL reading question types, including Summarizing Information and Making Inferences.

Introduction to TOEFL reading

In the TOEFL reading section, you'll get three to four reading passages, each with 12-14 questions. They're extracts from university textbooks or academic articles on a wide range of topics. These will be similar to the types of texts you'd find in college. Although you don't need to be familiar with the topics, the more you read during your preparation, the more you will understand. You'll have 20 minutes to read each passage and answer its associated questions. Depending on how many passages you get, the reading section will last between 60 and 80 minutes.

TOEFL reading question types

There are 10 different question types you might encounter, each requiring a different skill. These are:
- Summarize information in a passage
- Guess vocabulary from context
- Make inferences about what the author means
- Identify a reference
- Identify a fact
- Understand rhetorical purpose – why the writer included particular information
- Identify a negative fact (a fact that was NOT included in the passage)
- Insert a word or sentence into the appropriate place in a paragraph
- Simplify information by identifying the correct paraphrase
- Complete a table by dragging and dropping sentences

Common TOEFL reading question types

Below are some tips for how you can build specific skills for some of the most common question types. It's very important to build up these skills. To do so, you'll need to read daily, especially university-level books and articles covering a wide range of topics related to the arts, humanities, nature, or social science.

Summarizing Information

This type of question requires you to complete a summary of a reading passage by choosing three out of six sentences provided. You'll need to drag and drop the correct three sentences into boxes provided on the screen and distinguish main ideas (which belong in a summary) from details (which don't). To build this skill, read an article a day and write a short summary by paraphrasing important ideas from the article. Take notice of main ideas, which are general, and details, which are specific. A summary should only include main ideas.

Guess Vocabulary from Context

For this question, a word in the passage will be highlighted. The question asks you which word from a list of four best matches the meaning of the highlighted word. Here, context will help you, and so will a wide vocabulary. To develop your vocabulary, you need to read. Reading is the best way to see how words are used in context. You don't have to read complicated books. The best way is to make reading fun by reading things that interest you: food, gardening, fashion, celebrity news, economics, science, politics, etc. As you read, you will discover new words in context. Try to get the meaning of an unknown word by understanding the whole sentence. Then, look up the word on dictionary.com or on thesaurus.com to see if your guess was correct. This skill will help you with the guess vocabulary from context question. Also, try to learn a word a day. Check the English Learner's Dictionary word of the day for a new word each day with the definition, pronunciation, word form, and example sentences.

Make Inferences

Inference is about understanding what the author is trying to say without actually saying it.
You'll be asked something like "What does the author mean by…?" With this kind of question, you won't find the answer directly in the text. It will be implied, so you'll need to infer the meaning. To do that, you need to go beyond the text, which means using higher-level thinking skills. A good way to develop this is to do riddles. There are plenty of inference riddles that you can find online that will help you practice making inferences. Making inferences relies on what it says in the text plus your background knowledge and ability to connect information to draw conclusions. Another way to build this skill is, as you read, to ask yourself questions about the meaning behind what is written and make guesses. Find connecting points and bring them together to draw a conclusion. Make predictions about the information provided.

Identify a Reference

This question type is all about understanding what a word or words in a sentence refers to. For example: "I watched Star Wars yesterday. It was a great movie." Here the word "it" refers to "Star Wars". Of course, this type of question will be a bit more challenging in the actual TOEFL reading. So, you need to build up your knowledge of grammar and sentence structure. As you read different articles, highlight any reference words like it, they, which, whose, who, etc. Then ask yourself, what does that word refer to? To answer that question, you'll need to identify the subject of the previous sentence. This is an exercise you should keep in mind when doing your daily reading practice. It will help prepare you for this very common TOEFL reading question.

Identify a Fact

In the TOEFL reading, you might be asked to find a fact from the passage. Facts are the supporting information that tell more about the main idea. Facts often tell about the who, what, where, when, why, and how of the main idea. The fact question is based upon information which is actually stated in the passage. You must find the part of the passage which deals with what is being asked. The best way to build this skill is to practice reading and answering comprehension questions.

Rhetorical Purpose

This kind of question asks you why the author mentioned something. Authors say things for different purposes. For example:
- To persuade the reader of something
- To describe something
- To make a suggestion
- To illustrate a point
- To prove a theory
Like the inference question, the answer will not be stated in the passage. You will need to infer. A good way to build this skill in preparation for this type of question is to read critically. That means, as you read, ask yourself:
- Why did the author mention that?
- What was the purpose of including that information?

Simplify Information

This question type asks you to pick the best paraphrase of a sentence from a passage. You'll be given four options to choose from. Paraphrasing is all about expressing the same idea in a simpler way. To build this skill, read an article and pick a paragraph to paraphrase. Write a couple of sentences using your own words to capture the same idea that the paragraph expresses. Then read your paraphrase and compare it to the original paragraph. Keep refining your paraphrasing skills by doing this each time you read an article.

Jump onto YouTube to watch free E2Language TOEFL videos and start learning TOEFL reading methods today! Start planning your TOEFL preparation time by following the link to this blog post here! Follow this list of links to quality TOEFL learning material right here!
https://blog.e2language.com/toefl-reading/
Multiple-choice tests appear to be simple at first, but how do you know which option to select while you're actually taking them? Is it possible that the third, or even the second, is the correct answer? No, the first is the right answer! Students become worried when they consider all of the alternatives, and they may pick incorrect answers even on questions they would otherwise be certain about. That's why we've compiled a list of the most useful hints; don't be afraid to use them when taking a crucial exam.

Do Not Be Hasty

Do not begin hurriedly examining the questions and attempting to solve them one by one as soon as the test sheet is placed on your desk. Instead, take a deep breath and examine the entire test quietly for a few minutes. This way, you'll have a better idea of what it's all about (and this is almost half the battle). And you'll be able to answer simple questions with complete confidence in their correctness. This preview aids in "programming" the brain for the exact details that will appear on the test. What is the benefit of it? You immediately begin to consider challenging questions that you have previously observed and memorized. Some multiple-choice questions contain information, hints, and even entire answers. One of the exams, for example, asks: which American president's death prompted Napoleon to impose a 10-day mourning period? Washington, Jefferson, Yoda, and Adams are all possibilities. Of course, we reject Yoda right away, but what do we do next? If you skip this question and continue completing the test, you could come across a true-or-false problem like this: is it true that Thomas Jefferson and John Adams went from foes to friends and remained friends until their deaths in 1826? Knowing that they both died in that year, and Napoleon five years earlier in 1821, you can deduce that the answer to the preceding question is George Washington. Such surprises, however, do not occur as frequently as we would like, so don't waste time playing detective games – it's best to prepare beforehand. If the preparation takes too long and you are unable to complete other projects on time, you may seek assistance from paper writing services. You can start by checking a royalessays review to make sure that the service is for you.

Subconscious Mind Activation

Barbara Oakley's book gave this strategy its name. What is the essence of it? Do not be terrified of adversity. If you come across a challenging task, take it on and think about it for a few minutes. If you're still having trouble with it, move on to the next one. Simultaneously, you will continue to solve it on a subconscious level, and if you return to the question later, your chances of giving the correct answer will be greatly increased. The idea is that when you spend time thinking about a difficult topic, your brain becomes stimulated and enters focused mode. And if you've used all of your concentration mojo on the exam and don't feel like you have any left for your tasks, TopsWriting may just be the help you are looking for.

Once Is Sufficient, But Twice Is Preferable

It's simple: don't be a slacker – read each question twice. This is critical because multiple-choice questions may be tricky: you only have a limited number of options to pick from, and you frequently can't resist the urge to select the one that appears to be the most acceptable at the time. The academics who construct the examinations, on the other hand, are not dumb, so be wary of falling into the trap.
One of the test questions, for example, asks you to identify "which of the options is not X, Y, or Z." You may miss this sneaky "not" and give the incorrect answer if you read the question inattentively (and just once!). Learn to pay attention. Some questions may have several valid answers, and you must choose the "most correct" one. Tasks like "all listed options are right" or "all listed options are wrong" fall into the same category. Again, take your time, re-read the task, and double-check that you understand what is expected of you. By the way, you may also find it interesting to check out hvtimes.com, where you can find out how modern technology influences your educational process. Maybe you will find a few ways to prepare for the exams as well.

To Summarize

You may be able to pass the test using these recommendations, but you must still prepare ahead of time to achieve the best results. So don't get your expectations up too high; you must rely on your expertise, with luck acting as a bonus.
https://geekinsider.com/what-is-the-best-way-to-pass-the-exam/
ACCA Pass Rates June 2018 Exam: What do I learn from examiners' comments?

Updated: Aug 19, 2019

This article is about the June 2018 ACCA exam pass rates and examiners' comments. If you would like to read the latest ACCA attempt examiners' comments and guidance, please click HERE for details. ACCA released an announcement on June 2018 exam pass rates for each paper. It is great to see 5,090 students become ACCA affiliates after the ACCA June 2018 exam. Congratulations to all of them. The June 2018 exam marked the following:
- An end to two professional module papers, P1 Governance, Risk and Ethics and P3 Business Analysis;
- An end to optional questions in all professional module papers – all questions become compulsory from September 2018;
- CBE extends to Bangladesh and Saudi Arabia.
In terms of ACCA pass rates, the June 2018 session results are encouraging, as many papers' pass rates are better than in prior attempts. However, that does not mean we are satisfied, since there are still many areas to improve. Let's start with what the examiners tell you to pay attention to in the next exam!

Skills Module Pass Rates

F5 and F8 are two challenging papers in the Skills Module, as can be seen from their low pass rates. The ACCA F5 pass rate is worse than the prior attempt and is the lowest in the recent 5 sittings. The ACCA F8 pass rate is slightly better than the March 2018 attempt, but I cannot say it is good. Half of the students sitting the F6 (UK) and F7 exams passed, and 48% of students attempting F9 also passed.

ACCA F5 – A difficult paper to pass

The ACCA F5 June 2018 pass rate is only 38%! It is very discouraging that fewer than 40 out of 100 candidates in the exam room can pass. Similar to other Skills Module papers, ACCA F5 has 3 sections: Section A is objective test questions, Section B is case-based objective test questions, and Section C is constructed response questions. In June 2018, 2 areas in Section A were found difficult by a lot of students:
- Calculation of material yield variance;
- Behavioural aspects of performance management.
Section B tests your knowledge of a number of topics in more detail than Section A. It covers the whole syllabus, and you need to read the case scenario and requirements very carefully. In the June 2018 exam, the following topics were seen in Section B:
- Relevant costing
- Lifecycle costing
- Variances
- Throughput accounting
- Pricing
- Rolling budgets
For Section C, five areas were presented in the questions, and their common problems in the June 2018 session were:
- Performance management: some candidates did not structure their answers to meet the requirements, and some found the ROI calculation challenging.
- Risk and uncertainty: unclear that costs can be both fixed and variable.
- Relevant costing: many candidates used financial accounting principles to value inventory rather than using relevant cost principles.
- Budgeting: the calculations failed to increase material cost to allow for an increase in activity and failed to inflate the overhead costs.
- Transfer pricing: weak in calculating the minimum transfer price.
You can refer to the following published paper questions for better preparation for the ACCA PM exam in future:
- Performance management – Sports Co from September / December 2017.
- Transfer pricing – Portable Garage (PG Co) from March / June 2018.
One last point for you to note is to make sure you understand how to use the spreadsheet functionality available.
Only simple functionality is needed – for example, you should know how to use the "SUM" function to calculate a total instead of typing the number. Detailed examiner's report can be found: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/f5/examinersreports/f5-examreport-j18.pdf

ACCA F8 – Tailor your answers to specific requirements

40% of students passed ACCA F8 in the June 2018 attempt, which is similar to previous sittings. The paper is different from other Skills Module papers in that 70% of the total marks come from written questions. One of the keys to passing it is to practice ALL questions so that you familiarize yourself with all types of questions and know how to apply your knowledge in answering them. The examiner highlights the following areas in Section A to which you should pay special attention:
- Professional ethics and application of ACCA's Code of Ethics and Conduct
- The level of assurance provided by review engagements
- Substantive testing, including testing on revenue, trade receivables and tangible assets
- Going concern
- Audit finalization and the final review; and
- Auditor's reports
It is also noted that "level of assurance" and "going concern" are the two most difficult areas for students. In Section B, five areas are seen:
- Audit framework and regulation
- Planning and risk assessment
- Internal control
- Audit evidence
- Review and reporting
In audit framework and regulation, the examiner usually asks students to identify issues in the scenario and suggest recommendations on those issues. However, it is common to see answers stating objectives instead of recommendations. Remember, a recommendation must be an action. You can check a past paper question on this topic, "Hurling Co" from March / June 2017. The most common question in planning and risk assessment is to identify and explain an entity's audit risks as well as the auditor's response to each risk identified. The examiner expects you to identify and explain 6 to 8 risks and exposures. Two samples are suggested for preparing for the next exam: "Blackberry Co" from March / June 2018 and "Cupid & Co" from September / December 2017. The problem in the internal control question is that many students failed to score marks on the knowledge requirements. You can check "Heraklion Co" from September 2016 and "Raspberry Co" from March / June 2018 to find what knowledge you have to master in this area. Audit evidence is the most challenging area in the ACCA Audit & Assurance exam. A lot of recent sittings' questions on audit evidence are on substantive testing, but many answers show incorrect substantive procedures or tests of controls. In addition, the answers should be tailored to the specific requirements in the scenario. Two good examples are highly recommended for preparing for the exam in future: "Dashing & Co" from September / December 2017 and "Gooseberry Co" from March / June 2018. Review and reporting requires an understanding of how subsequent events and going concern can inform the conclusion drawn from audit work. The question is usually scenario based, and the performance in June 2018 was satisfactory. One thing to highlight is that a number of students made mistakes in the materiality calculation. To understand more about auditor's reports and ISAs, "Airsoft Co" from March / June 2017 is a good example.
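On the materiality point noted above, here is a minimal sketch with hypothetical draft figures. The benchmark ranges below are the starting points commonly quoted in ACCA study texts, not rules mandated by ISA 320, so treat both the percentages and the figures as assumptions for illustration:

```python
# Hypothetical draft figures for an audit client
revenue      = 10_000_000
total_assets = 8_000_000
pbt          = 900_000  # profit before tax

# Benchmark ranges commonly quoted in study texts (guides, not ISA-mandated):
benchmarks = [
    ("0.5-1% of revenue",    revenue,      0.005, 0.01),
    ("1-2% of total assets", total_assets, 0.01,  0.02),
    ("5-10% of PBT",         pbt,          0.05,  0.10),
]
for label, base, low, high in benchmarks:
    print(f"{label}: {base * low:,.0f} to {base * high:,.0f}")
```

The point of working such a range is that the final materiality figure is a matter of auditor judgment within (or outside) these guides, which is exactly where weaker answers tend to go wrong.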
Detailed examiner's report can be found: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/f8/examinersreports/f8-examreport-j18.pdf

ACCA F9 – Improvement in the most difficult area

The ACCA F9 June 2018 pass rate is 48%, which is at a similar level to previous sittings. It is good to see the F9 pass rate stable at around 48%, with improvement noted this session in risk management, the most difficult area. Numeric questions usually score higher marks, while discursive questions are still challenging for some exam sitters. Section A consists of 15 objective questions (20 questions for CBE, of which 5 are seeded questions) covering the whole syllabus. Two difficult questions were presented: one is numerical and tests understanding of a forward rate agreement; the other, which students also found difficult, tests knowledge of the difference between overcapitalization and overtrading. Section B consists of 3 scenarios with 5 objective questions for each scenario. Notably, the June 2018 examiner comments highlight only three areas in Section B: investment appraisal, business valuation and risk management. I don't see comments on the financial management function this time, which were often found in the past. Questions on investment appraisal in Section B are mainly on capital rationing and specific investment decisions such as lease-vs-buy decisions. Some students still find these areas challenging. In regard to business valuation, some students found the price/earnings ratio calculation difficult because they were not familiar with the formula used. A big piece of good news in the ACCA F9 June 2018 attempt is an improvement in performance on the risk management question. More students now have a better understanding of risk management derivatives. However, you should keep in mind that hedging methods, interest rate parity and lead payments are still areas where many students cannot do well. Section C consists of 2 constructed response questions requiring students to display deeper knowledge of topics in working capital management, investment appraisal and business finance. In general, students performed better on calculation-based questions than discursive questions. Here are the highlights on the questions, drawn mainly from the syllabus areas of:
- Management of inventories, accounts receivable, accounts payable and cash
- Determining working capital needs and funding strategies
- Allowing for inflation and taxation in DCF
- Adjusting for risk and uncertainty in investment appraisal
- Specific investment decisions
- Sources of and raising business finance
- Estimating the cost of capital
Common mistakes in working capital management questions were an incomplete understanding of how to calculate accounting ratios and comment on them, weakness in comparing early settlement discounts with using factors, a lack of understanding of cash management models, and confusion between working capital financing policies and working capital management. Many good answers were found on investment appraisal questions requiring NPV calculations. Some errors were spotted, such as errors in adjusting revenue and costs for inflation, assigning tax-related cash flows to incorrect time periods and including incremental fixed costs in relevant cash flows. You need to be careful when calculating the sensitivity of NPV to a change in key variables, such as sales volume or discount rate.
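Since sensitivity calculations trip up many candidates, here is a minimal sketch with hypothetical figures. It uses the standard approximation taught for this syllabus area: the sensitivity of NPV to a variable is the NPV divided by the present value of the cash flows that variable drives.

```python
def pv(flows, r):
    # flows[t] is the net cash flow at the end of year t (t = 0 is today)
    return sum(cf / (1 + r) ** t for t, cf in enumerate(flows))

r = 0.10                                 # assumed cost of capital
outlay       = [-1000, 0, 0, 0, 0]       # initial investment
contribution = [0, 400, 400, 400, 400]   # sales-volume-driven inflows
fixed_costs  = [0, -50, -50, -50, -50]

net_flows = [o + c + f for o, c, f in zip(outlay, contribution, fixed_costs)]
npv = pv(net_flows, r)

# The % fall in sales-driven contribution that would reduce NPV to zero
sensitivity = npv / pv(contribution, r) * 100
print(f"NPV = {npv:.0f}; sensitivity to sales volume = {sensitivity:.1f}%")
```

On these assumed numbers the project can absorb just under a 9% fall in contribution before the NPV turns negative, which is the kind of one-line conclusion examiners expect after the calculation.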
A good example on this topic is "Pelta Co" from September / December 2017, which can help you understand the investment appraisal techniques required. Business finance consisted of two topics in this sitting: the first is sources of and raising business finance, and the other is estimating the cost of capital. Some students are not aware of the difference between an operating lease and a finance lease, so they cannot comment on it either. Cost of capital calculation answers were well done, with only some errors in the interpolation method for cost of debt estimation and some wrong use of book values instead of market values. Detailed examiner's report can be found: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/f9/examinersreports/f9-examreport-j18.pdf

ACCA F7 – One of the highest pass rate papers in the Skills Module (excluding ACCA F4)

The ACCA F7 pass rate in the June 2018 attempt is 50%, which is one of the highest among all papers in the Skills Module (except ACCA F4). Performance is good, but some areas that improved in prior sittings got worse in June 2018, e.g. preparation of single entity financial statements. Section A consists of objective questions, and 2 specific questions caused difficulty in June 2018:
- How to calculate both the amount of the deferred tax provision and how it should be adjusted in the statement of profit or loss;
- The impact on non-controlling interests (NCI) of a mid-year acquisition, the write-off of goodwill and the additional depreciation required.
Deferred tax calculation and NCI are usually found to be difficult questions for most students. You need to take care of them in your ACCA Financial Reporting paper preparation. Section B consists of three scenarios with 5 objective questions based on each. Generally speaking, it tests certain topics in more depth than Section A. In June 2018, the following topics were weak:
- Provisions and contingencies
- Changes in accounting policies
- Construction contracts
The examiner suggests some tips for Section B:
- Read the case through in its entirety without looking at the questions associated with it, then read it through again;
- Each case is designed to take about 18 minutes to work through, and if you complete it in much less than 18 minutes, it is likely you are not thinking about the subject matter in sufficient depth.
Section C questions are mainly drawn from the areas of:
- Preparation of single entity financial statements
- Analysis of consolidated financial statements
- Preparation of consolidated financial statements
- Analysis of single entity financial statements
Easy marks are usually found in "preparation of single entity financial statements", but the performance in June 2018 was not as good as in previous sittings. The three most challenging technical topics for some students were: 1) professional fees incurred on a financial instrument; 2) deferred tax relating to a revaluation; 3) entries in respect of leases (IFRS 16). Two good past paper questions on this area are "Triage Co" from September 2016 and "Moston Co" from September / December 2015. "Analysis of consolidated financial statements" usually asks students for minor calculations, adjustments and ratios, followed by commentary. In the June 2018 attempt, the goodwill calculation was done well, but some students could not correctly handle adjustments such as intra-group sales. Accounting ratios were done well, but a number of students did not show workings in the CBE. This is worrying, as they cannot score any partial marks if their answers are not correct.
Very limited or zero commentary on the analysis results in a very low mark. Two good past paper questions are suggested: "Perkins" from March / June 2018 and "Gregory Co" from September 2016. Preparation of consolidated financial statements was done well by candidates. Two common errors were found in the June 2018 attempt:
- Forgetting to unwind the discount in relation to any deferred consideration that was payable for a subsidiary;
- The omission of the profit split between the parent's shareholders and the non-controlling interest.
Three good examples for your reference are "Party Co" from September / December 2017, "Dargent Co" from March / June 2017 and "Bycomb Co" from June 2015. The most disappointing area in the ACCA Financial Reporting June 2018 attempt was analysis of single entity financial statements. Ratio calculation is often done well, but in recent sittings net asset turnover and interest cover have been surprisingly difficult for a number of candidates. Next, many ratio calculations had no workings to support the answer. It is hard for the marker to award any partial marks if the answer is wrong. The highest scoring candidates used headings, a sensible summary and comments on ratios. In addition, they referred to the narrative information from the scenario in answering the questions. Two good examples in this area are "Mowair Co" from September / December 2017 and "Funject Co" from March / June 2017. Detailed examiner's report can be found: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/f7/examinersreports/f7-examreport-j18.pdf

Professional Module Pass Rates

P1 Governance, Risk and Ethics and P3 Business Analysis pass rates meet their historical averages, with 54% of students sitting P1 passing the exam and 52% passing P3. These two exams will be replaced by a new and innovative case study, Strategic Business Leader, which examines technical, ethical and professional skills in a real-life scenario. P2 Corporate Reporting remains at around a 50% pass rate. The option papers' pass rates vary: P4 Advanced Financial Management and P6 Advanced Taxation are better than P5 Advanced Performance Management and P7 Advanced Audit & Assurance.

ACCA P4 – Highest pass rate among option papers

The P4 pass rate has remained at around 40% for 2 consecutive attempts, which is the highest among the option papers. It shows that more and more students are now aware of how to prepare for and perform in the exam. If you score good marks in ACCA Financial Management (ACCA F9), I highly recommend you take ACCA Advanced Financial Management (ACCA P4) as one of your optional papers. Main weaknesses found in June 2018:
- Lack of detailed knowledge of the whole syllabus
- Spending too much time carrying out relatively simple calculation tasks
- Losing professional marks due to poor answer structure
- Writing in bullet points
- Not taking a balanced approach between calculations and discursive narrative questions
Question 1 is a 50-mark compulsory question. It asks candidates to work on present value computations and real options. It also asks about the consequences of employing soft capital rationing and possible sources of additional finance. Many candidates could not do well on the following:
- How real options could add to NPV;
- Lack of in-depth discussion of the use of an appropriate discount rate for financing the phase two project.
Question 2 is about business valuation. It asks for the advantages of and problems with the acquisition, a valuation based on free cash flows, and valuations using methods such as the P/E ratio and asset-based approaches.
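A step this kind of valuation question typically involves is ungearing a proxy company's equity beta and then re-gearing it for the target's capital structure. As a reference point, the ungearing relationship in the form commonly given in ACCA study materials is:

$$\beta_a = \left[\frac{V_e}{V_e + V_d(1-T)}\right]\beta_e + \left[\frac{V_d(1-T)}{V_e + V_d(1-T)}\right]\beta_d$$

where $V_e$ and $V_d$ are the market values of equity and debt, $T$ is the corporation tax rate, and the debt beta $\beta_d$ is often assumed to be zero. Re-gearing is the same relationship solved for $\beta_e$ using the target's own gearing and tax rate.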
Many candidates could not re-gear the asset beta in the discount rate calculation, and many responses on valuation methods were too general and lacked discursive depth. Question 3 asks about dividend capacity and the level of dividends required from a subsidiary to meet the company’s dividend policy. Common errors mainly involved finding the dividend capacity and failing to recognize agency issues between shareholders and the company’s directors. Question 4 requires candidates to consider the impact of an interest rate hedge using futures and options contracts. It also asks about specific issues in using derivatives to undertake hedging activity. Candidates found two topics difficult: 1) explaining how uncertainty in option pricing is measured and discussing influences on the level of uncertainty; and 2) explaining a swaption and demonstrating how it could work. The detailed examiner’s report can be found here: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/p4/examinersreports/p4-examreport-j18.pdf

ACCA P5 – Pass rate improved, but not enough

The ACCA P5 pass rate is usually at the bottom of all papers. Even though it improved slightly in the June 2018 session, it is still not good enough: it stands at 35%, lower than P4 and P6. The most common problem is not answering the requirement. Many answers simply provide definitions of “jargon” terms, which may be an adequate approach for ACCA F2 or F5, but it is not enough to pass ACCA P5. The examiner suggests three keys to passing ACCA P5:
- Key #1 – Have a good grasp of the basic knowledge
- Key #2 – Recognize that performance management, at an advanced level, is dependent upon situation and environment
- Key #3 – Be capable of analyzing and evaluating the situation in the scenario using technical knowledge

Question 1 asks about performance management issues, the supply chain and the use of Big Data at a clothing retailer. Five weaknesses were found in answers to this question: very few candidates addressed whether the report measured the four subsidiary critical success factors; many made the same point repetitively; recommendations for three new performance measures were poor; some focused on explaining the value chain, which does not address the requirement to simplify the supply chain; and some candidates wrote about the production process when the question asked about retail.

Question 2 was the most popular question in Section B; it asks about the use of ROI and RI as performance measures for a manufacturer of high-technology products. A number of students did not know how to calculate ROI and RI correctly, even though these calculations are in the ACCA F2 and ACCA F5 syllabus (a short worked sketch follows below). Many answers offered little or no analysis and evaluation of the results, but in ACCA P5 marks are balanced between calculations and analysis, especially when techniques such as ROI and RI are assumed prior knowledge.

Question 3 asks about applying the Argenti A-score model to an architectural consultancy firm. Many students performed well, displaying a good understanding of the three key facets of the model: defects, mistakes and symptoms. Question 4 considers the issues of budget setting and variance analysis at a manufacturer of moulded hulls for small boats. Even though many good answers were found on budget setting, the majority of students demonstrated a lack of fundamental technical knowledge of planning and operational variance analysis calculations. This basic knowledge is covered in ACCA F2 and ACCA F5.
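Since the ROI and RI basics carried over from F2/F5 tripped up so many candidates, here is a minimal sketch of the two measures as they are usually defined in that syllabus. All figures (profit, capital employed, the 12% cost of capital) are made-up illustrations, not taken from the exam:

```python
# Minimal sketch of ROI and RI with made-up divisional figures.

def roi(controllable_profit, capital_employed):
    """Return on investment: profit as a percentage of capital employed."""
    return controllable_profit / capital_employed * 100

def residual_income(controllable_profit, capital_employed, cost_of_capital):
    """Residual income: profit less an imputed interest charge on capital."""
    return controllable_profit - capital_employed * cost_of_capital

profit = 180_000        # controllable divisional profit
capital = 1_200_000     # divisional capital employed
required_return = 0.12  # assumed company cost of capital (12%)

print(f"ROI = {roi(profit, capital):.1f}%")  # ROI = 15.0%
print(f"RI  = {residual_income(profit, capital, required_return):,.0f}")  # RI = 36,000
```

In P5 the calculation is only the starting point: the marks lie just as much in interpreting what a 15% ROI or a positive RI implies about divisional performance and managers’ incentives.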
The detailed examiner’s report can be found here: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/p5/examinersreports/p5-examreport-j18.pdf

ACCA P7 – Time management problems are more serious than in other papers

The ACCA P7 June 2018 pass rate was 34%, slightly improved from the prior sitting but still the lowest of all papers in this session. There are two keys to success in the ACCA Advanced Audit & Assurance paper: a good understanding of both ACCA F8 (Audit & Assurance) and ACCA P2 (Corporate Reporting); and good time management skills, developed by practicing past papers. The examiner suggests taking P7 right after P2, as a lot of the knowledge in Corporate Reporting is useful in the P7 exam.

Question 1 asks about the business risks facing a construction company and requires an explanation of the significant risks of material misstatement when planning an audit. In addition, it asks for recommended audit procedures and comments on ethical issues in the given scenario. The weakness here was insufficient knowledge of the financial reporting syllabus, i.e. P2 Corporate Reporting.

Question 2 asks for comments on the quality control, ethical and professional issues raised following the review of an audit at the completion stage. It was generally well answered, except that a number of students failed to understand that the audit was at the completion stage and suggested actions, such as removing the auditor from the team or using separate teams, that would no longer be possible in this case.

Question 3 is about financial statement adjustments and the consideration of going concern after an accident at a nuclear power station. As the requirements in this question were quite straightforward, it was well answered, with relatively high marks.

Question 4 centres on the review of a group of companies and asks candidates to consider the matters outlined and to explain what audit evidence would be required. It was not well answered. The key issue is that many candidates were not familiar with the relevant accounting standard (IFRS 10), so they did not realise that the accounting for the acquisition was incorrect.

Question 5 is a reporting question divided into two parts: 1) discussing the benefits and difficulties of communicating key audit matters to users of the auditor’s report and how this addresses the audit expectation gap; and 2) appraising an extract from an auditor’s report. The answers produced were good; in particular, some students had evidently read the relevant article and were able to provide answers straight to the point. The detailed examiner’s report can be found here: https://www.accaglobal.com/content/dam/acca/global/PDF-students/acca/p7/examinersreports/p7-examreport-j18.pdf

Conclusion

The June 2018 session was the last exam before a revolutionary change in the ACCA exam structure, especially in the Professional Module. In the Skills Module, ACCA F5 (Performance Management) and F8 (Audit & Assurance) remain two challenging papers to pass. ACCA F5 builds on ACCA F2 (Management Accounting), so please make sure you are confident in all the basic knowledge before preparing for ACCA F5. ACCA F8 is not standalone either; some basic knowledge of ACCA F7 (Financial Reporting) is needed to pass. In the Professional Module, it is good to see the ACCA P4 (Advanced Financial Management) pass rate up and stable at its current level. If your score in ACCA F9 was good (say 65 or more), P4 is definitely a good option for you. ACCA P5 (Advanced Performance Management) and ACCA P7 (Advanced Audit & Assurance) remain the two most challenging papers.
You need to understand that the examiner’s expectations for P5 are different from those for ACCA F5: more evaluation, analysis and recommendations are needed. ACCA P7 should be taken right after ACCA P2 (Corporate Reporting), since a lot of accounting standards knowledge is required. Remember that starting from the September 2018 session, all questions at the Strategic Professional Level are compulsory. If you have any questions or comments, just leave them below and I will try to answer you shortly. Follow us on our Facebook Page for updates:
https://www.gotitpass.com/post/acca-pass-rates-june-2018-exam-what-do-i-learn-from-examinerse28099-comments
One of the more challenging question types in the Listening section is the PTE Multiple Choice, Multiple Answers question. Not only do you have to listen to an audio clip and understand it, you then have to answer a multiple choice question, based on it, that has more than one correct answer. Each audio clip will be 40 to 90 seconds long, and in your exam you will face two or three questions of this type. This is how your computer screen will look when you are answering this question:

The instructions at the top will always be the same, so do not waste any time reading them again. You will have 7 seconds before the audio starts playing. It is recommended that you use those 7 seconds to quickly read the question and the options. Doing this will focus your mind on what kind of information to listen for. For example, if the question asks you to identify the causes behind global warming, then when you listen to the audio you can pay attention specifically to picking out the causes.

Partial credit applies to this question type. This means that if you choose only one or a few of the correct options, you will receive marks for the ones you have chosen correctly. But remember, this question type also carries negative marking: if you pick an option that is not correct, you will receive a negative score for it. Your overall score, though, will never go below zero. So, always pick at least one option! (A short scoring sketch follows at the end of this post.)

This is a test of your listening skills, and it will specifically test the following.

Can you understand the main idea in a spoken text? You should be able to understand a short audio clip and identify the key theme in it. This is a critical skill that we need in our everyday lives. You should be able to identify what the key theme is and how the audio revolves around it.

Can you extract detailed information from what you have heard? Just as it is important to identify the theme, it is also important to pick out the detailed pieces of information. You should be able to pick out opinions, sequences of events, causes and effects, facts, etc. You should also be able to separate key information from supplementary information, and main points from examples.

Can you identify the relationships? When you are listening to an audio clip you should be able to understand how different pieces of information are linked to each other. For example, you should be able to identify when the speaker talks about a cause and when he talks about its effect.

Can you draw a conclusion? An indication of good listening ability is the skill of drawing a conclusion from what you have just heard. You should be able to remember and link the various pieces of information you just heard and build a conclusion from them.

You may or may not take notes while listening. If you are able to read the question and options quickly in the beginning, then you can just check the options as you listen to the audio. However, you may also want to note down some important information or key points if you think you haven’t yet understood the question and the options fully. Make sure you always listen to the full recording with complete focus. Sometimes an option that seems right at the beginning is contradicted by the speaker later on. For more information and practice materials you may check out the FREE PTE Practice tests or sign up for the FREE PTE course.
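To make the scoring rule above concrete, here is a minimal sketch of partial credit with negative marking, assuming one mark per option. The exact weighting is set by Pearson, so treat this as an illustration of the principle rather than the official algorithm:

```python
# Sketch of PTE Multiple Choice, Multiple Answers scoring:
# +1 per correct option chosen, -1 per incorrect option chosen,
# with the total floored at zero.

def mcma_score(chosen, correct):
    chosen, correct = set(chosen), set(correct)
    hits = len(chosen & correct)    # correct options you selected
    misses = len(chosen - correct)  # incorrect options you selected
    return max(0, hits - misses)    # the score never goes below zero

correct_options = {"A", "C", "D"}
print(mcma_score({"A", "C"}, correct_options))  # 2 -> partial credit
print(mcma_score({"A", "B"}, correct_options))  # 0 -> one hit cancelled by one miss
print(mcma_score({"B"}, correct_options))       # 0 -> floored at zero
```

This is why it always pays to select the options you are confident about: correct picks can only help, and the floor at zero means your total can never end up negative.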
https://surewayenglish.com/introduction-pte-multiple-choice-choose-multiple-answers-pte-listening/
Many students barely prep for the ACT. Unsurprisingly, many end up with an average ACT score. But breaking out of the average zone isn’t that difficult; a little prep goes a long way. The tips below will help you make the most of your prep time.

Reading isn’t just about comprehension; it’s also about endurance. Begin reading at least an hour a day. This can include class reading, as long as the material challenges you. I recommend reading long articles from the Atlantic Monthly, the New Yorker, or even Time Magazine on something that interests you. The more your brain is used to reading, the easier it will be for you to maintain focus on the long, arduous ACT reading section. Another tip for this section: take plenty of practice tests to get a feel for the way the ACT asks questions. Sometimes you’ll miss questions. Go back and see if the issue was that you misunderstood the question (read the question more carefully), misunderstood the passage, or were tricked by an answer that sounded right but was actually wrong. Misunderstanding the passage can happen because the reading is dense. That’s why it’s important to challenge yourself by reading relatively difficult material an hour a day (though you can skip a day here and there). The last part, getting tricked by an incorrect answer, is something you can work on. Try to understand what made your answer wrong and why the correct answer was right. Doing this will help you avoid mistakes in the future.

Like math, writing is based on a number of rules. Your first order of business should be to learn the range of grammar being tested. The good news is that, like the math, the ACT writing section only tests a limited number of grammar concepts. To get a sense of these concepts, you’ll want to take an official practice test. There, you’ll also encounter questions that aren’t only about grammar. These questions will test your ability to identify the logical flow of ideas. Here’s a great tip: when dealing with these questions, make sure to read the entire paragraph in which the question appears. These questions are usually related to the flow of ideas in the paragraph, and if you don’t read it completely, you’ll struggle to arrive at the correct answer.

You learn a lot of math in your four years of high school. The ACT isn’t expecting you to remember all of it. In fact, most of the questions fall into relatively predictable categories. For the basics, you’ll want to know percents, decimals, and exponents. You’ll want to be able to handle word problems that test discounts or rates (think moving vehicles). Finally, remember your geometry: you should be able to find the area of circles, triangles, and quadrilaterals. For the more advanced material, you’ll want to know the following: basic trigonometry identities, the unit circle, logarithms, permutations and combinations, and matrices. To get a good idea of this range of topics, pick up the new ACT guide and take an official ACT practice test (there are a few of them in there). As you are going through it, make a note of the topics you are weak in. Then, after the test, make sure to study up on those areas. Make sure not to go too deep into any one area; the ACT tests the big ideas from each. So, just make sure you understand how logarithms work (see the short sketch below); don’t look up advanced logarithms online and try to become an expert.
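As a sketch of the level of logarithm fluency that is usually enough, here is a quick numerical check of the basic identities (the specific numbers are arbitrary):

```python
import math

# The core log facts for the ACT: a logarithm inverts exponentiation,
# and products/powers turn into sums/multiples.

x, y, base = 8.0, 32.0, 2.0

assert math.isclose(math.log(x, base), 3.0)  # 2**3 = 8, so log2(8) = 3
assert math.isclose(math.log(x * y, base),
                    math.log(x, base) + math.log(y, base))  # log(xy) = log(x) + log(y)
assert math.isclose(math.log(x ** 4, base),
                    4 * math.log(x, base))                  # log(x**n) = n*log(x)
print("Basic log identities check out.")
```

If you can state and apply those three facts under time pressure, you know roughly as much about logarithms as the ACT tends to ask.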
https://www.admitsee.com/blog/top-tips-for-an-above-average-act-score
Read the Parker Family Episode (Attached). The clients express hostility toward each other, as well as toward the social worker. In addition, Stephanie asks the social worker for self-disclosure when she asks, “Wouldn’t you?” and “You really think you can fix that?” The scene ends with the client and social worker falling into silence. Consider the challenges depicted in the video. How would you respond? - Explain when it would be appropriate to use self-disclosure. - Provide a specific example of the type of self-disclosure you might use in this scenario. - Identify an interviewing technique you learned from this week’s resources that you would use when working with this client. - Provide a specific example of the interviewing technique. - For example, if you would use an empathetic statement or an open-ended question to elicit information, provide a specific example of the statement or question that you would use. - Explain why you would use this technique.
https://task-writers.com/walden-essential-skills-for-social-work-practice-questions-discussion/
Essentially, your argument is going to be composed of premises (things you assume are true), corollaries (things which are true if the premise they’re related to is true), and conclusions (things that are true if all the premises are true). A lot of the time you can clearly identify your conclusion based on terms like “should”, “must”, or “clearly”. If the question asks you “the statements, if true, most strongly support which of the following conclusions?”, then you can probably assume they’re asking you to extrapolate from the conclusion to a further conclusion. If it asks you “which would most strengthen the argument”, they’re asking you to extend from one or more of the premises to a conclusion. If it asks you “which would most weaken the argument”, they’re looking for you to understand the gap between one or more of the premises and the conclusion.

In the answer choices, you will be overwhelmed with a bunch of options that all seem plausible at first glance. Your task will generally be to sort between what’s sufficient, what’s necessary, and what’s neither, and answer according to the question prompt. For instance, if you’re asked what the primary flaw in the argument is, they’re probably asking you where the premises fail to be sufficient evidence for the conclusion. What’s the difference? Well, think about this example: for an animal to be a dog, it’s necessary for it to have a tail; it’s sufficient for it to be a canis familiaris; and it’s neither for it to have an owner (a tiny worked sketch of this distinction follows at the end of this post).

Unlike some of the other sections, it can be very difficult to predict what the right answer to a Critical Reasoning question is without reading through the answers. To that end, read through the answers carefully, and get used to reading the prompt quickly enough that you can keep it in mind while reading the answer choices. This is a vital skill to master if you’re to score well on GMAT Critical Reasoning.

So, now you’ve read these GMAT Critical Reasoning tips and you’re ready to go? Great! Email me at the address in the header and I’ll set you up with a diagnostic test and an introductory meeting. My name’s Trevor Klee; I’m a Boston GMAT tutor who scored 750 on my GMAT and I’d be happy to get you scoring the same.
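Following up on the dog example above, here is a tiny sketch of the necessary/sufficient distinction in code. The animals and predicates are toy assumptions for illustration, not material from a real GMAT question:

```python
# Toy model: necessary vs. sufficient conditions for being a dog.
# A condition is NECESSARY if every dog satisfies it,
# and SUFFICIENT if everything satisfying it is a dog.

animals = [
    {"name": "Rex",      "species": "canis familiaris", "tail": True, "owner": True},
    {"name": "Whiskers", "species": "felis catus",      "tail": True, "owner": True},
    {"name": "Stray",    "species": "canis familiaris", "tail": True, "owner": False},
]

def is_dog(a):
    return a["species"] == "canis familiaris"

def necessary(cond):
    return all(cond(a) for a in animals if is_dog(a))

def sufficient(cond):
    return all(is_dog(a) for a in animals if cond(a))

has_tail = lambda a: a["tail"]
has_owner = lambda a: a["owner"]

print(necessary(has_tail), sufficient(has_tail))    # True False -> necessary, not sufficient
print(necessary(has_owner), sufficient(has_owner))  # False False -> neither
```

Most wrong answer choices on Critical Reasoning questions live in exactly these gaps: they offer something necessary when the question needs something sufficient, or vice versa.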
https://www.trevorkleetutor.com/gmat-critical-reasoning-tips/
This CNE activity is jointly sponsored by AKH Inc., Advancing Knowledge in Healthcare and SLACK Incorporated. Support Statement There is no commercial support for this activity. Learning Objectives 1. Develop an increased understanding and knowledge about the clinical presentation of very late–onset schizophrenia-like psychosis (VLOSLP). 2. Describe and further reflect on factors that need thorough consideration when assessing and providing treatment for individuals experiencing VLOSLP, including risk assessment, safe use of medication, and engagement. 3. Learn the method and types of qualitative study and why this approach was suitable for the aim of the current study. 4. Understand the dynamic of depression literacy among Korean American parents. 5. Increase knowledge about the relationship between personality traits and substance use. 6. Identify prediction power of personality traits on substance use. Disclosure Statement Neither the planners nor the authors have any conflicts of interest to disclose. Accreditation statement(s) AKH Inc., Advancing Knowledge in Healthcare is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center’s Commission on Accreditation. Credit Designation 3.0 contact hours will be awarded by AKH Inc., Advancing Knowledge in Healthcare upon successful completion of this activity. A contact hour is a unit of measurement that denotes 60 minutes of an organized learning activity. This is a learner-based activity. AKH Inc., Advancing Knowledge in Healthcare does not require submission of your answers to the quiz. A contact hour certificate will be awarded once you register, pay the registration fee, and complete the evaluation form. Release date: January 1, 2018. Expiration Date: December 31, 2020. How to Participate Read the articles in the activity, carefully noting any tables and other illustrative materials that are included to enhance your knowledge and understanding of the content. Be sure to keep track of the amount of time (number of minutes) you spend reading the articles and completing the quiz. Read and answer each question on the quiz. After completing all of the questions, compare your answers to those provided within this issue. If you have incorrect answers, return to the article for further study. Once you complete the online evaluation, a certificate will be automatically generated. Target Audience This CNE activity is primarily targeted to psychiatric and mental health nurses.
https://cme.healio.com/psychiatry/20180101/jpn-january-2018/front-matter
On this page are tips for listening in the IELTS test. If you have a question or a tip that you think would benefit others, let us know using the message form at the bottom of the page.

One of the common traps in the IELTS listening test is when a speaker makes a statement which is then changed. For example: ‘My phone number is 833 6634 – oh no, sorry, that’s my old number – my new number is 356 8232′. It is important to keep listening to the following sentence or two to confirm that the answer has not changed in any way.

A common issue with the IELTS listening test is not staying focused on the recording so that you catch the answer when it comes. It is surprising how often, even though you are serious about passing the IELTS test, your mind can start to wander when listening to a recorded conversation, and you can easily miss an answer. One technique to help is to imagine that you are actually part of the conversation, even though you are not saying anything. Think about where the speakers are, how old you think the speaker or speakers are, what they are wearing, etc. By putting yourself ‘in the picture’, it is often easier to stay focused.

Having only the next question in your mind as you listen means that you can lose points quickly – if you miss the answer, you may find yourself waiting and waiting, only to find that the answer has gone, as well as the next two or three answers. Get into the habit of planning the next questions ahead. For example, if you are waiting for the answer to Question 3, also make sure you know what is required for Question 4 – if you hear the answer to Question 4 first, then you have already missed Question 3 (the answers come in order). You may have lost a point, but at least you are back on track.

In between Sections 1, 2 and 3, there is a short break for you to read the questions, but at the end of each of these sections, you are also given half a minute to check your answers. Although it is worth having a quick check to make sure you have an answer for each question, this time is better spent pre-reading the next set of questions, not rereading old answers. The more prepared you are for the next set of questions, the better your results. Remember that you are given time at the end of the recording to transfer your answers to the answer paper, so don’t worry about writing neatly on your question paper.

In the time you have to pre-read the questions, make sure you are highlighting key vocabulary or points that you think will help you identify the correct answers. You are given a question paper and a separate answer sheet, so you can write on, underline, circle or otherwise mark your question paper as you see fit. Underlining or circling key words will help you stay focused and be clear about what you are listening for.

You are not penalised in the IELTS test for an incorrect answer in the listening or reading sections, so even if you are not sure or don’t know, always write something, even if it’s just a guess. You might get lucky, and it certainly won’t hurt!
https://ieltsforfree.com/tips-for-ielts-listening/
The Four Question Method wasn’t explicitly designed to teach civics, but we think it does a really good job of it. In this post I’ll explain why teaching Question Two, “What were they thinking?” helps students to develop a critical civic disposition: listening to people who we expect to disagree with. FOUR QUESTION STRUCTURE The Four Questions were designed to structure historical inquiry, but they work equally well when applied to issues and events in the present day. Question One is “What Happened?” We start with a story, because you can’t think critically about events you don’t know very well. This is equally true about events that happened a century ago or a week ago. Question Two focuses on important people in the story and asks, “What Were They Thinking?” We want to understand how the key people in our story understood their world and the decisions that they made. We try to understand the world from their point of view. We call this understanding “historical empathy.” It does not require agreement – indeed, we are often trying to understand people who we would not agree with if we met them today. For example, we want to know what Jefferson was thinking when he wrote that “all men are created equal” while he also owned men and women as property. In order to achieve historical empathy we have to practice the thinking skill of interpretation. This means using evidence from the past to try to understand the minds of the people who created it. When we do a full Question Two inquiry lab in the classroom we usually work from primary source documents, especially in the upper grades. But we can also interpret artifacts, images, or patterns of behavior, which is more typical in the lower grades. Whatever the source, the 4QM interpretation process has three steps. First we identify and contextualize the source, then we summarize or describe it. What is it, how does it fit into our unit story so far, and what does it say? The third step in the 4QM interpretation process asks us to consider the purpose and assumptions of the person or people who created the source. What was their goal? What are the things they must believe to be true about their world or about human nature, even though they don’t state them outright? How does the source itself support these interpretations? AP GOVERNMENT: GUN CONTROL This three-step process of interpretation works equally well when we’re working in the present day. We got a recent example of this from David Nasser, an AP Government teacher at an urban charter school in Brooklyn, New York. David was teaching a unit on gun control, and wanted his students to examine a variety of positions on that topic. The hazard when teaching a tough contemporary topic like this is that students often have an opinion already, and moving away from their position during class can feel like a defeat. And, of course teachers worry that in today’s politically polarized environment classroom conversations can easily become one-sided or intensely angry. But David found that the 4QM structure helped him to turn down the temperature and broaden the discussion in his classroom. David assigned his students to read four position papers on gun control: from a Parkland High School student in Florida, a teen gun enthusiast from Iowa, a Black advocate of the Second Amendment as protection for Black people, and the head of the NAACP. Their assignment was to focus on Question Two: What were these authors thinking? 
What were their purposes in writing, and what were the assumptions underlying their positions? David reported that the lesson went really well, because the Question Two focus forced the students to postpone judgment. Judgment is the thinking skill associated with Question Four, “What do we think about that?” It’s the thinking skill that requires us to articulate and support our own positions on a question about good and bad, right and wrong. David’s gun control lesson succeeded precisely because “the kids couldn’t discuss their own positions, which is Question Four, but had to figure out what the authors were thinking and what their assumptions were.” David’s choice of sources was purposeful. He assigned two authors in favor of gun control and two opposed, and their purposes and assumptions were somewhat different in each case. By choosing this range of sources and by making it clear that this was a Question Two lesson, David prevented the kind of quick and confident judgment that can easily short-circuit classroom conversation. Instead of rushing to support people they assumed they would agree with, or to condemn people they assumed they would disagree with, students were forced to consider a range of positions on a serious issue carefully and thoughtfully. This approach allows a subsequent Question Four lesson to be broader and more thoughtful as well. Taking the time to understand the assumptions of people who hold a different position from ours might turn up important areas of agreement, and truly understanding a range of opinions on an issue opens up more possibilities for our own judgments. Even if we don’t change our minds, having students focus on Question Two before Question Four reminds all of us to examine and articulate the assumptions we carry behind our judgments. CIVICS IN A DEMOCRACY If democracies were made up of like-minded people, civil discourse wouldn’t be so difficult — but they’re not, and it is. Question Two thinking is excellent training for democratic citizens. To answer Question Two well means listening deeply to other people, past or present. It means taking them seriously and trying to understand them on their own terms. That might not change our ultimate judgment, but sometimes it might. And it will certainly make our judgments more thoughtful and considered, and our public conversations more civil. J.B.
https://4qmteaching.net/4qm-civics-question-two-helps-civic-discourse/
June 2016 LSAT Section 1 Question 11 Modest amounts of exercise can produce a dramatic improvement in cardiovascular health. One should exercise most days...

Ryan on December 15, 2019: Why is A correct? Why is C incorrect? Thanks

1 Reply

Annie on December 16, 2019: Hi @Ryan-Mahabir,

This question asks you to pick the answer choice which is “most strongly supported” by the passage provided. That means the passage provides the premises and you’re looking for the conclusion. Here’s a breakdown:

Premise: Moderate exercise can dramatically improve cardiovascular health.
Premise: A half hour of brisk walking most days of the week is enough for cardiovascular benefits.
Premise: More vigorous exercise is better, but a strenuous workout isn’t absolutely necessary.
Conclusion: ??

Answer Choices:

(A) is correct. The premises tell us that brisk walking is enough to see benefits, and that the more vigorous the exercise, the better. These two ideas are combined in this conclusion, as it tells us that strenuous workouts will result in dramatic improvement. This answer choice is tricky because premise 3 tells us that a strenuous workout isn’t necessary. But, while a strenuous workout may not be necessary to see some results, that doesn’t mean it is not still the best option. This conclusion tells us that, and it follows logically from the argument.

(C) is incorrect. This sentence flips the argument around. It states that 30 minutes of brisk walking will get you the same, or better, results than a strenuous workout. The premises have not told us this; rather, premise 3 tells us that the more vigorous the workout, the better it is for you. So, this statement is not supported by the premises.
https://testmaxprep.com/lsat/community/100005366-why-is-a-correct-why-is-c-incorrect
Why is there inconsistent voltage on a lighting circuit? Possible causes of a light circuit with incorrect voltage.

Electrical Question from Rex:
- I wonder if you can help me understand what is going on with my lighting circuit. I was in the process of replacing a ceiling light and, according to my non-contact voltage detector, I was unable to turn it off.
- I first thought that the 3-way switch was wired wrong, but found that it is wired correctly. Then, with my multimeter, I tested the voltage between hot and neutral in both switch positions and found something strange: on = 122V, off = 40V.
- There are other lights powered from this junction box, so I tested them also (supply hot to circuit hot) and found: a = 122V, b = 104V, c = 104V, d = 82V. The switch leg I was dealing with measured 12V and 23V between the leg and neutral with the main power disconnected. Do you know what could be causing these strange voltages?
- I found no continuity between the neutrals and grounds. I also isolated each wire and verified where each one goes.

Dave’s Reply: Thanks for your electrical wiring question, Rex.

Incorrect Voltage on a Light Circuit
Application: Incorrect voltage on a ceiling light.
Skill Level: Intermediate to Advanced – best performed by a licensed electrical contractor or certified electrician.
Electrical Tools Required: Basic electrician’s pouch hand tools, a voltage tester, and appropriate safety gear.
Estimated Time: Depends on your level of experience, your ability to work with tools and install electrical circuit wiring, and the available access to the project area.
Electrical Safety: Identify the electrical power source to the ceiling light, turn it OFF, and tag it with a note before working with the electrical wiring.
Electrical Wiring Parts and Materials: Electrical parts and materials for the ceiling light should be approved for the specific project and compliant with local and national electrical codes.
Electrical Codes and Inspections: Installing or changing home electrical wiring should be done according to local and national electrical codes as adopted in your specific area of New York. A permit and inspections may also be required.

Incorrect Voltage in a Kitchen Light Circuit
This electrical wiring project is about incorrect voltage in a ceiling light circuit in the kitchen of an old home.
- Great question, Rex! From the information you have provided, let’s take a look at a few possibilities that may be contributing to the electrical problem you have discovered:

Light Fixture Components
- It is possible that the inconsistent voltage readings are caused by the various light fixture types, the light fixture control switches, the types of LED light bulbs installed, or low voltage power supplies for the light fixtures.
- Keep in mind that LED lights typically operate with a built-in electronic power supply.
- Electronic power supplies can produce various problems on AC electrical circuits, including electrical line disturbances, and they can introduce electrically noisy conditions on the main power lines.

Improperly Configured Multi-Wire Circuits
- It is also possible that there are multi-wire circuits in the home that are not wired correctly, or that neutral wiring within the home has not been properly configured, or a combination of these two inconsistencies.
- As you can imagine, this is not something that a homeowner can fix, and to be honest, some electricians may not know how to track down and repair these problems. There are, however, highly knowledgeable electricians who can take care of this for you, though you may have to ask them specifically whether they know how to detect and repair incorrectly wired multi-wire circuits.
- Correcting this problem will involve performing electrical tests and measurements on the circuit wiring at each panel in the home, then tracking down the problem areas at specific electrical boxes and making the required changes to the wiring connections.
- Additional electrical circuit wiring may also need to be installed, depending on the extent of the circuit wiring problems that are discovered.
https://ask-the-electrician.com/causes-of-inconsistent-voltage-on-a-circuit/troubleshooting/
Indo-Europeans: general name for the people speaking an Indo-European language. They are (linguistic) descendants of the people of the Yamnaya culture (c.3600-2300 BCE) in Ukraine and southern Russia, and settled in the area from Western Europe to India in various migrations in the third, second, and early first millennium BCE.

The Problem
It has always been known that many languages in Europe are related. Italian, Spanish, Rumanian, and Portuguese are descendants of ancient Latin. English, Dutch, German, and the Scandinavian languages go back to the dialects of the ancient Germans. The old languages of Ireland, Wales, Cornwall, and the isle of Man share a common ancestor in ancient Celtic. In the late eighteenth century, however, European scholars recognized that these languages were related to the ancient Persian and Indian languages as well. The existence of this “Indo-European language family” came as a surprise. How to explain this? Nineteenth-century experience offered several possibilities. Migration was well-known and could explain why languages in various regions could be similar. Alternatively, people might adopt first words and later complete languages from their neighbors. The nineteenth-century scholars usually opted for the first explanation: long ago, there had been an Indo-European nation that had, in a series of migrations, moved to western Europe and the Far East. But who were these people?

Looking for a Homeland
The first thing scholars needed to find out was the nature of the original homeland. They did so by looking at the shared vocabulary of the Indo-European languages. For example, if they all have similar words for the same trees and animals, you can say something about the homeland’s flora and fauna. There must have been bears, otters, vultures, cranes, salmon, beavers, oaks, junipers, apples. Most of these are quite ubiquitous, but the presence of otters and beavers suggests forests and extensive wetlands, which rules out large parts of Eurasia. Words like “king”, “wagon”, and “plow” are also interesting, because archaeologists can find elite burials, chariots, and agricultural tools. The quest for the original homeland has had several false starts, but the steady accumulation of data on the one hand (e.g., the discovery of the Hittite, Luwian, and Tocharian languages in the twentieth century) and the growth of our understanding of the ways languages evolve on the other hand have helped to falsify certain hypotheses. For example, scholars discovered that languages cannot evolve very rapidly or extremely slowly, which has helped to rule out theories that presupposed unusually swift or slow language change. In the late twentieth century, the “kurgan hypothesis” gained ground: the first speakers of Indo-European languages belonged to the Yamnaya culture, pastoral agriculturalists who buried their leaders in funeral mounds (kurgans, in Russian) and had domesticated the horse, which allowed long-distance travel.

The First Migrations
The Yamnaya culture (also known as the Pit Grave culture) flourished between c.3600 and 2300 BCE in Ukraine and southern Russia. Some of the Yamnaya people were farmers and cultivated the land, and others were nomads who roamed across the steppe with their flocks. Before c.3500 BCE, two groups branched off from the Yamnaya people. The first of these moved to the east, probably as shepherds looking for new fields in Siberia, and settled in the west of what is now China. Archaeologists call these people the Afanasievo culture.
These eastern settlers would continue to live there for centuries. Later, they would convert to Buddhism, and because we know the central concepts of this religion, the Buddhist texts written in western China can be understood. Their languages, which are closely related to the oldest Indo-European language, are called Tocharian A and Tocharian B.

The second group moved to the south, to the area of the Caucasus Mountains, where they must have lived for quite some time before they proceeded to Anatolia. Just like the Tocharians, they shared words for yoke and thill with the Indo-Europeans (proving that they had left after the Yamnaya culture had learned agriculture), but did not share the words to describe wagons, wheels, naves, axles, and so on. This proves that they left before the invention of the wheel and the wagon, which in turn proves that they branched off before c.3500 BCE. The arrival of this second group in Anatolia is documented in cuneiform texts found at Kültepe (ancient Kaneš), which refer to several wars. It is likely that at some stage, Kaneš itself was taken over. The descendants of these immigrants spoke Palaic, Hittite, and two Luwian languages. These can be documented in the Bronze Age. In the Iron Age, we find late forms of Luwian in Lydia (western Turkey) and in Lycia and Caria (southwestern Turkey).

Increased Mobility
As indicated above, the Yamnaya people had learned to domesticate horses and knew how to build wagons (for transport, with solid wheels). Chariots for warfare, with spoked wheels, were a later invention. Horses and wagons gave the Yamnayans the possibility to travel longer distances than before. This mobility had two faces: often they were just shepherds looking for fields, but they could be aggressive too. So, after about 3500 BCE, the Yamnaya region started to expand. As a consequence, linguistic changes no longer reached everyone. For example, it seems that people who were living along the shores of the Black Sea introduced an augment /e-/ to indicate the past tense, but this innovation was never accepted by the Indo-Europeans who lived further to the north. In the eastern part of the Yamnaya region, the /k/ became an /s/, and this innovation never reached the western dialects. Gradually, the single Indo-European language, spoken in an ever-expanding area, was falling apart. By the end of the Yamnaya period, in c.2300 BCE, people from the western end of the Yamnaya region must have been almost incomprehensible to people from the eastern end.

To the Balkans
At that moment, the Indo-Europeans had already moved far away from their homeland in Ukraine and southern Russia. Following the western shore of the Black Sea, one group reached the Lower Danube. Here, the speakers of the Indo-European languages are called the Usatovo culture. One of the most interesting artistic objects associated with this migration is a kind of stone stele, representing a man or a woman. These monuments have been found along the roads to the places where copper could be obtained. The art of making these monuments was taken from the Crimea westward by the Indo-European migrants. Although the area of the western Black Sea and the Lower Danube is fertile and offers everything people might have needed, some of them continued to travel. Some moved upstream, along the river, to the arc within the Carpathian Mountains. They are commonly associated with the Cotofeni culture.
Moving even further, they were the carriers of the Corded Ware culture, the ancestor of the Italo-Celtic and Germanic branches of the Indo-European languages.

To Greece
Although recent research has clarified the main outline of the Indo-European migrations, some puzzles remain. One of these is the origin of the Greek language and the moment of its arrival in what is now Greece. There is no real discontinuity in the archaeological record, which on the one hand suggests that the first speakers were not warriors but pastoralists who gradually infiltrated Greece, and on the other hand makes them archaeologically untraceable. An additional puzzle is the relation between Armenian and Greek, because the two languages are quite close from a linguistic point of view but geographically quite far apart. The relation to the Thracian and Macedonian languages, which are not really well known, makes things even more complex. The custom of burying leaders in funeral mounds lived on in Thrace well into the Roman age. The Mycenaean tholos graves and the tumuli mentioned by Homer are other leaves from this tree. It is certain that the original Greek language fell apart into two branches: Mycenaean Greek, written on Linear B tablets in the Bronze Age and living on as the Ionic and Attic dialects of the classical age; and Doric, surfacing a bit later.

To the East
Towards the end of the third millennium BCE, when the Yamnaya and Corded Ware cultures had already been replaced by their successors, groups started to move to the east. Archaeologists call them the Sintashta culture and - in a later phase - the Andronovo culture; linguists call them the Indo-Iranians. They may have called themselves “Aryans”, a word that is known from early Persian (arya-) and Indian (árya-) sources. In what is now Uzbekistan, this group appears to have split up, one branch ending up in the Punjab and the other in Iran. The movement of this second group is documented in the spread of a simple kind of grey ceramics that can be seen in every museum in Iran. Perhaps the division was caused by a religious dispute, because the words for “demon” and “deity” are linguistically related but theologically opposite (Indian: asura and deva; Persian: daiva and ahura).

Indo-European DNA
At the beginning of the twenty-first century, scholars were becoming increasingly convinced of the kurgan hypothesis described above. It was confirmed in 2015, when two research groups independently discovered that Indo-European men shared a Y-DNA haplogroup called R1a. This is found in western Europe, in Ukraine, southern Russia, Uzbekistan, Iran, and among the priestly caste on the Indian subcontinent. A related haplogroup, R1b, is more specific to western Indo-Europeans. Another discovery was that the Indo-Europeans shared a genetic modification that allowed them to drink the milk of non-humans (e.g., goat’s and cow’s milk). Lactase persistence gave the dairy pastoralists access to an additional source of food and may offer a partial explanation of their success. A success, however, that was also achieved by violence and murder: while in western Europe there is continuity in the mitochondrial DNA that people inherit from their mothers, a typically male Y-DNA group like G2a came to an end. The immigrants must have killed the original male population, and we can imagine what the Indo-European men did to the female half of the original population.

Literature
- J.P. Mallory, In Search of the Indo-Europeans (1989)
- B.W. Fortson, Indo-European Language and Culture (2010²)
- D.W. Anthony, The Horse, the Wheel, and Language (2007)
- J. Manco, Ancestral Journeys. The Peopling of Europe from the First Venturers to the Vikings (2014²)
- E. Callaway, “Steppe Migration Rekindles Debate on Language Origin”, Nature, 18 February 2015
http://www.livius.org/articles/people/indo-europeans/
Lecture 25: A New Perspective on the Story of English
We trace English back to its earliest discernible roots in Proto-Indo-European and follow its fascinating development, including an ancient encounter with a language possibly related to Arabic and Hebrew.

🎧 Lecture 9 of The Story of Human Language by John McWhorter
Lecture 9: Language Families—Tracing Indo-European
Linguists have reconstructed the proto-language of the Indo-Europeans by comparing the modern languages. Applying this process, we learn the Proto-Indo-European word for sister-in-law that was spoken 6,000 years ago.

🎧 Lectures 6-8 of The Story of Human Language by John McWhorter
Lecture 6: How Language Changes—Many Directions
The first language has evolved into 6,000 because language change takes place in many directions. Latin split in this way into the Romance languages as changes proceeded differently in each area where the Romans brought Latin.

Lecture 7: How Language Changes—Modern English
As recently as Shakespeare, English words had meanings different enough to interfere with our understanding of his language today. Even by the 1800s, Jane Austen's work is full of sentences that would now be considered errors.

Lecture 8: Language Families—Indo-European
The first of four lectures on language families introduces Indo-European, which probably began in the southern steppes of Russia around 4000 B.C. and then spread westward to most of Europe and eastward to Iran and India.
https://boffosocko.com/tag/proto-indo-european/
Archaeologist Peter Bellwood’s academic odyssey has spanned from England to teaching posts halfway around the world, first in New Zealand and then Australia. For more than 50 years, he has studied how humans colonized islands from Southeast Asia to Polynesia. So it’s only fitting that his new book, a plain English summary of what’s known and what isn’t about the evolution of humans and our ancestors, emphasizes movement.

In The Five Million Year Odyssey, Bellwood examines a parade of species in the human evolutionary family – he refers to them collectively as hominins, while others (including Science News) use the term hominids (SN: 09/15/21) – and tracks their migrations across land and sea. He gathers evidence indicating that hominids on the move continually changed the direction of biological and cultural evolution. Throughout the tour, Bellwood presents his own take on contested topics. But when the available evidence leaves a debate unresolved, he says so.

Consider the first hominids. Species 4.4 million years old or older whose hominid status is controversial, such as Ardipithecus ramidus, get a brief mention. Bellwood renders no verdict on whether these finds come from ancient hominids or ancient apes. Rather, he focuses on the African australopithecines, a collection of upright but partly apelike species thought to have included populations that evolved into members of our own genus, Homo, about 2.5 to 3 million years ago. Bellwood maintains that the manufacture of stone tools by the last australopithecines, the first Homo groups or both contributed to the evolution of larger brains in our ancestors.

The action picks up when Homo erectus became the first known hominid to leave Africa, around 2 million years ago. Questions remain, Bellwood writes, about the number of such migrations and whether this human species reached distant islands such as Flores in Indonesia, possibly giving rise to small hominids called hobbits, or Homo floresiensis (SN: 03/30/16). What is clear is that H. erectus groups traveled across mainland Asia and at least as far as the Indonesian island of Java.

Intercontinental migration flourished after Homo sapiens originated about 300,000 years ago in Africa. Bellwood regards H. sapiens, Neanderthals and Denisovans as separate species that interbred in parts of Asia and Europe. He suggests that Neanderthals died out around 40,000 years ago as they mated with members of more numerous H. sapiens populations, leaving a genetic legacy in people today. But he does not address an opposing argument that different Homo populations at this time, including Neanderthals, were too closely related to have been separate species, and that it was intermittent mating among these mobile groups that drove the evolution of present-day humans (SN: 05/06/21).

Bellwood pays considerable attention to the rise of food production and domestication in Europe and Asia starting about 9,000 years ago. He relies on an argument, drawn from his 2004 book First Farmers, that the expanding populations of early cultivators migrated to new lands in such numbers that they spread large language families with them. For example, farmers in what is now Turkey spread Indo-European languages across much of Europe around 8,000 years ago, Bellwood claims. He rejects a recent alternative proposal, based on ancient DNA evidence, that horse herders of the Yamnaya culture from Central Asia brought their Indo-European traditions and languages to Europe around 5,000 years ago (SN: 11/15/17).
Too few Yamnaya migrated, he says, to impose a new language on European communities. Likewise, he argues, ancient Eurasian conquerors, from Alexander the Great to the Roman emperors, could not get speakers of regional languages to adopt the new languages spoken by their outnumbered military masters.

Bellwood completes his evolutionary odyssey with a reconstruction of how early agricultural populations expanded across East Asia and beyond, to Australia, a chain of Pacific islands, and the Americas. Around 4,000 to 750 years ago, for example, seafaring farmers spread the Austronesian languages from southern China and Taiwan as far as Madagascar in the west and Polynesia in the east. Exactly how they accomplished this remarkable feat remains a puzzle.

Unfortunately, Bellwood does not weigh in on a recent archaeological argument that ancient societies were more flexible and complex than long thought (SN: 09/11/21). On the positive side, his evolutionary odyssey moves at a steady pace and, like our ancestors, covers a lot of ground.

Buy The Five Million Year Odyssey from Bookshop.org. Science News is an affiliate of Bookshop.org and will earn a commission on purchases made from links in this article.
https://nostrich.net/the-five-million-year-odyssey-reveals-how-migration-shaped-humanity/
I was always taught, in school and elsewhere, that the reason the Romans and Greeks had such similar gods with seemingly different names was that the Romans had borrowed their gods from the Greeks, or rather, refashioned their existing gods to take on aspects of similar Greek deities, as part of their overall adoption of Greek culture, their gradual Hellenization. For many reasons that I won't go into, this explanation never satisfied me. That isn't to say that I thought it was false on its face, because in many ways the statement is true. The Romans did borrow some Greek deities, as well as deities they found in other areas they intended to conquer. That is a partially valid way of seeing the crossover of the Roman and Greek pantheons. What bothered me about it was that it seemed like such an incomplete answer, wrapped up in a nice package so that it easily fits into one of those data centers in the mind, without too much reorganization. When an explanation like that is all that's on offer, and the subject is of more than passing interest to me, the inadequacies tend to gnaw at me. Over time, I began to believe that while this explanation had some truth to it, it amounted to less than half of the real story. I began to believe that the "adoption" account of religious assimilation might better describe the syncretism of peripheral deities that developed later in Greek culture and that may have had no counterparts in Roman culture (and vice versa), but that the foundational deities of the Roman and Greek pantheons seem to have come ready-made for intellectual integration; so much so, in fact, that I suspect it didn't require much effort or even conscious intent. I've come to believe that, more often than not, the Romans had always had strikingly similar foundational gods to the Greeks, with parallels so unmistakable that the borrowings would have happened almost on autopilot, rather than by an intentional editing of mythological underpinnings. In fact, this pattern could not have held with only the Greeks. The story of the Roman conquest of Europe is the story of a people who once saw their deities as unique to Rome being forced to come to terms with the fact that those deities belonged to a shared mythology. The primary members of the pantheons of the European, and in some cases Asian, cultures they conquered seemed to be images of their own, reflected through a very slightly warped mirror. By no means was the similarity limited to the gods of the Greeks. For instance, when Caesar campaigned against and ultimately conquered Gaul, a group of loosely unified Continental Celtic tribes inhabiting what is today the territory of France, he found their gods so similar to Roman and Greek foundational deities that he resorted to calling them by Roman names in his written descriptions. Later, when Rome under Emperor Claudius conquered some of the Insular Celtic tribes, the Welsh and others, living on the main island of Britain, they found that not only were their deities strikingly similar, but even the festivals of those deities resembled the festivals of their corresponding deities, in such details as the time and season of observance, the practices and customs of celebration, and the symbolism and iconography at their center. As a result, the Romans had no issue at all in reconciling their observances with those of the conquered Celts. The conquered Celtic tribes seemed equally comfortable with these merged observances.
Ultimately, very little changed, and even the differences seemed complementary. For instance, the Romans immediately recognized in their dealings with the Greeks that their Spring goddess "Libera" was strikingly similar to the Greek Spring goddess "Persephone": both had a Spring festival of grain that hovered around the Spring Equinox, both were daughters of a very similar goddess of agriculture (Greek: Demeter, Roman: Ceres), and both were married to the respective culture's deity of the underworld (Greek: Hades/Pluto, Roman: Dis Pater/Pluto). Eventually the Romans gave "Libera" a Latin rendering of the Greek name "Persephone", calling her "Proserpina". Later, when the Romans conquered the Welsh, they found that a similar goddess existed there, with a similar festival centering on the Spring Equinox, and with similar parentage and origins, allowing for some variations. Before long, Romans in Welsh areas were observing the Welsh festival to their own goddess "Libera", and didn't see enough difference to feel that any sacrilege had been involved. I am not implying that there were no important differences between the respective pantheons, or between respective deities within the pantheons, or even in the practices; but I am saying that the similarities were so striking that the differences did not disrupt the unity that was obvious to the minds of both the Romans and their conquered peoples. We can find so many other such situations throughout the Greek and Roman pantheons that it must have been a great mystery to the Romans at the time they sensed it. Not to leave the other vassals out, we can find other obvious connections between the Roman pantheon and the Germanic and Norse pantheons. And the ancient Slavic and Baltic pantheons. And the known Celtic pantheons. And even the pantheons of the ancient Persians and the ancient and modern Hindus. For instance, as an extension of the previous example of the relationship between the Greek Persephone, the Roman Libera/Proserpina, and the Insular Celtic goddess, it is clear that the Romans were not the only expanding powers that saw this relationship. When Anglo-Saxon invaders came, and later Norse invaders, often called "Vikings", they came with their own goddess and her dedicated Spring festival, around the same time as that of the Celts, named "Eostre" and "Ostara" respectively. These Germanic and Norse names for the goddess seem to go back to a more ancient cognate in the Greek and Roman mythologies, namely Eos and Aurora respectively. Despite name changes and evolutions, however, the festival and goddess retained the same conceptual identity, timing, and practices. Going back much further, into earlier Indo-European power blocs, the Medo-Persian Empire had a similar goddess who, despite the unification and simplification of Persian deities under Zoroastrianism, survived at least as a Persian name for girls. We even find the name of this goddess in the Tanakh (Old Testament), in the Megillah (Book of Esther). The Jewish maiden who married the great King of Persia and intervened to prevent an impending holocaust of the Jews in the Persian Empire had, of course, a Hebrew name: "Hadassah", meaning "myrtle leaf". Yet she clearly also used a Persian name as a legal name within the Persian Empire, and that name was "Esther", a name clearly similar to names that came out of many other Indo-European pantheons.
The relationship between this name and both the Indo-European counterpart names of goddesses and other Middle Eastern goddesses is now well accepted. These similarities are too striking to be coincidence, and they go on and on. The situation forces us to ask how they came to be. Linguistics, the scientific study of human language, has actually given us a hint as to how this may have happened. A few years ago, I learned that linguists have long understood that European and some Asian and Middle Eastern cultures were speaking languages that seemed to have evolved from a common root. The first real connection was made between Greek and, of all languages, Sanskrit, the ancient language of the early Vedic peoples who migrated into India and eventually overtook its culture. These are the people we would today call Indian or Hindu. Then other connections were made. Many European and Asian language families were too similar not to have come from a very ancient root language. Languages in the Germanic (including English), Celtic, Italic (including Latin, and thus the Romance languages), Greek, Indic (including Sanskrit and Hindi), Persian (and thus Farsi), Baltic, and Slavic (including Russian) language families had clearly evolved from a similar root. Even many ancient migratory peoples, such as the Hittites, the Philistines, and the Tocharians, seemed to have languages which had evolved from this same ancient root. Linguists call this ancient language "Proto Indo-European". So sure are they in their analysis of the similarities between the languages they believe came from it, that they have been able to reconstruct an impressive amount of that ancient source language's grammar and vocabulary simply by comparing all of the Indo-European languages together and interpolating the original words and grammar from their similarities. I honestly believe we can do much the same thing when we analyze the pantheons of the cultures that evolved from that original Proto Indo-European culture. That ancient Proto Indo-European people, who spoke the language from which all Indo-European languages evolved, clearly also had a Proto Indo-European pantheon from which all Indo-European pantheons evolved. What this means in a nutshell is that most European cultures, as well as India, Persia (Iran), and many others outside the geographical confines of Europe, descend possibly genetically but certainly culturally from a single ancient people, a people which at some point was moved to migrate and settle in various areas. Once these migrations occurred, separation led to cultural and linguistic evolution, which is effectively why the incredible similarities are also saturated with seemingly inexplicable differences. I believe that is why there are so many similarities between Roman, Greek, Celtic, and Germanic deities. It is why we can see very clear similarities between Roman gods and the gods of both ancient and modern Hindus, and, before Zoroastrianism consolidated the Persian pantheon into only two deities, the gods of the ancient Persians as well. It is why what little we know of the ancient Baltic and Slavic deities fits right into the same patterns of similarity.
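Since the last paragraph leans on how linguists reconstruct Proto Indo-European by comparison, here is a minimal sketch in Python of the core idea. This is my own toy illustration with an illustrative five-word list, not a linguist's tool:

    # Toy sketch of the comparative method: count how often the same
    # sound correspondence recurs across cognate sets. Regularity, not
    # one-off resemblance, is what justifies reconstructing a common root.
    from collections import Counter

    # (Latin onset, English onset) pairs from well-known cognates.
    onsets = [
        ("p", "f"),   # pater ~ father
        ("p", "f"),   # pes ~ foot
        ("p", "f"),   # piscis ~ fish
        ("t", "th"),  # tres ~ three
        ("t", "th"),  # tenuis ~ thin
    ]

    for (latin, english), n in Counter(onsets).items():
        print(f"Latin {latin}- : English {english}-  ({n} cognate sets)")

    # Regular pairings like p~f and t~th (Grimm's law in Germanic) point
    # back to single ancestral sounds, reconstructed as PIE *p and *t.

Real reconstruction works over whole aligned word forms across many languages at once, but the logic, that recurring correspondence implies common ancestry, is the same logic the pantheon comparison borrows.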
Let me center on the most prominent example of the ancient heritage I am referring to. The ancient Hindu Vedas mention a deity called "Dyaus Pitr". Though it's clear that modern Hinduism has demoted him to an ancestral deity (allowing other gods to take his place and his attributes), it's equally clear that this god was once a chief Vedic god, likely even the chief Vedic god. Compare his name to that of the Greek chief deity Zeus, sometimes referred to as "Zeus Pater", and it should be rather obvious that the Greek name has a similar etymology to the ancient Vedic name. Compare further the name of the Roman chief deity, "Jupiter": "Ju-piter" sounds very much like a similar name which over time evolved into a different pronunciation. The original meaning may even have been lost on the Romans, but it seems to demonstrate that the origins of that deity go back a long time, long before the Romans would have felt pressure to conquer or integrate with Greece. Turning to the Germanic pantheon, its chief deity was "Odin", a former Germanic chieftain who became deified. Before "Odin" achieved this place within the Germanic and Norse pantheons, a deity named "Tyr" was the chief deity, with the properties usually attributed to Zeus, Jupiter, and Dyaus Pitr. Linguists have long understood that "Tyr" is a linguistic evolution from the same original root that "Dyaus", "Zeus" and the "Ju" in "Jupiter" came from. Linguists and scholars have essentially reconstructed the name of the ancestor of these common deities, and that name is "Dyḗws Ph₂tḗr", which in the Proto Indo-European language means "sky father". The connections are rather clear. When the Indo-European peoples migrated away from their original territories more than 5,000 years ago, they began to spread over the world as we know it. They migrated to and occupied most of Europe, large parts of Asia, and even parts of the Middle East. And as the progeny of the Proto Indo-European peoples migrated, the evolving progeny of the Proto Indo-European language and religion proliferated across the world. As a result, the various evolutions of the Indo-European pantheon once represented a significant part of the religious workings of the world; Indo-European gods were once served by a significant portion of the human population. Because the Roman Empire had conquered so many territories, particularly territories populated by adherents of Indo-European religions, and because that Empire made a seemingly sudden switch to Christianity as the state religion in the fourth century C.E., Christianity began to replace those religions at a rapid pace. Later, the rise of Islam in the eastern provinces also served to displace the Indo-European religions in territories that were originally outside of Western reach. The result is that today, very few of the practices that evolved from devotion and service to that ancient pantheon are still overtly kept. Zoroastrianism retains some continued adherence among small populations, and there are those trying to revive Greek, Roman, Celtic, Germanic, and Norse practices, but their numbers are few compared to the larger populations. Hinduism in India is thus the only devotional practice descended from the Indo-European pantheon that is still observed on a large national scale. Having said all of that, the children of the Indo-European pantheon are still a deeply embedded aspect of European and American cultures, and in fact are still a major part of their religious observances.
In English-speaking countries, for instance, we name our planets and months after Roman gods, our weekdays after Germanic and Norse gods, and some of our scientific concepts, such as chemical elements, after Greek deities; and many of our stories and fairy tales come from our Celtic, Germanic, Norse, and Greek past. And even though Christianity now rules the roost of European religious thought, it is clear that it has been so heavily syncretized with these past Pagan beliefs, rituals, stories, and concepts that it is doubtful the earliest adherents of what we now call the Christian religion would recognize it. Major Christian holidays such as Christmas and Easter are borrowed from older Indo-European festivals. For instance, the Spring festival of the Anglo-Saxon goddess "Eostre" and the Norse goddess "Ostara", which I mentioned earlier, is what we today celebrate as "Easter"; in fact, the name "Easter" is a modern English derivation from the goddesses' names, and even the time of year of the modern observance closely mirrors that of the older festivals. In many Nordic cultures, the Christmas festival is still called "Yule", its original name from Norse mythology. Whatever this festival is called, it clearly had no counterpart among the earliest Christians. Even many well-known Christian saints are generally thought to have been converted from Indo-European deities and, in some cases, demons: Saint Nicolas, Saint Demetrios, Saint Martin, Saint Lawrence, Saint Ormazd, Saint Venera, Saint Cyrinus, Saint Aphrodite, and many others. Veneration of Mary and prayer to various saints dedicated to specific purposes, practices which don't seem to have existed among the earliest Christians, follow clear models of worship from the Indo-European pantheons from which Christian adopters sprang after the Roman Empire adopted the religion. The point I am making is that the ancient Proto Indo-European culture and religion is still all around us, and it still permeates a great deal of our lives. I didn't intend this post as a thorough analysis of the subject; I've barely scratched the surface. I just wanted to introduce the topic, because it's something that has sparked my interest over the last few years. Just as linguists have reconstructed some part of the original Proto Indo-European language from its linguistic descendants, many scholars of the ancient world have reconstructed some part of the original Proto Indo-European pantheon from its cultural descendants. While the usual evolution has caused some of the ancient deities in the Proto Indo-European pantheon to be displaced or modified, by and large the connections are unmistakable. As for the question I asked in the title, "Did Rome adopt the Greek pantheon?", the answer is not a flat no, but it certainly isn't as cut and dried as simply ascribing the process to adoption. Adoption was so fluid an occurrence precisely because of existing similarities which arose from very ancient shared roots. It was no accident that as Rome conquered Europe and Asia and even parts of Africa and the Middle East, its people found so many similarities in the religious pantheons and practices they encountered. They recognized a kinship of sorts, even as they couldn't quite tell how such a kinship had come about. But we can at least trace some part of it and understand how that kinship arose.
Adoption certainly occurred, but the foundation of that adoption was a shared legacy in a forgotten past; any narrative that makes adoption the primary engine of religious syncretism between Rome and Greece, or between Rome and its many other vassals, is therefore at best incomplete and at worst misleading.
http://www.gervatoshav.com/
Ancient steppes for human equestrians

The Eurasian steppes reach from the Ukraine in Europe to Mongolia and China. Over the past 5000 years, these flat grasslands were thought to be the route for the ebb and flow of migrant humans, their horses, and their languages. de Barros Damgaard et al. probed whole-genome sequences from the remains of 74 individuals found across this region. Although there is evidence for migration into Europe from the steppes, the details of human movements are complex and involve independent acquisitions of horse cultures. Furthermore, it appears that the Indo-European Hittite language derived from Anatolia, not the steppes. The steppe people seem not to have penetrated South Asia. Genetic evidence indicates an independent history involving western Eurasian admixture into ancient South Asian peoples.

Science, this issue p. eaar7711

Structured Abstract

INTRODUCTION

According to the commonly accepted "steppe hypothesis," the initial spread of Indo-European (IE) languages into both Europe and Asia took place with migrations of Early Bronze Age Yamnaya pastoralists from the Pontic-Caspian steppe. This is believed to have been enabled by horse domestication, which revolutionized transport and warfare. Although in Europe there is much support for the steppe hypothesis, the impact of Early Bronze Age Western steppe pastoralists in Asia, including Anatolia and South Asia, remains less well understood, with limited archaeological evidence for their presence. Furthermore, the earliest secure evidence of horse husbandry comes from the Botai culture of Central Asia, whereas direct evidence for Yamnaya equestrianism remains elusive.

RATIONALE

We investigated the genetic impact of Early Bronze Age migrations into Asia and interpret our findings in relation to the steppe hypothesis and early spread of IE languages. We generated whole-genome shotgun sequence data (~1 to 25X average coverage) for 74 ancient individuals from Inner Asia and Anatolia, as well as 41 high-coverage present-day genomes from 17 Central Asian ethnicities.

RESULTS

We show that the population at Botai associated with the earliest evidence for horse husbandry derived from an ancient hunter-gatherer ancestry previously seen in the Upper Paleolithic Mal'ta (MA1) and was deeply diverged from the Western steppe pastoralists. They form part of a previously undescribed west-to-east cline of Holocene prehistoric steppe genetic ancestry in which Botai, Central Asians, and Baikal groups can be modeled with different amounts of Eastern hunter-gatherer (EHG) and Ancient East Asian genetic ancestry represented by Baikal_EN. In Anatolia, Bronze Age samples, including from Hittite-speaking settlements associated with the first written evidence of IE languages, show genetic continuity with preceding Anatolian Copper Age (CA) samples and have substantial Caucasian hunter-gatherer (CHG)–related ancestry but no evidence of direct steppe admixture. In South Asia, we identified at least two distinct waves of admixture from the west, the first occurring from a source related to the Copper Age Namazga farming culture from the southern edge of the steppe, who exhibit both the Iranian and the EHG components found in many contemporary Pakistani and Indian groups from across the subcontinent. The second came from Late Bronze Age steppe sources, with a genetic impact that is more localized in the north and west.
CONCLUSION

Our findings reveal that the early spread of Yamnaya Bronze Age pastoralists had limited genetic impact in Anatolia as well as Central and South Asia. As such, the Asian story of Early Bronze Age expansions differs from that of Europe. Intriguingly, we find that direct descendants of Upper Paleolithic hunter-gatherers of Central Asia, now extinct as a separate lineage, survived well into the Bronze Age. These groups likely engaged in early horse domestication as a prey-route transition from hunting to herding, as otherwise seen for reindeer. Our findings further suggest that West Eurasian ancestry entered South Asia before and after, rather than during, the initial expansion of western steppe pastoralists, with the later event consistent with a Late Bronze Age entry of IE languages into South Asia. Finally, the lack of steppe ancestry in samples from Anatolia indicates that the spread of the earliest branch of IE languages into that region was not associated with a major population migration from the steppe.

Abstract

The Yamnaya expansions from the western steppe into Europe and Asia during the Early Bronze Age (~3000 BCE) are believed to have brought with them Indo-European languages and possibly horse husbandry. We analyzed 74 ancient whole-genome sequences from across Inner Asia and Anatolia and show that the Botai people associated with the earliest horse husbandry derived from a hunter-gatherer population deeply diverged from the Yamnaya. Our results also suggest distinct migrations bringing West Eurasian ancestry into South Asia before and after, but not at the time of, Yamnaya culture. We find no evidence of steppe ancestry in Bronze Age Anatolia from when Indo-European languages are attested there. Thus, in contrast to Europe, Early Bronze Age Yamnaya-related migrations had limited direct genetic impact in Asia.

The vast grasslands making up the Eurasian steppe zones, from Ukraine through Kazakhstan to Mongolia, have served as a crossroad for human population movements during the last 5000 years (1–3), but the dynamics of its human occupation—especially of the earliest period—remain poorly understood. The domestication of the horse at the transition from the Copper Age to the Bronze Age, ~3000 BCE, enhanced human mobility (4, 5) and may have triggered waves of migration. According to the "steppe hypothesis," this expansion of groups in the western steppe related to the Yamnaya and Afanasievo cultures was associated with the spread of Indo-European (IE) languages into Europe and Asia (1, 2, 4, 6). The peoples who formed the Yamnaya and Afanasievo cultures belonged to the same genetically homogeneous population, with direct ancestry attributed to both Copper Age (CA) western steppe pastoralists, descending primarily from the European Eastern hunter-gatherers (EHG) of the Mesolithic, and to Caucasian groups (1, 2) related to Caucasus hunter-gatherers (CHG) (7). Within Europe, the steppe hypothesis is supported by the reconstruction of Proto-IE (PIE) vocabulary (8), as well as by archaeological and genomic evidence of human mobility and Early Bronze Age (3000 to 2500 BCE) cultural dynamics (9). For Asia, however, several conflicting interpretations have long been debated. These concern the origins and genetic composition of the local Asian populations encountered by the Yamnaya- and Afanasievo-related populations, including the groups associated with Botai, a site that offers the earliest evidence for horse husbandry (10).
In contrast, the more western sites that have been supposed by some to reflect the use of horses in the Copper Age (4) lack direct evidence of domesticated horses. Even the later use of horses among Yamnaya pastoralists has been questioned by some (11), despite the key role of horses in the steppe hypothesis. Furthermore, genetic, archaeological, and linguistic hypotheses diverge on the timing and processes by which steppe genetic ancestry and the IE languages spread into South Asia (4, 6, 12). Similarly, in present-day Turkey, the emergence of the Anatolian IE language branch, including the Hittite language, remains enigmatic, with conflicting hypotheses about population migrations leading to its emergence in Anatolia (4, 13).

Ancient genomes inform upon human movements within Asia

We analyzed whole-genome sequence data of 74 ancient humans (14, 15) (tables S1 to S3) ranging from the Mesolithic (~9000 BCE) to Medieval times, spanning ~5000 km across Eastern Europe, Central Asia, and Western Asia (Anatolia) (Fig. 1). Our genome data includes 3 Copper Age individuals (~3500 to 3300 BCE) from Botai in northern Kazakhstan (Botai_CA; 13.6X, 3.7X, and 3X coverage, respectively); 1 Early Bronze Age (~2900 BCE) Yamnaya sample from Karagash, Kazakhstan (16) (YamnayaKaragash_EBA; 25.2X); 1 Mesolithic (~9000 BCE) EHG from Sidelkino, Russia (SidelkinoEHG_ML; 2.9X); 2 Early/Middle Bronze Age (~2200 BCE) central steppe individuals (~4200 BP) (CentralSteppe_EMBA; 4.5X and 9.1X average coverage, respectively) from burials at Sholpan and Gregorievka that display cultural similarities to Yamnaya and Afanasievo (12); 19 individuals of the Bronze Age (~2500 to 2000 BCE) Okunevo culture of the Minusinsk Basin in the Altai region (Okunevo_EMBA; ~1X average coverage; 0.1 to 4.6X); 31 Baikal hunter-gatherer genomes (~1X average coverage; 0.2 to 4.5X) from the cis-Baikal region bordering on Mongolia and ranging in time from the Early Neolithic (~5200 to 4200 BCE; Baikal_EN) to the Early Bronze Age (~2200 to 1800 BCE; Baikal_EBA); 4 Copper Age individuals (~3300 to 3200 BCE; Namazga_CA; ~1X average coverage; 0.1 to 2.2X) from Kara-Depe and Geoksur in the Kopet Dag piedmont strip of Turkmenistan, affiliated with the period III cultural layers at Namazga-Depe (fig. S1), plus 1 Iron Age individual (Turkmenistan_IA; 2.5X) from Takhirbai in the same area dated to ~800 BCE; and 12 individuals from Central Turkey (figs. S2 to S4), spanning from the Early Bronze Age (~2200 BCE; Anatolia_EBA) to the Iron Age (~600 BCE; Anatolia_IA), and including 5 individuals from presumed Hittite-speaking settlements (~1600 BCE; Anatolia_MLBA) and 2 individuals dated to the Ottoman Empire (1500 CE; Anatolia_Ottoman; 0.3 to 0.9X). All the population labels, including those referring to previously published ancient samples, are listed in table S4 for contextualization. Additionally, we sequenced 41 high-coverage (30X) present-day Central Asian genomes, representing 17 self-declared ethnicities (fig. S5), and collected and genotyped 140 individuals from five IE-speaking populations in northern Pakistan. Tests indicated that the contamination proportion of the data was negligible (14) (see table S1), and we removed related individuals from frequency-based statistics (fig. S6 and table S5).
Our high-coverage Yamnaya genome from Karagash is consistent with previously published Yamnaya and Afanasievo genomes, and our Sidelkino genome is consistent with previously published EHG genomes, on the basis that there is no statistically significant deviation from 0 of D statistics of the form D(Test, Mbuti; SidelkinoEHG_ML, EHG) (fig. S7) or of the form D(Test, Mbuti; YamnayaKaragash_EBA, Yamnaya) (fig. S8; additional D statistics shown in figs. S9 to S12).

Genetic origins of local Inner Asian populations

In the Early Bronze Age, ~3000 BCE, the Afanasievo culture was formed in the Altai region by people related to the Yamnaya, who migrated 3000 km across the central steppe from the western steppe (1) and are often identified as the ancestors of the IE-speaking Tocharians of first-millennium northwestern China (4, 6). At this time, the region they passed through was populated by horse hunter-herders (4, 10, 17), while further east the Baikal region hosted groups that had remained hunter-gatherers since the Paleolithic (18–22). Subsequently, the Okunevo culture replaced the Afanasievo culture. The genetic origins and relationships of these peoples have been largely unknown (23, 24). To address these issues, we characterized the genomic ancestry of the local Inner Asian populations around the time of the Yamnaya and Afanasievo expansion. Comparing our ancient samples to a range of present-day and ancient samples with principal components analysis (PCA), we find that the Botai_CA, CentralSteppe_EMBA, Okunevo_EMBA, and Baikal populations (Baikal_EN and Baikal_EBA) are distributed along a previously undescribed genetic cline. This cline extends from the EHG of the western steppe to the Bronze Age (~2000 to 1800 BCE) and Neolithic (~5200 to 4200 BCE) hunter-gatherers of Lake Baikal in Central Asia, which are located on the PCA plot close to modern East Asians and two Early Neolithic (~5700 BCE) Devil's Gate samples (25) (Fig. 2 and fig. S13). In accordance with their position along the west-to-east gradient in the PCA, increased East Asian ancestry is evident in ADMIXTURE model-based clustering (Fig. 3 and figs. S14 and S15) and by D statistics for Sholpan and Gregorievka (CentralSteppe_EMBA) and Okunevo_EMBA, relative to Botai_CA and the Baikal_EN sample: D(Baikal_EN, Mbuti; Botai_CA, Okunevo_EMBA) = –0.025, Z = –12; D(Baikal_EN, Mbuti; Botai_CA, Sholpan) = –0.028, Z = –8.34; D(Baikal_EN, Mbuti; Botai_CA, Gregorievka) = –0.026, Z = –7.1. The position of this cline suggests that the central steppe Bronze Age populations all form a continuation of the Ancient North Eurasian (ANE) population, previously known from the 24,000-year-old Mal'ta (MA1), the 17,000-year-old AG-2 (26), and the ~14,700-year-old AG-3 (27) individuals from Siberia. To investigate ancestral relationships between these populations, we used coalescent modeling with the momi (Moran Models for Inference) program (28) (Fig. 4, figs. S16 to S22, and tables S6 to S11). This exploits the full joint site frequency spectrum and can separate genetic drift into divergence-time and population-size components, in comparison to PCA, admixture, and qpAdm approaches, which are based on pairwise covariances. We find that Botai_CA, CentralSteppe_EMBA, Okunevo_EMBA, and Baikal populations are deeply separated from other ancient and present-day populations and are best modeled as mixtures in different proportions of ANE ancestry and an Ancient East Asian (AEA) ancestry component represented by Baikal_EN, with mixing times dated to ~5000 BCE.
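For readers unfamiliar with the D statistics quoted above: D contrasts counts of two discordant allele-sharing patterns, ABBA and BABA, as D = (nABBA − nBABA) / (nABBA + nBABA), so a value significantly away from zero signals excess allele sharing on one side of the tree. The minimal Python sketch below uses a common textbook form with the outgroup fourth; it is my own illustration, not the authors' pipeline (which uses established tools with block-jackknife standard errors), and argument order and sign conventions differ between implementations (the statistics above list Mbuti, the outgroup, second).

    import numpy as np

    def d_statistic(p1, p2, p3, p4):
        """Patterson's D from per-SNP derived-allele frequencies in four
        populations (P1, P2, P3, outgroup P4). ABBA sites pair P2 with P3;
        BABA sites pair P1 with P3."""
        p1, p2, p3, p4 = map(np.asarray, (p1, p2, p3, p4))
        abba = ((1 - p1) * p2 * p3 * (1 - p4)).sum()
        baba = (p1 * (1 - p2) * p3 * (1 - p4)).sum()
        return (abba - baba) / (abba + baba)

    # With random, unstructured frequencies D should hover near zero,
    # i.e., no excess allele sharing on either side.
    rng = np.random.default_rng(0)
    p = rng.uniform(0.05, 0.95, size=(4, 100_000))  # 4 pops x 100k SNPs
    print(round(d_statistic(*p), 4))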
Although some modern Siberian samples lie under the Baikal samples in Fig. 2A, these are separated out in a more limited PCA, involving just those populations and the ancient samples (fig. S23). Our momi model infers that the ANE lineage separated ~15,000 years ago in the Upper Paleolithic from the EHG lineage to the west, with no independent drift assigned to MA1. This suggests that MA1 may represent their common ancestor. Similarly, the AEA lineage to the east also separated ~15,000 years ago, with the component that leads to Baikal_EN and the AEA component of the steppe separating from the lineage leading to present-day East Asian populations represented by Han Chinese (figs. S19 to S21). The ANE and AEA lineages themselves are estimated as having separated approximately 40,000 years ago, relatively soon after the peopling of Eurasia by modern humans. Because the ANE MA1 sample comes from the same cis-Baikal region as the AEA-derived Neolithic samples analyzed here, we document evidence for a population replacement between the Paleolithic and the Neolithic in this region. Furthermore, we observe a shift in genetic ancestry between the Early Neolithic (Baikal_EN) and the Late Neolithic/Bronze Age hunter-gatherers (Baikal_LNBA) (Fig. 2A), with the Baikal_LNBA cluster showing admixture from an ANE-related source. We estimate the ANE-related ancestry in the Baikal_LNBA to be ~5 to 11% (qpAdm) (table S12) (2), using MA1 as a source of ANE, Baikal_EN as a source of AEA, and a set of six outgroups. However, neither MA1 nor any of the other steppe populations lie in the direction of Baikal_LNBA from Baikal_EN on the PCA plot (fig. S23). This suggests that the new ANE ancestry in Baikal_LNBA stems from an unsampled source. Given that this source may have harbored East Asian ancestry, the contribution may be larger than 10%. These serial changes in the Baikal populations are reflected in Y-chromosome lineages (Fig. 5A, figs. S24 to S27, and tables S13 and S14). MA1 carries the R haplogroup, whereas the majority of Baikal_EN males belong to N lineages, which were widely distributed across Northern Eurasia (29), and the Baikal_LNBA males all carry Q haplogroups, as do most of the Okunevo_EMBA as well as some present-day Central Asians and Siberians. Mitochondrial haplogroups show less turnover (Fig. 5B and table S15), which could either indicate male-mediated admixture or reflect bottlenecks in the male population. The deep population structure among the local populations in Inner Asia around the Copper Age/Bronze Age transition is in line with distinct origins of central steppe hunter-herders related to Botai of the central steppe and those related to Altaian hunter-gatherers of the eastern steppe (30). Furthermore, this population structure, which is best described as part of the ANE metapopulation, persisted within Inner Asia from the Upper Paleolithic to the end of the Early Bronze Age. In the Baikal region, the results show that at least two genetic shifts occurred: first, a complete population replacement of the Upper Paleolithic hunter-gatherers belonging to the ANE by Early Neolithic communities of Ancient East Asian ancestry, and second, an admixture event between the latter and additional members of the ANE clade, occurring during the 1500-year period that separates the Neolithic from the Early Bronze Age. These genetic shifts complement previously observed severe cultural changes in the Baikal region (18–22).
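qpAdm itself fits a whole system of f4 statistics at once, but the flavor of an ancestry-proportion estimate like the ~5 to 11% figure above can be conveyed with a single f4 ratio. The sketch below is a simplified stand-in under idealized tree assumptions; the function names and the simulation are mine, for illustration only:

    import numpy as np

    def f4(a, b, c, d):
        """f4 statistic from per-SNP derived-allele frequencies."""
        return np.mean((a - b) * (c - d))

    def f4_ratio(test, src1, src2, src1_sister, outgroup):
        """Estimate alpha where test = alpha*src1 + (1-alpha)*src2 in
        ancestry, using a non-admixed relative of src1 and an outgroup
        as references. Simplified: qpAdm solves many such equations
        jointly and supplies proper standard errors."""
        return f4(src1_sister, outgroup, test, src2) / \
               f4(src1_sister, outgroup, src1, src2)

    # Simulate a toy tree with drift, then an admixed test population.
    rng = np.random.default_rng(1)
    n, alpha = 200_000, 0.30
    p = rng.uniform(0.1, 0.9, n)                      # root frequencies
    drift = lambda q: np.clip(q + rng.normal(0, 0.05, n), 0, 1)
    src1 = drift(p); sister = drift(src1)             # sister shares src1 drift
    src2 = drift(p); outgroup = drift(p)
    test = alpha * src1 + (1 - alpha) * src2          # admixed population
    print(round(f4_ratio(test, src1, src2, sister, outgroup), 3))  # ~0.30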
Relevance for history of horse domestication

The earliest unambiguous evidence for horse husbandry is from the Copper Age Botai hunter-herder culture of the central steppe in Northern Kazakhstan, ~3500 to 3000 BCE (5, 10, 23, 31–33). There was extensive debate over whether Botai horses were hunted or herded (33), but more recent studies have evidenced harnessing and milking (10, 17), the presence of likely corrals, and genetic domestication selection at the horse TRPM1 coat-color locus (32). Although horse husbandry has been demonstrated at Botai, it is also now clear from genetic studies that this was not the source of modern domestic horse stock (32). Some have suggested that the Botai were local hunter-gatherers who learned horse husbandry from an early eastward spread of western pastoralists, such as the Copper Age herders buried at Khvalynsk (~5150 to 3950 BCE), closely related to Yamnaya and Afanasievo (17). Others have suggested an in situ transition from the local hunter-gatherer community (5). We therefore examined the genetic relationship between Yamnaya and Botai. First, we note that whereas Yamnaya is best modeled as an approximately equal mix of EHG and Caucasian HG ancestry, and the earlier Khvalynsk samples from the same area also show Caucasian ancestry, the Botai_CA samples show no signs of admixture with a Caucasian source (fig. S14). Similarly, while the Botai_CA have some Ancient East Asian ancestry, there is no sign of this in Khvalynsk or Yamnaya. Our momi model (Fig. 4) suggests that, although YamnayaKaragash_EBA shared ANE ancestry with Botai_CA from MA1 through EHG, their lineages diverge ~15,000 years ago in the Paleolithic. According to a parametric bootstrap, the amount of gene flow between YamnayaKaragash_EBA and Botai_CA inferred using the sample frequency spectrum (SFS) was not significantly different from 0 (P = 0.18 using 300 parametric bootstraps under a null model without admixture) (fig. S18). Additionally, the best-fitting SFS model without any recent gene flow fits the ratio of ABBA-BABA counts for (SidelkinoEHG_ML, YamnayaKaragash_EBA; Botai_CA, AncestralAllele), with Z = 0.45 using a block jackknife for this statistic. Consistent with this, a simple qpGraph model without direct gene flow between Botai_CA and Yamnaya, but with shared EHG-related ancestry between them, fits all f4 statistics (fig. S28), and qpAdm (2) successfully fits models for Yamnaya ancestry without any Botai_CA contribution (table S12). The separation between Botai and Yamnaya is further reinforced by a lack of overlap in Y-chromosomal lineages (Fig. 5A). Although our YamnayaKaragash_EBA sample carries the R1b1a2a2c1 lineage seen in other Yamnaya and present-day Eastern Europeans, one of the two Botai_CA males belongs to the basal N lineage, whose subclades have a predominantly Northern Eurasian distribution, whereas the second carries the R1b1a1 haplogroup, restricted almost exclusively to Central Asian and Siberian populations (34). Neither of these Botai lineages has been observed among Yamnaya males (table S13 and fig. S25). Using ChromoPainter (35) (figs. S29 to S32) and rare variant sharing (36) (figs. S33 to S35), we also identify a disparity in affinities with present-day populations between our high-coverage Yamnaya and Botai genomes. Consistent with previous results (1, 2), we observe a contribution from YamnayaKaragash_EBA to present-day Europeans.
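The Z scores attached to the D and ABBA-BABA statistics in this study come from a block jackknife: the genome is cut into large blocks, the statistic is recomputed leaving one block out at a time, and the spread of those estimates gives a standard error that respects linkage between nearby SNPs. A bare-bones sketch follows; it is my own unweighted version (real pipelines use a weighted jackknife over blocks of roughly equal genetic length):

    import numpy as np

    def d_jackknife_z(abba, baba, n_blocks=100):
        """Delete-one-block jackknife Z for D = (ABBA - BABA)/(ABBA + BABA),
        given per-SNP ABBA/BABA weights in genome order."""
        abba, baba = np.asarray(abba, float), np.asarray(baba, float)
        ta, tb = abba.sum(), baba.sum()
        d_full = (ta - tb) / (ta + tb)
        d_loo = []
        for a_blk, b_blk in zip(np.array_split(abba, n_blocks),
                                np.array_split(baba, n_blocks)):
            a, b = ta - a_blk.sum(), tb - b_blk.sum()
            d_loo.append((a - b) / (a + b))
        d_loo = np.array(d_loo)
        m = len(d_loo)
        se = np.sqrt((m - 1) / m * ((d_loo - d_loo.mean()) ** 2).sum())
        return d_full / se

    # Example: equal ABBA/BABA intensity -> D near 0 and |Z| small.
    rng = np.random.default_rng(2)
    print(round(d_jackknife_z(rng.random(500_000), rng.random(500_000)), 2))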
Conversely, Botai_CA shows greater affinity to Central Asian, Siberian, and Native American populations, coupled with some sharing with northeastern European groups at a lower level than that for Yamnaya, due to their ANE ancestry. Further toward the Altai, the genomes of two CentralSteppe_EMBA women, who were buried in Afanasievo-like pit graves, revealed them to be representatives of an unadmixed Inner Asian ANE-related group, almost indistinguishable from the Okunevo_EMBA of the Minusinsk Basin north of the Altai through D statistics (fig. S11). This lack of genetic and cultural congruence may be relevant to the interpretation of Afanasievo-type graves elsewhere in Central Asia and Mongolia (37). However, in contrast to the lack of identifiable admixture from Yamnaya and Afanasievo in the CentralSteppe_EMBA, there is an admixture signal of 10 to 20% Yamnaya and Afanasievo in the Okunevo_EMBA samples (fig. S21), consistent with evidence of western steppe influence. This signal is not seen on the X chromosome (qpAdm P value for admixture on the X chromosome 0.33, compared to 0.02 for the autosomes), suggesting a male-derived admixture, also consistent with the fact that 1 of 10 Okunevo_EMBA males carries an R1b1a2a2 Y chromosome related to those found in western pastoralists (Fig. 5). In contrast, there is no evidence of western steppe admixture among the more eastern Baikal region Bronze Age (~2200 to 1800 BCE) samples (fig. S14). The lack of evidence of admixture between Botai horse herders and western steppe pastoralists is consistent with the latter migrating through the central steppe but not settling until they reached the Altai to the east (4). Notably, this lack of admixture suggests that horses were domesticated by hunter-gatherers not previously familiar with farming, as was the case for dogs (38) and reindeer (39). Domestication of the horse thus may best parallel that of the reindeer, a food animal that can be milked and ridden and that has been proposed to have been domesticated by hunters via the "prey path" (40); indeed, anthropologists note similarities in cosmological beliefs between hunters and reindeer herders (41). In contrast, most animal domestications were achieved by settled agriculturalists (5).

Origins of Western Eurasian genetic signatures in South Asians

The presence of Western Eurasian ancestry in many present-day South Asian populations south of the central steppe has been used to argue for gene flow from Early Bronze Age (~3000 to 2500 BCE) western steppe pastoralists into the region (42, 43). However, direct influence of Yamnaya or related cultures of that period is not visible in the archaeological record, except perhaps for a single burial mound in Sarazm in present-day Tajikistan of contested age (44, 45). Additionally, linguistic reconstruction of protoculture coupled with the archaeological chronology evidences a Late (~2300 to 1200 BCE) rather than Early Bronze Age (~3000 to 2500 BCE) arrival of the Indo-Iranian languages into South Asia (16, 45, 46). Thus, debate persists as to how and when Western Eurasian genetic signatures and IE languages reached South Asia. To address these issues, we investigated whether the source of the Western Eurasian signal in South Asians could derive from sources other than Yamnaya and Afanasievo (Fig. 1).
Both Early Bronze Age (~3000 to 2500 BCE) steppe pastoralists Yamnaya and Afanasievo and Late Bronze Age (~2300 to 1200 BCE) Sintashta and Andronovo carry substantial amounts of EHG and CHG ancestry (1, 2, 7), but the latter group can be distinguished by a genetic component acquired through admixture with European Neolithic farmers during the formation of the Corded Ware complex (1, 2), reflecting a secondary push from Europe to the east through the forest-steppe zone. We characterized a set of four south Turkmenistan samples from Namazga period III (~3300 BCE). In our PCA analysis, the Namazga_CA individuals were placed in an intermediate position between Iran Neolithic and western steppe clusters (Fig. 2). Consistent with this, we find that the Namazga_CA individuals carry a significantly larger fraction of EHG-related ancestry than Neolithic skeletal material from Iran [D(EHG, Mbuti; Namazga_CA, Iran_N) Z = 4.49], and we are not able to reject a two-population qpAdm model in which Namazga_CA ancestry was derived from a mixture of Neolithic Iranians and EHG (~21%) (P = 0.49). Although CHG contributed both to Copper Age steppe individuals (e.g., Khvalynsk, ~5150 to 3950 BCE) and substantially to Early Bronze Age (~3000 to 2500 BCE) steppe Yamnaya and Afanasievo (1, 2, 7, 47), we do not find evidence of CHG-specific ancestry in Namazga. Despite the adjacent placement of CHG and Namazga_CA on the PCA plot, D(CHG, Mbuti; Namazga_CA, Iran_N) does not deviate significantly from 0 (Z = 1.65), in agreement with ADMIXTURE results (Fig. 3 and fig. S14). Moreover, a three-population qpAdm model using Iran Neolithic, EHG, and CHG as sources yields a negative admixture coefficient for CHG. This suggests that while we cannot totally reject a minor presence of CHG ancestry, steppe-related admixture most likely arrived in the Namazga population before the Copper Age or from unadmixed sources related to EHG. This is consistent with the upper temporal boundary provided by the date of the Namazga_CA samples (~3300 BCE). In contrast, the Iron Age (~900 to 200 BCE) individual from the same region as Namazga (sample DA382, labeled Turkmenistan_IA) is closer to the steppe cluster in the PCA plot and does have CHG-specific ancestry. However, it also has European farmer–related ancestry typical of Late Bronze Age (~2300 to 1200 BCE) steppe populations (1–3, 47) [D(Neolithic European, Mbuti; Namazga_CA, Turkmenistan_IA) Z = -4.04], suggesting that it received admixture from Late (~2300 to 1200 BCE) rather than Early Bronze Age (~3000 to 2500 BCE) steppe populations. In a PCA focused on South Asia (Fig. 2B), the first dimension corresponds approximately to west-east and the second dimension to north-south. Near the lower right are the Andamanese Onge, previously used to represent the Ancient South Asian component (12, 42). Contemporary South Asian populations are placed along both east-west and north-south gradients, reflecting the presence of three major ancestry components in South Asia deriving from West Eurasians, South Asians, and East Asians. Because the Namazga_CA individuals appear at one end of the West Eurasian/South Asian axis, and given their geographical proximity to South Asia, we tested this group as a potential source in a set of qpAdm models for the South Asian populations (Fig. 6). We are not able to reject a two-population qpAdm model using Namazga_CA and Onge for nine modern southern and predominantly Dravidian-speaking populations (Fig. 6, fig. S36, and tables S16 and S17). 
In contrast, for seven other populations belonging to the northernmost Indic- and Iranian-speaking groups, this two-population model is rejected, but not a three-population model including an additional Late Bronze Age (~2300 to 1200 BCE) steppe source. Last, for seven southeastern Asian populations, six of which were Tibeto-Burman or Austro-Asiatic speakers, the three-population model with Late Bronze Age (~2300 to 1200 BCE) steppe ancestry was rejected, but not a model in which Late Bronze Age (~2300 to 1200 BCE) steppe ancestry was replaced with an East Asian ancestry source, as represented by the Late Iron Age (~200 BCE to 100 CE) Xiongnu (Xiongnu_IA) nomads from Mongolia (3). Interestingly, for two northern groups, the only tested model we could not reject included the Iron Age (~900 to 200 BCE) individual (Turkmenistan_IA) from the Zarafshan Mountains and the Xiongnu_IA as sources. These findings are consistent with the positions of the populations in PCA space (Fig. 2B) and are further supported by ADMIXTURE analysis (Fig. 3), with two minor exceptions: In both the Iyer and the Pakistani Gujar, we observe a minor presence of the Late Bronze Age (~2300 to 1200 BCE) steppe ancestry component (fig. S14) not detected by the qpAdm approach. Additionally, we document admixture along the West Eurasian and East Asian clines of all South Asian populations using D statistics (fig. S37). Thus, we find that ancestries deriving from four major separate sources fully reconcile the population history of present-day South Asians (Figs. 3 and 6): one anciently South Asian, one from Namazga or a related population, a third from Late Bronze Age (~2300 to 1200 BCE) steppe pastoralists, and a fourth from East Asia. They account for western ancestry in some Dravidian populations that lack CHG-specific ancestry while also fitting the observation that whenever there is CHG-specific ancestry and considerable EHG ancestry, there is also European Neolithic ancestry (Fig. 3). This implicates Late Bronze Age (~2300 to 1200 BCE) steppe rather than Early Bronze Age (~3000 to 2500 BCE) Yamnaya and Afanasievo admixture into South Asia. The proposal that the IE steppe ancestry arrived in the Late Bronze Age (~2300 to 1200 BCE) is also more consistent with archaeological and linguistic chronology (44, 45, 48, 49). Thus, it seems that the Yamnaya- and Afanasievo-related migrations did not have a direct genetic impact in South Asia.

Lack of steppe genetic impact in Anatolians

Finally, we consider the evidence for Bronze Age steppe genetic contributions in West Asia. There are conflicting models for the earliest dispersal of IE languages into Anatolia (4, 50). The now extinct Bronze Age Anatolian language group represents the earliest historically attested branch of the IE language family and is linguistically held to be the first branch to have split off from PIE (51–53). One key question is whether Proto-Anatolian is a direct linguistic descendant of the hypothesized Yamnaya PIE language or whether Proto-Anatolian and the PIE language spoken by Yamnaya were branches of a more ancient language ancestral to both (49, 53).
Another key question relates to whether Proto-Anatolian speakers entered Anatolia as a result of a Copper Age western steppe migration (~5000 to 3000 BCE) involving movement of groups through the Balkans into Northwest Anatolia (4, 54, 55) or a Caucasian route that links language dispersal to intensified north-south population contacts facilitated by the trans-Caucasian Maykop culture ~3700 to 3000 BCE (50, 54). Ancient DNA findings suggest extensive population contact between the Caucasus and the steppe during the Copper Age (~5000 to 3000 BCE) (1, 2, 42). Particularly, the first identified presence of Caucasian genomic ancestry in steppe populations is through the Khvalynsk burials (2, 47) and that of steppe ancestry in the Caucasus is through Armenian Copper Age individuals (42). These admixture processes likely gave rise to the ancestry that later became typical of the Yamnaya pastoralists (7), whose IE language may have evolved under the influence of a Caucasian language, possibly from the Maykop culture (50, 56). This scenario is consistent with both the Copper Age steppe (4) and the Caucasian models for the origin of the Proto-Anatolian language (57). PCA (Fig. 2B) indicates that all the Anatolian genome sequences from the Early Bronze Age (~2200 BCE) and Late Bronze Age (~1600 BCE) cluster with a previously sequenced Copper Age (~3900 to 3700 BCE) individual from Northwestern Anatolia and lie between Anatolian Neolithic (Anatolia_N) samples and CHG samples but not between Anatolia_N and EHG samples. A test of the form D(CHG, Mbuti; Anatolia_EBA, Anatolia_N) shows that these individuals share more alleles with CHG than Neolithic Anatolians do (Z = 3.95), and we are not able to reject a two-population qpAdm model in which these groups derive ~60% of their ancestry from Anatolian farmers and ~40% from CHG-related ancestry (P = 0.5). This signal is not driven by Neolithic Iranian ancestry, because the result of a similar test of the form D(Iran_N, Mbuti; Anatolia_EBA, Anatolia_N) does not deviate from zero (Z = 1.02). Taken together with recent findings of CHG ancestry on Crete (58), our results support a widespread CHG-related gene flow, not only into Central Anatolia but also into the areas surrounding the Black Sea and Crete. The latter are not believed to have been influenced by steppe-related migrations and may thus correspond to a shared archaeological horizon of trade and innovation in metallurgy (59). Importantly, a test of the form D(EHG, Mbuti; Anatolia_EBA, Anatolia_MLBA) supports that the Central Anatolian gene pools, including those sampled from settlements thought to have been inhabited by Hittite speakers, were not affected by steppe populations during the Early and Middle Bronze Age (Z = –1.83). Both of these findings are further confirmed by results from clustering analysis (Fig. 3). The CHG-specific ancestry and the absence of EHG-related ancestry in Bronze Age Anatolia would be in accordance with intense cultural interactions between populations in the Caucasus and Anatolia observed during the late fifth millennium BCE that seem to come to an end in the first half of the fourth millennium BCE with the village-based egalitarian Kura-Araxes’ society (60, 61), thus preceding the emergence and dispersal of Proto-Anatolian. Our results indicate that the early spread of IE languages into Anatolia was not associated with any large-scale steppe-related migration, as previously suggested (62). 
Additionally, and in agreement with the later historical record of the region (63), we find no correlation between genetic ancestry and exclusive ethnic or political identities among the populations of Bronze Age Central Anatolia, as has previously been hypothesized (64).

Discussion

For Europe, ancient genomics have revealed extensive population migrations, replacements, and admixtures from the Upper Paleolithic to the Bronze Age (1, 2, 27, 65, 66), with a strong influence across the continent from the Early Bronze Age (~3000 to 2500 BCE) western steppe Yamnaya. In contrast, for Central Asia, continuity is observed from the Upper Paleolithic to the end of the Copper Age (~3500 to 3000 BCE), with descendants of Paleolithic hunter-gatherers persisting as largely isolated populations after the Yamnaya and Afanasievo pastoralist migrations. Instead of western pastoralists admixing with or replacing local groups, we see groups with East Asian ancestry replacing ANE populations in the Lake Baikal region. Thus, unlike in Europe, the hunter/gathering/herding groups of Inner Asia were much less affected by the Yamnaya and Afanasievo expansion. This may be due to the rise of early horse husbandry, which likely originated through a local "prey route" (40) adaptation by horse-dependent hunter-gatherers at Botai. Work on ancient horse genomes (32) indicates that Botai horses were not the main source of modern domesticates, which suggests the existence of a second center of domestication, but whether this second center was associated with the Yamnaya and Afanasievo cultures remains uncertain in the absence of horse genetic data from their sites. Our finding that the Copper Age (~3300 BCE) Namazga-related population from the borderlands between Central and South Asia contains both Iran Neolithic and EHG ancestry, but not CHG-specific ancestry, provides a solution to problems concerning the Western Eurasian genetic contribution to South Asians. Rather than invoking varying degrees of relative contribution of Iran Neolithic and Yamnaya ancestries, we explain the two western genetic components with two separate admixture events. The first event, potentially before the Bronze Age, spread from a non-IE-speaking farming population from the Namazga culture or a related source down to Southern India. The second came during the Late Bronze Age (~2300 to 1200 BCE) through established contacts between pastoral steppe nomads and the Indus Valley, bringing European Neolithic as well as CHG-specific ancestry, and with them Indo-Iranian languages, into northern South Asia. This is consistent with a long-range South Eurasian trade network ~2000 BCE (4), shared mythologies with steppe-influenced cultures (41, 60), linguistic relationships of the Indic languages spoken in South Asia, and written records from Western Asia from the first half of the 18th century BCE onward (49, 67). In Anatolia, our samples do not genetically distinguish Hittite and other Bronze Age Anatolians from an earlier Copper Age sample (~3943 to 3708 BCE). All these samples contain a similar level of CHG ancestry but no EHG ancestry. This is consistent with Anatolian/Early European farmer ancestry, but not steppe ancestry, in the Copper Age Balkans (68) and implies that the Anatolian clade of IE languages did not derive from a large-scale Copper Age/Early Bronze Age population movement from the steppe [unlike the findings in (4)].
Our findings are thus consistent with historical models of cultural hybridity and "middle ground" in a multicultural and multilingual but genetically homogeneous Bronze Age Anatolia (69, 70). Current linguistic estimations converge on dating the Proto-Anatolian split from residual PIE to the late fifth or early fourth millennium BCE (53, 71) and place the breakup of Anatolian IE inside Turkey before the mid-third millennium (51, 54, 72). In (49) we present new onomastic material (73) that pushes the period of Proto-Anatolian linguistic unity even further back in time. We cannot at this point reject a scenario in which the introduction of the Anatolian IE languages into Anatolia was coupled with the CHG-derived admixture before 3700 BCE, but we note that this is contrary to the standard view that PIE arose in the steppe north of the Caucasus (4) and that CHG ancestry is also associated with several non-IE-speaking groups, historical and current. Indeed, our data are also consistent with the first speakers of Anatolian IE coming to the region by way of commercial contacts and small-scale movement during the Bronze Age. Among comparative linguists, a Balkan route for the introduction of Anatolian IE is generally considered more likely than a passage through the Caucasus, due, for example, to greater Anatolian IE presence and language diversity in the west (55). Further discussion of these options is given in the archaeological and linguistic supplementary discussions (48, 49). Thus, while the steppe hypothesis, in the light of ancient genomics, has so far successfully explained the origin and dispersal of IE languages and culture in Europe, we find that several elements must be reinterpreted to account for Asia. First, we show that the earliest unambiguous example of horse herding emerged among hunter-gatherers who had no substantial genetic interaction with western steppe herders. Second, we demonstrate that the Anatolian IE language branch, including Hittite, did not derive from a substantial steppe migration into Anatolia. And third, we conclude that Early Bronze Age steppe pastoralists did not migrate into South Asia, but that the genetic evidence fits better with the Indo-Iranian IE languages being brought to the region by descendants of Late Bronze Age steppe pastoralists.

Supplementary Materials

www.sciencemag.org/content/360/6396/eaar7711/suppl/DC1
Supplementary Text
Figs. S1 to S37
Tables S1 to S17
https://science.sciencemag.org/content/360/6396/eaar7711?rss=1
The sacred yew presents quite a claim: that a living tree could predate Bronze Age activity, the Roman occupation, and the Christian period. According to Robert Bevan-Jones in The Ancient Yew, the yew is generally acknowledged as the British tree capable of the longest life, and Wales probably has the largest collection of ancient yews in the world. George Borrow viewed Strata Florida as the site of an early saint's cell with an associated spring and yew trees; his account of a tour of Wales in the 1870s describes a yew at Strata Florida standing just by the northern wall, split either by lightning or by the force of the wind.

THE INDO-EUROPEAN FAMILY TREE

Sir William Jones proposed that many of the languages of Europe and Asia had developed from a common ancestral language, and he outlined his theory for the first time at a meeting of the Asiatic Society of Bengal in 1786. Towards the end of the eighteenth century, philologists began to study groups of languages. They showed, for example, that French, Spanish and Italian had developed from Latin. They were able to prove that padre (Italian), padre (Spanish), père (French), pai (Portuguese) and pare (Catalan) had all developed from the Latin word for father, pater. According to Sir William Jones, the same process had occurred when Indo-European developed into different languages throughout Europe and Asia. Of course, not one word of Indo-European has survived, but linguists believe that the people who spoke the language lived about 6000 years ago. From their original home in southern Russia they migrated eastward and westward, reaching central Europe by 3500 BC and India by 2000 BC. From this original language there developed nine families of languages. One of these was Celtic, and it is to this group, as we shall see, that Welsh belongs. In speaking of Sir William Jones, we should note one interesting tale about him. Once, while on a visit to Paris, he was introduced to the French King by the British Ambassador. In presenting him, the ambassador said, "Sir William is a very strange man. He can speak practically every language under the sun, except his own!" This was perfectly true, for although he could read a little Welsh, he was unable to speak the language!
https://todumbrella.com/index.php?option=com_content&view=article&id=87:dendrochronological&catid=45:rokmicronews-fp-1
Martin certainly appears to know about the Steppe people — and research on the Proto-Indo-Europeans was all the rage in the nineteenth century. Back then, they were called Aryans. Such research likely influenced J.R.R. Tolkien, one of Martin's favorite authors. In some ways, the Dothraki fit into the same nineteenth-century tradition of romanticizing horse and pagan culture that Tolkien employed with his Rohirrim, the horse lords of Middle-earth in The Lord of the Rings. Tolkien's horse people resemble the Anglo-Saxons and the nineteenth-century conception of horse people as blue-eyed "Aryans." Martin's horse people are the real deal. Nineteenth-century linguists referred to Proto-Indo-Europeans as Aryans, a name that has taken on a dark legacy. To me at least, Martin appears to draw on the true Proto-Indo-Europeans, the ones that have emerged from more recent research, not the blonde, Aryan-esque ones appropriated by elite Europeans who envisioned them in their own image, and especially not when coupled with some idealized version of the Anglo-Saxon past. Proto-Indo-Europeans share more than a few traits with the Dothraki. Both lived or live in a steppe (flatlands covered with grass and shrubs), one in Eurasia, the other in Essos. Both are male-centered societies that "worship" power. And both may have conquered on horseback. The Proto-Indo-Europeans may have been the original "horse people." But who were the Proto-Indo-Europeans? Before Christ, Cleopatra, and the Roman Empire, the first horse lords, known as Proto-Indo-Europeans, may have lived in the Great Steppe and spread their language and culture by conquering Eurasia on the horses they domesticated. Using historical and comparative linguistic techniques, scholars postulate that these nomadic tribes spoke a common tongue, the mother of all of the roughly 439 Indo-European languages and dialects. Proto-Indo-Europeans may have roamed the western Pontic-Caspian area of the Great Steppe from roughly 4500 to 2500 BCE. The Great Steppe belts Eurasia with 5,000 kilometers (3,106 miles) of grasses and shrubs not unlike the North American prairies. The Steppe's dry summer heat soars to 110-120°F (43-49°C). The only relief comes from frequent light breezes. Unlike what we've seen so far in Game of Thrones, winter in the real-world Steppe is harsh and snowy, with icy winds, like North Dakota's, that drive temperatures down to -35°F (-37°C). Survival in Steppe winters was tough. Proto-Indo-European societies raised cattle, sheep and, most important of all, horses. They measured wealth by a man's head of cattle or horses. Proto-Indo-European societies wove wool, and their oxen drew wagons. They sang or recited tales of heroes with "imperishable fame." For the Proto-Indo-Europeans, marriage by abduction was legal, which doesn't feel very different from the Dothraki attitude towards sex and rape. As with medieval nobles – or Ned and Robert, whom Jon Arryn "fostered" – the Proto-Indo-Europeans almost certainly practiced fosterage. Fosterage is the formal term for the child outsourcing we see on Game of Thrones.
As Wikipedia so aptly puts it: "In many pre-modern societies fosterage was a form of patronage, whereby influential families cemented political relationships by bringing up each other's children, similar to arranged marriages, also based on dynastic or alliance calculations." We know that some Proto-Indo-European peoples were closer to their foster-father and foster-mother because Old Irish inherited the terms máthair and athair for biological mother and father and the affectionate baby-talk words muimme and aite for foster-mother and foster-father. The Proto-Indo-Europeans typically chose foster-parents from the mother's kin, often the maternal uncle. In what is perhaps the faintest echo of medieval estates, Proto-Indo-European society had hierarchies or functions.
- The first function kept religious and sovereign order and included the priests and kings.
- The second function was martial force and included the warriors.
- The third function was that of fertility, which included the shepherds and producers of goods (artisans).
The Proto-Indo-Europeans also enslaved those whom they captured in war or who fell into debt.

Reconstructed Proto-Indo-European Language vs. Constructed Dothraki

While doing a little research on Proto-Indo-European societies, I stumbled across a sound clip of a linguist who has attempted to recreate what Proto-Indo-European sounded like. Does it sound like Dothraki? Researchers believe that, from the 3rd millennium BCE onward, the nomadic Indo-Europeans who lived on the Pontic-Caspian steppe spread Proto-Indo-European language and culture across Eurasia, either through conquest or by sharing agricultural techniques. Proto-Indo-European culture is hypothetical, inferred from archeological evidence and two centuries of linguistic analysis of modern languages. Its people didn't leave any written records. Linguists believe that much of Europe and Asia once spoke the same language, known as Proto-Indo-European. Proto-Indo-European is the "mother" language of all modern Indo-European languages, including English. Rarely do we think about what our collective past was like before the time of written records. Although archeologists have found symbol systems dating back to the seventh millennium BCE, the first signs of what is truly considered writing only date back to the Sumerians, Ancient Iran, and the Egyptians in roughly 3400 BCE. The earliest coherent texts don't emerge until 2600 BCE. Other cultures didn't develop writing until later (China in 1200 BCE, Mesoamerica in 600 BCE) – and some cultures, like the Proto-Indo-Europeans (and the Dothraki), never developed a writing system. Consequently, scholars infer the history of these cultures through archeology and by trying to detect similar words across "daughter" languages. These daughter languages inherited words with parallel roots from Proto-Indo-European, roots that are captured in modern languages or their ancestors. In both Irish and Gothic (an ancient East Germanic language), for example, the word "witness" comes from the root of the verbs "see" or "know." A similar metaphor is found in the etymology of the Italic (Latin's predecessor) words for testimony and testify: "testis" means a "third person standing by." The ancient Hittite language (from Anatolia, or modern-day Turkey) uses a similar metaphor: the verb meaning "stand over" can also mean "bear witness." Historical linguists also studied sounds and phonetic rules to find similarities among words and the roots of words.
They can determine, for example, that the words for hundred in Latin (centum, pronounced "kentum") and Lithuanian (šimtas, "shimtas") descended from the same word.

What did our ancestors sound like? Proto-Indo-European Reconstruction

Dr. Andrew Byrd, an expert in Indo-European phonology, recorded his own approximation of the Proto-Indo-European language for Archaeology magazine. Byrd gives voice to "The Sheep and the Horses," a story of a shorn sheep that encounters an unfriendly horse herd. Before making the recording, Byrd updated the tale with more recent research about the sounds of Proto-Indo-European. (German linguist August Schleicher wrote the original tale in a reconstructed Proto-Indo-European vocabulary in 1868.) To figure out how Proto-Indo-European should sound, Byrd analyzed ancient Indo-European languages like Latin, Greek, and Sanskrit.

What does Dothraki sound like?

Game of Thrones showrunners wanted to flesh out the Dothraki language when they began filming the show. Linguist David Peterson created Dothraki based on George R.R. Martin's description of the language. He also drew inspiration from Turkish, Russian, Estonian, Inuktitut and Swahili. In Peterson's own words, Dothraki sounds a bit like "a mix between Arabic (minus the distinctive pharyngeals) and Spanish, due to the dental consonants." But he acknowledges that most people probably think it sounds like Arabic. This is especially true since "most people probably don't really know what Arabic actually sounds like, so to an untrained ear, it might sound like Arabic." To me, Dothraki sounds guttural, which makes sense if it is partially based on Arabic. (According to Quora user Orin Hargraves, "Most English sounds are produced from mid-palate to the lips. Arabic, on the other hand, has half a dozen consonants that are produced in the pharynx, larynx, or the rear of the oral cavity.") But does Dothraki sound like Proto-Indo-European? To my very untrained ear, I think it does. To me, both languages sound guttural. Dothraki certainly has some of the right roots: Russian and Spanish are daughter languages of Proto-Indo-European. In truth, I just like the idea that HBO and David Peterson might have drawn from a language hypothesized from such an influential linguistic theory.
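To make the comparative method described above concrete, here is a small self-contained Python sketch of how one might tally recurring initial-consonant correspondences across a word list. The cognate pairs and spellings are simplified toy data chosen for illustration (Lithuanian diacritics dropped), not a real linguistic dataset.

```python
from collections import Counter

# Toy Latin/Lithuanian word pairs illustrating the regular correspondence
# in which PIE *k- surfaces as c- ("k") in Latin but s- in Lithuanian.
cognates = [
    ("centum", "simtas"),  # 'hundred' (Lith. šimtas)
    ("cor",    "sirdis"),  # 'heart'   (Lith. širdis)
    ("canis",  "suo"),     # 'dog'     (Lith. šuo)
    ("pater",  "tevas"),   # 'father' vs. Lith. tėvas -- unrelated; noise
]

# Count how often each Latin initial lines up with each Lithuanian initial.
correspondences = Counter((lat[0], lit[0]) for lat, lit in cognates)
for (lat, lit), n in correspondences.most_common():
    print(f"Latin {lat}- : Lithuanian {lit}-  ({n}x)")
```

Run over a real lexicon, a correspondence that recurs far more often than chance (here c- : s-) is the kind of regularity that lets linguists project both words back to a single ancestral form.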
http://history-behind-game-of-thrones.com/ancienthistory/dothraki-pie?replytocom=97303
For more than a century anthropologists have studied the multitude of cultures and ethnicities that exist across the globe, delving deep into the various ways that populations develop their own unique identities. With the development of genetic anthropology over the last 15 years, scientists have begun to examine whether these cultural identities align with a population's genetics. How do we as humans differ genetically from one another, and how have these differences arisen throughout our species' history? This fascinating question is tackled by Steve Olson in Mapping Human History: Discovering the Past through our Genes. Olson is well suited to explain the complex genetic history of our species. He has worked for the National Academy of Sciences and the White House Office of Science and Technology Policy, and has written several books and articles for general audiences on human genetics. While researching this book he interviewed a large number of anthropological geneticists. He uses his experience in science communication – as well as those interviews – to tell the story of Homo sapiens, beginning in Africa and touching on nearly every major geographical region in the world. Olson starts his exploration of our species' genetic history where it began, in Africa. He begins by describing some of the most isolated and interesting ethnic groups – the so-called Bushmen of southwest Africa and the Kalahari Desert. This well-known group of hunter-gatherers has interested anthropologists and geneticists alike for many decades. The Bushmen are relatively isolated, maintaining a hunter-gatherer lifestyle even as many populations around them have taken on new lifestyles. They also speak languages that are famous for their 'click' sounds. Belonging to the Khoisan language family, these languages are believed to be some of the most ancient still spoken among humans. Olson uses examples such as the Bushmen to delve deeper into the science of genetic ancestry. He clearly explains how scientists use DNA to trace both maternal and paternal ancestry, similar to what we at 23andMe do as part of our Personal Genome Service™. After spending time discussing Africa, Olson moves on, tracking our species' prehistoric movements into the Near East, Asia, Europe, and even the Americas. He also spends time discussing some of the most interesting genetic questions about our species, such as the origins and migrations of the Jews throughout history, and how changes in languages can sometimes be connected to genetic changes. The final portion of Mapping Human History investigates more closely the cultural and ethnic issues that have arisen over the past ten years because of studies that examine the genetic ancestry of specific populations. Some Native American groups, for example, protest studies such as these because they conflict with their own tribes' creation stories. However, Olson argues this research should actually be applauded, because "the study of genetics has now revealed that we are all linked…. We are members of a single human family, the products of genetic necessity and chance." Mapping Human History is, overall, a good foray into our species' genetic past and how genetics studies can reveal many things – both about how we are different, and how we are the same. Olson peppers his arguments with engaging anecdotes, such as the story of a female researcher in South Africa with mixed heritage, or how the peopling of Hawaii led to such unique genetic diversity among its current inhabitants.
These anecdotes would be welcome to any general reader, making the concepts Olson discusses more real and accessible. There are many books that deal with genetic ancestry, and this one does cover many of the same topics as countless others. But what distinguishes Mapping Human History is its focus on genetic versus cultural and ethnic divisions in our societies. This alternative angle may prove interesting to general audiences with non-science backgrounds, as Olson brings in issues such as race relations and cultural and ethnic diversity that are relevant to many people. This text would prove an appealing read for 23andMe customers looking to put their own genetic information into a global context. In addition, it would be well suited for anyone interested in how our species has evolved and expanded over the last few hundred thousand years, and how these past migrations have shaped the ethnic and cultural identities that exist today across the globe.
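For readers curious how the maternal- and paternal-line tracing Olson describes works mechanically, the following is a hedged Python sketch of mtDNA haplogroup assignment: a sample's control-region variants are scored against haplogroup-defining markers. The marker table and sample variants below are invented for the example; real pipelines compare against curated references such as PhyloTree.

```python
# Invented haplogroup-defining marker sets (position + derived allele).
HAPLOGROUP_MARKERS = {
    "H":  {"263G", "8860G"},
    "U5": {"263G", "3197C", "9477A"},
    "L0": {"263G", "1048T", "3516A"},
}

def assign_haplogroup(observed):
    """Score each haplogroup by the fraction of its markers observed."""
    scores = {
        hg: len(markers & observed) / len(markers)
        for hg, markers in HAPLOGROUP_MARKERS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores

sample_variants = {"263G", "3197C", "9477A", "16270T"}
best, scores = assign_haplogroup(sample_variants)
print(best, scores)  # "U5" wins: all three of its toy markers are present
```

The same matching logic applies to Y-chromosome markers for the paternal line; only the marker catalog changes.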
https://blog.23andme.com/ancestry/recommended-reading-mapping-human-history/
The Indo-European language family has been recognised by historical linguists for more than 200 years - it includes most of the languages of Europe as well as Iranian and many of the languages of north India and Pakistan. But what is not agreed is how such a vast area of linguistic affinity came about. Most linguists agree that there must have been an ancestral language, "Proto-Indo-European", from which all the others are derived, and that it must have been spoken at least as early as 3000 BC in a homeland located somewhere in the midst of the great modern distribution of Indo-European languages. One widely accepted suggestion is that Proto-Indo-European was spoken in the steppe lands of what is now Ukraine around 3000 BC, and that with the domestication of the horse, it was carried east and west by mounted warrior-nomads who came to dominate most of Europe and beyond. But a decade ago, in my book Archaeology and Language, I argued that this theory was not persuasive. There is no evidence that the horse was ridden for military purposes in Europe before about 1200 BC. Instead I suggest that Proto-Indo-European speech came to Europe with the coming of farming, around 6000 BC, from Anatolia, the modern Turkey. Many linguists found my suggestion very unsatisfactory in linguistic terms, yet most archaeologists agree with me that the old mounted-warrior-nomad theory is an insufficient explanation for how the language came to be dispersed.

At the McDonald Institute for Archaeological Research in Cambridge, I and my colleagues are working on a project, "The Prehistory of Languages", which operates in the difficult overlap area between linguistics, archaeology and molecular genetics. When evidence from genetics is brought to bear on the Indo-European problem, some fruitful discoveries emerge. Study of gene frequency maps shows that it is likely that there was strong gene flow from Anatolia to Europe in prehistoric times. At first this was seen as strong support for the farming dispersal theory of Indo-European origins. But work on human mitochondrial DNA by Brian Sykes and his colleagues in Oxford suggests that this gene flow may have been in the Palaeolithic period - far earlier than most linguists would accept for Proto-Indo-European. The matter is one of increasing controversy.

Our project is supported by a grant from the Alfred P. Sloan Foundation of New York, which has the policy of supporting research into subjects at the very "limits of knowability". For the archaeologist and the prehistorian, one of the most challenging problems is to explain the great range of cultural diversity in the world today. An important part of that diversity is linguistic: more than 5,000 languages are spoken. Some are clearly related - French and Spanish, for instance. But other resemblances - for example those between Latin and Greek - must have a much earlier origin, apparently lost in the mists of prehistory. Today, however, the application of molecular genetics is beginning to reveal much about the histories of populations. A synthesis may be emerging between the disciplines of prehistoric archaeology, historical linguistics and molecular genetics which may take us beyond the limitations of any one of these areas towards an understanding of cultural, linguistic and genetic diversity. The problem is that no one of the three subject areas in isolation can provide all the answers. In my view it is necessary to clarify the processes of population history and language history.
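The "gene frequency map" reasoning mentioned above can be illustrated with a toy computation: if farming (and perhaps language) spread demically out of Anatolia, allele frequencies should decline roughly smoothly with distance from it. The populations, distances, and frequencies in this Python sketch are invented for illustration only.

```python
import numpy as np

# (distance from Anatolia in km, frequency of a hypothetical "farmer" allele)
data = np.array([
    [0.0,    0.80],  # Anatolia
    [900.0,  0.65],  # Balkans
    [1800.0, 0.50],  # Central Europe
    [2700.0, 0.38],  # Northern Europe
    [3400.0, 0.30],  # British Isles
])
dist, freq = data[:, 0], data[:, 1]

r = np.corrcoef(dist, freq)[0, 1]             # strength of the cline
slope, intercept = np.polyfit(dist, freq, 1)  # linear fit to the gradient
print(f"correlation r = {r:.2f}, slope = {slope:.2e} per km")
```

A strongly negative correlation is what a demic-diffusion cline looks like; the controversy noted above is about when such a cline arose, which allele frequencies alone cannot date.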
Impressive research has been undertaken in recent years on the languages of the Pacific, notably the Polynesian languages. There, farming dispersal seems an important part of the story. The Afroasiatic and Indo-European language families have been classified by a number of Russian scholars as belonging, with several other language families (e.g. Uralic, Dravidian), within a still larger linguistic phylum or macrofamily called Nostratic - a claim disputed today by many linguists. But how does one test the Nostratic hypothesis that all these languages have a single, common origin? We have begun by publishing a monograph by the linguist Aharon Dolgopolsky entitled The Nostratic Macrofamily and Linguistic Palaeontology, which will be discussed at a symposium in July. Later we hope to have a meeting on linguistic and genetic diversity in the Americas, and then to submit the "farming dispersal" hypothesis for the origin of various language families to critical scrutiny. In a few years we may be a little clearer as to whether these are problems capable of solution, or whether they do indeed lie beyond the limits of knowability. Professor Colin Renfrew is director of the McDonald Institute for Archaeological Research, Cambridge University.
https://www.timeshighereducation.com/features/at-the-cutting-edge/107484.article?storyCode=107484&sectioncode=26
PURPOSE OF REVIEW: In this paper, we review current concepts of Alzheimer's disease, recent progress in diagnosis and treatment, and important developments in our understanding of its pathogenesis, with a focus on beta-amyloid both as culprit and as therapeutic target.

RECENT FINDINGS: The amyloid cascade hypothesis of Alzheimer's disease pathogenesis continues to predominate, with evidence suggesting that small oligomeric forms of Abeta-42, rather than fibrils or senile plaques, are the key pathological substrates. The concept of mild cognitive impairment continues to be refined to better define those patients who will progress to Alzheimer's disease. Structural and functional imaging techniques and cerebrospinal fluid biomarkers are gaining acceptance as diagnostic markers of Alzheimer's disease, with a potentially exciting advance being the ability to image amyloid in vivo using novel positron emission tomography ligands. Whilst available treatments afford only symptomatic benefits, disease-modifying treatments may be within reach. Despite the halting of the first amyloid-beta vaccination trial due to adverse effects, amyloid immunotherapy continues to show promise, with new approaches already entering clinical trials. Other therapeutic strategies under investigation include inhibition of beta- and gamma-secretase, key enzymes implicated in Alzheimer's disease pathogenesis.

SUMMARY: Current research demonstrates the potential for diagnostic strategies and disease-modifying treatments to follow from an ever more detailed understanding of the molecular mechanisms underlying the pathogenesis of Alzheimer's disease.
https://www.semanticscholar.org/paper/New-developments-in-mild-cognitive-impairment-and-Schott-Kennedy/37ba1ab189a386cdf1292cbfa5736693f408f11a
Although dementia has been described in ancient texts over many centuries (e.g., "Be kind to your father, even if his mind fail him." - Old Testament: Sirach 3:12), our knowledge of its underlying causes is little more than a century old. Alzheimer published his now famous case study only 110 years ago, and our modern understanding of the disease that bears his name, and its neuropsychological consequences, really only began to accelerate in the 1980s. Since then we have witnessed an explosion of basic and translational research into the causes, characterizations, and possible treatments for Alzheimer's disease (AD) and other dementias. We review this lineage of work beginning with Alzheimer's own writings and drawings, then jump to the modern era beginning in the 1970s and early 1980s and provide a sampling of neuropsychological and other contextual work from each ensuing decade. During the 1980s our field began its foundational studies of profiling the neuropsychological deficits associated with AD and its differentiation from other dementias (e.g., cortical vs. subcortical dementias). The 1990s continued these efforts and began to identify the specific cognitive mechanisms affected by various neuropathologic substrates. The 2000s ushered in a focus on the study of prodromal stages of neurodegenerative disease before the full-blown dementia syndrome (i.e., mild cognitive impairment). The current decade has seen the rise of imaging and other biomarkers to characterize preclinical disease before the development of significant cognitive decline. Finally, we suggest future directions and predictions for dementia-related research and potential therapeutic interventions. (JINS, 2017, 23, 818-831).

Keywords: Alzheimer's disease; Biomarkers; Clinical trials; Cognition; Mild cognitive impairment; Neuroimaging; Neuropsychology; Neuroscience
https://www.ncbi.nlm.nih.gov/pubmed/29198280?dopt=Abstract
Holiday Inn, 59 Sipson Way, Sipson, West Drayton UB7 0DP, United Kingdom

We are glad to announce that the "2nd Global Experts Meeting on Frontiers in Alzheimer's Disease & Dementia" will be held on September 06-08, 2021, in London, UK. It is organized by Frontiers Meetings Ltd in collaboration with, and with generous support and cooperation from, enthusiastic academicians and editorial board members. This conference is a unique international platform, a confluence of all stakeholders of the ecosystem (industry, academia, researchers, innovators, regulators) coming together to present and discuss a wide range of current topics under the theme "Advancements and Breakthroughs in the Fields of Dementia & Alzheimer's Research".

Research Topics:
1. Mental Health & Psychiatry
2. Dementia
3. Care Practice and Awareness
4. Childhood Trauma and Dementia
5. Alzheimer's Disease Imaging
6. Causes and Prevention of Alzheimer's
7. Alzheimer's Disease Diagnosis and Symptoms
8. Alzheimer's Disease Pathophysiology and Disease Mechanisms
9. Neurodegenerative Diseases
10. Vascular Dementia
11. Dementia Nursing
12. Dementia Care and Consulting
13. Alzheimer's Clinical Trials and Studies
14. Brain Diseases
15. Parkinson's Disease
16. Molecular Genetics and Biology of Dementia
17. Medical Biotechnology and Alzheimer's Disease
18. Therapeutic Targets and Mechanisms for Treatment
19. Animal Models and Translational Medicine
20. Recent Studies and Case Reports

Target Audience:
https://allevents.in/west%20drayton/2nd-global-experts-meeting-on-frontiers-in-alzheimer%E2%80%99s-disease-and-dementia/80003221586392?ref=footer-tr-cityhome
PIA Executive Committee: Sex and Gender Differences in Alzheimer's Disease

Chair: Timothy Hohman
Dr. Timothy Hohman is an Associate Professor of Neurology, cognitive neuroscientist, and computational geneticist, with secondary appointments in the Vanderbilt Genetics Institute and Department of Pharmacology. Dr. Hohman's research leverages advanced computational approaches from genomics, proteomics, and neuroscience to identify novel markers of Alzheimer's disease risk and resilience. Within the Vanderbilt Memory and Alzheimer's Center, Dr. Hohman is the director of the Biomarker Core, oversees the development of neuroimaging, proteomic, and big-data analytical pipelines, and is the Principal Investigator of the Computational Neurogenomics Team focused on Alzheimer's Resilience and Sex Differences. Outside of Vanderbilt, he has directed numerous multi-site collaborative initiatives, with as many as four analysis sites and 40+ contributing universities. Dr. Hohman directs the Genomics Core for the Preclinical Alzheimer's Disease Consortium and is co-chair of the Alzheimer's Disease Sequencing Project (ADSP) Harmonization Consortium. Dr. Hohman's programmatic research focuses on understanding how certain individuals are able to accumulate Alzheimer's disease neuropathology without showing clinical symptoms of the disease. He has identified molecular drivers of such resilience through genomic and proteomic analyses leveraging neuroimaging and neuropathology endophenotypes. Dr. Hohman's team also integrates these diverse data types into a precision medicine approach, focusing on characterizing the best predictors of risk and resilience given an individual's age, sex, genetic, and neuropathological context. Through transdisciplinary collaboration, Dr. Hohman's team seeks to facilitate a more rapid move from genomic discovery to therapeutic development.

Vice Chair: Rachel Buckley
Dr. Rachel Buckley is an Australian cognitive neuroscientist at Massachusetts General Hospital/Brigham and Women's Hospital (Boston, USA) and is a recipient of an NIH-NIA K99/R00 Pathway to Independence award. Her research is focused on predictive modeling of sex differences in longitudinal cognitive change in older adults who are at genetic and biological risk of Alzheimer's disease. Her recent work builds on previous literature suggesting that women stand at greater risk of progressing to Alzheimer's disease dementia than men. Specifically, she has become engaged in understanding the underlying biological mechanisms that might underlie female susceptibility (or resilience) to Alzheimer's disease using positron emission tomography (PET) imaging of the tau protein in the brain. Dr. Buckley is also known in the area of subjective cognitive decline, where she wrote her PhD; here, she examined associations between subjective memory concerns and PET AD biomarkers. Dr. Buckley is also co-head of the Healthy Brain Project (healthybrainproject.org.au) with Drs. Yen Ying Lim, Matthew Pase and Nawaf Yassi. The aim of this study is to use digital technology to detect Alzheimer's disease risk as early as possible in racially and ethnically diverse, and rural, populations across Australia.

Programs Chair: Judy Pa
Dr. Judy Pa is a PhD Cognitive Neuroscientist and Associate Professor at the University of Southern California in the Mark and Mary Stevens Neuroimaging and Informatics Institute and in the departments of Neurology, Neuroscience, and Biomedical Engineering. Dr.
Pa has 18 years of human neuroimaging experience and directs a research lab focused on Alzheimer's risk factors and prevention, with a primary goal of developing and testing new lifestyle interventions for preserving brain health. Dr. Pa is the Imaging Core Co-Leader for USC's Alzheimer's Disease Research Center and a Project Leader for USC's Program Project on Vascular Contributions to Alzheimer's Disease. Her active research program includes two large ongoing lifestyle intervention trials with a focus on physical and cognitive activities, in addition to understanding Alzheimer's risk factors such as sex/gender and APOE4. Her research program is supported by the National Institute on Aging and the Alzheimer's Association.

Communications Chair: Frances-Catherine Quevenco
Frances-Catherine Quevenco is an accomplished neuroscientist with a passion for enabling dementia patients to access timely and accurate diagnoses to receive the proper care they need. Frances completed a BSc in Psychology at UCL London, followed by an MSc at Imperial College London, before joining the National University of Singapore as an Associate Cognitive Neuroscientist. After two years of research at NUS, she joined Prof. Roger Nitsch's group at the University of Zurich, dedicating her PhD and postdoc to exploring preclinical Alzheimer's disease biomarkers using neuroimaging techniques to enable earlier detection of the disease, when an intervention is most effective; it was there that she first became aware of the stark sex differences in the manifestation of the disease. She has since broadened her field of expertise, taking on a medical position at Roche Diagnostics to educate on and establish the value of a timely biomarker-based diagnosis in the field of Alzheimer's disease. In parallel, she is a strong advocate of advancing research into sex- and gender-specific differences in brain and mental disease and a core team member of the Women's Brain Project.

Early Career Representative: Justina Avila-Rieger
Dr. Justina Avila-Rieger is a postdoctoral research scientist in Neurology at the Gertrude H. Sergievsky Center and the Taub Institute for Research in Aging and Alzheimer's Disease at Columbia University. She completed her doctoral degree in clinical psychology, with an emphasis on neuropsychology and quantitative methodology, at the University of New Mexico and completed her clinical internship in Neuropsychology at the Baltimore VAMC. During her graduate training, she also specialized in health policy as a Robert Wood Johnson Foundation Health Policy Research Fellow. Her research focuses on variability in sex/gender inequalities in cognitive decline and dementia risk across race/ethnicity, place, and time. Her overall goal as a researcher is to give a voice to the communities that are traditionally marginalized in sex/gender cognitive aging research. In her own work, she does this by emphasizing the lifecourse experiences of women of color and how these experiences shape biological mechanisms that ultimately lead to sex/gender inequalities in late-life cognitive health.

Steering Member – Human: Michelle Mielke
Michelle M. Mielke, Ph.D. received a Bachelor of Science in Neuroscience at the University of Pittsburgh and a doctorate in Neuroepidemiology from the Johns Hopkins University Bloomberg School of Public Health. She is currently a Professor in the Department of Health Sciences Research, Division of Epidemiology, and a Professor in the Department of Neurology at the Mayo Clinic in Rochester, MN. Dr.
Mielke works as a translational epidemiologist to further understanding of the etiology and epidemiology of neurodegenerative diseases. A primary focus of her research is the identification of fluid biomarkers for the diagnosis, prediction, and progression of Alzheimer's disease and other neurodegenerative diseases. Another focus of Dr. Mielke's research is on understanding the sex and gender differences in the development and progression of neurodegenerative diseases. She directs the Mayo Clinic Specialized Center of Research Excellence on Sex Differences, with a specific focus on abrupt endocrine disruption, accelerated aging, and Alzheimer's disease. Dr. Mielke is the past chair of the Biofluid-Based Biomarker Professional Interest Area under the Alzheimer's Association, co-chair of the Society of Women's Health Alzheimer's Disease Network, a member of the Food and Drug Administration Peripheral and Central Nervous System Advisory Committee, and Senior Associate Editor of Alzheimer's and Dementia. She is the PI of several NIH- and foundation-funded clinical and epidemiological grants. She has published over 200 manuscripts and has presented at multiple national and international conferences and consortiums.

Steering Member – Human: Michael Ewers
Dr. Michael Ewers is a Professor at the Institute of Stroke and Dementia Research, University Hospital, Ludwig Maximilian University Munich. A major research interest focuses on identifying the functional brain mechanisms that underlie reserve capacity in Alzheimer's disease. In a series of cross-validation studies, fMRI-assessed connectivity of a hub in the fronto-parietal control network was found to attenuate the impact of core AD pathology on cognitive performance (Franzmeier et al., Neurology 2017; Brain 2018; Alzheimer's Res Therapy, 2018). His group is also investigating whether microglia activation (measured by biomarkers of TREM2) may provide a protective mechanism in response to AD pathology (Suarez-Calvet et al., Science Translational Medicine, 2016). Another focus is the identification of cerebrovascular and amyloid-beta-related DTI changes to determine when and where white matter alterations occur within the pathological cascade of Alzheimer's disease (Araque Caballero et al., Brain 2018).

Steering Member – Non-Human: Roberta Brinton
Dr. Roberta Diaz Brinton is the Director of the Center for Innovation in Brain Science at the University of Arizona Health Sciences and Professor of Pharmacology and Neurology, College of Medicine, University of Arizona. The Center for Innovation in Brain Science is focused on translational research and mechanistically driven therapeutic development for the at-risk aging and Alzheimer's brain. Brinton's research has focused broadly on the mechanisms through which the aging brain develops late-onset Alzheimer's disease (AD). She leads three programs of discovery research and two programs in translational/clinical research. Her discovery research programs focus on the systems biology of: 1) mechanisms underlying risk of Alzheimer's during female brain aging; 2) sex differences in mechanisms underlying risk of AD; and 3) regeneration and repair mechanisms and therapeutics relevant to Alzheimer's. Her translational and clinical research programs focus on therapeutic development to prevent, delay, and reverse AD, with emphasis on systems-biology regulators of the bioenergetic and regenerative systems of the brain.
Her research is supported by the National Institute on Aging, the Alzheimer's Association, and the Alzheimer's Drug Discovery Foundation. In addition to research endeavors, Brinton leads a translationally oriented Alzheimer's disease and related dementias NIA T32 (AZ-TRADD) for predoctoral fellows and is mPI on the NINDS URBRAIN R25 training grant for Diné Tribal College students (Tsaile, Arizona), Navajo Nation. Brinton serves on the NIH Director's Advisory Committee, the National Institute on Aging Scientific Advisory Board, and the Alzheimer's Drug Discovery Foundation Board of Governors.

Steering Member – Non-Human: TBA
Information will be forthcoming.
https://action.alz.org/personifyebusiness/Membership/ISTAART/PIA/SexandGenderDifferencesinAlzheimersDisease/ExecutiveCommittee.aspx
Neurodegenerative diseases such as Huntington's disease and Alzheimer's disease, although very different in etiology, share common degenerative processes. These include neuronal dysfunction, decreased neural connectivity, and disruption of cellular plasticity. Understanding the molecular mechanisms underlying the neural plasticity deficits in these devastating conditions may lead the way toward new therapeutic targets, both disease-specific and more generalized, which can ameliorate degenerative cognitive deficits. Furthermore, investigations of 'pathological plasticity' in these diseases lend insight into normal brain function. This review will present evidence for altered plasticity in Huntington's and Alzheimer's diseases, relate these findings to symptomatology, and review possible causes and commonalities.
https://researchers.mq.edu.au/en/publications/molecular-mechanisms-mediating-pathological-plasticity-in-hunting
Indiana Alzheimer Disease Center success brings sixth consecutive five-year federal grant

INDIANAPOLIS — Federal officials have recognized the research impact of scientists and physicians working on Alzheimer's disease at the Indiana University School of Medicine, awarding the school its sixth consecutive five-year grant for the Indiana Alzheimer Disease Center. The renewal brings $10.6 million in new funding, an increase of $1.5 million over the $9.1 million received when the center's funding was last renewed in 2011. "The renewal reflects the quality and broad range of translational research activities underway at the Indiana Alzheimer Disease Center to increase the scientific understanding of the causes and biological mechanisms of Alzheimer's disease and other dementias," said center director Andrew J. Saykin, Psy.D., Raymond C. Beeler Professor of Radiology and Imaging Sciences and director of the IU Center for Neuroimaging. "Research projects include analysis of genetic risk, early detection using advanced MRI and PET brain imaging, and treatment development through identification of novel therapeutic targets, as well as testing of new experimental medications and lifestyle interventions," Dr. Saykin said. Other major goals include training the next generation of Alzheimer's researchers and clinical providers, and helping inform current patients and their caregivers about new developments in the field and available resources, Dr. Saykin said. "The renewal of funding for the Indiana Alzheimer Disease Center reflects our deep commitment to understanding and developing new therapies for a disease that will affect millions of people in the coming years," said Jay L. Hess, M.D., Ph.D., M.H.S.A., dean of the IU School of Medicine and IU vice president for university clinical affairs. "With the resources of the Indiana Alzheimer Disease Center, we are able to bring together the many types of expertise – genomics, imaging, neurology, biostatistics, bench research and many more – that will be needed to solve the difficult problems that Alzheimer's continues to pose for scientists, physicians, patients and caregivers," said Anantha Shekhar, M.D., Ph.D., executive associate dean for research affairs at the IU School of Medicine and director of the IU Precision Health Initiative. The center's scientific momentum is reflected by about 60 research presentations by center faculty and trainees at the Alzheimer's Association International Conference in Toronto in July. A dozen faculty members were invited to give oral presentations, chair research discussions, and lead major "plenary" sessions at the conference. Given the outstanding resources and engagement in scientific progress, the center, Dr. Saykin noted, is "well aligned" to support the primary goal of the National Plan for Alzheimer's Disease: prevent and effectively treat Alzheimer's disease by 2025. In part, the budget increase reflects new initiatives since the previous renewal, notably the creation of the Genetics, Biomarker, and Bioinformatics Core in 2014. That core, led by Tatiana Foroud, chair of the Department of Medical and Molecular Genetics, boosted the center's resources to unearth genetic differences associated with Alzheimer's disease, to identify molecular signals pointing to those who might be at risk for the disease, and to analyze the vast amounts of data generated by such research.
Identifying those at risk for Alzheimer’s long before any symptoms appear has become one of the top research priorities at the center and in the field as a whole. “We now have evidence that Alzheimer’s disease begins at least 20 years before the person comes into the doctor’s office with dementia. That’s transformative knowledge in the field, to which our center has contributed,” Dr. Saykin said. He added that “the long presymptomatic phase of disease provides an important time window to test disease modifying agents and lifestyle strategies.” Founded in 1991, the Indiana center is among the nation’s oldest, having been continuously funded by the National Institute on Aging, part of the National Institutes of Health. Thirty-eight faculty physicians and scientists, and 22 staff are affiliated with or employed by the center, which serves as a nexus for a large network of collaborators whose work is related to Alzheimer’s disease and other dementing illnesses. The Indiana Alzheimer Disease Center is composed of eight areas of focus, or cores:
https://medicine.iu.edu/news/2016/08/alzheimer-center-federal-funding-renewal-1/
This article is co-authored by John Hulleman, Ph.D.

Dry age-related macular degeneration (AMD) is a complex condition that distorts a patient's vision. Left untreated, the disease can develop into wet AMD, which is more severe. Wet AMD causes retinal leaking and bleeding, vision changes, and eye damage that cannot be reversed. The International Agency for the Prevention of Blindness estimates that as many as 196 million people around the world will have AMD by 2020, and there is no cure today. Drugs that target a substance in the blood called vascular endothelial growth factor (VEGF) can be effective temporarily, but patients need frequent medication injections in the eye. Promising research at UT Southwestern and the National Institutes of Health (NIH), however, is focused on improving current treatments and paving the way for new therapies. We anticipate that this research will be a springboard for the treatment of AMD and other age-related retinal diseases. Two of the most exciting areas of study today are gene therapy and stem cell therapy.

Gene therapy for AMD

UT Southwestern is conducting pre-clinical studies developing potential gene therapies that target mechanisms of disease apart from VEGF. This approach could reduce the need for patients to have frequent drug injections in their eyes. In fact, the goal is to find out how gene therapy can slow or even prevent AMD by targeting novel disease pathways, or how a disease develops. For example, one study involves introducing into the retina an adeno-associated virus (AAV) – a clinically approved, genetically altered, nontoxic virus. The AAV carries a specific DNA sequence that reprograms retinal cells and reestablishes defective cellular pathways. This might help slow the progression of AMD and of more aggressive eye conditions that cause rapid loss of retinal function at an early age. Some of our research uses mouse models to try to replicate certain aspects of AMD, although this is difficult for some disease characteristics. Preliminary studies suggest we can fine-tune signaling in mouse retinas to restore a balanced eye environment. In the future, we expect that similar gene therapies will be available to treat AMD and other complex eye diseases.

Stem cell therapy for AMD

The NIH is conducting trials in which researchers are using (and possibly editing) patient-derived induced pluripotent stem cells (iPS cells) in their laboratories. In other words, they are using stem cells to grow new retinal cells that have the identical genetic makeup of a patient. Theoretically, iPS-derived retinal cells could be transplanted into a patient with AMD to improve their condition. Recent advancements have allowed researchers to try to create eye-like organoids that are structurally similar to fully developed human eyes. We can even replicate some diseases in the laboratory using these models. This research could one day help us examine eye conditions and test potential treatments without risk to human patients.

Retinal injury and disease recovery research

Today, research in AMD is focused on the basics: What causes it? How does it affect the eye? Can macular degeneration be reversed? How can we improve current therapies? A constant focus is understanding how diseases cause retinal injury and how the retina tries to recover. This information will eventually help us develop therapies to minimize injury and maximize recovery.
Currently, we are using focused light to replicate retinal injury in the laboratory. Then, we study how it changes the cells and genes to better understand how we can intervene and preserve vision. This approach may even allow us to generate therapies that are not disease-specific, which could make preventive therapies and treatment more widely accessible to patients. There is no single breakthrough in AMD research that will cover all the questions and opportunities for prevention and treatment. Today and in the future, the key to successful retinal disease management will continue to rely on a combination of diligent research, technological advancements, and expert care from a team of AMD and retinal disease specialists. To find out whether you or a loved one might benefit from a team approach to AMD or retinal disease care, call 214-645-2020 or request an appointment online.
https://utswmed.org/medblog/amd-stem-cell-gene-therapy/
Brain plasticity; molecular, cellular and functional mechanisms of brain repair in traumatic brain injury, stroke, Alzheimer's disease, and CADASIL. Pathological mechanisms underlying the development and progression of brain injury, neurodegenerative and genetic diseases.

Research abstract

The long-term goal of our research is to develop new therapeutic strategies for stroke, Alzheimer's disease, CADASIL, and traumatic brain injury. Our current research focuses on enhancing brain repair after brain damage caused by either disease or injury. The major approaches used in our studies include molecular biology, cellular biology, microsurgery, live brain imaging, flow cytometry, motor and cognitive function assessment, immunohistochemistry, and brain section imaging. Stroke is considered a brain attack and is ranked as the number one cause of long-term disability in adults. Alzheimer's disease (AD) is a neurodegenerative disease that causes irreversible progressive brain damage and memory loss. Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is a rare hereditary stroke disorder and an autosomal dominant vascular dementia. Currently, there is no drug that can improve the functional outcome of stroke, nor is there a treatment that can stop or delay the progressive brain damage caused by AD or CADASIL. We have recently demonstrated the therapeutic effects of bone marrow stem cell factors on brain repair in animal models of stroke, Alzheimer's disease, and CADASIL. Our laboratory is studying how the bone marrow stem cell factors repair the brain in these devastating diseases. In addition to the in vivo studies, we are also determining how the bone marrow stem cell factors regulate neuronal process formation, synaptic generation, and stem cell growth, motility, and differentiation. We are also gaining an understanding of the detailed process by which stem cells promote neuronal network development.

Selected publications
1. Zhao L.-R., Duan W.-M., Reyes M., Keene C.D., Verfaillie C.M., Low W.C. Human bone marrow stem cells exhibit neural phenotypes and ameliorate neurological deficits after grafting to ischemic brain in rats. Experimental Neurology, 2002; 174: 11-20.
2. Zhao LR, Singhal S, Duan WM, Mehta J, Kessler JA. Brain repair by hematopoietic growth factors in a rat model of stroke. Stroke, 2007; 38:2584-2591.
3. Zhao LR, Berra HH, Duan WM, Singhal S, Mehta J, Apkarian AV, Kessler JA. Beneficial effects of hematopoietic growth factor therapy in chronic ischemic stroke in rats. Stroke, 2007; 38:2804-2811.
4. Zhao LR, Navalitloha Y, Singhal S, Mehta J, Kessler JA, Piao CS, Guo WP, Groothuis DR. Hematopoietic growth factors pass through the blood-brain barrier in intact rats. Experimental Neurology, 2007; 204:569-573.
5. C-S Piao, M E Gonzalez-Toledo, Y-Q Xue, W-M Duan, S Terao, D N Granger, R E Kelley and L-R Zhao. The role of stem cell factor and granulocyte-colony stimulating factor in brain repair during chronic stroke. JCBFM. 2009; 29:759-770.
6. Bin Li, Maria E Gonzalez-Toledo, Chun-Shu Piao, Allen Gu, Roger E Kelley, Li-Ru Zhao. Stem cell factor and granulocyte-colony stimulating factor reduce β-amyloid deposits in the brains of APP/PS1 transgenic mice. Alzheimer's Research & Therapy. 2011; 3:8.
7. Piao CS, Li Bin, Lijuang Zhang, Zhao LR*. Stem Cell Factor and Granulocyte-Colony Stimulating Factor Promotes Neurogenesis and Inhibits Glial Differentiation.
Differentiation, 2012;83:17-25.
8. Lili Cui, Sasidhar R. Murikinati, D Wang, Xiangjian Zhang, Wei-Ming Duan, Li-Ru Zhao. Reestablishing neuronal networks in the aged brain by stem cell factor and granulocyte-colony stimulating factor in a mouse model of chronic stroke. PLoS ONE. 2013;8:e64684.
https://www.upstate.edu/grad/faculty/major-areas.php?empID=zhaol
Amphisomes are intermediate/hybrid organelles produced through the fusion of endosomes with autophagosomes within cells. Amphisome formation is an essential step in the sequential maturation of autophagosomes before their ultimate fusion with lysosomes for cargo degradation. This process is highly regulated, with multiple protein machineries, such as SNAREs, Rab GTPases, tethering complexes, and ESCRTs, involved in facilitating autophagic flux. In neurons, autophagosomes are robustly generated in axonal terminals and then rapidly fuse with late endosomes to form amphisomes. This fusion event allows newly generated autophagosomes to gain retrograde transport motility and move toward the soma, where proteolytically active lysosomes are predominantly located. Amphisomes are not only the products of autophagosome maturation but also the intersection of the autophagy and endo-lysosomal pathways. Importantly, amphisomes can also participate in non-canonical functions, such as retrograde neurotrophic signaling or autophagy-based unconventional secretion by fusion with the plasma membrane. In this review, we provide an updated overview of recent discoveries and advancements concerning the molecular and cellular mechanisms underlying amphisome biogenesis and the emerging roles of amphisomes. We discuss recent developments in the understanding of amphisome regulation as well as its implications in the context of major neurodegenerative diseases, with a comparative focus on Alzheimer's disease and Parkinson's disease.
https://www.researchwithrutgers.com/en/publications/understanding-amphisomes
Falling for a scam may be an early warning sign of dementia, a new study suggests. Older Americans lose about $35 billion collectively a year to scammers. Those with poor con-radar are at about a two-fold risk of dementia, Rush University research suggests. Source: the Mail Online | Health - Category: Consumer Health News - Source Type: news

Related Links:

Errorless Learning as a method of neuropsychological rehabilitation of individuals suffering from dementia in the course of Alzheimer's disease. Authors: Śmigórska A, Śmigórski K, Rymaszewska J. Abstract: The following article discusses the possibility of applying a rehabilitation strategy known as Errorless Learning (EL) in patients suffering from Alzheimer's disease (AD). The authors present the issue in the context of the knowledge on the effectiveness of administering neuropsychological interventions in patients with AD. The history of the EL method's development is presented, as well as techniques used in its domain. The novelty of the EL methodological approach is shown. It is emphasized that EL, in contrast with the majority of neuropsycho... Source: Psychiatria Polska - Category: Psychiatry - Tags: Psychiatr Pol - Source Type: research

Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. CONCLUSIONS: These results are especially relevant when compared to previous SSD studies showing effects in behavior only after a demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, either using or not using HAs or CIs. We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language. PMID: 31006700 [PubMed - in process] Source: Restorative Neurology and Neuroscience - Category: Neurology - Tags: Restor Neurol Neurosci - Source Type: research

PhysIcal Exercise, Cerebral, COgnitive and Metabolome in People at Risk of Mild Cognitive Impairment. Conditions: Mild Cognitive Impairment; Alzheimer Dementia. Intervention: Behavioral: Supervised Exercise Programme. Sponsors: University of Cadiz; Centro de Excelencia en Metabolómica y Bioanálisis (CEMBIO); Servicio Central de Neuroimagen de la Universidad Pablo de Olavide; Consejería de Salud y Bienestar Social, Andalucía. Recruiting. Source: ClinicalTrials.gov - Category: Research - Source Type: clinical trials

Publication date: Available online 23 April 2019. Source: Alzheimer's & Dementia. Author(s): Mélanie Fortier, Christian-Alexandre Castellano, Etienne Croteau, Francis Langlois, Christian Bocti, Valérie St-Pierre, Camille Vandenberghe, Michaël Bernier, Maggie Roy, Maxime Descoteaux, Kevin Whittingstall, Martin Lepage, Éric E. Turcotte, Tamas Fulop, Stephen C. Cunnane. Abstract: Introduction: Unlike for glucose, uptake of the brain's main alternative fuel, ketones, remains normal in mild cognitive impairment (MCI). Ketogenic medium chain triglycerides (kMCTs) could improve cognition in MCI by providing the... Source: Alzheimer's and Dementia: The Journal of the Alzheimer's Association - Category: Geriatrics - Source Type: research

Researchers here report on discovering that an existing farnesyltransferase inhibitor drug reverses the accumulation of altered tau protein aggregates in a mouse model. The death and dysfunction of nerve cells in the neurodegenerative conditions known as tauopathies is driven by the formation of neurofibrillary tangles, made of tau protein.
That in turn has deeper causes, such as the chronic inflammation produced by senescent cells and disruption of immune cell activity in the central nervous system, one of which is no doubt being adjusted in some way by the action of the drug in this case. As in all such quite indirect me... Source: Fight Aging! - Category: Research Authors: Reason Tags: Daily News Source Type: blogs

While clearing out amyloid-β from the brain has so far proven to be a matter of too little, too late in late-stage Alzheimer's disease patients, there is still a strong basis of evidence for the merits of removing amyloid-β. It is reasonable to say that it causes meaningful pathology; if people did not accumulate amyloid-β deposits, then there would be no consequent disarray in the function of neurons and immune cells in the brain. This particular foundation of the development of dementia would be removed. Even if the mechanisms of the later stages of Alzheimer's, the chronic inflammation and tau protein agg... Source: Fight Aging! - Category: Research Authors: Reason Tags: Daily News Source Type: blogs

Contributors: Erin McKay; John Beck; Mary Winn; Karl Dykema; Andrew Lieberman; Henry Paulson; Scott Counts. Series Type: Expression profiling by array. Organism: Homo sapiens. Gene expression profiling was performed on frontal and temporal cortex from vascular dementia (VaD), Alzheimer's disease (AD), and non-demented controls (Control) obtained from the University of Michigan Brain Bank. Controls and AD cases had no infarcts in the autopsied hemisphere. Vascular dementia cases had low Braak staging. Source: GEO: Gene Expression Omnibus - Category: Genetics & Stem Cells Tags: Expression profiling by array Homo sapiens Source Type: research

Cerebrospinal Fluid Total and Phosphorylated α-Synuclein in Patients with Creutzfeldt–Jakob Disease and Synucleinopathy. In conclusion, our data confirm t-α-synuclein and p-α-synuclein as robust biomarkers for sCJD and indicate the potential use of colorimetric t-α-synuclein ELISAs for differential diagnosis of dementia types. Source: Molecular Neurobiology - Category: Neurology Source Type: research

Differential insular cortex sub-regional atrophy in neurodegenerative diseases: a systematic review and meta-analysis. In conclusion, insular sub-regional atrophy, particularly of the anterior dorsal region, may contribute to cognitive and neuropsychiatric deficits in neurodegeneration. Our results support anterior insular cortex vulnerability and convey the differential involvement of the insular sub-regions in functional deficits in neurodegenerative diseases. Source: Brain Imaging and Behavior - Category: Neurology Source Type: research

Next Generation Precision Medicine: CRISPR-mediated Genome Editing for the Treatment of Neurodegenerative Disorders. Abstract: Despite significant advancements in the field of molecular neurobiology, especially neuroinflammation and neurodegeneration, the highly complex molecular mechanisms underlying neurodegenerative diseases remain elusive. As a result, the development of the next generation of neurotherapeutics has experienced a considerable lag phase. Recent advancements in the field of genome editing offer a new template for dissecting the precise molecular pathways underlying complex neurodegenerative disorders. We believe that innovative genome and transcriptome editing strategies offer an excellent opportunity to decipher nove...
https://medworm.com/687480126/falling-for-a-scam-may-be-an-early-warning-sign-of-dementia-a-new-study-suggests/
MSc Neuroscience and Neurodegeneration course structure information

Rather than comprising a piecemeal assortment of small modules, the course mainly consists of large coherent modules containing themes of content that are developed over the course of each module. Each year of the course is worth 60 credits, which equates to 600 hours of study (including preparation of assessed coursework). Study is split over 40 weeks of the academic year, so you should expect to devote around 15 hours per week to study over these 40 weeks.

Introduction to Neurodegeneration (60 credits)
Clinical neurology is underpinned by knowledge of the neuroanatomy of the central and peripheral nervous system, so a thorough knowledge of basic neuroanatomy is essential. This module starts with a comprehensive series of short neuroanatomy videos demonstrating laboratory dissection of the human brain and spinal cord. The videos are complemented by an interactive learning resource featuring quizzes and a formative writing exercise. The rest of the module is organised into blocks of content focused on individual diseases, namely motor neurone disease, Alzheimer's disease/dementia, Parkinson's disease, Huntington's disease and multiple sclerosis. Three themes run through these disease blocks: patient symptoms and care; the associated disease pathology; and the genetics of disease and how this facilitates modelling in cell lines and laboratory animals. By the end of this module students will have detailed knowledge of the clinical features of the major neurodegenerative diseases and the underlying pathological and genetic causes.

Mechanisms of Neurodegeneration (60 credits)
The first year of study introduced the clinical features, underlying pathology and genetic causes of disease. While genetics facilitates disease modelling, it also provides crucial insight into the cellular and molecular mechanisms underlying disease. These mechanisms are the focus of the second year. After the introduction of key topics and concepts, content in this module is again organised into blocks focused on individual diseases, but with four themes running through the disease blocks: the role of protein accumulation and aggregation in neurodegenerative diseases; mitochondrial dysfunction in neurodegeneration; neuroinflammation and the role of non-neuronal cells in disease progression; and the role of dysfunctional RNA biology and aberrant gene expression in disease. These are key areas of research aimed at the development of disease-modifying therapies. Alongside further development of communication skills, emphasis will be placed on building critical thinking skills and understanding the funding mechanisms underpinning academic research.

Novel Therapies for Neurodegeneration (15 credits)
This core module provides detailed critical insight into the development of novel therapeutics for neurodegeneration. Preclinical and clinical studies utilising drugs, stem cells, antibodies and nucleotide-based therapeutics (e.g. viruses and antisense oligonucleotides) will be discussed. The different routes of administration required by discrete classes of therapeutic for effective target engagement will be considered. Case studies highlighting the importance of academic and industrial partnerships with the healthcare sector at different stages of therapeutic development will be discussed.
You can then choose either the optional modules Professional and Research Skills and Literature Review (45 credits combined) or the optional module Research Project (45 credits).

Professional and Research Skills (15 credits)
This optional module will focus on the practicalities involved in performing audits and research with human data, both within and outside of the NHS, in terms of the ethics and research governance procedures that are required prior to collecting samples and data. The module will also cover the professional responsibilities of researchers, including mentoring and dealing with incidences of misconduct. Students will further develop their knowledge of how to communicate scientific messages to both scientific and non-scientific audiences, including patients and the public.

Literature Review (30 credits)
This optional module allows students to carry out an in-depth literature-based project on a specific clinical or scientific topic relevant to neurodegeneration, which is suitable for publication. It aims to develop abilities in information retrieval from appropriate sources, synthesis and critical analysis of published literature, and identification of gaps in current knowledge that should be addressed by future research. New titles are provided each year by academic supervisors to address an unmet need for an original review, an updated review or the questioning of a paradigm in the wider literature.

Research Project (45 credits)
This optional module allows students to undertake a remote research project involving quantitative analysis of existing data sets or qualitative/mixed-methods research evaluating a specific aspect of clinical practice or research. Data-based projects can be based on imaging, gene expression or genomics data sets. You will be able to choose a project from a booklet or design your own project as long as a suitable academic supervisor can be appointed.

The content of our courses is reviewed annually to make sure it is up-to-date and relevant. Individual modules are occasionally updated or withdrawn in response to discoveries through our world-leading research, funding changes, professional accreditation requirements, student or employer feedback, outcomes of reviews, and variations in staff or student numbers. In the event of any change we'll consult and inform students in good time and take reasonable steps to minimise disruption.

Information last updated: 17 March 2022
https://www.sheffield.ac.uk/medicine/modules/msc-neuroscience-and-neurodegeneration-course-structure-information
Scientific Research Priorities

Children’s Hospital Los Angeles is dedicated to advancing basic, translational, clinical and outcomes-based research. With access to one of the largest and uniquely diverse pediatric populations in the United States, physicians and scientists have the opportunity to research and develop innovative treatments that are relevant to children here and across the globe. The Institute’s interdisciplinary research is organized around three synergistic areas of focus that together fully explore the developmental origins of health and disease and address the most pressing national child health issues:
- The Institute for the Developing Mind
- Metabolism, Immunity, Infection, and Inflammation
- Regenerative Medicine and Cellular Therapies

The interdisciplinary, interdepartmental work at The Saban Research Institute of Children's Hospital Los Angeles centers on these major scientific priorities to address child health and disease susceptibility:

Cancer and Blood Diseases
Basic and translational cancer research primarily focuses on neural tumors, acute leukemia and sarcomas. Hematology investigators are conducting pioneering research into iron overload, the pathology of sickle cell disease, and the treatment of childhood bleeding and clotting disorders. The CBDI is home to more major, multi-center clinical trials and research consortia than any other program of its kind in the country.

Community, Health Outcomes and Intervention
This program works to promote the health and well-being of families through interventions designed to reduce health disparities and barriers to healthy living. Community-based research examines areas ranging from obesity and diabetes to teen births, HIV/AIDS, and child abuse and neglect. The program is also engaged in research on early identification of and intervention for children with autism spectrum disorders.

Developmental Neurosciences
Focused on understanding the developing brain, the Neurosciences research program uses a molecular genetics and cellular imaging approach to provide novel insights into the developmental origins of neurological disorders. Researchers also seek to understand how genes and environmental factors influence memory, attention, language, social communication and behavior during critical developmental periods.

Diabetes and Obesity
Researchers are exploring the mechanisms contributing to the growing epidemic of obesity and diabetes in children, and developing strategies to reduce the burden of these conditions. Studying metabolic syndromes and their impact on obesity, cancer risk and cardiovascular disease, researchers also aim to improve treatment and prevention of these disorders.

Imaging
The focus of the Children’s Health Imaging research program is to advance the use of imaging technology in the study of pediatric disease in the laboratory and in clinical practice. Its goals include providing a specialized imaging core facility to develop biomarkers and outcome measures, and studying the biological and metabolic determinants of musculoskeletal development.

Immunology, Infectious Disease and Pathogens
In research ranging from studies of bacterial meningitis to the transmission of HIV in breast milk, scientists strive to understand the mechanisms by which bacteria, viruses and fungi colonize, invade and evade immune response. They also seek to define the role of innate, adaptive and intrinsic immunity in studies that serve as the basis for antimicrobials, vaccines and public health interventions.
Regenerative Medicine
The goal of the program is to impact unmet medical needs through advancements in basic stem cell biology, applied stem cell therapies and tissue engineering. By investigating the basic mechanisms of human organ development, repair and regeneration, scientists aim to discover novel targets for cellular therapies and to understand how to generate tissue-engineered organs.

Other important research areas at Children's Hospital Los Angeles include:

Cardiology
Researchers in the Heart Institute are studying the role of genetics in pediatric heart disease; working to improve the odds of transplantation success; advancing imaging capabilities; and developing new therapies, surgical techniques and novel medical devices. They are also developing sophisticated technologies, such as fetal stents and pacemakers, for treating babies with congenital heart conditions in the womb.

Orthopaedics
Orthopaedic surgeons participate in clinical research to identify underlying causes and improve surgical treatment options for conditions such as bone and soft-tissue tumors; hip disorders; scoliosis and spinal deformities; and neuromuscular disorders, including cerebral palsy and spina bifida. They also conduct clinical studies in sports medicine, hand surgery and microsurgery.

Personalized Medicine and Genomics
The Center for Personalized Medicine brings together physicians and scientists with expertise in genomics, clinical genetics, bioinformatics and molecular diagnostics, working in collaboration with specialists throughout CHLA. Research is focused on three broad categories with the greatest potential to impact children’s health: oncology, inherited disease, and infectious disease.
https://www.chla.org/research/scientific-research-priorities
Sequencing-by-ligation (SBL) is one of several next-generation sequencing methods that have been developed for massive sequencing of DNA immobilized on arrayed beads (or other clonal amplicons). Polony sequencing utilizes a fixed bead array and SBL to obtain DNA sequence information. Biotinylated template DNA is isolated in a PCR solution and added to streptavidin-coated beads. Through a process of emulsion PCR, the beads become clonally covered with multiple copies of a single template strand.

In SBL, an anchor primer is hybridized to a known region of the template DNA, usually a linker or adaptor that has been ligated onto a fragment of unknown DNA sequence. Then, using a series of fluorophore-labeled degenerate query oligos, the DNA adjacent to the known region is sequenced through repeated cycles of hybridizing an anchor primer, ligating a query oligo, then denaturing the DNA to clear all signal. Images captured in four fluorescent channels, one corresponding to each base, serve as data. On every frame, every bead's coordinates are recorded, as well as the signal the bead gave for a given base position. After multiple cycles of biochemistry and imaging, a sequence is generated for every bead, which is then used as the raw sequence for alignment.

SBL has the advantage of being easy to implement and accessible to all, because it can be performed with off-the-shelf reagents. However, SBL has the limitation of very short read lengths. To overcome this limitation, complex library preparation processes have been developed, which can be time-consuming and difficult, and can result in low-complexity libraries. Antoine Ho, an INCBN IGERT Graduate Trainee, is working in the laboratory of his main advisor, Prof. Jeremy Edwards from the Dept. of Molecular Genetics and Microbiology at UNM School of Medicine, on a variation of traditional SBL protocols called cyclic SBL (cSBL), which extends the number of sequential bases that can be sequenced by using Endonuclease V to recognize an incorporated deoxyinosine site that serves as a query primer and to clip the DNA, thus leaving a ligatable end extended into the unknown sequence for further SBL cycles.

Virtually all next-generation sequencing platforms generate gigabytes of data per run, often in the form of mate-paired (or single-sequence) short reads. This requires the analysis and mapping of several billion mate-paired reads when used for whole-genome sequencing. An efficient algorithm to perform this mapping to a large reference genome, itself comprising several giga-base-pairs, is essential, given the very large dataset sizes. With assistance from his co-advisor, Prof. Susan Atlas from the Dept. of Physics, Antoine Ho and his colleagues have developed in-house a dedicated software package called SAWTooth (Sequencing Analysis Workbench Tool), whose core functionality is the efficient mapping of short-read sequencing data to a reference genome, outperforming other popular codes used in genome alignment by ~100-fold or more. SAWTooth also implements several ancillary applications for validation and statistical analysis of mapping results.

All fast contemporary mapping algorithms rely on indices: auxiliary data structures that facilitate mapping sequences to a reference genome. These indices generally fall into two broad categories, suffix trees and hash indices. Traditionally, the construction and use of suffix trees imposed prohibitive memory requirements; in recent years, innovations in the field of compressed text indices have rendered suffix-based methods feasible for whole-genome indexing, though still not optimal. SAWTooth utilizes hash indexing, a well-known referencing data structure that allows key-based data retrieval in constant, O(1), time, making it the fastest of all data retrieval structures. In principle, general hash indices have some limitations that may reduce their performance or impair their usefulness. Keys are not ordered, so sorted lists and range searches are not intrinsic operations on the data structure. Also, a hash function may generate the same hash for multiple keys, resulting in a collision; resolving collisions requires extra processing and access to the original keys within the index. However, the special nature of genomic data and the specialized purpose of mapping mate-paired reads to a reference genome can be exploited to create hash indices that are free from these limitations. In SAWTooth, the hash key is the sequence tag, and the data to be retrieved is an exhaustive list of loci where the reads map in the reference genome.

To demonstrate the feasibility of a cSBL approach to genome sequencing and to quantify the gains of cSBL over traditional SBL methods, SAWTooth was applied to simulate human genome coverage using mate-paired data ranging from twenty-six bases (the limit of traditional SBL) to forty bases (the theoretical gain from a cSBL implementation). A set of simulated mate-paired tags, each pair separated by a range of 300-700 bases, was created, with tag lengths ranging from 13 to 20 bases. A sufficient number of tags was computationally generated to simulate 10× coverage. The tags were all generated from chromosome 1, mapped back to the entire genome, and calculations of chromosome 1 coverage were performed. Next, an analysis was performed of how many times each tag mapped to the genome. One of the more significant benefits gained by increasing tag length from 13 to 20 bases is that far fewer tags must be discarded because they do not map uniquely. At a tag length of 13 bases, only 57.2% of the tags are used, compared to 85.6% at a tag length of 20, thus effectively increasing throughput.

Address Goals

Primary: This project involves not only the bench-science portion of working with biochemical processes and DNA, but also the generation of simulation data using in-house developed software, the Sequencing Analysis Workbench Tool (SAWTooth). Antoine Ho was involved in the generation of simulation data and served as a biologist consultant for the development of the SAWTooth package. The two accomplishments described above are in vastly different fields, one being a molecular biology advance and the other from computer science. Next-generation sequencing is inherently interdisciplinary, and this project highlights the need for such collaboration to advance the field.

Secondary: The innovation of the cSBL variation extends and improves sequence-acquisition biochemistry, whereas the creation of SAWTooth improves the alignment of mate-paired tags to a reference genome. These are two key aspects of sequencing that needed to be improved, and together they make human genome sequencing more feasible. This in turn increases the effectiveness of categorizing single nucleotide variations in an individual's genome that may contribute to disease states. It is hoped that this knowledge may be used diagnostically, and then one day, therapeutically.
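To make the hash-index idea concrete, here is a minimal, illustrative Python sketch of tag mapping with a k-mer hash index (not SAWTooth's actual code, just the general technique described above), with the tag sequence as the key and an exhaustive list of reference loci as the value:

```python
from collections import defaultdict

def build_tag_index(reference: str, tag_len: int) -> dict:
    """Hash index: tag sequence -> exhaustive list of 0-based reference loci."""
    index = defaultdict(list)
    for pos in range(len(reference) - tag_len + 1):
        index[reference[pos:pos + tag_len]].append(pos)
    return index

def map_tag(index: dict, tag: str) -> list:
    """Constant-time lookup; returns every locus where the tag occurs."""
    return index.get(tag, [])

# Toy example: longer tags map uniquely more often than shorter ones,
# mirroring the 57.2% (13-mers) vs 85.6% (20-mers) usable-tag result above.
ref = "ACGTACGTTTGACGTACGAAACGT"
idx = build_tag_index(ref, tag_len=4)
print(map_tag(idx, "ACGT"))  # multi-mapping tag: [0, 4, 11, 20], would be discarded
print(map_tag(idx, "TTGA"))  # uniquely mapping tag: [8]
```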
http://www.igert.org/highlights/645.html
Recently, I have been researching big data analytics in biochemistry, and started wondering about how genome sequence compression could affect analysis. Of all the methods listed on the Wikipedia page, the reference template method is my favourite, as it not only seems effective but also was the idea that popped into my head before I did any study of this topic. But before I implement it in a project I am working on, I wanted to know if there are any common and obvious drawbacks/limitations that the bioinformatics industry faces quite often when using this scheme.

The most obvious drawback of the reference template method is that to use it, you must have already analysed the data. Usually, you get data from wet biologists. They sequence samples and upload sequences to a server; often the uploading process is already automated. If you sequence in-house, the data is already on your server. If not, biologists do not want to do the extra job. Moreover, if it is not a big project, you do not have so much data that transferring and storing it is the problem. So you'll get the data as *.fastq.gz files. Then you do QC, align reads and do SNV calling or expression analysis or whatever you want. Aligning and SNV calling can be painful, complicated (copy number variation, heterozygosity, etc.) and time-consuming. That is the problem. The resulting data is usually stored as Variant Call Format (VCF). VCF was developed under the influence of CRAM and of the 1K Genome Project, which uses it.
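As a rough illustration of the reference-template idea the question asks about, here is a minimal Python sketch (a hypothetical toy codec, not a production tool) that stores a sample sequence as a list of substitutions against a reference and reconstructs it losslessly:

```python
def compress_vs_reference(reference: str, sample: str) -> list[tuple[int, str]]:
    """Keep only (position, base) pairs where the sample differs from the
    reference. Assumes equal lengths (substitutions only); real formats
    such as CRAM also encode indels, qualities and unaligned reads."""
    assert len(reference) == len(sample)
    return [(i, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

def decompress(reference: str, diffs: list[tuple[int, str]]) -> str:
    """Rebuild the sample by applying the stored substitutions."""
    seq = list(reference)
    for pos, base in diffs:
        seq[pos] = base
    return "".join(seq)

ref    = "ACGTACGTACGT"
sample = "ACGAACGTACTT"
diffs = compress_vs_reference(ref, sample)
print(diffs)  # [(3, 'A'), (10, 'T')]
assert decompress(ref, diffs) == sample
```

Note that producing `diffs` presupposes that alignment against the reference has already been done, which is exactly the drawback the answer points out: the expensive analysis step comes first.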
https://biology.stackexchange.com/questions/44222/are-there-any-major-noticeable-limitations-to-genome-sequence-compression-method
One of the functions of Public Health England's (PHE's) bioinformatics unit is to develop tools as part of larger genomics-based projects. The tools with wider generic uses are made available here for use across the scientific community.

Genome annotation browser
The genome annotation browser searches all complete microbial genomes for features that are described with a particular keyword. The output displays all relevant information for the features matching the query, including their sequence.

AFLP fragment predictor program (ALFIE)
ALFIE predicts fragment sizes resulting from restriction endonuclease digestion and subsequent amplification in an AFLP reaction. All currently sequenced genomes are available for querying. A list of target sequences may also be supplied.

Gene extractor
Gene extractor extracts all the coding sequences from a GenBank file that contains multiple genes. The output is displayed in FASTA format.

Motifs or primers: unique to pooled sets (MOP-UPs)
MOP-UPs searches alignments for primers or amino acid motifs that are specific to user-defined groups of sequences within the alignment.

Virulence searcher
Virulence searcher predicts potential virulence factors in unannotated genomes by predicting genes and searching the putative proteins for virulence-related amino acid motifs.

EMBOSS
EMBOSS is a suite of molecular biology tools which perform functions including:
- alignments
- DNA or protein editing
- repeat finding
- composition analysis

VNTR diversity and confidence extractor (V-DICE)
V-DICE calculates the diversity index of VNTR loci, plus confidence intervals. This tool provides statistical evidence of repeat variability for known loci, which may aid assay development.

Synonymous bases in nucleotide sequences (SynBin)
SynBin identifies synonymous and non-synonymous mutations in DNA sequences. This tool allows multiple query sequences to be compared to one of a list of user-submitted reference sequences. It provides a graphical report of mutation location and status.

Double Artemis comparison tool (Double ACT)
Double ACT can easily generate the comparison file necessary to run the genome comparison tool ACT provided by the Sanger Centre.

Assembly tool
Assembly tool takes traces belonging to a single locus from one strain or isolate and assembles them. The resulting consensus sequence is scored for quality. If one or more reference sequences are supplied, the consensus sequence is aligned to each one and the closest match is reported along with the corresponding alignment.

Variable region finder
Variable region finder finds regions of difference between strains where at least 3 genome sequences are available. This application allows scientists to quickly determine regions of interest when studying phenotypic differences or designing typing assays.

PFGE predictor
PFGE predictor takes a sequenced genome and predicts the fragment sizes that will be produced when it is cut with a particular restriction endonuclease. The sequence and genes that lie on each fragment can be determined.

Please email us with suggested improvements to these tools and ideas for new tools that would be applicable to generic methodologies.
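To illustrate the kind of computation tools like the PFGE predictor and ALFIE perform, here is a minimal Python sketch (illustrative only, not PHE code) that predicts restriction fragment sizes from a genome sequence and an enzyme recognition site:

```python
def digest_fragment_sizes(genome: str, site: str, circular: bool = False) -> list[int]:
    """Predict fragment sizes for a complete digest with one enzyme.
    Cuts are taken at the start of each recognition site; real enzymes cut
    at a defined offset within the site, which shifts sizes slightly."""
    cuts = []
    pos = genome.find(site)
    while pos != -1:
        cuts.append(pos)
        pos = genome.find(site, pos + 1)
    if not cuts:
        return [len(genome)]
    fragments = [b - a for a, b in zip(cuts, cuts[1:])]
    if circular:  # the last cut wraps around to the first
        fragments.append(len(genome) - cuts[-1] + cuts[0])
    else:         # linear molecule: keep both end pieces
        fragments = [cuts[0]] + fragments + [len(genome) - cuts[-1]]
    return sorted(fragments, reverse=True)

# EcoRI-like recognition site on a toy sequence
print(digest_fragment_sizes("AAGAATTCGGGGAATTCTT", "GAATTC"))  # [9, 8, 2]
```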
https://www.gov.uk/guidance/bioinformatics-online-tools
Questions
- What is mapping?
- What two things are crucial for a correct mapping?
- What is BAM?

Objectives
- You will learn what mapping is
- A genome browser is shown that helps you to understand your data

Requirements
- Introduction to Galaxy Analyses
- Sequence analysis
- Quality Control (slides and hands-on tutorial)

Time estimation: 1 hour

Introduction

Sequencing produces a collection of sequences without genomic context. We do not know which part of the genome the sequences correspond to. Mapping the reads of an experiment to a reference genome is a key step in modern genomic data analysis. With the mapping, the reads are assigned to a specific location in the genome, and insights like the expression level of genes can be gained.

The short reads do not come with position information, so we do not know what part of the genome they came from. We need to use the sequence of the read itself to find the corresponding region in the reference sequence. But the reference sequence can be quite long (~3 billion bases for human), making it a daunting task to find a matching region. Since our reads are short, there may be several, equally likely places in the reference sequence from which they could have been read. This is especially true for repetitive regions.

In principle, we could do a BLAST analysis to figure out where the sequenced pieces fit best in the known genome. We would need to do that for each of the millions of reads in our sequencing data, and aligning millions of short sequences this way may take a couple of weeks. Besides, we do not really care about the exact base-to-base correspondence (alignment); what we are really interested in is "where these reads came from". This approach is called mapping.

In the following we will process a dataset with the mapper Bowtie2 and visualize the data with the program IGV.

Agenda

In this tutorial, we will deal with: preparing the data, mapping reads to a reference genome, inspecting a BAM file, and visualizing the mapped reads in a genome browser.

Prepare the data

Hands-on: Data upload
- Create a new history for this tutorial and give it a proper name.
  Tip: To create a new history, click on the gear icon (History options) at the top of the history panel, then click Create New. To rename a history, click on "Unnamed history" (or the current name) at the top of your history, write the new name and press Enter.
- Import wt_H3K4me3_read1.fastq.gz and wt_H3K4me3_read2.fastq.gz from Zenodo or from the data library (ask your instructor):
  https://zenodo.org/record/1324070/files/wt_H3K4me3_read1.fastq.gz
  https://zenodo.org/record/1324070/files/wt_H3K4me3_read2.fastq.gz
  Tip: To import data via links, copy the link location, open the Galaxy Upload Manager (top-right of the tool panel), select Paste/Fetch Data, paste the link into the text field, press Start and close the window. By default, Galaxy uses the URL as the name, so rename the files with a more useful name.
  Tip: As an alternative to uploading the data from a URL or your computer, the files may also have been made available from a shared data library: go into Shared data (top panel), then Data libraries, find the correct folder (ask your instructor), select the desired files, click on the To History button near the top and select "as Datasets" from the dropdown menu, select the history you want to import the files to (or create a new one) in the pop-up window, and click Import. Again, Galaxy takes the link as the name by default, so rename them.
- Rename the files to reads_1 and reads_2.
  Tip: To rename a dataset, click on the pencil icon for the dataset to edit its attributes, change the Name field in the central panel, and click the Save button.

We just imported into Galaxy FASTQ files corresponding to paired-end data, as we could get them directly from a sequencing facility. During sequencing, errors are introduced, such as incorrect nucleotides being called. Sequencing errors might bias the analysis and can lead to a misinterpretation of the data. The first step for any type of sequencing data is therefore to check their quality.

Note: The Quality Control tutorial explains this step; we will not go into the details here, in particular for the parameters.

Hands-on: Quality control
- Run FastQC on both datasets.
- Run MultiQC on the outputs of FastQC.
- Run Trim Galore! on the paired-end datasets.

Map reads on a reference genome

Read mapping is the process of aligning the reads to a reference genome. A mapper takes as input a reference genome and a set of reads. Its aim is to align each read in the set of reads to the reference genome, allowing mismatches, indels and clipping of short fragments at the two ends of the reads. We need a reference genome to map the reads to.

Questions
- What is a reference genome?
- For each model organism, several possible reference genomes may be available (e.g. hg19 and hg38 for human). What do they correspond to?
- Which reference genome should we use?

Solution
- A reference genome (or reference assembly) is a set of nucleic acid sequences assembled as a representative example of a species' genetic material. As they are often assembled from the sequencing of different individuals, they do not accurately represent the set of genes of any single organism, but a mosaic of different nucleic acid sequences from each individual.
- As the cost of DNA sequencing falls and new full-genome sequencing technologies emerge, more genome sequences continue to be generated. Using these new sequences, new alignments are built and the reference genomes are improved (fewer gaps, fixed misrepresentations in the sequence, etc.). The different reference genomes correspond to the different released versions (called "builds").
- This data comes from ChIP-seq of mice, so we will use mm10 (Mus musculus).

Currently, there are over 60 different mappers, and their number is growing. In this tutorial, we will use Bowtie2, a fast and memory-efficient open-source tool particularly good at aligning sequencing reads of about 50 up to 1,000s of bases to relatively long genomes.

Hands-on: Mapping with Bowtie2

Run Bowtie2 with the following parameters:
- "Is this single or paired library": Paired-end
- "FASTA/Q file #1": trimmed reads pair 1 (output of Trim Galore!)
- "FASTA/Q file #2": trimmed reads pair 2 (output of Trim Galore!)
- "Do you want to set paired-end options?": No
  You should have a look at the parameters there, especially the mate orientation if you know it. They can improve the quality of the paired-end mapping.
- "Will you select a reference genome from your history or use a built-in index?": Use a built-in genome index
- "Select reference genome": Mouse (Mus musculus): mm10
- "Select analysis mode": Default setting only
  You should have a look at the non-default parameters and try to understand them. They can have an impact on the mapping and improve it.
- "Save the bowtie2 mapping statistics to the history": Yes
- Inspect the mapping stats file by clicking on the eye icon.

Questions
- What information is provided here?
- How many reads have been mapped exactly 1 time?
- How many reads have been mapped more than 1 time? How is this possible? What should we do with them?
- How many pairs of reads have not been mapped? What are the causes?

Solution
- The information given here is quantitative: we can see how many sequences are aligned. It does not tell us anything about the quality.
- ~90% of reads have been aligned exactly 1 time.
- ~7% of reads have been aligned concordantly >1 times. These are called multi-mapped reads. This can happen because of repetitions in the reference genome (multiple copies of a gene, for example), especially when the reads are short. It is difficult to decide where these sequences come from, so most pipelines ignore them. Always check the statistics there to be sure you are not removing too much information by discarding them in downstream analyses.
- ~3% of read pairs have not been mapped, because:
  - both reads in the pair aligned but their positions do not concord with a proper pair ("aligned discordantly 1 time")
  - reads of these pairs are multi-mapped ("aligned >1 times" in pairs "aligned 0 times concordantly or discordantly")
  - one read of these pairs is mapped but not its mate ("aligned exactly 1 time" in pairs "aligned 0 times concordantly or discordantly")
  - the rest are not mapped at all

Checking the mapping statistics is an important step to take before continuing any analyses. There are several potential sources of errors in mapping, including (but not limited to):

- Polymerase Chain Reaction (PCR) artifacts: Many HTS methods involve one or multiple PCR steps. PCR errors will show as mismatches in the alignment, and especially errors in early PCR rounds will show up in multiple reads, falsely suggesting genetic variation in the sample. A related error would be PCR duplicates, where the same read pair occurs multiple times, skewing coverage calculations in the alignment.
- Sequencing errors: The sequencing machine can make an erroneous call, either for physical reasons (e.g. oil on an Illumina slide) or due to properties of the sequenced DNA (e.g. homopolymers). As sequencing errors are often random, they can be filtered out as singleton reads during variant calling.
- Mapping errors: The mapping algorithm can map a read to the wrong location in the reference. This often happens around repeats or other low-complexity regions.

So if the mapping statistics are not good, you should investigate the cause of these errors before going further in your analyses. After that, you should have a look at the reads and inspect the BAM file where the read mappings are stored.

Inspection of a BAM file

A BAM (Binary Alignment Map) file is a compressed, binary file storing the sequences mapped to a reference sequence.
Hands-on: Inspect a BAM/SAM file
- Inspect the output of Bowtie2.

A BAM file (or a SAM file, the non-compressed version) consists of:
- A header section (the lines starting with @) containing metadata, in particular the chromosome names and lengths (lines starting with the @SQ symbol)
- An alignment section consisting of a table with 11 mandatory fields, as well as a variable number of optional fields:

| Col | Field | Type    | Brief description                      |
|-----|-------|---------|----------------------------------------|
| 1   | QNAME | String  | Query template NAME                    |
| 2   | FLAG  | Integer | bitwise FLAG                           |
| 3   | RNAME | String  | Reference sequence NAME                |
| 4   | POS   | Integer | 1-based leftmost mapping POSition      |
| 5   | MAPQ  | Integer | MAPping Quality                        |
| 6   | CIGAR | String  | CIGAR string                           |
| 7   | RNEXT | String  | Ref. name of the mate/next read        |
| 8   | PNEXT | Integer | Position of the mate/next read         |
| 9   | TLEN  | Integer | observed Template LENgth               |
| 10  | SEQ   | String  | segment SEQuence                       |
| 11  | QUAL  | String  | ASCII of Phred-scaled base QUALity+33  |

Questions
- What information do you find in a SAM/BAM file?
- What is the additional information compared to a FASTQ file?

Solution
- Sequences and quality information, like a FASTQ file.
- Mapping information: location of the read on the chromosome, mapping quality, etc.

So the BAM file integrates much information for each read, in particular the quality of mapping.

Hands-on: Summary of mapping quality

Run "Stats generate statistics for BAM dataset" with the following parameters:
- "BAM file": aligned reads (output of Bowtie2)
- "Use reference sequence": Use reference
- "Choose a reference sequence for GC depth": Locally cached
- "Using genome": Mouse (Mus musculus): mm10 Full

Then inspect the Stats file.

Questions
- What proportion of the mapped reads' bases are mismatches when aligned to the reference genome?
- What does the error rate represent?
- What is the average quality? How is it represented?
- What is the insert size average?
- How many reads have a mapping quality score below 20?

Solution
- There are ~21,900 mismatches for ~4,753,900 mapped bases, i.e. ~0.0046 mismatches per mapped base (~0.46%).
- The error rate is the proportion of mismatches per mapped base, i.e. the ratio computed right before.
- The average quality is the mean quality score of the mapping. It is a Phred score, like the one used in the FASTQ file for each nucleotide, but here the score is not per nucleotide: it is per read, and it represents the probability of mapping quality.
- The insert size is the distance between the two reads in a pair.
- To get this information, run "Filter BAM datasets on a variety of attributes" with a filter to keep only the reads with a mapping quality >= 20, then run "Stats generate statistics for BAM dataset" on the output of Filter. Before filtering: 95,412 reads; after filtering: 89,664 reads. So 5,748 reads have a mapping quality score below 20.

Visualization using a Genome Browser

The Integrative Genomics Viewer (IGV) is a high-performance visualization tool for interactive exploration of large, integrated genomic datasets. It supports a wide variety of data types, including array-based and next-generation sequence data, and genomic annotations. In the following we will use it to visualize the mapped reads.

Hands-on: Visualization of the reads in IGV
- Install IGV (if not already installed).
- Launch IGV on your computer.
- Expand the output of Bowtie2.
- Click on "local" in "display with IGV" to load the reads into the IGV browser.
- Zoom to chr2:91,053,413-91,055,345.

The reads have a direction: they are mapped to the forward or reverse strand, respectively.
When hovering over a read, extra information is displayed.

Questions
- What could it mean if a bar in the coverage view is colored?
- What could be the reason why a read is white instead of grey?

Solution
- If a nucleotide differs from the reference sequence in more than 20% of quality-weighted reads, IGV colors the bar in proportion to the read count of each base.
- They have a mapping quality equal to zero. Interpretation of this mapping quality depends on the mapping aligner, as some commonly used aligners use this convention to mark a read with multiple alignments. In such a case, the read also maps to another location with an equally good placement. It is also possible the read could not be uniquely placed but the other placements do not necessarily give equally good quality hits.

Tips for IGV
- Because the number of reads over a region can be quite large, the IGV browser by default only allows you to see the reads that fall into a small window. This behaviour can be changed in IGV via View > Preferences > Alignments.
- If your genome of interest is not there, check whether it is available via More…. If this is not the case, you can add it manually via the menu Genomes -> Load Genome from….

Conclusion

After quality control, mapping is an important step of most analyses of sequencing data (RNA-Seq, ChIP-Seq, etc.): it determines where in the genome our reads originated from, and this information is used for downstream analyses.

Key points
- Know your data!
- Mapping is not trivial.
- There are many mapping algorithms; which one to choose depends on your data.

Recommended follow-up trainings:
- Transcriptomics: Reference-based RNA-Seq data analysis
- ChIP-Seq data analysis: Formation of the Super-Structures on the Inactive X
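Outside Galaxy, the MAPQ-filtering step above can also be reproduced programmatically. Here is a minimal sketch using the pysam library (the file names are hypothetical placeholders for the Bowtie2 output):

```python
import pysam

# Hypothetical paths; any BAM produced by the mapping step will do.
in_path, out_path = "bowtie2_output.bam", "filtered_mapq20.bam"

total = kept = 0
with pysam.AlignmentFile(in_path, "rb") as bam_in, \
     pysam.AlignmentFile(out_path, "wb", template=bam_in) as bam_out:
    for read in bam_in.fetch(until_eof=True):
        total += 1
        # Keep only mapped reads with mapping quality >= 20,
        # mirroring the "Filter BAM" step used in the tutorial.
        if not read.is_unmapped and read.mapping_quality >= 20:
            kept += 1
            bam_out.write(read)

print(f"kept {kept} of {total} reads "
      f"({total - kept} were unmapped or had MAPQ < 20)")
```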
https://training.galaxyproject.org/archive/2019-02-07/topics/sequence-analysis/tutorials/mapping/tutorial.html
Neandertal Genome Browser

The Neandertal Genome Project has sequenced six samples from members of Homo sapiens neanderthalensis. Almost 98% of the sequence comes from three specimens from the Vindija Cave in Croatia; most of the remainder comes from an individual from Mezmaiskaya Cave in the Caucasus, Russia, with tiny fractions (0.1%) from the species type specimen (Neander valley, Germany) and a fossil found in El Sidron cave in Asturias, Spain. To put the Neandertal sequences in perspective, the project also sequenced five modern humans, Homo sapiens sapiens, from Southern Africa, Western Africa, Papua New Guinea, China and Europe.

The Neandertal sequences were mapped to the human reference genome (NCBI36), the chimpanzee genome, and an ancestral sequence extrapolated from a 4-way EPO alignment between human, chimp, orangutan and macaque, using a custom alignment program that takes into account the characteristics of ancient DNA. Comparison of the aligned data has been used to locate SNPs, producing tracks that can be displayed in the Neandertal Genome Browser alongside the human reference genome using the Distributed Annotation System (DAS).

Acknowledgements

The Neandertal Genome Project is based at the Department of Evolutionary Genetics at the Max Planck Institute for Evolutionary Anthropology, in collaboration with the Neandertal Genome Consortium. The Neandertal Genome Browser uses code developed by Ensembl, a joint project of the EBI and the Wellcome Trust Sanger Institute.
http://projects.ensembl.org/neandertal/
Methods

Bacterial strains used in this study

L. monocytogenes strain 36-25-1, with truncated InlA, was sequenced by whole-genome shotgun sequencing to analyze virulence-related genes. The low invasiveness of the strain compared to that of the wild-type strain was shown in our previous study. In addition, four InlA-truncated strains (Lma13, Lma15, Lma20, and Lma28) isolated from raw meat products were sequenced by Sanger sequencing for reference. The whole-genome sequence of EGDe, a clinical wild-type strain, was obtained from GenBank (GenBank accession no. NC 003210).

Genome extraction

All L. monocytogenes strains were cultured overnight in brain heart infusion broth (Eiken Chemical, Tokyo, Japan) at 37°C. The bacterial DNA was extracted using the phenol-chloroform and ethanol precipitation method. One milliliter of enriched culture was centrifuged at 10,000 × g for 10 min, and bacterial cells were incubated in 567 μL of Tris-EDTA buffer containing lysozyme (2 mg/mL) for 1 h at 37°C. Cells were lysed by the addition of 30 μL of 10% (wt/vol) sodium dodecyl sulfate and 3 μL of 20 mg/mL proteinase K, with incubation for 1 h at 37°C. Next, 100 μL of 5 M NaCl was added, and DNA was extracted with chloroform–isoamyl alcohol (24:1) followed by phenol–chloroform–isoamyl alcohol (25:24:1). DNA was then precipitated with isopropanol, washed with 70% ethanol, and dried. Purified DNA was dissolved in Tris-EDTA buffer and used as the DNA template for whole-genome shotgun sequencing and Sanger sequencing.

Whole-genome shotgun sequencing and de novo assembly

For whole-genome shotgun sequencing, a Roche GS Junior platform (Roche, Basel, Switzerland) was employed using a GS Junior Rapid Library Preparation kit and GS Junior emPCR kit (Lib-L) according to the manufacturer's protocol. The read sequences were used to construct contigs without a reference sequence by de novo assembly using the GS De Novo Assembler (Roche, Basel, Switzerland). In this assembly, the program parameters were set to: seed step, 12; seed length, 16; seed count, 1; minimum overlap, 10; and minimum identity, 90.

Extraction of virulence-related gene loci and comparison analysis

The contigs of strain 36-25-1 and the EGDe whole-genome sequence were aligned using NUCmer, an application of MUMmer 3.0 (http://mummer.sourceforge.net/). The virulence-related gene loci of strain 36-25-1 were extracted from the contigs using GenomeTraveler (In Silico Biology, Kanagawa, Japan). Briefly, among the ORFs extracted from the contigs, those that showed high identity with EGDe virulence-related genes were selected for further analysis. The extracted gene sequences were aligned with the EGDe sequences using GENETYX ver. 11.0.0 (Genetyx, Tokyo, Japan) to identify nucleotide mutations. When a genomic mutation was found, the corresponding amino acid sequences were also compared.
https://pdgfreceptor.com/methods-bacterial-strains-used-in-this-study-l-monocytogenes-str
Relevance: Encapsidates the genome, protecting it from nucleases. The encapsidated genomic RNA is termed the nucleocapsid (NC). Serves as template for viral transcription and replication. The increased presence of protein N in the host cell does not seem to trigger the switch from transcription to replication as observed in other negative-strand RNA viruses. Disables the host innate defense by interfering with beta interferon (IFNB) production through the inhibition of host IRF3 phosphorylation and activation by host IKBKE. Through the interaction with host IKBKE, strongly inhibits the phosphorylation and nuclear translocation of IRF3, a protein involved in the IFN activation pathway, leading to the inhibition of IFNB and IRF3-dependent promoter activation (By similarity).

Reference: "Nucleotide sequence of the Lassa virus (Josiah strain) S genome RNA and amino acid sequence comparison of the N and GPC proteins to other arenaviruses." Auperin D.D., McCormick J.B. Virology 168:421-425 (1989).

Purity: Greater than 90% as determined by SDS-PAGE.

Storage: Shelf life depends on many factors: storage state, buffer ingredients, storage temperature and the stability of the protein itself. Generally, the shelf life of the liquid form is 6 months at -20℃/-80℃, and the shelf life of the lyophilized form is 12 months at -20℃/-80℃.

Notes: Repeated freezing and thawing is not recommended. Store working aliquots at 4℃ for up to one week.
https://www.genebiosystems.com/en-us/products/recombinant-lassa-virus-nucleoproteinn
Published: Recovery of non-reference sequences missing from the human reference genome. BMC Genomics, volume 20, Article number 746 (2019).

Abstract

Background: Non-reference sequences (NRS) represent structural variations in the human genome with potential functional significance. However, besides the known insertions, it is currently unknown whether other types of structural variations with NRS exist.

Results: Here, we compared 31 human de novo assemblies with the current reference genome to identify NRS and their locations. We resolved the precise location of 6113 NRS adding up to 12.8 Mb. Besides 1571 insertions, we detected 3041 alternate alleles, defined as having less than 90% (or no) identity with the reference alleles. These alternate alleles overlapped with 1143 protein-coding genes, including a putative novel MHC haplotype. Further, we demonstrated that the alternate alleles and their flanking regions had a high content of tandem repeats, indicating that their origin was associated with tandem repeats.

Conclusions: Our study detected a large number of NRS, including many previously uncharacterized alternate alleles. We suggest that the origin of the alternate alleles is associated with tandem repeats. Our results enrich the spectrum of genetic variation in the human genome.

Background

The initial human reference genome was entirely linear. Despite the introduction of alternate alleles for a graph-based representation, the current reference genome is largely derived from a single individual of African-European origin, limiting its representation of diverse populations. Lines of evidence in recent years have revealed that individuals still carry sequences that are not represented in the reference genome. These sequences could be an important type of structural variation underlying disease associations or complex traits. The discovery of non-reference sequences (NRS) will be a prerequisite for a more complete graph-based genome, thereby enabling improved genomic analyses and understanding of genomic architecture.

Extensive efforts have been devoted in recent years to discovering NRS. Based on a large amount of whole-genome sequencing data, two studies focused specifically on the discovery of NRS and identified as much as ~300 Mb of novel sequence from a large number of re-sequencing datasets [3, 5]. However, the identified sequences were obtained by assembling unaligned short reads, posing a challenge to placing them in the reference genome. A recent study used long-read sequencing data from 15 samples to identify 32,838 insertions present in at least two samples, but without exploring novel sequences within the insertions. Additionally, these studies have mainly focused on insertion events. In fact, some sequences belong to complex structural variants (e.g., two alleles with high divergence instead of simply introducing additional sequences) [5, 7].

De novo assembly is a promising approach for building the complete human pan-genome. Using an assembly-versus-assembly approach, we discovered not only insertions but also sequences that are an alternate representation of a locus in the haploid genome. A thorough characterization of the insertions and alternate alleles in the human genome is necessary for a better understanding of their biological significance. The identification of insertions and alternate alleles requires high-quality de novo assemblies.
Fortunately, a number of human de novo assemblies have been generated via long-read sequencing (LRS) [6, 8,9,10,11,12], and these assemblies cover the major human ethnic groups. The unprecedented availability of these genomic resources enables us to depict the full spectrum of NRS, especially those representing alternate alleles. In this study, we compared 31 de novo assemblies (including 17 LRS assemblies) with the human reference genome to identify putative alternate alleles, most of which are newly reported in this study.

Results

Detection of candidate NRS

An assembly-versus-assembly approach was used for each assembly to identify sequences unaligned (< 90% identity) to the human reference genome (GRCh38.p12, hg38) (Fig. 1, Methods). Apart from hg38, our study included 31 de novo assemblies: 17 from LRS (PacBio or nanopore sequencing technology), 13 generated with next-generation sequencing and one from Sanger sequencing (Additional file 1). We further examined the BUSCO completeness score, the length distribution of structural variants as assessed by Assemblytics, and the composition of TE elements to ensure that these assemblies are of high quality for downstream analysis. All the assemblies present comparable completeness scores (94.4 ± 0.81, mean ± SD) except for ASM101398v1 (BUSCO score 90.3), and the percentages of TE elements in all assemblies were at similar levels (0.51 ± 0.01, mean ± SD) (Additional file 1). All the assemblies show a similar pattern of length distributions of structural variants (Additional file 2).

The final unaligned sequences spanned, on average, 12.9 Mb for LRS assemblies, which was much larger than for the other assemblies (2.3 Mb on average) (P < 0.01, unpaired two-tailed Student's t-test) (Fig. 2). For each assembly, 70-80% of the preliminary unaligned sequence content belonged to simple sequence repeats or low-complexity sequences and was removed to obtain the final unaligned sequences. After removing redundant sequences and sequences shorter than 400 bp, we obtained the unaligned sequences from each assembly, which were then merged into a unified non-reference call set of 15,055 sequences (hereafter referred to as NRS) adding up to 129.1 Mb with a median length of 2848 bp (N50 = 1066 bp). Furthermore, 78.6 Mb of the NRS had no alignment with hg38 using the criteria of > 80% identity and 50% coverage as adopted by a previous study. Nevertheless, the call set of 129.1 Mb was used in downstream analysis.

In order to compare with four previous studies (Methods), we aligned our NRS to each of the four datasets in a reciprocal manner with BLAST to determine the intersection with each study. In total, 13.8 Mb of NRS intersected with previous reports, whereas the rest (115.3 Mb) had not been identified before (Additional file 3). We aligned all the NRS to the genomes of four great apes and found that 1211 sequences (4.32 Mb) were present in at least one of the four great apes (identity ≥ 95% and coverage ≥ 80%). The presence of each NRS was also examined in each of the human de novo assemblies, and NRS present in at least two of the 31 de novo assemblies and great ape assemblies were classified as non-private sequences. Taken together, 28.2% (4248) of the NRS, spanning 29.3 Mb, were non-private sequences, indicating that they are of high confidence (Methods and Additional file 4).
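As an illustration of the identity/coverage filtering used in this section, here is a small Python sketch that classifies BLAST hits by the ≥95% identity and ≥80% query-coverage thresholds; it assumes tabular output produced with the custom format `-outfmt "6 qseqid sseqid pident length qlen"` (a hypothetical choice for this example, not necessarily the authors' exact pipeline):

```python
def is_present(pident: float, aln_len: int, qlen: int,
               min_ident: float = 95.0, min_cov: float = 0.80) -> bool:
    """Presence call for one hit, using the great-ape comparison thresholds
    (identity >= 95% and query coverage >= 80%)."""
    return pident >= min_ident and (aln_len / qlen) >= min_cov

def present_queries(blast_tab_path: str) -> set[str]:
    """Return the set of query NRS with at least one hit passing the thresholds."""
    present = set()
    with open(blast_tab_path) as handle:
        for line in handle:
            qseqid, _sseqid, pident, length, qlen = line.rstrip("\n").split("\t")
            if is_present(float(pident), int(length), int(qlen)):
                present.add(qseqid)
    return present

# Usage with a hypothetical file: NRS detected in the gorilla assembly
# print(len(present_queries("nrs_vs_gorilla.tsv")))
```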
Placing candidate NRS on the reference genome

We next determined the genomic locations of the NRS by aligning their flanking sequences to hg38 (Methods), by which we resolved the precise location of 6112 NRS (40.5% of the total) adding up to 12.8 Mb. Notably, 13 sequences reside in the remaining gaps of the reference genome (Additional file 5). Another 2711 NRS were anchored to chromosomes without a precise location due to sequence gaps. The remaining sequences could not be placed on hg38 due to a lack of flanking sequences or conflicting anchoring information from the two sides. For the precisely placed sequences, we could determine whether they belong to insertions or alternate alleles, as described in the next section. Among the unplaced NRS and those without precise locations, 2855 were non-private sequences adding up to 25.8 Mb, indicating that they are of high confidence (Additional file 6). Although we could not determine the precise locations of these sequences, their discovery will greatly expand the repertoire of sequence diversity in the human genome.

Insertions within NRS

We first determined the insertion events for the precisely placed NRS. A total of 1571 (3.2 Mb) were found to be insertions, including 769 non-private sequences (Fig. 3a). Furthermore, 246 were present in more than half of the assemblies, indicating that they could represent major alleles. Notably, 56.8% (881) of the insertions, including 158 non-private ones, are first described in our study. Principal component analysis (PCA) of all the insertions, based on their presence in the 16 LRS de novo assemblies from PacBio sequencing, showed a population-specific pattern (Fig. 3e). PC1 clusters African samples away from other populations, while PC2 further separates the East Asians from the Europeans, Americans and South Asians.

Alternate alleles within NRS

We further found that many NRS represent an alternate allele of their counterparts in the reference rather than insertions. To be identified as alternate alleles, the NRS should share less than 90% identity (or none) and have comparable length with the reference alleles (Methods). In this way, 3041 NRS (6.38 Mb) were identified as candidate alternate alleles. The remaining 1500 precisely placed NRS did not meet our criteria for insertions or alternate alleles and thus were classified as ambiguous sequences. Unlike insertions, the alternate alleles represent allelic sequences (Fig. 4a and d). Notably, a long alternate sequence of 24,676 bp was anchored to chr6:29,955,749-29,986,299, which belongs to the class I major histocompatibility complex (MHC) region (Fig. 4b) and potentially harbors a novel HLA-B gene (Additional file 7). This allele was present in two African assemblies (YRI, NA19240), in one American assembly (ASM311317v1) and in the gorilla genome, whereas it was absent from the other assemblies. Moreover, the reference allele was found in chimpanzees, indicating the presence of ancestral polymorphism at this locus. Furthermore, this alternate allelic sequence is not reported in the NCBI nr/nt database or in the human HLA database (IPD-IMGT/HLA database), suggesting that it represents a putative novel MHC allele.

Among the alternate alleles, 1348 intersected with the genic regions of 1143 protein-coding genes. The genomic distribution of the alternate alleles was dispersed throughout the genome, and those belonging to non-private sequences are shown in Fig. 5.
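The presence/absence PCA reported above could be reproduced along these lines: a hypothetical sketch using scikit-learn, assuming a binary assemblies × NRS matrix in which 1 means the sequence was detected in that assembly (random data stands in for the real calls):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy presence/absence matrix: rows = assemblies, columns = NRS.
# The real input would be the 16 LRS assemblies x the insertion call set.
rng = np.random.default_rng(0)
presence = rng.integers(0, 2, size=(16, 500)).astype(float)

# Project onto the first two principal components, mirroring the
# PC1/PC2 population clustering described in the text.
pca = PCA(n_components=2)
coords = pca.fit_transform(presence)

for i, (pc1, pc2) in enumerate(coords):
    print(f"assembly_{i}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```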
A total of 59 non-private alternate alleles intersected with genic regions, and five of them further intersected with the CDS regions of eight genes: HLA-W, MICD, HCG9, DDX39BP2, LOC107985837, ZNF880, PLIN4 and LOC728715. Only 2.6% (80 of 3041) of the alternate alleles had been identified before, and those were misclassified as insertions. Therefore, most of the alternate alleles described in our study are newly reported. The discovery frequency of alternate alleles in human assemblies appears to be lower than that of insertions (Fig. 3a and b). Most alternate alleles were present in fewer than half of the 31 assemblies, indicating that many represent minor alleles in the human genome. Nevertheless, 144 alternate alleles were non-private (Additional file 8), with the longest one found in 17 assemblies and spanning 19,330 bp (genomic placement: chr7:62,408,641-62,451,864). The ambiguous sequences also included a number of putative insertions and alternate alleles (Fig. 3c) and deserve further verification. The length distributions of the alternate alleles and insertions did not differ (p = 0.09, Kolmogorov-Smirnov test) (Fig. 3d). As for the insertions, PCA showed that PC1 separates African samples from the other populations, while PC2 separates East Asians from Europeans, South Asians and Americans (Fig. 3f). We also explored potentially transcribed sequences, defined as those with mapped RNA-seq reads (≥10 reads in at least two samples) or with hits to the human expressed sequence tag database (dbEST) (e-value < 1e-5). We identified 74 transcribed alternate alleles from the RNA-seq data and 238 with hits to the human dbEST, for a total of 244 potentially transcribed sequences (Additional file 9). One alternate allele was found to be expressed in a tissue-specific manner (Fig. 4b and c); it is potentially a long non-coding gene, since we could not annotate it to any known gene by searching the NCBI nr/nt database with BLAST. The putative novel MHC allele was also found to be expressed (Fig. 4e and f). To explore the origin of the alternate alleles, we analyzed the repeats associated with NRS. Transposable elements (TEs) compose approximately 45% of the human genome [1], and a previous study showed that insertions are associated with TEs. The TE content of alternate alleles (10.0%) was much lower than that of insertions (55.1%) (Fig. 6a). The flanking sequences (5 kb on each side) of the alternate alleles also had lower TE content (33.3%) than those of the insertions (48.0%) (Fig. 6b), suggesting that alternate alleles are not associated with TEs. We then screened the tandem repeat content of the sequences. The alternate alleles had a much higher content of tandem repeats, both within the sequences (Fig. 6c) and in the flanking sequences (5 kb on each side, Fig. 6d), than the insertions. Notably, the reference allele also tended to be enriched in tandem repeats when the alternate allele had a high tandem repeat content (R2 = 0.65, Fig. 6e and Additional file 10), implying that tandem repeats are associated with alternate alleles.

Discussion

A comprehensive characterization of structural variation is essential for studies attempting to identify causative variants underlying phenotypic variation and complex genetic diseases. In this study, we identified 129.1 Mb of candidate NRS (4.2% of the genome).
Although many of the NRS were singletons, a considerable number of reliable NRS (29.3 Mb) were identified by their presence in at least two assemblies, and most of them are newly described. The discovery of these NRS will contribute to a final, comprehensive pan-genome capturing all of the DNA present in humans. We compared our results with four recent studies, and only 13.8 Mb (10.7%) of the NRS were found to have been previously reported. The discrepancy can probably be explained by differences in data type, data size and analysis methodology. Compared with short-read sequencing, long-read sequencing captures complex genomic regions, such as regions of high or low GC content, tandem repeats and interspersed repeats, thereby enabling the discovery of more repeats or lineage-specific expansions. On the other hand, if the dataset is large enough, many NRS, including some from complex regions, can still be found with short-read data, as in the African pan-genome (910 genomes) [5]. These factors could also partially explain why many of the NRS were not found in primates. Indeed, we found that only 4.1 Mb (1.38%) of the African pan-genome could be aligned to the primate genomes using BLAST. Furthermore, even the definition of non-reference sequences (so-called novel sequences) remains blurred in pan-genomic studies, which in turn can significantly affect the amount of sequence reported. For example, the African pan-genome used an identity cutoff of 80%, whereas other studies chose 90% [7, 15, 16]. Since humans and chimpanzees share genome-wide nucleotide identity of ~98.5% despite roughly 6 million years of divergence [17], an identity threshold of 90% is a conservative and robust choice for identifying human NRS. More importantly, we discovered a large number of alternate alleles. The majority of the alternate alleles that we found have not been previously reported, which could be due to several reasons: (1) most previous studies were designed to focus on insertions, whereas other types of NRS were largely ignored [3, 5, 13]; (2) many studies relied mainly on short-read data to obtain NRS [3, 5], which is less efficient for discovering long structural variants than the assembly-versus-assembly approach applied in our study; and (3) many alternate alleles were singletons, suggesting that they are either at too low a frequency to be detected repeatedly or are simply false positives due to assembly errors. Nevertheless, we still detected 144 non-private alternate alleles. The current human reference genome (GRCh38.p12) includes 261 alternate loci that capture a limited amount of population diversity and improve read mapping for some datasets. Therefore, the sequences that we identified will greatly advance our knowledge of alternate alleles in the human genome. There is growing interest in using genetic variants to augment the reference genome into a graph genome [18,19,20]. To create a representative graph genome, the full spectrum of structural variation, including alternate alleles, should be clearly understood. With the reduction in sequencing costs and advances in sequencing technology, increasing numbers of de novo assemblies will be generated, which will eventually refine the full spectrum of sequence diversity in the human genome.

Conclusions

In this study, we identified 129.1 Mb of NRS absent from the human reference genome, most of which have not been previously reported.
For the sequences that could be precisely placed on the reference genome, we classified them into insertions and alternate alleles. Notably, 3041 alternate alleles were identified, many of them intersecting with the genic regions of protein-coding genes. By examining the repeat content within NRS and in their flanking sequences, we found that the origin of alternate alleles is probably associated with tandem repeats rather than TEs. Our results indicate that abundant sequences are still missing from the human reference genome despite great advances in recent years, and that more genomic data from diverse populations are needed to build a complete human pan-genome. Meanwhile, the biological significance of the NRS remains to be explored.

Methods

De novo assemblies used in this study

The human reference genome GRCh38.p12 (hg38) was used as the guiding genome for comparison. hg38 consists of the primary GRCh38 assembly, the mitochondrial genome, unlocalized/unplaced scaffolds and alternate contigs. We downloaded 31 human de novo assemblies from the NCBI, including 16 from PacBio sequencing, one from nanopore sequencing, 13 from next-generation sequencing and one from Sanger sequencing (Additional file 1). For the assemblies using LRS technology (PacBio and nanopore), we focused on assemblies released since 2015 and of high quality. All of them had high continuity (contig N50 > 1 Mb; 15 of 17 with an N50 > 5 Mb) and high sequencing coverage (16 of 17 with coverage > 50×). For the assemblies from SRS, we used 13 haploid genomes from next-generation sequencing, as they were generated in a single study with high continuity. The HuRef genome, from Sanger sequencing, was also included owing to its high continuity [21]. The BUSCO completeness score for each assembly was determined using BUSCO v3 [22]. Structural variants were assessed using the online tool Assemblytics [23], providing the MUMmer v3.23 alignment output in delta format (nucmer -maxmatch -l 100 -c 500) [24].

Recovery of candidate NRS from each assembly

Each assembly was aligned to hg38 using LAST (-m100 -E0.05) [25]. Sequences of at least 100 bp that were unaligned, or aligned with low identity (<90%), to hg38 were extracted. These sequences were then aligned back to hg38 using BLAST v2.2.31 (megablast) [26] to further remove regions sharing ≥90% identity. Next, simple repeats, low-complexity regions and microsatellites were removed based on the repeat annotation file downloaded from the NCBI (*_rm.out.gz). The remaining sequences were merged when adjacent (≤200 bp apart), and the resulting NRS of at least 400 bp were retained for each assembly. Finally, the resulting sequences were aligned to hg38 with BLAST (megablast) once more to remove regions sharing ≥90% identity; the surviving sequences were again merged when adjacent (≤200 bp apart), and only those of at least 400 bp were kept. BEDTools v2.25.0 was used for the genomic feature operations in the above steps [27]. The NRS from all 31 assemblies were then merged to remove redundancy and generate a non-redundant call set using CD-HIT (-c 0.95 -aS 0.8 -d 0 -sf 1 -M 10000) [28]. The resulting unified, non-redundant call set was used for further analysis.

Removal of contamination

We did not expect sequences from bacteria, viruses or other non-mammalian species to be present in our identified sequences, since the NCBI applies a stringent quality control process when assemblies are submitted.
We used BLAST (megablast) to align the non-reference call set to the NCBI nt database. Only a small number of sequences showed significant alignment to non-mammalian species (filter thresholds: 90% identity and 90% query coverage), and these were removed from the final call set.

Presence of NRS in the de novo assemblies of 31 humans and in four great apes

We examined the presence of each NRS in the 31 human and four great ape de novo assemblies using BLAST (megablast). The four great apes were chimpanzee (GCA_002880755.3), bonobo (GCF_000258655.2), gorilla (GCA_900006655.3) and orangutan (GCF_002880775.1). An NRS was considered present in an assembly when it aligned with identity ≥95% and coverage ≥80%.

Anchoring NRS on human chromosomes

All sequences were anchored to human chromosomes based on the LAST alignment of their flanking sequences. An anchored position was reported as 'precisely placed' when both flanking sequences aligned nearly perfectly (without gaps) to the reference genome. If only one flanking sequence aligned, or both aligned but with gaps that prevented inference of the exact breakpoints, the sequence was reported as 'unlocalized'. The remaining sequences were reported as 'unplaced'. Based on the breakend coordinates (the genomic positions of the two breakpoints for each NRS), breakpoint-resolved sequences were classified as insertions when they simply introduce one sequence fragment into the reference genome. For alternate alleles, the NRS should share less than 90% identity (or none) with the reference allele, and the reference allele was required to be at least 400 bp. Furthermore, the NRS and the reference allele should have comparable lengths, with a length ratio between 1/3 and 3. The remaining sequences, which met neither set of criteria, were classified as ambiguous.

Aligning NRS to the human expressed sequence tag database

We downloaded the human dbEST from the NCBI FTP site (ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/est_human.gz). The NRS were then aligned to the dbEST using BLAST. Since dbEST entries are short and represent only the ends of expressed genes, alignments with ≥95% identity and an e-value < 1e-5 were considered hits regardless of query coverage.

Comparison with published datasets

We compared our results with four previously published datasets, each of which reported a list of non-reference sequences [3, 5, 13, 15]. The comparison with each dataset was performed using a reciprocal strategy as previously described. We first aligned all NRS to each dataset using BLAST; alignments with ≥95% sequence identity and ≥80% query coverage were considered real hits. We then aligned each of the four datasets to our NRS, also with BLAST, filtering the alignments with the same criteria. Results from the two alignment steps were merged, and a non-redundant list of NRS was reported.

Aligning RNA-seq reads to the primary call set

We downloaded a total of 87 RNA-seq datasets from the Geuvadis project (https://www.ebi.ac.uk/Tools/geuvadis-das/) and another study [29]. Sample information is given in Additional file 11. fastp was used to trim low-quality bases (-q 20 -l 80 -u 50) [30].
Then the clean reads were mapped to an extended version of the reference (hg38 plus the primary call set) using HISAT2 with default parameters [31]. Only reads with both mates mapped in a proper pair were considered high-quality alignments before counting the mapped reads on each sequence with Sambamba depth (-F "mapping_quality >= 30 and proper_pair") [32]. A sequence was regarded as transcribed when ≥10 mapped reads were found in at least two samples.

Availability of data and materials

The accession numbers for each of the 31 human de novo assemblies used in our study are listed below and can be found in Additional file 1: GCA_002209525.2 (HG01352), GCA_002208065.1 (HG00733), GCA_001750385.2 (AK1), GCA_001856745.1 (ASM185674v1), GCA_002180035.3 (HG00514), GCA_003070785.1 (HG02059), GCA_001708065.2 (HX1), GCA_001524155.4 (NA19240), GCA_001542345.1 (NA24385), GCA_001297185.1 (PacBioCHM1), GCA_002077035.3 (NA12878), GCA_001013985.1 (ASM101398v1), GCA_002884485.1 (CHM13), GCA_002872155.1 (NA19434), GCA_002884475.1 (YRI), GCA_003086635.1 (HG03486), GCA_900232925.1 (GM12878), GCA_003112895.1 (ASM311289v1), GCA_003112815.1 (ASM311281v1), GCA_003112855.1 (ASM311285v1), GCA_003113175.1 (ASM311317v1), GCA_003112875.1 (ASM311287v1), GCA_003112955.1 (ASM311295v1), GCA_003113115.1 (ASM311311v1), GCA_003112835.1 (ASM311283v1), GCA_003113235.1 (ASM311323v1), GCA_003113215.1 (ASM311321v1), GCA_003112935.1 (ASM311293v1), GCA_003112915.1 (ASM311291v1), GCA_003112975.1 (ASM311297v1), and GCA_000002125.2 (HuRef). The datasets for the four great apes can be accessed via NCBI Assembly under the accessions: chimpanzee (GCA_002880755.3), bonobo (GCF_000258655.2), gorilla (GCA_900006655.3) and orangutan (GCF_002880775.1). We downloaded the human dbEST from the NCBI FTP site (ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/est_human.gz). We downloaded a total of 87 RNA-seq datasets from the Geuvadis project (https://www.ebi.ac.uk/Tools/geuvadis-das/) and another study [29]. Sample information is given in Additional file 11. The datasets supporting the conclusions of this article are included within the article and its additional files. The identified NRS are provided as Additional file 12, and all other data supporting the findings of this study are available in the additional files.

Abbreviations

- CDS: Coding sequence
- hg38: Genome Reference Consortium Human Build 38 (specifically GRCh38.p12 in this study)
- LRS: Long-read sequencing
- NRS: Non-reference sequences
- SRS: Short-read sequencing
- TE: Transposable element

References

- 1. Lander ES, Linton LM, Birren B, Nusbaum C, Zody MC, Baldwin J, et al. Initial sequencing and analysis of the human genome. Nature. 2001;409:860.
- 2. Schneider VA, Graves-Lindsay T, Howe K, Bouk N, Chen H-C, Kitts PA, et al. Evaluation of GRCh38 and de novo haploid genome assemblies demonstrates the enduring quality of the reference assembly. Genome Res. 2017;27:849–64.
- 3. Kehr B, Helgadottir A, Melsted P, Jonsson H, Helgason H, Jonasdottir A, et al. Diversity in non-repetitive human sequences not found in the reference genome. Nat Genet. 2017;49:588–91.
- 4. Church DM, Schneider VA, Steinberg KM, Schatz MC, Quinlan AR, Chin CS, et al. Extending reference assembly models. Genome Biol. 2015;16:13.
- 5. Sherman RM, Forman J, Antonescu V, Puiu D, Daya M, Rafaels N, et al. Assembly of a pan-genome from deep sequencing of 910 humans of African descent. Nat Genet. 2018;51:30–5.
- 6.
Audano PA, Sulovari A, Graves-Lindsay TA, Cantsilieris S, Sorensen M, Welch AE, et al. Characterizing the major structural variant alleles of the human genome. Cell. 2019;176:663–75. - 7. Li R, Li Y, Zheng H, Luo R, Zhu H, Li Q, et al. Building the sequence map of the human pan-genome. Nat Biotechnol. 2010;28:57–63. - 8. Shi L, Guo Y, Dong C, Huddleston J, Yang H, Han X, et al. Long-read sequencing and de novo assembly of a Chinese genome. Nat Commun. 2016;7:12065. - 9. Cho YS, Kim H, Kim H-M, Jho S, Jun J, Lee YJ, et al. An ethnically relevant consensus Korean reference genome is a step towards personal reference genomes. Nat Commun. 2016;7:13637. - 10. Pendleton M, Sebra R, Pang AWC, Ummat A, Franzen O, Rausch T, et al. Assembly and diploid architecture of an individual human genome via single-molecule technologies. Nat Methods. 2015;12:780. - 11. English AC, Salerno WJ, Hampton OA, Gonzaga-Jauregui C, Ambreth S, Ritter DI, et al. Assessing structural variation in a personal genome-towards a human reference diploid genome. BMC Genomics. 2015;16:286. - 12. Jain M, Koren S, Miga KH, Quick J, Rand AC, Sasani TA, et al. Nanopore sequencing and assembly of a human genome with ultra-long reads. Nat Biotechnol. 2018;36:338. - 13. Wong KHY, Levy-Sakin M, Kwok PY. De novo human genome assemblies reveal spectrum of alternative haplotypes in diverse populations. Nat Commun. 2018;9:9. - 14. Goodwin S, McPherson JD, McCombie WR. Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet. 2016;17:333. - 15. Duan Z, Qiao Y, Lu J, Lu H, Zhang W, Yan F, et al. HUPAN: a pan-genome analysis pipeline for human genomes. Genome Biol. 2019;20:149. - 16. Gao L, Gonda I, Sun H, Ma Q, Bao K, Tieman DM, et al. The tomato pan-genome uncovers new genes and a rare allele regulating fruit flavor. Nat Genet. 2019;51:1044–51. - 17. Waterston RH, Lander ES, Wilson RK, and the Chimpanzee Sequencing and Analysis Consortium. Initial sequence of the chimpanzee genome and comparison with the human genome. Nature. 2005;437:69–87. - 18. Rakocevic G, Semenyuk V, Lee W-P, Spencer J, Browning J, Johnson IJ, et al. Fast and accurate genomic analyses using genome graphs. Nat Genet. 2019;51:354–62. - 19. Crysnanto D, Wurmser C, Pausch H. Accurate sequence variant genotyping in cattle using variation-aware genome graphs. Genet Sel Evol. 2019;51:21. - 20. Pritt J, Chen N-C, Langmead B. FORGe: prioritizing variants for graph genomes. Genome Biol. 2018;19:220. - 21. Levy S, Sutton G, Ng PC, Feuk L, Halpern AL, Walenz BP, et al. The diploid genome sequence of an individual human. PLoS Biol. 2007;5:e254. - 22. Waterhouse RM, Seppey M, Simão FA, Manni M, Ioannidis P, Klioutchnikov G, et al. BUSCO applications from quality assessments to gene prediction and phylogenomics. Mol Biol Evol. 2017;35:543–8. - 23. Nattestad M, Schatz MC. Assemblytics: a web analytics tool for the detection of variants from an assembly. Bioinformatics. 2016;32:3021–3. - 24. Kurtz S, Phillippy A, Delcher AL, Smoot M, Shumway M, Antonescu C, et al. Versatile and open software for comparing large genomes. Genome Biol. 2004;5:R12. - 25. Kielbasa SM, Wan R, Sato K, Horton P, Frith M. Adaptive seeds tame genomic sequence comparison. Genome Res. 2011. https://doi.org/10.1101/gr.113985.110. - 26. Camacho C, Coulouris G, Avagyan V, Ma N, Papadopoulos J, Bealer K, et al. BLAST+: architecture and applications. BMC Bioinformatics. 2009;10:421. - 27. Quinlan AR. BEDTools: the Swiss-army tool for genome feature analysis. Curr Protoc Bioinformatics. 2014;47:11.2.1–34. - 28.
Fu L, Niu B, Zhu Z, Wu S, Li W. CD-HIT: accelerated for clustering the next-generation sequencing data. Bioinformatics. 2012;28:3150–2. - 29. Fagerberg L, Hallstrom BM, Oksvold P, Kampf C, Djureinovic D, Odeberg J, et al. Analysis of the human tissue-specific expression by genome-wide integration of transcriptomics and antibody-based proteomics. Mol Cell Proteomics. 2014;13:397–406. - 30. Chen S, Zhou Y, Chen Y, Gu J. fastp: an ultra-fast all-in-one FASTQ preprocessor. Bioinformatics. 2018;34:i884–i90. - 31. Kim D, Langmead B, Salzberg SL. HISAT: a fast spliced aligner with low memory requirements. Nat Methods. 2015;12:357–60. - 32. Tarasov A, Vilella AJ, Cuppen E, Nijman IJ, Prins P. Sambamba: fast processing of NGS alignment formats. Bioinformatics. 2015;31:2032–4.

Acknowledgements

We thank the High Performance Computing platform of Northwest A&F University.

Funding

This study was supported by research grants from the National Natural Science Foundation of China (31822052) to Y.J., and by the National Natural Science Foundation of China (31802027), the Doctoral Fund of the Ministry of Education of China (No. 2018M631209) and the Fundamental Research Funds for the Central Universities (2452018127) to R.L. The funding bodies had no influence over study design, analysis or interpretation of the data, or the writing of the manuscript.

Ethics declarations

Ethics approval and consent to participate: Not applicable. Consent for publication: Not applicable. Competing interests: The authors declare that they have no competing interests.

Supplementary information

Additional file 2. The spectrum of structural variants (insertions and deletions) of the 31 de novo assemblies as assessed by Assemblytics. Additional file 4. Table showing the presence of NRS in the 31 human de novo assemblies and in the four great ape assemblies. Additional file 5. Table showing information on the NRS anchored to the gap regions of hg38. Additional file 7. BLASTx information for the novel candidate MHC allele: (a) the two most significant hits show that the allele potentially harbors two genes; (b) gene descriptions of the two hits; (c) sequence alignment of the first hit to a known protein; (d) sequence alignment of the second hit to a known protein.
https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-019-6107-1
The UK Biobank is an unprecedented resource for human disease research. In March 2019, 49,997 exomes were made publicly available to investigators. Here we note that thousands of variant calls are unexpectedly absent from the current dataset, with 641 genes showing zero variation. We show that the reason for this is erroneous read alignment to the GRCh38 reference. The missing variants can be recovered by modifying read-alignment parameters to correctly handle the expanded set of contigs available in the human genome reference.

Main Text

The UK Biobank (UKB) is a resource of unprecedented size, scope and openness, making available to researchers deep genetic and phenotypic data from approximately half a million individuals1. The genetic data released thus far include array-based genotypes on 488,000 individuals and exome sequences on 49,997 of these, with further exome sequences to be released in 2020. Such comprehensive cataloging of protein-coding variation across the entire allele-frequency spectrum, attached to extensive clinical phenotyping, has the potential to accelerate biomedical discovery, as evidenced by recent successes with other exome biobanks2. Given the scale of the data (the current exome release contains approximately 120 TB of aligned sequence), few investigators have the computational infrastructure or expertise to identify and curate genetic variants; most instead rely on releases of accompanying pre-processed variants in variant call format (VCF, approximately 5 GB). Specifically, the UKB has released pre-processed VCFs from two different variant analyses, the Regeneron Seal Point Balinese (SPB)3 and Functionally Equivalent (FE)4 pipelines. Although these pipelines are still evolving, studies have already made use of the released exome variants, mainly for comparison with previous UKB genotyping data or variant databases5. However, a recent report pointed out an error in duplicate read marking in the SPB pipeline that could lead to false variant calls (http://www.ukbiobank.ac.uk/wp-content/uploads/2019/08/UKB-50k-Exome-Sequencing-Data-Release-July-2019-FAQs.pdf), resulting in the removal of the SPB release from the UKB data repository. Thus, the FE pipeline is currently the only source of variant calls available for downstream research. Here we identify an error in the FE pipeline that results in a systematic lack of variant calls for thousands of genes, along with a solution to patch this bug. In our initial investigations of protein-coding variation in the UKB exomes, we noted a complete absence of variation in a number of genes of interest, including CLIC1, HRAS, TNF and MYH11 (one of the ACMG 59 genes in which incidental sequencing findings should be reported6). Such absence was unexpected given the UKB exome sample size, as these genes are not under severe evolutionary constraint7, and protein-coding variants have been called for them in other databases8, some at sufficiently high frequency to be included on genotyping arrays. We reasoned that the lack of variant calls in these genes was unlikely to be explained by ascertainment of a unique population in the UKB (i.e. that the variants truly do not exist there) and was instead caused by a technical error in sequencing, data processing, variant calling or some combination of these.
To demonstrate that the missing variants are indeed present in the UKB population, we first evaluated the internal consistency between the genotyping and exome sequencing data collected for the same UKB samples. In particular, we identified a total of 30,979 common variants (MAF > 0.01) in the UKB dataset that had been ascertained in 49,909 samples by genotyping arrays (Online Methods). While the majority of variants had been called by both methods (24,614 variants, 79.5%), a substantial minority (6,365 variants, 20.5%) were called by the genotyping arrays but not by exome sequencing (Fig. 1). This discrepancy included many common variants with MAFs close to 0.5 (i.e. present in almost 50% of the array samples), providing strong evidence that the exome sequencing genotype calls lack variants that exist and should have been detected in this UKB exome population. We next examined variant calls aggregated per gene in the UKB exomes in comparison to the Genome Aggregation Database9 (gnomAD v2.1.1, 125,748 sequenced exomes; Online Methods). Our analysis focused on the exons sequenced in both UKB and gnomAD, encompassing 23,040 human genes (Online Methods; Fig. 2a). We found that, for most genes, the number of variants in gnomAD was well predicted by the number in UKB, with the expected 1:2.3 proportionality given the larger gnomAD sample size (Fig. 2b). However, this analysis also highlighted 641 genes with zero variants called in the UKB exomes, versus a median of 286 variants (range 1 to 14,291) in gnomAD (Supplementary Table 1). Using the aggregate observed variant frequency per gene in gnomAD, we calculated, for each gene, the probability of observing at least one variant in the UKB exome sample. Of the 641 genes, 598 (93%) should have had at least one variant identified (one-tailed binomial test, 95% confidence). Given that the UKB is a predominantly European-ancestry population while the gnomAD dataset is more diverse, we performed an ancestry-specific analysis of these genes in gnomAD (Fig. 2c). The largest number of variants in these genes was found in the European-ancestry samples, as expected from their majority representation in gnomAD. This excluded the possibility that some or all of the missing variation in the UKB was due to ancestry-specific variation. To understand the reason for the missing variant calls in the UKB, we analyzed the sequencing read data provided by the FE pipeline for individual exomes at each of the 641 loci. Our analysis indicated that, despite having reads mapped to these genes (Fig. 3a), the mapping quality (MAPQ) score was universally zero, causing these reads to be eliminated from the downstream variant-calling procedures. The MAPQ field in the SAM specification10 is the PHRED-scaled probability11 of the alignment being erroneous. In practice, however, each aligner treats the MAPQ field differently. With BWA-MEM12, the aligner used in the FE pipeline, a MAPQ score of zero is given to reads that align equally well to more than one genomic location, and is typically an indicator of reads from duplicated or repetitive regions of genomic DNA. However, many of the loci we individually examined were not known to harbor repetitive elements or to reside in regions of genome duplication.
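The MAPQ observation is easy to reproduce for a single locus. The sketch below is our illustration rather than part of the study's released protocol: it uses pysam to compute the fraction of MAPQ-zero reads in a region of an FE-pipeline CRAM file. The file names are placeholders and the HRAS coordinates are approximate.

```python
# Count MAPQ==0 reads over a locus in a CRAM file. Requires a .crai index
# and the same reference FASTA the CRAM was compressed against.
import pysam

def mapq_zero_fraction(cram_path, ref_fasta, contig, start, end):
    total = mapq0 = 0
    with pysam.AlignmentFile(cram_path, "rc", reference_filename=ref_fasta) as af:
        for read in af.fetch(contig, start, end):
            if read.is_unmapped:
                continue
            total += 1
            if read.mapping_quality == 0:
                mapq0 += 1
    return mapq0 / total if total else float("nan")

# e.g. HRAS on GRCh38 (chr11:532,242-535,567, approximate gene bounds)
frac = mapq_zero_fraction("sample.cram", "GRCh38_full.fa", "chr11", 532242, 535567)
print(f"MAPQ==0 fraction: {frac:.2%}")
```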
Investigating further, we found that the zero MAPQ scores arose because the reads aligned multiply not to repetitive elements but to so-called 'alternative contigs' in the GRCh38 reference (ftp://ftp-trace.ncbi.nlm.nih.gov/1000genomes/ftp/technical/reference/GRCh38_reference_genome/GRCh38_full_analysis_set_plus_decoy_hla.fa). As of this genome release, alternative contigs are frequently used to represent divergent haplotypes that cannot easily be captured by a single linear sequence. Indeed, of the 598 genes with a high probability of missing variation, 568 (95%) had alternative contigs represented in the genome reference (Supplementary Table 1). Starting from the raw reads available in the CRAM files, we found that the original read alignment provided by the UK Biobank (Fig. 3a) was most closely reproduced when performing the alignment under default parameters (BWA-MEM12, Online Methods). In the absence of an index file specifically marking alternative contigs, this alignment (Fig. 3b) does not take them into consideration and instead treats them as independent genomic regions equal to the primary contigs. Reads that map to both primary and alternative contigs are therefore interpreted as mapping to multiple genomic locations at these loci. We found that re-aligning the raw reads while providing the alternative-contig index file for the genome reference dramatically increased the number of reads that mapped properly to a single genomic locus and therefore had a MAPQ score greater than zero (Fig. 3c). In summary, we have found that genetic variants documented in the UK Biobank FE release are conspicuously absent from certain genes, in a manner best explained by errors of read alignment. Furthermore, while our analysis has focused on the 641 genes with an absolute lack of variant calls, additional genes may have partially duplicated or repetitive sequences such that they are missing substantial (but non-zero) variation beyond that identified in our short study (2,391 genes are currently contained within alternative-contig representations of the genome). Thus, the variant calls in the current UKB exome data should not be used for large-scale genomic analyses, as only genes without alternative haplotypes are unaffected by the erroneous alignment. Here we provide a description of, and protocol for, read realignment (Supplementary File) that we hope others will find useful for generating corrected alignment files, which can then be used to generate accurate genotype calls with downstream variant-calling pipelines. We have also notified the UK Biobank bioinformatics team of the bug and our proposed patch. This study highlights the need for rigor and continued investigation by the community into optimal data-processing protocols for the UK Biobank and other large genomic resources, prompt sharing of any concerns, and timely responses to issues raised by data guardians and providers. As tasks like sequence alignment and variant calling are computationally expensive, robust centralized sequence data-processing protocols are critical for enabling the use of such resources by the wide-ranging research community, particularly as the UK Biobank prepares to expand the initial 50,000 exomes to 150,000 in early 2020, and to 500,000 whole genomes over the next few years.

Declaration of Interests

TI is co-founder of Data4Cure, Inc., is on the Scientific Advisory Board, and has an equity interest.
TI is on the Scientific Advisory Board of Ideaya BioSciences, Inc., has an equity interest, and receives sponsored research funding. The terms of these arrangements have been reviewed and approved by the University of California San Diego in accordance with its conflict of interest policies.

Supplementary Table 1: Characteristics of the 641 genes with zero variants called in the FE-pipeline exome sequences from the UK Biobank. Variants are shown aggregated by gene in the UKB (n = 49,997) and gnomAD v2.1.1 (n = 125,748). The probability of observing at least one variant in the UKB is based on the cumulative distribution function of a binomial distribution with n = 49,997 × 2 and p = (observed counts in gnomAD) / (125,748 × 2). Genes are labeled according to whether an alternative contig representation exists in the genome reference GRCh38.

Online Methods

UK Biobank Whole Exome Sequencing (WES) and genotype array data

We used the sample-level aligned sequence data (CRAM files) from the Functionally Equivalent (FE) pipeline1. A total of 49,960 individuals had both exome sequencing data and genotype array data as of November 26, 2019, of whom 49,909 pass standard genotype-array quality control. As the exome data are in coordinates relative to GRCh38 but the genotype array data are relative to GRCh37, we used the UCSC Genome Browser liftOver tool13 to update the genotype data coordinates to GRCh38. To facilitate direct comparison of the exome and array genotype data (Fig. 1), we filtered for variants on the genotyping array present at MAF > 0.01 that were also covered by the exome sequencing regions.

Variant comparison to gnomAD

We obtained targeted exome-capture regions for both the UK Biobank and gnomAD9 (v2.1, https://storage.googleapis.com/gnomad-public/intervals/exome_calling_regions.v1.interval_list). The exome calling regions from gnomAD were converted to GRCh38 coordinates using the UCSC Genome Browser liftOver tool13 to facilitate comparison to the UK Biobank. We used BEDTools14 to extract the regions shared between UKB and gnomAD. Using BEDOPS15, we further annotated the common genomic regions to a total of 23,040 genes based on the Ensembl 85 gene model16. For each gene, we aggregated variants from the UK Biobank FE-pipeline project-level variant calls and compared the number of variants per gene to those in gnomAD (Figs. 2a and 2b). To evaluate whether population structure contributes to the difference in variant distribution (Fig. 2c), we tallied the number of variants in gnomAD when subdividing individuals into six population groups: African, Latino, East Asian, European, South Asian and Other (population not assigned).

Extraction and reprocessing of raw unmapped reads

Using SAMtools10, we sorted the aligned sequence reads in the UK Biobank CRAM files by query name and losslessly extracted the raw unmapped reads into FASTQ files. Using BWA-MEM12, these reads were mapped to the full version of the GRCh38 genome reference, which contains both the primary assembly and all alternative contigs. We generated all bwa-required index files locally except the ".alt" index file, which we downloaded from the NCBI (ftp://ftp-trace.ncbi.nlm.nih.gov/1000genomes/ftp/technical/reference/GRCh38_reference_genome/GRCh38_full_analysis_set_plus_decoy_hla.fa.alt). We marked duplicates and recalibrated base quality scores following GATK best practices17.
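For reference, the per-gene probability defined in the Supplementary Table 1 legend can be reproduced in a few lines. The sketch below is our illustration, not the authors' code, and the gnomAD count is chosen to match the median cited in the main text.

```python
# P(at least one variant observed in the UKB exomes) under the binomial
# model of the Supplementary Table 1 legend: n = 49,997 * 2 chromosomes,
# p = gnomAD counts / (125,748 * 2). The gnomAD count here is illustrative.
from scipy.stats import binom

n_ukb_chroms = 49_997 * 2
gnomad_count = 286                      # e.g. the median per-gene count
p = gnomad_count / (125_748 * 2)

p_at_least_one = binom.sf(0, n_ukb_chroms, p)   # 1 - P(X = 0)
print(f"P(>=1 variant observed in UKB) = {p_at_least_one:.3g}")
```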
To reproduce the scenario in which alternative contigs are not properly referenced, we used the BWA-MEM12 option -j, which directs the aligner to ignore the ".alt" index file (Fig. 3b).

Acknowledgements

We are grateful to Dr. Olivia Osborne for helpful discussions and to William Markuske for support with high-performance computing. This work was funded by grants from the National Institute on Drug Abuse and the National Human Genome Research Institute (P50 DA037844 and R01 HG009979 to TI), the National Institute of Diabetes and Digestive and Kidney Diseases (K08 DK102877-01 and R03 DK113328-01 to ARM) and a UCSD/UCLA Diabetes Research Center grant (P30 DK063491 to ARM).
https://www.biorxiv.org/content/10.1101/868570v1.full
The Swine Genome Sequencing Consortium (SGSC) was formed in September 2003 by academic, government and industry representatives to provide international coordination for sequencing the pig genome. The SGSC's mission is to advance biomedical research, animal production and animal health through the development of DNA-based tools and products resulting from the sequencing of the swine genome. A physical map of the swine genome was generated by an international collaboration of four laboratories, led by the Wellcome Trust Sanger Institute and Dr. Lawrence Schook at the University of Illinois at Urbana-Champaign. Both high-throughput fingerprinting and BAC-end sequencing were used to provide the template for an integrated physical map of the whole pig genome. A high-quality draft genome sequence for the pig (Sus scrofa) was published in Nature in 2012. The paper, entitled "Analyses of pig genomes provide insight into porcine demography and evolution", described the sequencing, analysis and annotation of the draft genome sequence. In parallel, a series of companion papers was published in BMC journals. The genome paper reports analyses not only of the reference genome of Duroc 2-14 (a.k.a. T.J. Tabasco) but also of the genomes of several wild boars and other domesticated pigs. The analyses revealed a deep phylogenetic split between European and Asian wild boars dating back approximately 1 million years. This observation, in part, provides final justification for the wide trait-mapping crosses developed in the 1990s, especially between Western and Chinese breeds. It was argued at the time that Chinese and Western pigs were as genetically divergent as Mus musculus and Mus spretus, mouse species that had been used extensively in mouse genetics research. In effect, this judgment has been validated. As observed in other genomes, genes encoding immune-response functions show evidence of rapid evolution. The pig has the largest repertoire of functional olfactory receptor genes of any mammal sequenced to date, which is perhaps why pigs are such effective truffle hunters. There is evidence that genes involved in taste are located at pig evolutionary breakpoints. Pigs can tolerate higher levels of substances that are distasteful to humans; perhaps the ability of pigs to eat material unpalatable to humans was one of their attractions when they were domesticated. Comparison of multiple pig genomes, and of the pig and human genomes, revealed several potentially disease-causing genetic variants that may extend the value of pigs in biomedical research. The pig industry has an excellent track record of rapid and effective exploitation of new knowledge and technologies. The pig genome sequence is expected to accelerate pig genetics research, the results of which should be translated into pig improvement in a timely manner. The Consortium gratefully acknowledges the pig industry's support for the sequencing project. This paper represents an important landmark for the Consortium. For many of the authors, collaborative research in pig genetics and genomics stretches back to the early 1990s, to the European PiGMaP project and the USDA Pig Genome Coordinated activities in the US. The project has benefited from these long-established collaborations and friendships.
Trans-national funding was critical to the delivery of the project, including USDA funding to The Wellcome Trust Sanger Institute, European Commission and European Research Council funding, and significant contributions from Korean, Japanese, and Chinese national sources and many others.
https://comparativegenomics.illinois.edu/swine-genome-project
Taxonomic classification is an essential step in the analysis of microbiome data and depends on a reference database of whole-genome sequences. Taxonomic classifiers are built on established reference species, such as those in the rapidly growing Human Microbiome Project database. While constructing a population-wide pangenome of the bacterium Hungatella, we discovered that the Human Microbiome Project reference species Hungatella hathewayi (WAL 18680) was significantly different from other members of this genus. Specifically, the reference lacked the core genome shared by the other members. Further analysis, using average nucleotide identity (ANI) and 16S rRNA comparisons, indicated that WAL18680 was misclassified as Hungatella. This classification error is being amplified in taxonomic classifiers and will have a compounding effect as microbiome analyses are performed, resulting in inaccurate assignment of community members, fallacious conclusions and possibly inappropriate treatment. As automated genome-homology assessment expands for microbiome analysis and outbreak detection, and as public health reliance on whole genomes increases, this issue will likely occur at an increasing rate. These observations highlight the need to develop reference-free methods for epidemiological investigation using whole-genome sequences, and the criticality of accurate reference databases.

Background

Clostridia are a very diverse group of organisms whose taxonomy is in constant revision in light of new whole-genome sequence production and genomic flux1. While organism classifications can be reassigned, identified isolates within the same species retain their relatedness. In an analysis of 13,151 microbial genomes, misclassification (~18%) was assessed by binning genomes into cliques and singletons from ANI data using the Bron-Kerbosch algorithm, which identified 31 of 445 type strains as misclassified2. Causes of type-strain misclassification include poor DNA-DNA hybridization (e.g. due to high genomic diversity), low DNA-DNA hybridization values, naming without reference to another type strain, and lack of 16S rRNA data. Hungatella hathewayi, previously designated Clostridium hathewayi, was not included in that analysis, as very few Hungatella genomes were available at the time of publication. As more metagenomes are published, claims of finding new organisms are mounting. To this point, Almeida et al. reported 1952 uncultured organisms that are not represented in well-studied human populations and presented data indicating that rare species are difficult to identify accurately and do not match existing references3. Public repositories of genomic data have expanded tremendously beyond human curatorial capacities, an ever-increasing issue given the high rate of WGS production4,5. Recently, it was estimated that ~18% of the organisms in microbial genome databases are misclassified2. This high rate of error led to investigations of the misclassification of specific organisms, including Aeromonas6 and Fusobacterium7, and ultimately of entire reference databases2. These studies found misclassified type strains, which calls into question the foundation of the taxonomy and the relatedness inferred when population genomes are used for epidemiological purposes, especially for rare organisms that are not well represented in the reference database.
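The clique-and-singleton binning mentioned above can be sketched compactly: build a graph whose edges connect genome pairs meeting a species-level ANI threshold, then enumerate maximal cliques (networkx's find_cliques implements a Bron-Kerbosch variant). The ANI values, genome names and 95% cutoff below are illustrative, not data from the cited analysis.

```python
# Bin genomes into cliques and singletons from pairwise ANI, in the spirit
# of the clique-based misclassification screen described above.
import networkx as nx

ani = {  # pairwise ANI (%), hypothetical values
    ("gA", "gB"): 98.7, ("gA", "gC"): 98.1, ("gB", "gC"): 97.9,
    ("gA", "gX"): 78.2, ("gB", "gX"): 77.9, ("gC", "gX"): 78.5,
}

G = nx.Graph()
G.add_nodes_from({name for pair in ani for name in pair})
G.add_edges_from(pair for pair, ident in ani.items() if ident >= 95.0)

cliques = list(nx.find_cliques(G))   # e.g. [['gA', 'gB', 'gC'], ['gX']]
singletons = [c[0] for c in cliques if len(c) == 1]  # candidates for review
print(cliques, singletons)
```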
The work presented here uniquely identified a misclassified reference species and found propagation of the incorrectly labelled genomes in several highly cited microbiome studies8,9,10,11.

Observation

Based on this notion of species delineation, we discovered that the Human Microbiome Project reference genome for Hungatella hathewayi (WAL18680) was misidentified while building a phylogeny of Hungatella species from a population of whole-genome sequences12. Both 16S rRNA and average nucleotide identity (ANI2) analyses indicated that WAL18680 is not a member of the genus Hungatella (Table 1). Population-level genome comparison was instrumental in discovering that WAL18680 was misclassified, a finding with important consequences for genomic epidemiology. The misclassified H. hathewayi WAL18680 has been used in phylogenomic analyses, as a reference WGS for metagenome analysis, and in web-server identification platforms built on metagenomic classifiers10,13,14. Epidemiologically, associations with clinical disease will be discordant with the genomic data, leading to inaccurate conclusions about microbiome ecology, or about membership-based therapies to mitigate disease, and ultimately to the wrong causal relationships being drawn9. As more microbiome studies link rare microbes to biological outcomes, a need exists to quickly identify inaccurate assignments when only a few WGS of individual organisms are available for use as a reference. Low sampling of the genome space for rare organisms may result in mis-naming based on a small set of phenotypic assays that do not represent the genome content or its flux15. H. hathewayi was first described as an isolate from human feces16 and was subsequently reported in a patient with acute cholecystitis, hepatic abscess and bacteremia17,18. It was later also reported in a case of appendicitis19. H. hathewayi WAL18680 is one of the designated reference strains in the Human Microbiome Project and is used extensively for binning and classification in microbiome studies, which confounds analysis of the genus Hungatella. This organism can be isolated from the microbiome depending on the enrichment conditions9. Having a misclassified reference species is detrimental to microbiome research and to epidemiological investigations. To address this issue, we developed a heuristic to minimize misclassification of rare reference species by cross-validating the genomic information used for name assignment. The standard procedure of the 100K Pathogen Genome Sequencing Project4,5,20-22 determines the identity of bacterial pathogen isolates in clinical samples using WGS and genome distance (ANI23,24) before proceeding with additional comparisons. This analysis was done with a group of isolates from suspected Clostridioides difficile infection cases. We identified isolates as H. hathewayi using genome distance over the entire genome sequence, implemented for high-dimensional comparison with MASH25 (with the maximum sketch size). This was coupled with comparison of all available WGS, representing the entire genome diversity, to build a whole-genome phylogeny12 and determine the naming accuracy of the clinical isolates. Unexpectedly, one particular sequence was well beyond the species ANI threshold for C. difficile; based on ANI, it is a putative new species of Hungatella (strain 2789STDY5834916).
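A minimal sketch of the MASH-based screen described above, assuming mash is installed and a pre-built sketch of reference genomes is available: mash dist reports a distance whose complement approximates ANI, so distances above roughly 0.05 (ANI below ~95%) flag an isolate as falling outside the expected species. The file names and the threshold convention are assumptions of this sketch, not values from the study.

```python
# Parse `mash dist` output (columns: ref-ID, query-ID, distance, p-value,
# shared-hashes) and flag pairs whose distance exceeds a species cutoff.
import subprocess

def mash_screen(ref_sketch, query_fasta, max_dist=0.05):
    out = subprocess.run(
        ["mash", "dist", ref_sketch, query_fasta],
        check=True, capture_output=True, text=True,
    ).stdout
    flagged = []
    for line in out.splitlines():
        ref_id, query_id, dist, _pval, _hashes = line.split("\t")
        if float(dist) > max_dist:
            flagged.append((ref_id, query_id, float(dist)))
    return flagged

for ref, query, d in mash_screen("hungatella_refs.msh", "isolate.fasta"):
    print(f"{query} vs {ref}: distance {d:.3f} (approx. ANI {100 * (1 - d):.1f}%)")
```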
Weis et al.26,27 used this method with Campylobacter species to demonstrate that genome distance accurately estimates host-specific genotypes, zoonotic genotypes and disease in livestock when validated reference genomes are used. While ANI was the first estimate to raise questions about the accurate identification of this organism, we proceeded with a cross-validation strategy to verify the potential misclassification of the reference species. We followed up on the initial misidentification with a pangenome analysis, hypothesizing that outbreak isolates would cluster together based on isolate origin (i.e. an individual or location)12 and contain the same core genome. We found that WAL18680 did not contain any of the core genome shared by all other Hungatella genomes (Figure 1). Together, these genomic metrics demonstrate that this reference genome was misclassified, which has extensive implications, as reference sequences are commonly used for genomic identification in outbreak investigations. Additionally, metagenome studies require reference genome databases to identify bacterial community members. This result indicates that an epidemiological workflow lacking specific whole-genome alignment will produce inaccurate conclusions and misleading deductions, as observed by Kaufman et al.15, who found that genome diversity is unexpectedly large and expands as a power law with each new WGS added to the database. Given that this is a reference genome for a rare organism from a very diverse group, that genome diversity grows as a power law, and that it is a Human Microbiome Project reference genome, the misidentification has far-reaching implications. Conflicts between taxonomic classifications based on traditional methods, such as phenotypic and metabolic assays, and those based on genomic parameters will likely increase as more genomes are produced and the entire genetic potential (i.e. the entire genome) is used. Heuristic indicators of misclassification are needed, as is an expansion of WGS that adequately represents bacterial diversity among and within taxa, so as to capture the genetic diversity of any single organism.

Genome sequence availability

The WGS for each genome is available via the NCBI under BioSample accessions SAMD00008809, SAMN02463855, SAMN02596771, SAMEA3545258, SAMEA3545379 and SAMN09074768. The WGS for BCW8888 is available via the 100K Project BioProject at the NCBI (PRJNA186441) as BioSample SAMN12055167.

Data Availability

The whole-genome sequences are available now via the SRA, except for BCW8888, which will be publicly available within 90 days.
https://www.medrxiv.org/content/10.1101/19000489v1.full
Traits currently being measured on the Breeders' Toolkit doubled haploid Am. muticum lines at the WRC include: growth stage dates, biomass (at anthesis and harvest), leaf area, plant height, internode lengths, fruiting efficiency, ear height, grain number (size and area), photosynthetic traits (such as Amax, Asat, ACi parameters, chlorophyll fluorescence and ETR), leaf N%, pollen grain number and size, anther length, and flowering morphology. More information to follow.

Lead: Prof. Erik Murchie; Research Fellow: Dr. Lorna McAusland (University of Nottingham). Co-Is: Dr Elizabete Carmo-Silva, Professor Ian Dodd, Professor Keith Edwards, Dr Michael Foulkes, Professor Ian King, Professor Julie King, Professor Tracy Lawson, Professor Martin Parry, Dr Kevin Pyke, Professor Christine Raines.

The objective of the programme at Nottingham is to transfer into wheat small chromosome segments from related species that carry a target gene but lack any deleterious genes. In this project we will generate, using wide crosses, landraces and existing cultivars, lines with substantial variation in their photosynthetic properties, and use high-throughput screening techniques to identify progeny with enhanced photosynthetic capacity and efficiency. We will investigate the genetic basis of the photosynthetic variation and, with the IWYP Hub, integrate these discoveries into a pre-breeding and breeding strategy. The programme brings together a multi-disciplinary research team: UoN (wide crossing, wheat physiology, photosynthesis phenotyping), UoB (genetic marker analysis), University of Essex (UoE: novel photosynthesis phenotyping, Calvin cycle engineering), Lancaster University Environment Centre (LEC: whole-plant carbon gain and water use, Rubisco engineering) and CIMMYT (field phenotyping and genotyping). Introducing wild relative material into pre-breeding and breeding programmes takes significant time and hundreds, if not thousands, of plants to achieve the required number of backcrosses. During this project, a novel photosynthesis-specific screening tool was developed to relieve the pressure of rapidly phenotyping successive generations of plants. The platform is based around a chlorophyll fluorescence imager that can be used to monitor PSII non-invasively. This system, from Photon Systems Instruments, can screen 200 intact seedlings per day or 400-500 leaf sections per day, control gaseous conditions (e.g. O2), measure chlorophyll fluorescence (invasively or non-invasively), and run custom fluctuating-light protocols (dynamic screening). Full details of the phenotyping platform methodology can be found in the following publication: "High throughput procedure utilising chlorophyll fluorescence imaging to phenotype dynamic photosynthesis and photoprotection in leaves under controlled gaseous conditions", McAusland L, Atkinson JA, Lawson T and Murchie EH (2019) Plant Methods 15:109, https://doi.org/10.1186/s13007-019-0485-x
https://www.nottingham.ac.uk/wrc/germplasm-resources/phenotyping.aspx
Exogenous application and interaction of biochar with environmental factors for improving functional diversity of rhizosphere's microbial community and health. Ren T, Feng H, Xu C, Xu Q, Fu B, Azwar E, et al. Chemosphere, 2022 Jan 22;294:133710. PMID: 35074326. DOI: 10.1016/j.chemosphere.2022.133710

Abstract: The use of fertilizer with high nitrogen content in many countries, as well as its enormous surplus, has a negative impact on the soil ecological environment in agricultural systems. This consumption of nitrogen fertilizer can be minimized by applying biochar to maintain a sufficient supply of nitrogen as a nutrient to the near-root zone. This study investigated the effects of various amounts of biochar application (450, 900, 1350 and 1800 kg/hm2) and reductions in nitrogen fertilizer amount (10, 15, 20 and 25%) on the nutrients and microbial community structure in the rhizosphere of tobacco plants. The microbial community was found to be essential for improving nitrogen retention. Compared with conventional treatment, the application of biochar to rhizosphere soil increased the content of soil available phosphorus, organic matter and total nitrogen by 21.47%, 26.34% and 9.52%, respectively. It also increased the abundance of microorganisms capable of degrading and utilizing organic matter and cellulose, such as Actinobacteria and Acidobacteria. The relative abundance of Chloroflexi increased by 49.67-78.61%, and that of Acidobacteria by 14.79-39.13%. Overall, the application of biochar with a reduced nitrogen fertilizer amount can regulate the rhizosphere micro-ecological environment of tobacco plants and their microbial population structure, thereby promoting soil health for tobacco plant growth while reducing the soil acidification and environmental pollution caused by excessive nitrogen fertilizer.

Soil carbon supplementation: Improvement of root-surrounding soil bacterial communities, sugar and starch content in tobacco (N. tabacum). Yan S, Ren T, Wan Mahari WA, Feng H, Xu C, Yun F, et al. Sci Total Environ, 2021 Aug 24;802:149835. PMID: 34461468. DOI: 10.1016/j.scitotenv.2021.149835

Abstract: Soil carbon supplementation is known to stimulate plant growth by improving soil fertility and plant nutrient uptake. However, the underlying processes and chemical mechanisms that could explain the interrelationship between soil carbon supplementation, soil micro-ecology, and plant growth and quality remain unclear. In this study, we investigated the influence and mechanism of soil carbon supplementation on the bacterial community, chemical cycling, mineral nutrient absorption, and the growth and properties of tobacco leaves. Soil carbon supplementation increased amino acids, carbohydrates, chemical energy metabolism and bacterial richness in the soil. This led to increased content of sugar (23.75%), starch (13.25%) and chlorophyll (10.56%) in tobacco leaves. Linear discriminant analysis revealed 49 key phylotypes and significant increases in some Plant Growth-Promoting Rhizobacteria (PGPR) genera (Bacillus, Novosphingobium, Pseudomonas, Sphingomonas) in the rhizosphere, which can influence tobacco growth.
Partial Least Squares Path Modeling (PLS-PM) showed that soil carbon supplementation positively affected the sugar and starch contents in tobacco leaves, possibly by altering the photosynthesis pathway towards increasing the aroma of the leaves, thus contributing to enhanced tobacco flavor. These findings are useful for understanding the influence of soil carbon supplementation on the bacterial community for improving the yield and quality of tobacco in industrial plantations.

Biochar for cadmium pollution mitigation and stress resistance in tobacco growth. Ren T, Chen N, Wan Mahari WA, Xu C, Feng H, Ji X, et al. Environ Res, 2021 01;192:110273. PMID: 33002505 DOI: 10.1016/j.envres.2020.110273

Abstract: Pot experiments were conducted to investigate the influence of biochar addition and the mechanisms that alleviate Cd stress in the growth of tobacco plants. Cadmium showed an inhibitory effect on tobacco growth at different post-transplantation times, and this increased with increasing soil Cd concentration. The growth index decreased by more than 10%, the photosynthetic pigment content and photosynthetic characteristics of the tobacco leaf were significantly reduced, and the antioxidant enzyme activity was enhanced. Application of biochar effectively alleviated the inhibitory effect of Cd on tobacco growth, and the alleviation effect was more pronounced in plants exposed to higher Cd concentrations. The contents of chlorophyll a, chlorophyll b, and carotenoids in the leaves of tobacco plants treated with biochar increased by 9.99%, 12.58%, and 10.32%, respectively, after 60 days of transplantation. The net photosynthetic rate increased by 11.48%, stomatal conductance increased by 11.44%, and intercellular carbon dioxide concentration decreased to 0.92. During the growth period, the antioxidant enzyme activities of tobacco leaves comprising catalase, peroxidase, superoxide dismutase, and malondialdehyde increased by 7.62%, 10.41%, 10.58%, and 12.57%, respectively, after the application of biochar. Our results show that biochar containing functional groups can effectively reduce the effect of Cd stress by intensifying the adsorption or passivation of Cd in the soil, thereby significantly reducing the Cd content in plant leaves, and providing a theoretical basis and method for alleviating soil Cd pollution and effecting soil remediation.
http://mymedr.afpm.org.my/search?q=author%3A%28%22Ren+T%22%29
Genetical Studies on the Chlorophyll Apparatus in Oats and Wheat, by Kåre Fröier. Published 1946 by Berlingska Boktryckeriet in Lund. Written in English. Physical object: 406 pages. Open Library ID: OL16595391M.

Studies of chlorophyll content by different methods in black gram: Distilled water from an all-glass apparatus, with no addition of potassium permanganate, was used. All procedures were performed under diffused light to avoid exposing the leaf material to direct, bright or sun light.

The impact of (long-term) drought acclimation and (short-term) heat stress and their combination on fast chlorophyll fluorescence induction curves (OJIP) and grain yield was tested using pot-grown plants of wild barley (Hordeum spontaneum) originating from Northern Egypt. Concerning agronomic traits, the main effect of drought was decreased biomass accumulation and grain yield, while heat ...

It is believed that most plants need light, CO2, chlorophyll and water to produce starch through photosynthesis. We need only use two different solutions to discover whether chlorophyll is necessary for the formation of starch: ethanol (methylated spirits) and iodine solution.

Photosynthetic studies on a pea mutant deficient in chlorophyll. H. R. Highkin, N. K. Boardman, and D. J. Goodchild, Division of Plant Industry, CSIRO, Canberra, A.C.T., Australia. Abstract: A chlorophyll-deficient mutant of pea (Pisum sativum) was found as a spontaneous mutation of the variety ...

The objective was to determine whether prediction of leaf N concentration on a dry-weight basis (Ndw) with the chlorophyll meter can be improved by a simple correction for specific leaf weight (SLW). Leaf N status was estimated by a chlorophyll meter (SPAD) and measured directly by the micro-Kjeldahl procedure. Specific leaf weight was calculated as the ratio of dry weight to leaf area.

Chlorophyll a and chlorophyll b are the two major types of chlorophyll and differ only in the composition of one of their structural side chains. Chlorophyll a is the most prevalent type of chlorophyll. It is found in plants, algae and other aquatic organisms. This type of chlorophyll absorbs red ...

Polyploidy and radiosensitivity in wheat and barley: survival, pollen and seed fertility and mutation frequency. Polyploidy and radiosensitivity in wheat and barley: cytological and cytochemical ... Fröier, K. Genetical studies on the chlorophyll apparatus in oats and wheat ...

The author gives a survey of the methods used and the results obtained in induced mutation.
The article consists of three parts: the general theoretical aspects, the possibilities of practical application, and a bibliography of titles. In the theoretical part (mutation research) the various types of mutation are mentioned, but only gene or point mutations and structural mutations are ...

A pilot study on wheat grass juice for its nutritional and therapeutic potential on chronic diseases (Chauhan). Abstract: Triticum aestivum (wheat grass) juice has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to ...

Net photosynthetic rate (An), stomatal conductance (gs), chlorophyll content and dark respiration rate were measured on 16 wheat cultivars (Triticum aestivum L.) grown in replicated yield trials in a warm, irrigated, low-relative-humidity environment. Measurements were made on flag leaves in full sunlight at three different stages of plant development (booting, anthesis ...).

Correlations between nitrogen and chlorophyll content in some wheat cultivars were examined at the outset of the flowering phase. The dependence of nitrogen content on mineral elements in the soil was established at the same time. Investigations were conducted on unfertilized soil.

Germplasm: Up to 26 wheat and 2 barley genotypes were used in a series of experiments to assess variation in chlorophyll per unit leaf area. These included comparisons between different positions on the leaf, between different leaves per plant, between plants, between genotypes and between environments, to determine the impact of these factors on chlorophyll level per unit leaf area, measured with a SPAD meter.

Influence of nitrogen and plant density on spring wheat (Triticum aestivum L.) and wild oat (Avena ludoviciana L.) competitiveness: results from numerous studies have shown that high crop densities can reduce the impact of weeds on the crop ... This indicates that there was a decrease in wild oat ...

Genetic dissection of chlorophyll content at different growth stages in common wheat. Journal of Genetics 88(2), September.

Leaf relative chlorophyll content as an indicator parameter to predict nitrogen fertilization in maize. Ciência Rural, n. 5. The soil (EMBRAPA) presented the following composition: clay ... g dm-3; pH (in water) ...; phosphorus, potassium and organic matter contents of 16 mg dm-3, ... mg dm-3 and ... mg dm-3, respectively.

Chlorophyll content in the terminal leaf: Fig. 1 presents the results of measuring chlorophyll content in the terminal leaf of wheat. The content of chlorophyll a is significantly greater than that of chlorophyll b in all variants of soil fertilization. The concentration of chlorophyll b is virtually equal in all cultivars.

In addition, numerous studies in plants and algae have evidenced the impact of Hg (mostly 10-6 M IHg) on photosynthesis, notably chlorophyll breakdown and the reduction in photosynthesis.

Chlorophyll is a green photosynthetic pigment found in plants, algae, and cyanobacteria. Chlorophyll absorbs mostly in the blue and, to a lesser extent, red portions of the spectrum.

The chlorophyll index from the Minolta SPAD device showed a positive correlation with leaf N concentrations in wheat plants (Fioreze & Rodrigues), although studies evaluating the possible differences in efficiency between different models of chlorophyll meters and their relationship with plant N concentrations are still incipient.
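The chlorophyll-meter/SLW correction described above amounts to simple arithmetic. The sketch below is a hypothetical illustration of the idea rather than the cited study's calibration: a SPAD reading tracks chlorophyll (and hence N) per unit leaf area, so dividing by SLW moves the estimate toward N per unit dry mass; the constant k is invented.

```python
# Sketch of the SLW-correction idea: SPAD tracks N per unit leaf area,
# so dividing by specific leaf weight (g m-2) shifts the estimate toward
# N per unit dry mass. The calibration constant k is hypothetical.

def specific_leaf_weight(dry_mass_g: float, leaf_area_m2: float) -> float:
    """SLW = leaf dry weight / leaf area, in g m-2."""
    return dry_mass_g / leaf_area_m2

def n_dw_index(spad: float, slw: float, k: float = 1.0) -> float:
    """Hypothetical dry-weight N index, proportional to SPAD / SLW."""
    return k * spad / slw

slw = specific_leaf_weight(dry_mass_g=0.35, leaf_area_m2=0.006)  # ~58 g m-2
print(f"SLW = {slw:.1f} g m-2, N_dw index = {n_dw_index(42.0, slw):.3f}")
```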
(BeWellBuzz) Chlorophyll is the green pigment in plants that facilitates photosynthesis, which converts sunlight into chemical energy stored as carbohydrates, releasing oxygen. This substance is one of the main reasons for the health benefits of green veggies. Although scientists had ...

Fröier, Kåre. Genetical Studies on the Chlorophyll Apparatus in Oats and Wheat. Lund.

... winter wheat growing season at the University of Nottingham, Sutton Bonington Campus, near Loughborough, UK. The experimental site lies at latitude 52° 50' N and longitude 1° 15' W. The soil type is a clay loam with soil indices of P: 3, K: 3, Mg: 4 and pH ... The previous crop grown was winter oats and the soil ...

When you learned about photosynthesis in elementary school, you discovered that chlorophyll is used by a plant to collect light and make energy. In addition, plants owe their bright green color to chlorophyll. If you eat a lot of leafy green vegetables, the odds are you get your dose of chlorophyll there.

Overview of the spectrophotometric method (US EPA method): the spectrophotometric method is used when chlorophyll levels are very high. Step 1: Collect the water sample. Step 2: Filter a known quantity of water onto a 47 mm glass fiber filter. Step 3: Put the filter in a known volume of an acetone solution and grind the filter with a tissue grinder; this releases the chloroplasts from the algae, and the filter ...

Eur J Biochem. Jan 3;41(1). Composition of the photosynthetic apparatus of normal barley leaves and a mutant lacking chlorophyll b.

The chlorophyll content in flag leaves reflects the photosynthetic activity and yield potential of wheat plants. A two-year field experiment was carried out to evaluate the flag leaf chlorophyll content index (CCI) at different growth stages [Feekes ...] (anthesis) ...
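To turn the extract's absorbance readings into pigment concentrations, workers typically apply published coefficients for the solvent used. The sketch below uses the classic Arnon-style coefficients for 80% (v/v) acetone (absorbances at 663 and 645 nm), which is one common convention rather than the specific EPA procedure referenced above; the absorbance and volume values are hypothetical.

```python
# Sketch: chlorophyll concentration (mg/L of extract) from absorbance,
# using Arnon-type coefficients for 80% acetone. Verify the coefficients
# against the solvent and method actually used.

def chlorophyll_mg_per_l(a663: float, a645: float) -> dict:
    chl_a = 12.7 * a663 - 2.69 * a645
    chl_b = 22.9 * a645 - 4.68 * a663
    return {"chl_a": chl_a, "chl_b": chl_b, "total": chl_a + chl_b}

# Hypothetical absorbances from a ground-filter extract
conc = chlorophyll_mg_per_l(a663=0.52, a645=0.21)

# Scale by extract volume (mL) and filtered sample volume (L):
# mg/L of extract x mL of extract = ug of pigment; divide by sample L.
extract_ml, sample_l = 10.0, 1.0
total_ug_per_l = conc["total"] * extract_ml / sample_l
print(conc, f"{total_ug_per_l:.1f} ug chlorophyll per L of sample")
```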
https://hoqokazigyja.cinemavog-legrauduroi.com/genetical-studies-on-the-chlorophyll-apparatus-in-oats-and-wheat-book-2251af.php
Increases in plant dry mass are not always associated with increases in photosynthetic rate, particularly when increased internode elongation increases plant height or diameter. Photosynthetic efficiency is rigorously defined as the amount of CO2 fixed per absorbed photon, a ratio known as quantum yield. Longer internodes typically increase the interception and absorption of photons, causing increased plant growth (CO2 fixed or dry mass gain) without an increase in quantum yield (photosynthesis). An increase in the physical process of radiation interception is often incorrectly interpreted as an increase in the biochemical process of photosynthesis.

Plant scientists continue to grossly underestimate the magnitude and importance of side lighting in single-plant studies. The reflective walls of growth chambers mean that side light intensity is only slightly less than that from the top. If a single, spaced plant is considered to be spherical rather than circular, the surface area for radiation interception changes from πr2 to 4πr2, a fourfold increase. Even if only the top half of the sphere is exposed to light, the surface area and thus light absorption are still twice that of a circle. In many studies, plant surface area and radiation absorption should be analyzed as a cylinder determined by plant height and width, rather than as a circle determined by width only. Side lighting means that tall plants intercept more photons and will have a higher growth rate than short plants, even when the irradiance level is identical at the top of the plants. It is important to distinguish between radiation absorption and photosynthesis because the increases in growth or width caused by increased side lighting do not occur in plant communities where plants form a closed canopy and mutual shading eliminates side lighting.

In our studies with wheat canopies, elevated CO2 increased photosynthesis, which increased tillering (branching) and lateral spread at the edges of the plant canopy. Precise measurements of the canopy-absorbing area showed that half of the CO2 effect was caused by increased radiation absorption. The direct CO2 effect on photosynthesis was only about 50% of what we originally measured. Small increases in lateral spread cause surprisingly large increases in radiation absorption. Figure 1 shows how a 10% increase in lateral spread of a wheat canopy resulted in a 24% increase in plant surface area, causing a similar increase in growth rate and a corresponding overestimation of the effect of CO2 on plant growth per unit surface area.

Fig. 1. The effect of a 10% increase in lateral spread (5 cm on all sides) on the surface area of a plant canopy. The planted surface area was 0.8 m2. The actual plant growth area was 0.99 m2, resulting in a 24% increase in final/initial surface area. Small increases at the perimeter cause large increases in surface area.

Single-Leaf Maximum Quantum Yield and Whole-Canopy Actual Quantum Yield

Photosynthetic efficiency is routinely measured by determining the maximum quantum yield of single leaves, which occurs only at low PPF (less than 200 μmol m-2 s-1) and is measured at the initial slope of the PPF response curve. It is often useful to determine the average daily quantum yield of whole plants at much higher PPF levels, which requires determining the number of photons absorbed by a whole plant. This is difficult because it requires measuring and integrating the incident, transmitted, and reflected photons on all sides of the plant.
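Both the surface-area arithmetic in Fig. 1 and the whole-canopy quantum yield bookkeeping described in the following paragraphs reduce to a few lines of arithmetic. The sketch below verifies the Fig. 1 numbers under the assumption of a square planted area (the tub shape is not stated) and uses hypothetical flux values for the quantum yield step, treating the absorbed-photon total as given.

```python
import math

# Check 1: the Fig. 1 arithmetic. A 0.8 m2 planted area (assumed square
# here) with 5 cm of lateral spread added on every side grows to ~0.99 m2,
# a ~24% increase in radiation-absorbing area.
planted_m2 = 0.8
side = math.sqrt(planted_m2)
spread_area = (side + 2 * 0.05) ** 2        # 5 cm added on all sides
print(f"{(spread_area / planted_m2 - 1) * 100:.0f}% area increase")  # ~24%

# Check 2: whole-canopy quantum yield, as described below: gross
# photosynthesis (net + dark respiration, assumed equal in light and
# dark) divided by absorbed photons. All flux values are hypothetical.
def canopy_quantum_yield(net_ps: float, dark_resp: float,
                         absorbed_photons: float) -> float:
    """All fluxes in umol m-2 s-1; returns mol CO2 per mol photons."""
    return (net_ps + dark_resp) / absorbed_photons

print(f"QY = {canopy_quantum_yield(30.0, 10.0, 800.0):.3f}")  # 0.050
```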
However, these measurements are often made in plant canopies where the edge effects are small or can be eliminated by artificial shading (Gallo and Daughtry, 1986). We have used fiberglass window screen for artificial shading to simulate the effect of additional plants and to minimize edge effects. The screen is hung over a wire that is stretched around the perimeter of the canopy at the top edge. The wire and screen are raised daily as the canopy grows. The window screen extends from the top to the bottom of the canopy. The goal is to create the same vertical radiation attenuation at the edge of the canopy as at the center. The data in Table 1 indicate that 3 layers of window screen may be necessary to create a similar radiation attenuation at the edges.

TABLE 1. A comparison of the radiation attenuation from two or three layers of window screen for artificial shading at the edge of a dense wheat canopy.

cm from top of canopy | Center of tub | Edge, 3 layers of window screen | Edge, 2 layers of window screen
0 | 1100 | 1100 | 1100
6 | 750 | 750 | 750
10 | 265 | 225 | 225
17 | 50 | 35 | 100
36 | 0 | 0 | 20

Values are for PPF in μmol m-2 s-1.

Whole-canopy quantum yield

We calculated average daily canopy quantum yield. This involved integrating net photosynthesis during the light period and was based on the assumption that dark respiration occurs at the same rate in the light and the dark (McCree, 1986). Dark respiration may be slightly lower in the light because ATP can be supplied in leaves by photophosphorylation, or slightly higher because the energy demands for translocation and active uptake are increased. Net photosynthesis plus dark respiration equals gross photosynthesis in μmol m-2 s-1 of CO2. Gross photosynthesis divided by absorbed photons (μmol m-2 s-1) is canopy quantum yield (Bugbee and Monje, 1992; Monje, 1993; Monje and Bugbee, 1994).

DEFINING GROWTH AND DEVELOPMENT

I define plant growth as an increase in dry mass and define plant development as a change in plant shape. These are important distinctions when describing the effect of radiation on internode elongation. An increase in stem elongation is not necessarily an increase in growth. Some radiation environments increase plant height with no change in dry mass, e.g. far-red light can cause rapid stem elongation with no change in photosynthesis or dry mass.

PHOTOSYNTHETIC RATE IS SURPRISINGLY LITTLE AFFECTED BY LIGHT QUALITY FROM STANDARD LAMPS

The effect of radiation quality on photosynthesis has fascinated physiologists for over a hundred years. Early studies were done on photosynthetic bacteria and algae, and we have long known that green light is less useful than other colors. McCree (1972a, 1972b) made comprehensive studies of photosynthesis in single leaves and described an average relative quantum efficiency curve (Figure 2), which was replicated by Inada (1976, 1978a, 1978b) and extended by Sager et al. (1982, 1988). However, the most common method of measuring photosynthetically active radiation gives equal value to all photons with wavelengths between 400 and 700 nm and is referred to as Photosynthetic Photon Flux (PPF). Because blue and green photons result in about 25% less photosynthesis than red photons, a PPF sensor overestimates the photosynthetic value of the blue photons from a source, for example, metal halide lamps. However, a PPF sensor does not respond to ultraviolet or far-red radiation, and these wavelengths drive some photosynthesis.
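The over- and under-weighting can be made concrete: a spectral efficiency ratio in the sense of Table 2 (below) is the lamp's photon output weighted by a relative quantum efficiency curve, divided by its unweighted 400-700 nm photon output. Both spectra in this sketch are crude placeholders, not McCree's measured data; note how the far-red (730 nm) output contributes weighted photosynthesis but is invisible to the flat PPF sum.

```python
# Sketch: spectral efficiency ratio = (photon flux weighted by relative
# quantum efficiency, all wavelengths) / (unweighted photon flux,
# 400-700 nm). Both spectra below are toy placeholders.

# wavelength (nm) -> relative photon output of a hypothetical lamp
lamp = {450: 0.20, 550: 0.15, 600: 0.30, 660: 0.30, 730: 0.05}

# wavelength (nm) -> relative quantum efficiency (toy approximation:
# blue/green photons count ~25% less than red; far-red counts a little)
rqe = {450: 0.72, 550: 0.78, 600: 0.95, 660: 1.00, 730: 0.25}

weighted = sum(flux * rqe[wl] for wl, flux in lamp.items())
ppf = sum(flux for wl, flux in lamp.items() if 400 <= wl <= 700)

print(f"spectral efficiency ratio = {weighted / ppf:.2f}")  # ~0.90
```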
A lamp with significant amounts of UV and far-red radiation could thus have a higher photosynthetic rate than predicted by a PPF sensor.

Fig. 2. The quantum (PPF) response when all photons are weighted equally between 400 and 700 nm, and the relative quantum efficiency curve as determined by the average plant response for photosynthesis (from McCree, 1972a). The quantum response overestimates the photosynthetic value of photons between 400 and about 550 nm, but underestimates the photosynthetic value of photons below 400 and above 700 nm.

Differences between the Quantum and the Actual Plant Response for Common Radiation Sources

Because the spectral output of electric lamps is reasonably constant, the ratio of the constant photon response (quantum or PPF response) to the actual plant response can be calculated from the average quantum efficiency curve (from McCree, 1972a). This ratio is shown in Table 2. The differences among lamp types are surprisingly small. Similar calculations have been described previously (McCree, 1981). An additional source of error is that all sensors that integrate photosynthetic radiation are imperfect. Barnes et al. (1993) analyzed the errors associated with commercial sensors designed to integrate photosynthetic radiation over a range of wavelengths. The ratio in Table 2 for some lamp types is not intuitively obvious, so it is useful to plot the spectral output from the lamps (Figure 3) and plot this output with the average plant response curve (Figure 4).

TABLE 2. The spectral efficiency of six electric lamps and sunlight.

Lamp type | Ratio
Low Pressure Sodium (LPS) | 0.99
High Pressure Sodium (HPS) | 0.95
Incandescent (INC) | 0.95
Metal Halide (MH) | 0.90
Cool White Fluorescent (CWF) | 0.89
Red Light-Emitting Diode (LED) | 0.89
Solar on a clear day | 0.88

Spectral efficiency is defined as the ratio of the lamp spectral output multiplied by McCree's quantum efficiency weighting factors, divided by the number of photons between 400 and 700 nm. Examples are given in Figure 4. The ratio for solar radiation is not a constant (see Figure 3). The LED had a peak output at 660 nm. LEDs with peak outputs at shorter wavelengths would have greater spectral efficiency, e.g. a peak output at 610 nm would result in an efficiency close to 1.0.

PLANT GROWTH IN SOME SPECIES IS SURPRISINGLY LITTLE AFFECTED BY LIGHT QUALITY

Although photosynthesis may not be affected by light quality in short-term studies, the spectral quality from some lamps decreases chlorophyll concentration and alters phytochrome status, which can be detrimental to plant growth in long-term studies. The monochromatic radiation from low-pressure sodium lamps can significantly reduce chlorophyll and plant growth in several dicotyledonous species, for example.

Fig. 3. The spectral characteristics of the seven radiation sources discussed in Table 2. Data are normalized to a peak value of 100 to facilitate comparisons and plotted on a photon flux basis, which is a better predictor of plant response than is energy flux (adapted from Barnes et al., 1993). The solar curve was measured at noon on a sunny day in Logan, UT. Increasing diffuse radiation (from clouds or low sun angles) shifts the peak to shorter wavelengths and would tend to decrease the ratio for solar shown in Table 2.

Fig. 4. A comparison of the spectral output from low pressure sodium (LPS), red LED, metal halide (MH), and high pressure sodium (HPS) lamps to the average quantum efficiency curve.
Monochromatic LPS lamps are near the peak quantum yield (a ratio of 0.99). Some output of red LEDs exceeds 680 nm, where the plant response drops sharply. The ratio for MH lamps (0.90) is reduced because they emit blue photons, but this reduction is somewhat offset because they also emit photons in the UV region, which are not measured by PPF sensors. HPS lamps have a relatively high ratio (0.95) because most of their output is near the peak quantum yield.

Effect of spectral quality on wheat growth and yield

Not all species are sensitive to spectral quality, however. Low-pressure sodium lamps did not decrease the growth and yield of wheat compared to HPS and MH lamps (Table 3), a finding we recently confirmed. The plants under the low pressure sodium lamps of course did not look green, but the apparent difference in green color disappeared when the plants were removed and placed together in full spectrum light. Studies with wheat grown under red LEDs also indicate that chlorophyll synthesis, photosynthesis, growth, and yield of wheat (Triticum aestivum) are insensitive to spectral quality.

TABLE 3. The effect of radiation source on growth and yield of wheat grown under three radiation sources (adapted from Guerra et al., 1985).

Lamp Type | Total Biomass (g m-2) | Grain Yield (g m-2)
Low Pressure Sodium | 171 | 61.7
High Pressure Sodium | 159 | 58.8
Metal Halide | 162 | 62.4
α = 0.05 | n.s. | n.s.

Effect of HPS and MH lamps on soybean growth and yield

Soybean leaves grown under HPS lamps are visually chlorotic and have reduced chlorophyll concentrations compared with plants grown under MH lamps. However, most plant leaves have excess chlorophyll, and small reductions do not necessarily decrease photosynthetic rates. Three recent studies in our laboratory confirm the reduction in chlorophyll under HPS lamps, but indicate that this reduction does not reduce growth or yield (Table 4). In fact, growth and yield were slightly better under HPS lamps. There was greater petiole elongation in plants grown under HPS lamps, but we lowered the plants as they grew taller to maintain a constant PPF at the top of the canopy. Lateral spread was prevented by enclosing the plants with a double layer of window screen around the perimeter of the stand. The reduced chlorophyll concentration may have increased PPF transmittance and allowed more PPF to penetrate to lower leaves in the canopy, thereby increasing canopy photosynthesis.

TABLE 4. The effect of lamp type on the seed yield of soybean canopies.

Lamp type | PPF 400 | PPF 600 | PPF 800 (μmol m-2 s-1)
Metal Halide | 90 | 91 | 83
High Pressure Sodium | 100 | 100 | 100

The data are normalized to 100% in each study. In spite of reduced chlorophyll concentrations, soybean canopies grown under HPS lamps had slightly increased yields.

RADIATION INTENSITY: INSTANTANEOUS VS. INTEGRATED DAILY PHOTOSYNTHETIC PHOTON FLUX

Daily plant growth is closely related to the daily integrated PPF (mol m-2 d-1). Leaf emergence rates are determined by daily integrated PPF (Volk and Bugbee, 1991; Faust and Heins, 1993), and physiological and anatomical characteristics of leaves appear to be determined by the integrated rather than the instantaneous PPF. When Chabot, Jurik, and Chabot (1979) examined combinations of photoperiod and instantaneous PPF, maximum photosynthetic rate, specific leaf mass, and leaf anatomy were all determined by the integrated daily PPF; instantaneous PPF had little effect.
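The conversion between instantaneous PPF and daily integrated PPF used in the next section is simple unit arithmetic; this sketch just reproduces the values cited in the text.

```python
# Daily integrated PPF (mol m-2 d-1) from instantaneous PPF (umol m-2 s-1)
# and photoperiod (h): multiply by seconds of light, convert umol -> mol.

def daily_ppf(ppf_umol: float, photoperiod_h: float) -> float:
    return ppf_umol * photoperiod_h * 3600 / 1e6

print(daily_ppf(800, 16))   # 46.08 -> the ~46.1 mol m-2 d-1 cited below
print(daily_ppf(800, 12))   # 34.56 -> the ~34.6 mol m-2 d-1 cited below
print(daily_ppf(1400, 20))  # 100.8 -> about twice full summer sunlight
```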
One of the objectives of the workshop that resulted in these proceedings was to establish guidelines for radiation intensity in controlled environments. The use of high intensity discharge lamps (HPS and MH lamps) means that full summer sunlight (50 to 60 mol m-2 d-1) can easily be obtained in growth chambers. Although the instantaneous value of summer sunlight is about 2000 μmol m-2 s-1, it is not always necessary to obtain this PPF level in growth chambers because the photoperiod can be extended to achieve integrated PPF levels similar to the field. A PPF of only 800 μmol m-2 s-1 during a 16-h photoperiod results in an integrated PPF of 46.1 mol m-2 d-1, which is close to average field values for June and July in much of the northern hemisphere. Some short-day plants require a 12-h photoperiod, which decreases the integrated daily PPF in both field and controlled environments. Geographic locations and seasons (equinoxes) with 12-h photoperiods have lower daily PPF levels (35 to 40 mol m-2 d-1), so high instantaneous PPF levels may still not be required in growth chambers. A PPF of 800 μmol m-2 s-1 with a 12-h photoperiod results in 34.6 mol m-2 d-1.

THE PPF RESPONSE OF SINGLE LEAVES AND CANOPIES

Light response curves for single leaves are well characterized, and some workers have suggested that PPF levels that saturate single-leaf photosynthesis are adequate for controlled environment studies. However, canopy photosynthesis saturates at much higher PPF levels than single leaves, and PPF levels higher than 1000 μmol m-2 s-1 would be beneficial in some studies. We have found that the photosynthetic response of wheat canopies is linear up to full sunlight (2000 μmol m-2 s-1; Meek, 1990; Figure 5).

Canopy photosynthetic efficiency at a PPF of 100 mol m-2 d-1

The data in Figure 5 are based on short-term (about 1-h) measurements at each PPF level, and these high photosynthetic rates may not be sustained over longer time intervals. However, our studies indicate that high photosynthetic rates are sustained in wheat canopies over a 20-h photoperiod at twice the integrated daily PPF of full summer sunlight (Figure 6).

Fig. 5. The photosynthetic response of component wheat leaves and of the intact wheat canopy. The leaves light-saturate at a PPF of about 1000 μmol m-2 s-1, but the canopy photosynthetic rate is linear, even up to the equivalent of full sunlight (2000 μmol m-2 s-1). The canopy was grown at a constant 21 °C with elevated CO2 (1200 μmol mol-1). The photosynthetic rate of the single leaves is expressed on a leaf-surface-area basis, and the canopy photosynthetic rate is expressed on a ground or horizontal-surface-area basis. The leaf area index of the canopy exceeded 10, which results in a high dark respiration rate, a high light compensation point, and a linear response to increasing PPF.

Fig. 6. The photosynthetic rate of wheat canopies grown at two CO2 levels (ambient: 330 and saturating: 1200 μmol mol-1). The arrow indicates a change in the PPF from 800 to 1400 μmol m-2 s-1. The photoperiod was 20 h. There was no evidence for feedback inhibition of photosynthesis, as indicated by a decreasing photosynthetic rate during the photoperiod, in any of the conditions except at the highest PPF level coupled with elevated CO2. The magnitude of feedback inhibition gradually decreased in the days following the increase in PPF. Within about 6 days after the PPF was increased, the decrease in photosynthesis was less than 5% of the rate at the start of the light period.
The daily integrated PPF at 1400 μmol m-2 s-1 was 100.8 mol m-2 d-1, or about twice full summer sunlight. Plants were grown at a constant 23 °C day/night temperature. Data are from Monje (1993).

CONCLUSIONS

Differences in radiation quality from the six most common electric lamps have little effect on photosynthetic rate. Radiation quality primarily alters growth because of changes in branching or internode elongation, which change radiation absorption. Growth and yield in wheat appear to be insensitive to radiation quality. Growth and yield in soybeans can be slightly increased under high pressure sodium lamps compared to metal halide lamps, in spite of greatly reduced chlorophyll concentrations under HPS lamps. Daily integrated photosynthetic photon flux (mol m-2 d-1) most directly determines leaf anatomy and growth. Photosynthetic photon flux levels of 800 μmol m-2 s-1 are adequate to simulate field daily-integrated PPF levels for both short- and long-day plants, but plant canopies can benefit from much higher PPF levels.

Acknowledgements

I greatly appreciate the review comments of Frank Salisbury and Tracy Dougher. The insightful editorial assistance of Kurt Gutknecht is also appreciated. Research reported in this paper was supported by the National Aeronautics and Space Administration cooperative agreement 2-139, and by the Utah Agricultural Experiment Station. This is Journal paper number 4665.

REFERENCES

Barnes, C., T. Tibbitts, J. Sager, G. Deitzer, D. Bubenheim, G. Koerner, and B. Bugbee. 1993. Accuracy of quantum sensors measuring yield photon flux and photosynthetic photon flux. HortScience 28:1197-1200.
Bugbee, B. and O. Monje. 1992. The optimization of crop productivity: Theory and validation. Bioscience 42:494-502.
Chabot, B.F., T.W. Jurik, and J.F. Chabot. 1979. Influence of instantaneous and integrated light-flux density on leaf anatomy and photosynthesis. Amer. Jour. Botany 66:940-945.
Faust, J.E. and R.D. Heins. 1993. Modeling leaf development of the African violet (Saintpaulia ionantha). J. Amer. Soc. Hort. Sci. 118:747-751.
Gallo, K.P. and C.S.T. Daughtry. 1986. Techniques for measuring intercepted and absorbed photosynthetically active radiation in corn canopies. Agron. Jour. 78:752-756.
Guerra, D., A. Anderson, and F.B. Salisbury. 1985. Reduced phenylalanine ammonia-lyase and tyrosine ammonia-lyase activities and lignin synthesis in wheat grown under low pressure sodium lamps. Plant Physiol. 78:126-130.
Inada, K. 1976. Action spectra for photosynthesis in higher plants. Plant Cell Physiol. 17:355-365.
Inada, K. 1978a. Photosynthetic action spectra in higher plants. Plant Cell Physiol. 19:1007-1017.
Inada, K. 1978b. Spectral dependence of photosynthesis in crop plants. Acta Hortic. 87:177-184.
McCree, K.J. 1972a. The action spectrum, absorbance and quantum yield of photosynthesis in crop plants. Agric. Meteorol. 9:191-216.
McCree, K.J. 1972b. Test of current definitions of photosynthetically active radiation against leaf photosynthesis data. Agric. Meteorol. 10:443-453.
McCree, K.J. 1981. Photosynthetically active radiation. Pages 41-55 in: Lange, O.L., P.S. Nobel, C.B. Osmond, and H. Ziegler (eds.), Encyclopedia of Plant Physiology, New Series, Vol. 12A, Physiological Plant Ecology I. Springer Verlag, Berlin.
McCree, K.J. 1986. Measuring the whole plant daily carbon balance. Photosynthetica 20:82-93.
Meek, D. 1990. The relationship between leaf area index and photosynthetic temperature response in wheat canopies. M.S. Thesis. Utah State University.
Monje, O. 1993.
Effects of elevated CO2 on crop growth rates, radiation absorption, canopy quantum yield, canopy carbon use efficiency, and root respiration in wheat. M.S. Thesis. Utah State University.
Sager, J.C., J.L. Edwards, and W.H. Klein. 1982. Light energy utilization efficiency for photosynthesis. Trans. ASAE 25(6):1737-1746.
Sager, J.C., W.O. Smith, J.L. Edwards, and K.L. Cyr. 1988. Photosynthetic efficiency and phytochrome photoequilibria determination using spectral data. Trans. ASAE 31(6):1882-1889.
Volk, T. and B. Bugbee. 1991. Modeling light and temperature effects on leaf emergence rate in wheat and barley. Crop Science 31:1218-1224.

Bugbee, B. 1994. Effects of radiation quality, intensity, and duration on photosynthesis and growth, p. 39-50. In: T.W. Tibbitts (ed.). International Lighting in Controlled Environments Workshop, NASA-CP-95-3309.

Copyright © March 1994 NASA [National Aeronautics and Space Administration]. All rights reserved.
https://www.controlledenvironments.org/effects-of-radiation-quality-intensity-and-duration-on-photosynthesis-and-growth-bruce-bugbee/
Summer pruning is primarily used in apples to increase light penetration into the inner canopy to improve fruit color. However, summer pruning may reduce fruit size. We hypothesize that removing healthy exterior shoots reduces the whole-tree carbon supply in relation to pruning severity. If the crop load (i.e., demand) is high, fruit size and quality will be reduced. The effects of summer pruning on photosynthetic activity and recovery of shaded leaves after re-exposure were monitored on a range of exposures in canopies of `Empire' apple trees. The photosynthetic ability of leaves was positively related to their pre-pruning exposure. There was little recovery of photosynthetic activity of shade leaves until late in the growing season, indicating that the re-exposure of shade leaves after summer pruning cannot replace the role of the exterior leaves removed by pruning. Whole canopy net CO2 exchange (NCER) was measured on `Empire'/M9 trees with a commercial range of pruning severity. Reductions in NCER were approximately proportional to pruning severity and % leaf area removed, and were as great as 60% in the most severe pruning. Canopy light interception decreased slightly. The effects on canopy NCER thus appeared to be primarily related to reduced photosynthetic efficiency and secondarily to reduced light interception. (Kuo-Tan Li and Alan N. Lakso)

Hector R. Valenzuela, Stephen K. O'Hair, and Bruce Schaffer: The effects of shade during leaf development on photosynthetic activity of cocoyam [Xanthosoma sagittifolium (L.) Schott] were investigated. Net gas exchange and N and chlorophyll concentrations were determined for cocoyam leaves growing in 30%, 50%, or 100% sunlight. Net CO2 assimilation (A) and water use efficiency (WUE) were greater for plants grown in 100% sunlight than for plants grown in less sunlight. Substomatal CO2 concentration increased with increased shading. Stomatal conductance (gs) and transpiration (E) did not vary significantly among treatments. Diurnal patterns for A were positively correlated with gs, lamina temperature, relative humidity, and photosynthetic photon flux (PPF). Lamina N concentrations, determined on lamina dry weight and lamina area bases, increased with increased PPF. Shade plants (30% and 50% sunlight) had greater chlorophyll:N ratios (dry-weight basis) and greater lamina area:lamina dry weight ratios than 100% sunlight-grown plants, which indicates increased photosynthate and N allocation to leaves of shade plants and maximization of light interception.

Teresa A. Cerny, Nihal C. Rajapakse, and Ryu Oi: Growth chambers constructed from photoselective plastic films were used to investigate the effects of light quality on height manipulation and flowering of photoperiodic plant species. Three types of treatment films were used: control, a far-red light intercepting film (YXE-10), and a red light intercepting film (SXE-4). The red (600-700 nm):far-red (700-800 nm) ratios and phytochrome photoequilibrium estimates for the control, YXE-10 and SXE-4 films were 1.0 and 0.71, 1.5 and 0.77, and 0.71 and 0.67, respectively. The photosynthetic photon flux was adjusted to uniformity among chambers using neutral density filters. Spectral filters did not affect minimum and maximum air temperatures. Experiments were conducted using quantitative long-day (Antirrhinum majus and Petunia × hybrida), quantitative short-day (Zinnia elegans and Dendranthema × grandiflorum) and day-neutral (Rosa × hybrida) plant species under natural short-day conditions.
Plants produced under the YXE-10 filters were significantly shorter than the control plants, while plants produced under the SXE-4 films had similar or increased height compared to the control plants. However, both the height response and flowering times varied with the crop species. Flowering time of Rosa × hybrida plants was uniform among all treatments. Flowering of quantitative long-day plants was delayed by at least 10 days under the YXE-10 film and was most responsive to the filtered light. Flowering of quantitative short-day plants was delayed by 2 days under the YXE-10. Days to flower for plants produced under the SXE-4 film were similar to the control plants for all species tested.

D. Michael Glenn, Ralph Scorza, and William R. Okie: Two unpruned willow leaf and two unpruned standard leaf peach [Prunus persica (L.) Batsch.] selections were evaluated for physiological components related to water use efficiency (WUE). The purpose of the study was to assess the value of willow leaf phenotypes for improving water use efficiency in peach and to separate the environmental from the genetic components. The willow leaf characteristic itself did not confer improved water use efficiency. Light interception was a key determinant of WUE in these genotypes, and the relationship of WUE with intercepted photosynthetically active radiation (PAR) by the entire canopy indicated a significant negative correlation. Internal shading of the tree by excessive leaf area reduced WUE, and canopies that intercept more than 60% of the PAR have reduced WUE. While WUE is improved by reducing the amount of PAR interception of the canopy, productivity is reduced. Neither of the willow leaf genotypes had a significant correlation of WUE with yield (leaf and fruit weight); however, the standard leaf type cultivars, `Bounty' and `Redhaven', had significantly different regressions that indicate greater productivity in `Bounty' for a given level of WUE. `Redhaven' was the least productive cultivar; `Bounty' was the most productive, and the two willow leaf genotypes were intermediate in the relationship of intercepted PAR with yield. Therefore, genetic differences in peach growth types can be selected for both increased WUE and increased productivity. Future work in peach breeding to improve WUE and productivity must take into consideration light interception, productivity, and WUE in an integrated manner to make progress in the efficient use of water and light.

Marvin P. Pritts: Manipulating light, temperature, moisture, and nutrients to favor plant growth and productivity is an important component of horticulture. The technology required to achieve such manipulation ranges from inexpensive, basic practices to elaborate, costly approaches involving the latest engineering advances. For example, pruning and mulching are relatively low-tech methods for improving light interception and soil moisture status in small fruit plantings. At the opposite extreme are glass houses with supplemental lighting, CO2 enrichment, and nutrient film hydroponic systems. Of greatest value to small fruit growers, however, is technology that can be applied in field situations, such as the use of overhead irrigation for maintaining soil moisture status, frost protection, and evaporative cooling. One of the greatest challenges to small fruit growers and researchers is integrating new technology into production systems.
The introduction of a new technique for environmental modification usually has indirect effects on other aspects of management, which may require additional technology to compensate for adverse changes while maintaining the favorable change. In addition, unique macro- and microclimates and market opportunities demand specific solutions, and the result is a dynamic, diverse collage of production systems used by growers throughout the world.

Gisele Schoene, Thomas Yeager, and Joe Ritchie: In crop models, it is important to determine the leaf area, because the amount of light interception by leaves influences two very important processes in the plant: photosynthesis and evaporation. Leaf area is dependent on leaf appearance and expansion rates. Leaf appearance rate is driven mainly by temperature. Although the influence of temperature on leaf area development is well known for several agronomic crops, there is no information for woody ornamentals. An experiment was conducted to study the relationship between temperature and leaf appearance of container-grown sweet viburnum. Plants were grown in field conditions in Gainesville, Fla., during two growing periods (Apr. to Aug. 2004 and Aug. 2004 to Jan. 2005). Daily maximum and minimum temperatures and leaf appearance were recorded. Linear regression equations were fitted to the data, and the base temperature was assumed to be 8 °C. Thermal time (°C·d) was calculated as the daily average of maximum and minimum air temperature minus the base temperature and was regressed against leaf number. The sum of accumulated thermal time was found to be linearly correlated with leaf number. The phyllochron, which is the thermal time between the appearances of successive leaves, was estimated at 51 °C·d (a short computational sketch of this bookkeeping follows these abstracts). The information presented in this study will be useful in modeling water use of sweet viburnum in response to environmental conditions.

Neil C. Yorio, Jeff T. Richards, Sharon L. Edney, Joel O. Wilkinson, Gary W. Stutte, and Raymond M. Wheeler: The effects of using mixed cropping strategies to reduce overall mass and increase system efficiency were examined as part of NASA's mission to study minimally-processed or "salad" crops as dietary supplements on long-duration space missions. To test interspecific compatibility, radish (Raphanus sativus L. cv. Cherry Bomb II), lettuce (Lactuca sativa L. cv. Flandria), and bunching onion (Allium fistulosum L. cv. Kinka) were grown hydroponically either as monocultures (control) or mixed-crop within a walk-in growth chamber maintained at 25 °C, 50% relative humidity, 300 μmol·m-2·s-1 PPF, and a 16-h light/8-h dark photoperiod under cool-white fluorescent lamps. Weekly time-course harvests were taken over 28 days of growth. Results showed that none of the species showed any negative growth effects when grown together under mixed-crop compared to monoculture growth conditions. However, radish showed significant increases in edible mass when grown under mixed-crop compared to monoculture conditions. The observed increases in growth are likely attributable to increased light interception due to a decreased guard row effect as well as faster canopy development for radish.

Rohini Deshpande, D. P. Coyne, K. G. Hubbard, J. R. Steadman, E. P. Kerr, and Anne M. Parkhurst: The microclimate of Great Northern (GN) dry bean lines with diverse plant architecture was investigated in terms of white mold (WM) incidence and yield.
A split-plot design was used with protected (3 weekly sprays of benomyl at 0.9 kg ha-1 after flowering) and unprotected treatments as main plots and GN lines as subplots in a WM nursery (1990, 1991). Canopy density, erectness, leaf area index, and plant characteristics were measured. `Starlight' (upright) and `Tara' (prostrate) were selected for detailed microclimate studies. An infrared thermometer, humidity sensor, and a thermistor were placed within the canopy at the advent of flowering. Leaf wetness and its duration were estimated by the leaf temperature in combination with air temperature and dewpoint temperature. `Starlight' showed later and shorter duration of leaf wetness, lower humidity and WM, and higher yield than `Tara'. Severe WM and reduced yields also occurred on all other susceptible entries with dense prostrate plant habits in the unprotected plots. Fractal analysis was done on images of the canopy to quantify the light interception within the canopy.

Michael K. Bomford: Polycultures are thought to offer yield advantages over monocultures when net competition between plants of different species is less than that between plants of the same species. Planting density and crop ratios may both alter these competitive effects. To observe such effects, dicultures of basil (Ocimum basilicum L.), brussels sprout (Brassica oleracea L.), and tomato (Lycopersicum esculentum Mill.) were grown organically at a range of ratios and densities (1-47 plants/m2) over two field seasons. Relative land output (RLO) values were calculated from field data and from modeled yield-density-ratio surfaces. Both methods showed yield advantages from polyculture at high planting densities (RLO = 2.20 at the highest density), but not at low densities. Dicultures offered a 19% yield advantage, on average. Competition for resources was compared by measuring canopy light interception and soil moisture content, showing tomato to be the most competitive crop, followed by brussels sprout, then basil. Diculture yield advantages were most pronounced when individuals of a less competitive species outnumbered those of a more competitive species. Yield advantages were 36% and 20% for dicultures dominated by basil and brussels sprout, respectively. Dicultures dominated by tomato offered no yield advantage. The results are discussed in terms of the current ecological understanding of plant interactions, and possible advantages to be derived from small-scale intercropping, popularly termed companion planting.

Cynthia L. Barden and W. J. Bramlage: Superficial scald development on apples is related to preharvest environmental conditions, perhaps through effects on endogenous antioxidant concentrations. In 1989 we examined effects of maturity, light, and preharvest temperatures (< 10 °C) on endogenous antioxidant levels in the fruit at harvest and on scald development after long-term storage in 0 °C air. Cortland apple trees were sprayed with 500 ppm ethephon 1 month before normal harvest to create maturity differences. Fruit on other Cortland trees were bagged 1 month prior to harvest to reduce light interception. Samples also were harvested from other Cortland trees after exposure to different numbers of hours < 10 °C. Hours < 10 °C before harvest were negatively correlated with scald development.
Ethephon treatment decreased scald incidence, and bagging increased it. Total lipid-soluble antioxidant activity increased with increasing hours < 10 °C and with ethephon treatment, while bagging of fruit slightly decreased this antioxidant activity. To better understand the relationships between preharvest factors and antioxidant levels, individual antioxidants, including ascorbic acid, α-tocopherol, anthocyanins and glutathione, are being analyzed.
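As referenced in the sweet viburnum abstract above, the thermal-time bookkeeping behind a phyllochron model is a few lines of arithmetic. The base temperature (8 °C) and the 51 °C·d phyllochron come from that abstract, while the daily temperature records below are hypothetical.

```python
# Sketch: accumulate thermal time (deg C * d) above a base temperature and
# predict leaf number with a linear phyllochron model, per the viburnum
# abstract above. The daily max/min temperatures here are hypothetical.

BASE_T = 8.0        # deg C, from the abstract
PHYLLOCHRON = 51.0  # deg C * d per leaf, from the abstract

daily_max_min = [(27.0, 17.0), (29.0, 19.0), (31.0, 21.0), (30.0, 20.0)]

thermal_time = sum(max((tmax + tmin) / 2 - BASE_T, 0.0)
                   for tmax, tmin in daily_max_min)
leaves = thermal_time / PHYLLOCHRON
print(f"{thermal_time:.1f} deg C*d accumulated -> ~{leaves:.2f} new leaves")
```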
https://journals.ashs.org/search?access_0=all&page=6&pageSize=10&q=%22light+interception%22&sort=relevance
Photosynthetic light responses of apple (Malus domestica) leaves in relation to leaf temperature, CO2 and leaf nitrogen on trees grown in orchard conditions (Functional Plant Biology, 2018): It was concluded that apical leaves may have accumulated nitrogen, which caused the high photosynthetic capacity and nitrogen use efficiency, as these leaves were possibly the most exposed, and these were correlated with the leaf nitrogen content.

Nutrient recycling during the decomposition of apple leaves (Malus domestica) and mowed grasses in an orchard (2007): Each year, significant fractions of the nutrients absorbed by trees and orchard grasses return to the soil in abscised leaves and mowed biomass. Using the litter bag technique and labelled (15N) ...

Mechanical Harvesting Has Little Effect on Water Status and Leaf Gas Exchange in Citrus Trees (2005): Long-term studies revealed that fruit yield of citrus trees was affected little by mechanical harvesting, and mechanical harvesting did not reduce CO2 assimilation, transpiration, stomatal conductance, water use efficiency, or photosystem II efficiency as measured by chlorophyll fluorescence.

Variable Fall Climate Influences Nutrient Resorption and Reserve Storage in Young Peach Trees (Front. Plant Sci., 2018): Variable climate conditions of increased temperatures or reduced soil moisture during autumn, resulting in delayed senescence, influence the process of nutrient resorption and increase nutrient storage within reserve organs.

In situ experimental exposure of fruit-bearing shoots of apple trees to 13CO2 and construction of a dynamic transfer model of carbon (Journal of Environmental Radioactivity, 2021): A dynamic compartment model for apple fruit-bearing shoots, assuming that the shoots are a simple unit of source and sink for photoassimilates, indicated that the retention of photoassimilated C at harvest depended on the growth rate of C in the organs at the time of exposure.

Strategies for timing nitrogen fertilization of pear trees based on the distribution, storage, and remobilization of 15N from seasonal application of (15NH4)2SO4 (2020): It is recommended that the autumn application of N fertilizer be made soon after fruit harvest in order to increase N stores in fine roots, which in turn increases the amount available for remobilization in spring.

Detection of Cadmium Risk to the Photosynthetic Performance of Hybrid Pennisetum (Front. Plant Sci., 2019): The decrease in photosynthesis through exposure to Cd may be a result of the decrease in leaf chlorophyll content, Rubisco activity, and RuBP regeneration, inhibition of triose phosphate utilization, reduction of the ability to use light and provide energy, and restrictions on electron transport in PSII.

Responses of Nitraria tangutorum to water and photosynthetic physiology in a rain enrichment scenario (2013): Leaf water content and leaf water potential of N. tangutorum could adapt to the tendency of future increasing precipitation by the coordination of water physiology and photosynthesis, suggesting that leaf gas exchange was regulated by leaf water status.
The responses of photosynthetic rate and stomatal conductance of Fraxinus rhynchophylla to differences in CO2 concentration and soil moisture (Photosynthetica, 2013): The results showed that moderate water stress was beneficial for increasing plant assimilation, decreasing photorespiration, and increasing production of photosynthates in F. rhynchophylla, which could therefore be planted in habitats of low soil water content.

References (showing 1-10 of 18):

Effects of defruiting on source-sink relationship, carbon budget, leaf carbohydrate content and water use efficiency of apple trees (1995): The excessive respiratory losses after fruit removal in October, when the tree lost more carbon than it assimilated, may have been induced by translocation of carbohydrates from the leaves to the perennial woody parts of the tree and by the onset of leaf senescence.

Gas exchange parameters, water relations and carbohydrate partitioning in leaves of field-grown Prunus domestica following fruit removal (1991): The effect of fruit removal on gas exchange, water relations, chlorophyll and non-structural carbohydrate content of leaves from mature, field-grown plum trees; the decrease of CO2 assimilation rate is discussed in relation to the hypothesis of assimilate demand regulating photosynthesis through a feedback mechanism.

Profiles of leaf senescence during reproductive growth of sunflower and maize (2000): Lack of interaction between reproductive treatment and leaf position indicates that the senescence signal, whatever its nature, was equally effective throughout the plant in both species.

Sink removal and leaf senescence in soybean: cultivar effects (Plant Physiology, 1987): Even though cultivars differed in the rate of decay of photosynthetic rate and Rubisco level in response to sink removal, the initiation of leaf senescence was not influenced by the presence or absence of developing fruits.

Differential Senescence of Maize Hybrids following Ear Removal: II. Selected Leaf (Plant Physiology, 1984): It is deduced that the rate of flux of N into the leaf was a factor in regulating the differing rates of senescence observed for the six treatments; however, the possibility of concurrent influence of growth regulators or other metabolites cannot be ruled out.

Qualitative and quantitative changes in nitrogenous compounds in senescing leaf and bark tissues of the apple (1980): Fractionation of the total bark proteins by DEAE-cellulose chromatography indicated that the final upsurge of bark proteins observed in November was associated primarily with one group of proteins (Peak III).

Biochemical and Enzymatic Changes in Apple Leaf Tissue during Autumnal Senescence.
https://www.semanticscholar.org/paper/Effect-of-delayed-fruit-harvest-on-photosynthesis%2C-Tartachnyk-Blanke/99f24a79797f05952d6f2915e1bac3e25e6b4449
The present experiment was carried out to investigate the toxic effects of lead and cadmium on seedling growth and metabolism of the medicinally important plant Trigonella foenum-graecum L. Physio-morphological changes were studied at early seedling growth under different concentrations (5, 10, 15 mg/l) of lead and cadmium. These metals showed toxic effects on seedling length, root-shoot ratio, dry weight accumulation, and seedling vigour index. The contents of photosynthetic pigments, viz. chlorophyll a, chlorophyll b and carotenoids, decreased in a dose-dependent manner in both heavy metal solutions. The contents of total free amino acids, soluble proteins, and soluble and insoluble carbohydrates initially increased compared to the control, but decreased at higher concentrations of both metal treatments. Oxidative stress induced by heavy metals caused membrane injury, as observed by an enhanced level of malondialdehyde (MDA); in all experiments, higher concentrations of cadmium showed the maximum toxicity as compared to lead.
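The seedling vigour index reported in this abstract is conventionally computed as germination percentage multiplied by seedling length. The sketch below assumes that convention and invents illustrative numbers, since the abstract gives neither its formula nor raw data.

```python
# Sketch: seedling vigour index (SVI) as commonly defined:
# SVI = germination (%) x mean total seedling length (cm).
# The abstract does not state its formula; this is the usual convention,
# and all data values below are hypothetical.

def vigour_index(germination_pct: float, mean_seedling_len_cm: float) -> float:
    return germination_pct * mean_seedling_len_cm

control = vigour_index(92.0, 11.4)   # hypothetical untreated control
cd_15mg = vigour_index(64.0, 6.2)    # hypothetical 15 mg/l Cd treatment
print(control, cd_15mg, f"{(1 - cd_15mg / control) * 100:.0f}% reduction")
```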
https://paper.researchbib.com/view/paper/305406
The photosynthetic activity of four Rhododendron simsii cultivars, `Dorothy Gish', `Paloma', `White Gish', and `Gloria', was studied at both the individual leaf level, using a portable photosynthesis system (closed), and at the whole-plant level, using assimilation chambers (semi-open system). Net photosynthetic assimilation curves in response to light will be established for both systems. The experimental points obtained will be fitted to a photosynthetic model as described in the literature. The model parameters [initial efficiency (α), dark respiration (Rd), maximum photosynthetic capacity at saturating light (Pmax)] will be presented (a sketch of one such model appears after these abstracts). The evolution of these parameters will be presented as a function of the various stages of development. A comparison of the four cultivars will also be shown.

Rosa × hybrida `Samantha' plants were grown under high-pressure sodium (HPS) lamps, HPS lamps fitted with blue gel filters to reduce the red to far-red (R:FR) ratio, or metal halide lamps. R:FR ratios were 1:0.95, 1:2, and 1:0.26 for HPS, filtered HPS, and metal halide, respectively. Although the R:FR ratio for metal halide was 3.5 times higher than for HPS, the total energy from 630 to 750 nm was 2.8 times lower. At a nighttime supplemental photosynthetic photon flux of 70 to 75 μmol·m-2·s-1, plants under HPS and metal halide lamps produced 49% and 64% more flowering shoots, respectively, than those under filtered HPS (averaged over two crop cycles). The quality index for flowers under HPS, metal halide, and filtered HPS was 25.0, 23.3, and 18.5, respectively. Vase life was 10 to 11 days, regardless of treatment.

We examined the effects of single-layer glass and double-layer antifog polyethylene films on growth and flowering of stock (Matthiola incana L.) and snapdragon (Antirrhinum majus L.) over a 3-year period. Stock produced more buds per spike with shorter but thicker stems under single-layer glass and under antifog 3-year polyethylene, and showed higher photosynthetic capacity (Pc) under single-layer glass than under other covers regardless of light regimes. Similarly, growth and flowering of snapdragon were significantly better under single-layer glass than in polyethylene houses. Supplemental light of 60 μmol·m-2·s-1 accelerated flowering by 20 to 25 days, improved flower quality, and eliminated differences in plant growth and quality of snapdragon between covering treatments. The Pc of stock was lower under all polyethylene covers than under single-layer glass. Among the three antifog polyethylene films, a slightly higher Pc was measured for plants under antifog 3-year polyethylene. However, there was no difference among covering treatments in the net photosynthetic rate (PN) at low light level (canopy level). Supplemental lighting reduced Pc of stock leaves, especially under single-layer glass, and diminished differences in Pc among covering treatments. Dry mass was more influenced by the larger leaf area caused by higher leaf temperature than by PN. Overall, antifog 3-year polyethylene was a good covering material when both plant quality and energy saving were considered.

Low water retention in hanging baskets is a constraint in urban floriculture, and hydrogel addition is an alternative. However, growth may be reduced with such a product depending on the substrate used. This study was conducted to determine the combined effects of substrate and type of hydrogel on the growth of Surfinia plants produced in hanging baskets.
During spring 1998, three rooted cuttings of Surfinia (Petunia × hybrida `Brilliant Pink') were transplanted into 30-cm hanging baskets. Plants were transplanted into one of the following substrates: 1) Pro-Mix BX, 2) a blend of 4/5 Pro-Mix BX and 1/5 compost, or 3) 1/3 perlite, 1/3 vermiculite and 1/3 compost (v/v). These three substrates were amended with two types of hydrogel. The first type, Soil Moist, is an acrylic-acrylamide copolymer; the second, Aqua-Mend, is an acrylic polymer. Plants were grown for 8 weeks under standard irrigation and fertilization practices. Plant growth characteristics, percent dry weight, mineral nutrition, and growth index were determined. Substrate physical properties such as available water content, unsaturated hydraulic conductivity and total porosity were measured. The dry weight and growth index of plants in Pro-Mix BX amended with either type of hydrogel were greater than those of plants growing in Pro-Mix BX without hydrogel. Plants growing in substrates 2 and 3 with hydrogels were, respectively, smaller than or similar to plants growing in those substrates without hydrogel. The effects of the hydrogels on substrate physical properties and plant growth will be discussed.

While the majority of terrestrial plants are colonized in soils by vesicular-arbuscular mycorrhizal (AM) fungi, that does not mean that these species can form a symbiosis with AM fungi in an artificial substrate under commercial production conditions. The purpose of this study was to identify plants having a colonization potential. In March 1998, 51 species and cultivars of ornamental plants were inoculated with two vesicular-arbuscular fungi (Glomus intraradices Schenk & Smith, and Glomus etunicatum Becker & Gerdemann; Premier Tech, Rivière-du-Loup, Québec). Periodic evaluations of colonization were done 5, 7, 9, 12, and 16 weeks after seeding. More than 59% of the plants tested were shown to have good colonization potential with G. intraradices. Species belonging to the Compositae and Labiatae families all colonized. Species in the Solanaceae family showed slight to excellent colonization. Several species belonging to the Amaranthaceae, Capparidaceae, Caryophyllaceae, Chenopodiaceae, Cruciferae, Gentianaceae, Myrtaceae and Portulacaceae families were not colonized. Root colonization with G. etunicatum was not detected on these species and cultivars during this short experimental period.
https://journals.ashs.org/search?f_0=author&q_0=B.+Dansereau
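The Rhododendron abstract above fits light-response data to a photosynthetic model with parameters α (initial efficiency), Rd (dark respiration) and Pmax (light-saturated capacity). The abstract does not name the exact model, so the sketch below uses one widely used exponential form; the parameter values are invented for illustration.

```python
import math

# Hedged sketch of an exponential light-response model of the kind fitted
# in the Rhododendron abstract; the exact model there is unspecified, and
# the parameter values here are illustrative only.

def net_assimilation(ppfd: float, alpha: float, p_max: float, r_d: float) -> float:
    """Pn = Pmax * (1 - exp(-alpha * I / Pmax)) - Rd
    alpha: initial efficiency (mol CO2 per mol photons)
    p_max: light-saturated gross capacity (umol m-2 s-1)
    r_d:   dark respiration (umol m-2 s-1)"""
    return p_max * (1.0 - math.exp(-alpha * ppfd / p_max)) - r_d

for i in (0, 100, 400, 1000):  # PPFD, umol m-2 s-1
    print(i, round(net_assimilation(i, alpha=0.05, p_max=12.0, r_d=1.0), 2))
# At I = 0 the curve returns -Rd; it saturates towards Pmax - Rd.
```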
Photoperiod, the photosynthetic daily light integral (DLI), and mean daily temperature are the three environmental parameters that have the largest effects on plant growth and development. During commercial production of floriculture crops, one or more of these factors is often manipulated so that crops are marketable when desired. In temperate climates (e.g., >35°N latitude), high-intensity (photosynthetic or supplemental) lighting is provided to increase growth and accelerate flowering when ambient light levels are low. In addition, growers provide low-intensity (photoperiodic) lighting to deliver long days, which accelerates flowering of long-day plants. Finally, growers often control temperature, which influences cropping time and plant quality attributes. Since a substantial amount of energy is used to heat greenhouses located in cold climates, growers need to optimize temperature so that energy costs are minimized on a per-crop basis. This paper describes how light and temperature influence growth and flowering of floriculture crops and presents information to improve the energy efficiency of greenhouse crop production, based heavily on recent research performed at Michigan State University.
https://scholars.tari.gov.tw/handle/123456789/6625
Photosynthetically active radiation, often abbreviated PAR, designates the spectral range (wave band) of solar radiation from 400 to 700 nanometers that photosynthetic organisms are able to use in the process of photosynthesis. This spectral region corresponds more or less with the range of light visible to the human eye. Photons at shorter wavelengths tend to be so energetic that they can be damaging to cells and tissues, but are mostly filtered out by the ozone layer in the stratosphere. Photons at longer wavelengths do not carry enough energy to allow photosynthesis to take place.

Other living organisms, such as cyanobacteria, purple bacteria, and heliobacteria, can exploit solar light in slightly extended spectral regions, such as the near-infrared. These bacteria live in environments such as the bottoms of stagnant ponds, sediment and ocean depths. Because of their pigments, they form colorful mats of green, red and purple.

Chlorophyll, the most abundant plant pigment, is most efficient in capturing red and blue light. Accessory pigments such as carotenes and xanthophylls harvest some green light and pass it on to the photosynthetic process, but enough of the green wavelengths are reflected to give leaves their characteristic color. An exception to the predominance of chlorophyll is autumn, when chlorophyll is degraded (because it contains N and Mg) but the accessory pigments are not (because they contain only C, H and O) and remain in the leaf, producing red, yellow and orange leaves.

In land plants, leaves absorb mostly red and blue light in the first layer of photosynthetic cells because of chlorophyll absorbance. Green light, however, penetrates deeper into the leaf interior and can drive photosynthesis more efficiently than red light. Because green and yellow wavelengths can transmit through chlorophyll and the entire leaf itself, they play a crucial role in growth beneath the plant canopy.

PAR measurement is used in agriculture, forestry and oceanography. One of the requirements for productive farmland is adequate PAR, so PAR is used to evaluate agricultural investment potential. PAR sensors stationed at various levels of the forest canopy measure the pattern of PAR availability and utilization. Photosynthetic rate and related parameters can be measured non-destructively using a photosynthesis system, and these instruments measure PAR and sometimes control PAR at set intensities. PAR measurements are also used to calculate the euphotic depth in the ocean. In these contexts, PAR is preferred over other lighting metrics such as luminous flux and illuminance because those measures are based on human perception of brightness, which is strongly green-biased and does not accurately describe the quantity of light usable for photosynthesis.

When measuring the irradiance of PAR, values are expressed in units of energy (W/m²), which is relevant in energy-balance considerations for photosynthetic organisms. However, photosynthesis is a quantum process, and its chemical reactions depend more on the number of photons than on the energy contained in those photons. Therefore, plant biologists often quantify PAR by the number of photons in the 400-700 nm range received by a surface over a specified amount of time, the photosynthetic photon flux density (PPFD). Values of PPFD are normally expressed in units of μmol·m−2·s−1.
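A minimal sketch of the energy-versus-photon distinction just described, valid for monochromatic light only; a real PAR source requires integrating this conversion over its spectrum. Only physical constants and illustrative inputs are used.

```python
# Convert an energy flux (W m-2) at a single wavelength into a photon
# flux (umol photons m-2 s-1) via E_photon = h * c / lambda.

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m s-1
N_A = 6.022e23   # Avogadro constant, photons per mole

def photon_flux_umol(irradiance_w_m2: float, wavelength_nm: float) -> float:
    """Convert W m-2 at one wavelength to umol photons m-2 s-1."""
    energy_per_photon = H * C / (wavelength_nm * 1e-9)      # joules
    photons = irradiance_w_m2 / energy_per_photon           # photons m-2 s-1
    return photons / N_A * 1e6

# The same 100 W m-2 carries ~47% more photons at 660 nm than at 450 nm:
print(round(photon_flux_umol(100, 450)))  # ~376
print(round(photon_flux_umol(100, 660)))  # ~551
```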
In relation to plant growth and morphology, it is better to characterise the light availability for plants by means of the daily light integral (DLI), which is the daily flux of photons per ground area and includes both diurnal variation and variation in day length. PPFD was sometimes expressed using einstein units, i.e., μE·m−2·s−1, although this usage is nonstandard and is no longer used.

There are two common measures of photosynthetically active radiation: photosynthetic photon flux (PPF) and yield photon flux (YPF). PPF values all photons from 400 to 700 nm equally, while YPF weights photons in the range from 360 to 760 nm based on a plant's photosynthetic response. PAR as described by PPF does not distinguish between different wavelengths between 400 and 700 nm, and assumes that wavelengths outside this range have zero photosynthetic action. If the exact spectrum of the light is known, the photosynthetic photon flux density (PPFD) values (in μmol·s−1·m−2) can be modified by applying different weighting factors to different wavelengths. This results in a quantity called the yield photon flux (YPF). The YPF curve shows that photons around 610 nm (orange-red) yield the highest amount of photosynthesis per photon. However, because short-wavelength photons carry more energy per photon, the maximum amount of photosynthesis per incident unit of energy is at a longer wavelength, around 650 nm (deep red).

It has been noted that there is considerable misunderstanding over the effect of light quality on plant growth. Many manufacturers claim significantly increased plant growth due to light quality (high YPF). The YPF curve indicates that orange and red photons between 600 and 630 nm can result in 20% to 30% more photosynthesis than blue or cyan photons between 400 and 540 nm. But the YPF curve was developed from short-term measurements made on single leaves in low light. More recent, longer-term studies with whole plants in higher light indicate that light quality may have a smaller effect on plant growth rate than light quantity. Blue light, while not delivering as many photons per joule, encourages leaf growth and affects other outcomes.

The conversion between energy-based PAR and photon-based PAR depends on the spectrum of the light source (see photosynthetic efficiency). Conversion factors from watts can be tabulated for black-body spectra truncated to the range 400-700 nm, together with the luminous efficacy of such sources and the fraction of a real black-body radiator that is emitted as PAR. For example, a light source of 1000 lm at a color temperature of 5800 K would emit approximately 1000/265 = 3.8 W of PAR, which is equivalent to 3.8 × 4.56 ≈ 17.3 μmol/s. For a black-body light source at 5800 K, such as the sun approximately is, a fraction 0.368 of its total emitted radiation is emitted as PAR. For artificial light sources, which usually do not have a black-body spectrum, these conversion factors are only approximate.
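Here is a short sketch of the two conversions used above: the 5800 K worked example (265 lm per PAR watt and 4.56 μmol per PAR joule, taken from the text and valid only for that spectrum) and the daily light integral, which accumulates PPFD over the photoperiod.

```python
# Hedged sketch: the text's 5800 K lumen-to-PAR factors plus the DLI
# definition. The factors apply only to that black-body spectrum.

def lumens_to_par(lumens: float, lm_per_par_watt: float = 265.0,
                  umol_per_joule: float = 4.56) -> tuple:
    """Return (PAR watts, umol photons per second)."""
    watts = lumens / lm_per_par_watt
    return watts, watts * umol_per_joule

def daily_light_integral(ppfd_umol: float, photoperiod_h: float) -> float:
    """DLI in mol m-2 d-1 for a constant PPFD over the photoperiod."""
    return ppfd_umol * photoperiod_h * 3600.0 / 1e6

print(lumens_to_par(1000.0))          # (~3.77 W, ~17.2 umol/s), cf. 17.3 above
print(daily_light_integral(200, 16))  # 200 umol m-2 s-1 for 16 h -> 11.52
```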
Besides the amount of radiation reaching a plant in the PAR region of the spectrum, it is also important to consider the quality of such radiation. Radiation reaching a plant contains entropy as well as energy, and combining those two concepts the exergy can be determined. This sort of analysis is known as exergy analysis or second-law analysis, and the exergy represents a measure of the useful work, i.e., the useful part of radiation that can be transformed into other forms of energy. When the spectral distribution of the exergy of radiation is evaluated, the useful PAR content turns out to be about 8.3% lower than the value considered until now, as a direct consequence of the fact that the organisms which use solar radiation also emit radiation as a consequence of their own temperature. Therefore, the conversion factor of an organism will differ depending on its temperature, and the exergy concept is more suitable than the energy one.

Researchers at Utah State University compared measurements for PPF and YPF using different types of equipment. They measured the PPF and YPF of seven common radiation sources with a spectroradiometer, then compared the results with measurements from six quantum sensors designed to measure PPF and three quantum sensors designed to measure YPF. They found that the PPF and YPF sensors were least accurate for narrow-band sources (narrow spectrum of light) and most accurate for broad-band sources (fuller spectra of light). They found that PPF sensors were significantly more accurate under metal halide, low-pressure sodium and high-pressure sodium lamps than YPF sensors (>9% difference). Both YPF and PPF sensors were very inaccurate (>18% error) when used to measure light from red light-emitting diodes.

Photobiologically active radiation (PBAR) is a range of light energy beyond and including PAR. Photobiological photon flux (PBF) is the metric used to measure PBAR. Many grow lights lack an integrating-sphere test report, which means that values such as the photosynthetic photon flux (PPF) are estimated by the manufacturer. False advertising of photosynthetic photon efficacy (PPE) values, i.e., PPF (μmol·s−1) divided by input power (W), by grow light manufacturers can be caught simply by recalculating the value as a check. Furthermore, some manufacturers state the photosynthetic photon flux density (PPFD) value at the center light-emitting diode (LED) instead of the PPF over an area of one square meter.
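A sketch of the recalculation check suggested above: deriving a grow light's photosynthetic photon efficacy from its claimed PPF and wall-plug power. The datasheet numbers here are hypothetical.

```python
# Hedged sketch: sanity-check a manufacturer's PPE claim.
# All inputs are hypothetical, not from any real product.

def ppe_umol_per_joule(ppf_umol_s: float, input_power_w: float) -> float:
    """PPE = PPF (umol s-1) / electrical input power (W)."""
    return ppf_umol_s / input_power_w

claimed_ppf = 1700.0   # umol s-1, hypothetical datasheet value
wall_power = 645.0     # W, measured at the plug
print(f"PPE = {ppe_umol_per_joule(claimed_ppf, wall_power):.2f} umol/J")
# ~2.64 umol/J; an implausibly high result would suggest inflated claims.
```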
https://dir.md/wiki/Photosynthetically_active_radiation?host=en.wikipedia.org
Vegetable crops such as cucumber and tomato are grown widely throughout the world, not only in the field but also in protected farmland. Sensitive responses of many vegetables to environmental conditions such as light, air temperature, relative humidity, and CO2 concentration have been widely reported in past years. Among these environmental factors, light is considered the most important for vegetable growth and development, especially in protected farmland. Therefore, many studies on the effects of the light environment, including light intensity, light quality, photoperiod, and light direction, on vegetable growth and development have been carried out in order to optimize the environmental conditions for high-yield and high-quality vegetable production in protected farmland. This review covers recent advances in light environment control for vegetable production in protected farmland and proposes prospects for future research.

1. INTRODUCTION

Vegetables are economically important crops and are now grown most commonly throughout the world using not only field culture but also protected-farmland culture. Vegetable crops, like rice, wheat and maize, are very sensitive to unfavorable environmental conditions, and even slight stress can influence their growth and yield. A series of studies on the relationship between environmental factors (e.g., light, air temperature, relative humidity and CO2 concentration) and vegetable production has therefore been carried out in order to optimize the environmental conditions for high-yield and high-quality vegetable production. Among these environmental factors, light is thought to be the most important for vegetable growth and development, particularly in protected farmland. This review focuses on recent advances in the control of the light environment, including light intensity, light quality, photoperiod and light direction, for vegetable production in protected farmland. Perspectives for future research are also proposed.

2. LIGHT INTENSITY

The light intensity needed for the maximum rate of photosynthesis is quite distinct, depending on vegetable cultivar and ambient conditions. Light intensity that is too low cannot satisfy the requirements of photosynthetic capacity and thus results in insufficient synthesis of photoassimilates, which severely affects vegetable growth, development, and yield. On the contrary, light intensity that is too high may cause a significant decline in the photochemical activity of photosystem II or photosystem I, which is known as photoinhibition [1-4]. It usually occurs when the light-dependent reactions of photosynthesis produce ATP and NADPH in excess of what can be consumed by the reactions of dark carbon metabolism [5,6]. Healthy leaves growing under favorable conditions can experience intense light without extensive photodamage. However, when the environmental conditions do not promote carbon fixation, even weak or moderate light may become harmful [8,9]. For example, weak light can cause severe photoinhibition in cold-sensitive plants when they are exposed to low air temperature. Interestingly, a light requirement has recently been reported for cold-tolerant plants when they suffer from and acclimate to cold stress [11,12], indicating strong crosstalk between light and air temperature signals.
Supplemental lighting, an important light environment control technology widely used in protected farmland, not only improves light conditions but also increases air and leaf temperature in the plant canopy. The increased air and leaf temperature may influence leaf photosynthesis and senescence positively or negatively, depending on the season. As a result, the efficiency of supplemental lighting fluctuates with the seasons, because the environmental conditions in protected farmland are more easily controlled at low outdoor temperature in winter. In order to obtain high-quality vegetable plants under variable light conditions, the photothermal ratio (PTR) has been proposed. The photothermal ratio, the ratio of radiant energy (moles of photosynthetically active photons per m2) to thermal energy (degree-days), is now regarded as a useful tool to identify the optimal balance between light and air temperature. Therefore, finding the optimal PTR for vegetables grown in protected farmland with artificial light may improve the efficiency of artificial light, as sketched in the worked example below.
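As a worked example of the photothermal ratio defined above: mol of photosynthetically active photons per square meter, per degree-day of thermal accumulation. The base temperature used here is a hypothetical choice for illustration.

```python
# Hedged sketch of the photothermal ratio (PTR): DLI per degree-day.
# The base temperature is hypothetical; crop-specific values differ.

def photothermal_ratio(dli_mol_m2_d: float, mean_temp_c: float,
                       base_temp_c: float = 0.0) -> float:
    degree_days = max(mean_temp_c - base_temp_c, 0.0)
    if degree_days == 0.0:
        raise ValueError("no thermal accumulation on this day")
    return dli_mol_m2_d / degree_days

# The same DLI supplies more light per unit of thermal development time
# on a cool day than on a warm one:
print(photothermal_ratio(12.0, 24.0))  # 0.5 mol per degree-day
print(photothermal_ratio(12.0, 16.0))  # 0.75 mol per degree-day
```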
3. LIGHT QUALITY

Light quality is thought to affect many plant physiological processes during growth and development, particularly photosynthesis and morphology. Light quality alters plant photosynthesis through effects on the activity of the photosynthetic apparatus in leaves and on the expression and/or activity of the Calvin cycle enzymes. Cucumbers grown under monochromatic light, including purple, blue, green, yellow and red, show reduced growth, CO2 assimilation rate and quantum yield of photosystem II electron transport compared with plants grown under white light, and these reductions are more pronounced in plants under green, yellow and red light. Interestingly, plants grown under purple and blue light have higher stomatal conductance, higher total and initial Rubisco activities and higher transcript levels of genes encoding key enzymes of the Calvin cycle, together with higher contents of total soluble sugars, sucrose and starch, than plants grown under white light, whereas in plants grown under green, yellow and red light these parameters decline. Yu and Ong reported that the CO2 assimilation rate, dark respiration, total biomass and relative growth rate of seedlings grown under monochromatic radiation were significantly lower than those of seedlings grown under broad-spectrum light. Strong ultraviolet-B radiation can cause serious damage to plant photosynthesis, such as losses of both activity and content of Rubisco and sedoheptulose-1,7-bisphosphatase, inactivation of the photosynthetic electron transport chain and induction of stomatal closure.

The involvement of light quality in the regulation of plant morphology through photoreceptors has been widely studied over the past decades. The phytochromes, an important photoreceptor family comprising phyA to phyE in Arabidopsis [21-23], can reversibly photoconvert between two conformers: the inactive red light-absorbing Pr form and the biologically active far-red light-absorbing Pfr form. Pr to Pfr photoconversion takes place upon absorption of red light photons, and reversion of Pfr to Pr occurs in far red-enriched environments and also in the dark. Seedling hypocotyl elongation, a common phenomenon that severely decreases seedling commercial quality in seedling production, is a well-established light-regulated response mediated by phytochromes when seedlings are grown in continuous darkness or a far red-enriched environment.

Recently, some new findings suggest that plant growth and development might be coregulated by photoreceptors and other endogenous factors such as hormones and the temperature-sensing system. For example, phytochromes and gibberellins have been reported to act in coordination to regulate multiple aspects of Arabidopsis development, such as flowering and hypocotyl elongation. A significant interaction between the day/night temperature difference (DIF) and end-of-day (EOD) light quality on growth, morphology, dry matter content and carbohydrate status has been observed in cucumber. Positive DIF can induce responses in cucumber elongation growth, dry matter and carbohydrate accumulation similar to those of EOD far-red light, and phytochrome status can further interact with the responses to alternating day and night temperature. Cases in which light quality is involved in plant stress tolerance have also been reported in previous studies. For example, a low red to far-red ratio light signal increases CBF (C-repeat/DRE-Binding Factor) expression in Arabidopsis, this increase being sufficient to confer freezing tolerance at temperatures higher than those required for cold acclimation. All these results indicate that it may be essential to choose a suitable light quality to maximize plant photosynthetic performance, growth and, eventually, production when artificial supplemental light is used in protected farmland.

4. PHOTOPERIOD

Photoperiod can also influence plant growth and development, especially sex expression. Floral induction and differentiation, the most important developmental transition from vegetative to reproductive growth in the life cycle of higher plants, can directly affect agricultural yield by determining the time of flowering, the number of flowers and fruits, and the diversion of resources from vegetative growth [29,30]. A longer photoperiod increases the daily integrated photosynthetic photon flux, so more photosynthetic product can be synthesized, possibly relieving the carbohydrate competition between vegetative and reproductive organs. There have been many reports on the effects of photoperiod on sex expression in horticultural crops such as cucumber, a model plant often used for sex expression research in flowering plants [31-33]. The number of pistillate flowers is increased under a short photoperiod in monoecious cucumber and in androecious cucumber. However, Cantliffe reported that photoperiod had no effect on sex expression in cucumber and gherkin. Jutamanee et al. reported that the influence of photoperiod on sex expression depends on the genetic background. Short-day treatment promotes pistillate flower formation and suppresses staminate flower formation in monoecious cucumbers owing to increased evolution of endogenous ethylene, whereas long-day treatment has the reverse effect. Photoperiod has no significant effect on sex expression in gynoecious cucumbers. Photoperiod can influence leaf senescence as well. Many studies have suggested that leaf senescence occurs as a consequence of a shorter photoperiod, whereas an extended photoperiod delays leaf senescence [39,40]. This evidence suggests that it is necessary to take both light intensity and photoperiod into consideration when supplemental lighting is applied for crop production in protected farmland.
The mean daily light integral (DLI) has thus been proposed to evaluate the combined effects of light intensity and photoperiod on plant growth and development when supplemental lighting is used in crop production practice [41-43]. The DLI for optimal plant growth differs considerably between species and canopies. Finding the optimal DLI for horticultural crops grown under different conditions may be an effective way to improve yield on the one hand and, on the other, to reduce production costs such as electricity, which is considered the greatest cost in protected farmland with artificial light.

5. LIGHT DIRECTION

Light direction may be involved in the regulation of plant growth and development. The ability of leaves to harvest light on different surfaces or from different directions (i.e., direct vs. diffuse rays) is determined by the interplay of the structural, biochemical and physiological properties of their component cell layers. Depending on azimuth, the vertical leaves of plants such as Eryngium yuccifolium and Silphium terebinthinaceum may receive similar daily irradiance on the adaxial and abaxial surfaces and typically have similar photosynthetic rates when irradiated on either surface. These functionally symmetrical leaves of vertical-leaf species are amphistomatous and have unifacial leaf anatomy, with palisade cells beneath both the adaxial and abaxial epidermis. In contrast, the light environment of horizontal-leaf species such as Ambrosia trifida and Solidago canadensis is highly asymmetrical, and these leaves typically have photosynthetic rates that are 30%-50% lower when irradiated on the abaxial surface [44,45]. These functionally asymmetrical leaves of horizontal-leaf species are anatomically bifacial, with one or more layers of palisade mesophyll beneath the adaxial epidermis and spongy mesophyll beneath the abaxial epidermis. Differential responses of adaxial and abaxial leaf surfaces to light have been attributed to differences in the biochemical characteristics of stomata, palisade and spongy mesophyll cells [46-48]. These differences are similar to the characteristics of sun and shade leaves described by Boardman and Bjorkman. Horizontal leaves irradiated predominantly on the adaxial side during development have chloroplasts in palisade cells with sun-type properties, including a higher chlorophyll a/b ratio, a lower ratio of appressed to non-appressed thylakoid membranes, and higher electron transport and CO2 fixation rates than found in spongy mesophyll cells [51-53]. When horizontal leaves are irradiated on the abaxial side during expansion, sun-leaf and shade-leaf properties become reversed; palisade cells are populated with shade-type chloroplasts and spongy mesophyll cells with sun-type chloroplasts [52,54]. As a result, the light sensitivity of the abaxial side of the leaf, expressed as stomatal conductance, is significantly increased, indicating that adjustment of light direction might be a feasible way to increase crop canopy photosynthesis and thus improve production.

6. LOOK TO THE FUTURE

Artificial light, including top lighting and interlighting, is now widely used in protected farmland at high latitudes, and as a result the yields of horticultural crops such as tomato and cucumber have increased significantly [13,15,56]. However, some issues should still be studied further.
For example, supplemental lighting undoubtedly increases vegetable yield significantly, but production costs rise at the same time because of the significantly increased electricity consumption. To mitigate this problem, several possible methods can be adopted in vegetable production. The first is to introduce more efficient artificial light sources, such as light-emitting diodes (LEDs), to decrease electricity costs. The LED is a semiconductor light source with many advantages, including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching, over traditional light sources such as the high-pressure sodium lamp (HPS) and fluorescent lamps (FL), the two kinds of artificial light source now widely used in protected farmland. Some research on the effects of LEDs on crop growth and yield has been performed over the past several years, but there is still a great gap between this research and the application of LEDs in protected farmland. Therefore, many efforts should be made in the future.

Improving present lighting designs may improve the use efficiency of artificial light and decrease electricity costs. Taking cucumber as an example, previous studies have shown that interlighting increases cucumber yield and quality more significantly than top lighting because of the more even distribution of artificial light. However, further increasing the interlight proportion (Top 52% + Interlight 48%), which gives an even more uniform light distribution in the cucumber canopy, had only minor effects on yield and energy use efficiency compared with the Top 76% + Interlight 24% treatment, possibly because of the reduction of artificial light from the vertical direction. This evidence indicates that light direction might be optimized. It is well known that light absorption is severely influenced by the incident angle and reaches its maximum when the incident angle is 0°. Light use efficiency may therefore be further improved if the angle of incidence on vegetable leaves could be adjusted to about 0° by optimizing the lighting equipment. Compared with cucumbers grown with fluorescent lamps installed vertically, faster fruit growth and higher yield have been observed when cucumbers were grown with fluorescent lamps installed parallel to the cucumber leaves (unpublished data), indicating that supplemental lighting based on leaf angle may be feasible. Besides artificial lighting, improving the natural light transmittance and selectivity of cover materials may be a more economical way to improve vegetable production in protected farmland in middle- and low-latitude regions such as East and Central China, because solar light in these regions is sufficient to support the production of most vegetables. Recently, a new project named “Study on Environment-friendly Functional Agricultural Film and Related Products”, supported by the National Science and Technology Ministry, has been started in order to develop new types of agricultural film and related products more suitable for environmental control in the protected farmland of China; it is expected to contribute greatly to the modernization of Chinese agriculture.

7. ACKNOWLEDGEMENTS

This work was supported by The National Key Technology R&D Program of China (2012BAD11B01) and the China Agriculture Research System (CARS-25-D-03).

REFERENCES

- García-Plazaola, J.I., Becerril, J.M., Hernández, A., Niinemets, Ü. and Kollist, H. (2004) Acclimation of antioxidant pools to the light environment in a natural forest canopy. New Phytologist, 163, 87-97. doi:10.1111/j.1469-8137.2004.01096.x
- Szabo, I., Bergantino, E. and Giacometti, G.M. (2005) Light and oxygenic photosynthesis: Energy dissipation as a protection mechanism against photo-oxidation. EMBO Reports, 6, 629-634. doi:10.1038/sj.embor.7400460
- Kreslavski, V.D., Carpentier, R., Klimov, V.V., Murata, N. and Allakhverdiev, S.I. (2007) Molecular mechanism of stress resistance of the photosynthetic apparatus. Biochemistry (Moscow) Supplement Series A: Membrane and Cell Biology, 1, 185-207. doi:10.1134/S1990747807030014
- Murata, N., Takahashi, S., Nishiyama, Y. and Allakhverdiev, S.I. (2007) Photoinhibition of photosystem II under environmental stress. Biochimica et Biophysica Acta, 1767, 414-421. doi:10.1016/j.bbabio.2006.11.019
- Demmig-Adams, B. and Adams, W.W. (2000) Photosynthesis: Harvesting sunlight safely. Nature, 403, 371-374. doi:10.1038/35000315
- Li, Y.Y., Sperry, J.S. and Shao, M.G. (2009) Hydraulic conductance and vulnerability to cavitation in corn (Zea mays L.) hybrids of differing drought resistance. Environmental and Experimental Botany, 66, 341-346. doi:10.1016/j.envexpbot.2009.02.001
- Govindachary, S., Bukhov, N.G., Joly, D. and Carpentier, R. (2004) Photosystem II inhibition by moderate light under low temperature in intact leaves of chilling-sensitive and -tolerant plants. Physiologia Plantarum, 121, 322-333. doi:10.1111/j.0031-9317.2004.00305.x
- Kudoh, H. and Sonoike, K. (2002) Irreversible damage to photosystem I by chilling in the light: Cause of the degradation of chlorophyll after returning to normal growth temperature. Planta, 215, 541-548. doi:10.1007/s00425-002-0790-9
- Gerotto, C., Alboresi, A., Giacometti, G.M., Bassi, R. and Morosinotto, T. (2011) Role of PSBS and LHCSR in Physcomitrella patens acclimation to high light and low temperature. Plant, Cell and Environment, 34, 922-932. doi:10.1111/j.1365-3040.2011.02294.x
- Krause, G.H. (1994) Photoinhibition induced by low temperatures. In: Baker, N.R. and Bowyer, J.R., Eds., Photoinhibition of Photosynthesis: From Molecular Mechanisms to the Field, BIOS Scientific Publishers, Oxford.
- Kim, H.J., Kim, Y.K., Park, J.Y. and Kim, J. (2002) Light signaling mediated by phytochrome plays an important role in cold-induced gene expression through the C-repeat/dehydration responsive element (C/DRE) in Arabidopsis thaliana. Plant Journal, 29, 693-704. doi:10.1046/j.1365-313X.2002.01249.x
- Catalá, R., Medina, J. and Salinas, J. (2011) Integration of low temperature and light signaling during cold acclimation response in Arabidopsis. Proceedings of the National Academy of Sciences of the United States of America, 108, 16475-16480. doi:10.1073/pnas.1107161108
- Hovi-Pekkanen, T. and Tahvonen, R. (2008) Effects of interlighting on yield and external fruit quality in year-round cultivated cucumber. Scientia Horticulturae, 116, 152-161. doi:10.1016/j.scienta.2007.11.010
- Liu, B. and Heins, R.D. (2002) Photothermal ratio affects plant quality in “Freedom” poinsettia. Journal of the American Society for Horticultural Science, 127, 20-26.
- Moe, R., Grimstad, S.O. and Gislerød, H.R. (2006) The use of artificial light in year round production of greenhouse crops in Norway. Acta Horticulturae, 711, 35-42.
- Wang, H., Gu, M., Gui, J.X., Shi, K., Zhou, Y.H. and Yu, J.Q. (2009) Effects of light quality on CO2 assimilation, chlorophyll-fluorescence quenching, expression of Calvin cycle genes and carbohydrate accumulation in Cucumis sativus. Journal of Photochemistry and Photobiology B: Biology, 96, 30-37. doi:10.1016/j.jphotobiol.2009.03.010
- Yu, H. and Ong, B.L. (2003) Effect of radiation quality on growth and photosynthesis of Acacia mangium seedlings. Photosynthetica, 41, 349-355. doi:10.1023/B:PHOT.0000015458.11643.b2
- Allen, D.J., McKee, I.F., Farage, P.K. and Baker, N.R. (1997) Analysis of limitations to CO2 assimilation on exposure of leaves of two Brassica napus cultivars to UV-B. Plant, Cell and Environment, 20, 633-640. doi:10.1111/j.1365-3040.1997.00093.x
- Allen, D.J., Nogués, S. and Baker, N.R. (1998) Ozone depletion and increased UV-B radiation: Is there a real threat to photosynthesis? Journal of Experimental Botany, 49, 1775-1788.
- Nogués, S. and Baker, N.R. (1995) Evaluation of the role of damage to photosystem II in the inhibition of CO2 assimilation in pea leaves on exposure to UV-B. Plant, Cell and Environment, 18, 781-787. doi:10.1111/j.1365-3040.1995.tb00581.x
- Rockwell, N.C., Su, Y.S. and Lagarias, J.C. (2006) Phytochrome structure and signaling mechanisms. Annual Review of Plant Biology, 57, 837-858. doi:10.1146/annurev.arplant.56.032604.144208
- Schafer, E. and Nagy, F. (2006) Photomorphogenesis in plants and bacteria. Springer, Dordrecht. doi:10.1007/1-4020-3811-9
- Quail, P.H. (2010) Phytochromes. Current Biology, 20, R504-R507. doi:10.1016/j.cub.2010.04.014
- Leivar, P., Monte, E., Cohn, M.M. and Quail, P.H. (2012) Phytochrome signaling in green Arabidopsis seedlings: Impact assessment of a mutually negative phyB-PIF feedback loop. Molecular Plant, 5, 208-223. doi:10.1093/mp/sss031
- Soy, J., Leivar, P., González-Schain, N., Sentandreu, M., Prat, S., Quail, P.H. and Monte, E. (2012) Phytochrome-imposed oscillations in PIF3 protein abundance regulate hypocotyl growth under diurnal light/dark conditions in Arabidopsis. Plant Journal, 71, 390-401.
- Facella, P., Daddiego, L., Giuliano, G. and Perrotta, G. (2012) Gibberellin and auxin influence the diurnal transcription pattern of photoreceptor genes via CRY1a in tomato. PLoS ONE, 7, e30121. doi:10.1371/journal.pone.0030121
- Xiong, J.Q., Patil, G.G., Moe, R. and Torre, S. (2011) Effects of diurnal temperature alternations and light quality on growth, morphogenesis and carbohydrate content of Cucumis sativus L. Scientia Horticulturae, 128, 54-60. doi:10.1016/j.scienta.2010.12.013
- Franklin, K.A. and Whitelam, G.C. (2007) Light-quality regulation of freezing tolerance in Arabidopsis thaliana. Nature Genetics, 39, 1410-1413. doi:10.1038/ng.2007.3
- Terefe, D. and Tatlioglu, T. (2005) Isolation of a partial sequence of a putative nucleotide sugar epimerase, which may involve in stamen development in cucumber (Cucumis sativus L.). Theoretical and Applied Genetics, 111, 1300-1307. doi:10.1007/s00122-005-0058-4
- Chen, H.M., Tian, Y., Lu, X.Y. and Liu, X.H. (2011) The inheritance of two novel subgynoecious genes in cucumber (Cucumis sativus L.). Scientia Horticulturae, 127, 464-467. doi:10.1016/j.scienta.2010.11.004
- Yamasaki, S., Fujii, N. and Takahashi, H. (2003) Photoperiodic regulation of CS-ACS2, CS-ACS4 and CS-ERS gene expression contributes to the femaleness of cucumber flowers through diurnal ethylene production under short-day conditions. Plant, Cell and Environment, 26, 537-546. doi:10.1046/j.1365-3040.2003.00984.x
- Tanurdzic, M. and Banks, J.A. (2004) Sex-determining mechanisms in land plants. The Plant Cell, 16, S61-S71. doi:10.1105/tpc.016667
- Miao, M.M., Yang, X.G., Han, X.S. and Wang, K.S. (2011) Sugar signaling is involved in the sex expression response of monoecious cucumber to low temperature. Journal of Experimental Botany, 62, 797-804. doi:10.1093/jxb/erq315
- Atsmon, D. and Galun, E. (1962) Physiology of sex in Cucumis sativus (L.): Leaf age patterns and sexual differentiation of floral buds. Annals of Botany, 26, 137-146.
- Rudich, J., Baker, L.R., Scott, J.W. and Sell, H.M. (1976) Phenotypic stability and ethylene evolution in androecious cucumber. Journal of the American Society for Horticultural Science, 101, 48-51.
- Cantliffe, D.J. (1981) Alteration of sex expression in cucumber due to changes in temperature, light intensity, and photoperiod. Journal of the American Society for Horticultural Science, 106, 133-136.
- Jutamanee, K., Saito, T. and Subhadrabandhu, S. (1994) Control of sex expression in cucumber by photoperiod, defoliation, and plant growth regulators. Kasetsart Journal (Natural Science), 28, 626-631.
- Wu, T., Qin, Z.W., Zhou, X.Y., Feng, Z. and Du, Y.L. (2010) Transcriptome profile analysis of floral sex determination in cucumber. Journal of Plant Physiology, 167, 905-913. doi:10.1016/j.jplph.2010.02.004
- Rosenthal, S.I. and Camm, E.L. (1996) Effects of air temperature, photoperiod and leaf age on foliar senescence of western larch (Larix occidentalis Nutt.) in environmentally controlled chambers. Plant, Cell and Environment, 19, 1057-1065. doi:10.1111/j.1365-3040.1996.tb00212.x
- Zhao, H., Li, Y., Duan, B., Korpelainen, H. and Li, C. (2009) Sex-related adaptive responses of Populus cathayana to photoperiod transitions. Plant, Cell and Environment, 32, 1401-1411. doi:10.1111/j.1365-3040.2009.02007.x
- Moccaldi, L.A. and Runkle, E.S. (2007) Modeling the effects of temperature and photosynthetic daily light integral on growth and flowering of Salvia splendens and Tagetes patula. Journal of the American Society for Horticultural Science, 132, 283-288.
- Oh, W., Cheon, I.H., Kim, K.S. and Runkle, E.S. (2009) Photosynthetic daily light integral influences flowering time and crop characteristics of Cyclamen persicum. HortScience, 44, 341-344.
- Garland, K.F., Burnett, S.E., Stack, L.B. and Zhang, D.L. (2010) Minimum daily light integral for growing high-quality coleus. HortTechnology, 20, 929-933.
- DeLucia, E.H., Shenoi, H.D., Naidu, S.L. and Day, T.A. (1991) Photosynthetic symmetry of sun and shade leaves of different orientations. Oecologia, 87, 51-57. doi:10.1007/BF00323779
- Terashima, I. (1989) Productive structure of a leaf. In: Briggs, W.R., Ed., Plant Biology, Photosynthesis, 8, Alan R. Liss Inc., New York.
- Poulson, M.E. and DeLucia, E.H. (1993) Photosynthetic and structural acclimation to light direction in vertical leaves of Silphium terebinthinaceum. Oecologia, 95, 393-400. doi:10.1007/BF00320994
- Soares, A.S., Driscoll, S.P., Olmos, E., Arrabaca, M.C. and Foyer, C.H. (2008) Adaxial/abaxial specification in the regulation of photosynthetic CO2 assimilation with respect to light orientation and growth with CO2 enrichment in Paspalum dilatatum leaves. New Phytologist, 177, 186-198.
- Wang, Y., Noguchi, K. and Terashima, I. (2008) Distinct light responses of the adaxial and abaxial stomata in intact leaves of Helianthus annuus L. Plant, Cell and Environment, 31, 1307-1316. doi:10.1111/j.1365-3040.2008.01843.x
- Boardman, N.K. (1977) Comparative photosynthesis of sun and shade plants. Annual Review of Plant Physiology, 28, 355-377. doi:10.1146/annurev.pp.28.060177.002035
- Bjorkman, O. (1981) Responses to different quantum flux densities. In: Lange, O.L., Nobel, P.S., Osmond, C.B. and Ziegler, H., Eds., Physiological Plant Ecology I, Encyclopedia of Plant Physiology, New Series, 12A, Springer, New York. doi:10.1007/978-3-642-68090-8_4
- Schreiber, U., Fink, R. and Vidaver, W. (1977) Fluorescence induction in whole leaves: Differentiation between the two leaf sides and adaptation to different light regimes. Planta, 133, 121-129. doi:10.1007/BF00391909
- Terashima, I. and Inoue, Y. (1984) Comparative photosynthetic properties of palisade tissue chloroplasts and spongy tissue chloroplasts of Camellia japonica L.: Functional adjustment of the photosynthetic apparatus to light environment within a leaf. Plant and Cell Physiology, 25, 555-563.
https://file.scirp.org/Html/6-3000347_24494.htm
(Centritto, 2002) and the earth in general. Forest growth, survival and structure are some of the major concerns arising from climate change, as an output of the interaction between forests and the environment (Chmura et al., 2011). An understanding of the forest canopy is important because it operates as a medium for the energy, mass and momentum exchanges between the environment and the forest ecosystem (Song et al., 2010). It contains myriads of plant populations, including parasites, hemi-epiphytes, and vascular and non-vascular epiphytes (Benzing, 1990; Nadkarni et al., 2004). In tropical forests, vascular epiphytes are a vital component of the forests' biological diversity (Wester et al., 2011). The existence of epiphytes at the plant-atmosphere interface makes them vulnerable to climate change; consequently, other components of the forest will be almost directly affected by any negative impact on the epiphytes (Zotz and Bader, 2009). There is also increasing awareness of the view that the survival and continuation of epiphyte communities are gradually being put at risk (Nadkarni, 1992). Besides, excessive logging and land use in tropical rain forests, especially mountain rain forests, are harmful and have a dangerous effect on the epiphyte community (Barthlott et al., 2001). Plants in this natural environment are at higher risk of damage from environmental stresses (Huang, 2006). Moreover, epiphytes are subjected to multiple abiotic and biotic stresses of varying intensities and durations. The location of canopy-dwelling plants like epiphytes makes them sensitive to environmental stress (Oberbauer et al., 1996): epiphytes mainly inhabit a complex light environment (Martin et al., 2001), and the main abiotic limitation of the epiphytic habitat is the insufficiency of water (Zotz and Hietz, 2001). When the environment of epiphytes is affected by the climate, the plants' physiological responses are directly influenced. According to Schurr et al. (2006), in any state of environmental change, photosynthesis and growth processes are likely to be affected. Climate change affects the overall functions and operation of physiological processes such as growth, transpiration and respiration (Brouder and Volenec, 2008). In addition, the growth of plants is mainly influenced by genetic traits and by the surroundings (Kramer and Kozlowski, 1979). In fact, plants respond to environmental stresses through their physiological features, such as modified organ growth, or by using various ways to prevail against the stresses (Schurr et al., 2006). Therefore, it is necessary to discover how environmental factors, especially light and water, affect epiphytes, as they are mostly threatened by these two types of stress. Understanding their responses is vital for understanding epiphyte performance in fluctuating environments. This review focuses on the nature, contribution and response of epiphytic plants to light and water stress. Even though the paper touches on the entire globe, there is a clear focus on the growth and physiological responses of epiphytes to light and water stress. These observations are supplemented with reviews of epiphytic adaptive strategies through Crassulacean Acid Metabolism (CAM) in resisting such stresses.

EPIPHYTES

Epiphytes are plants that settle on other plants, and their habitats vary from the terrestrial understory to the edge of the tree canopy (Zotz and Hietz, 2001).
Epiphytic plants use other plants, such as trees, shrubs and woody vines, as hosts. In contrast with parasites, they are fully autotrophic (Benzing, 1998) and are considered true epiphytes (Reinert, 1998). Ferns, orchids and bromeliads are embraced as epiphytes, with characteristics including thick, waxy leaves that allow them to require only small amounts of water (Ainuddin, 2007). Plenty of epiphytic plants grow in wet tropical countries (Freiberg, 2001; Benzing, 1998, 1990), and the vascular epiphytes are a distinct characteristic element of tropical moist forests (Hietz et al., 2002). Additionally, epiphytes are globally known as dominant elements of the tropical rainforest (Fayle et al., 2009), and most vascular epiphytes are tropical plants (Zotz and Hietz, 2001). Benzing (1990) also mentioned that epiphytes rarely exist in drier places, though he noted a few types of epiphytes that can grow in arid regions, such as the orchids and the bromeliads, supported by plants like the Mexican and Peruvian cacti. According to Diaz et al. (2010), administrative practices in Chilean temperate rainforests prominently overlook epiphytic plant communities; epiphytes were removed owing to the misperception that they are indicators of deteriorating tree health and reduced timber production. In fact, epiphytes are central to tropical rainforest ecology, contributing to many forest functions such as storing rainfall water and holding nutrients (Benzing, 2004). Epiphytes are vital contributors to the hydrological and nutrient cycles of a forest system (Benzing, 1998), since they capture and hold nutrients from their surroundings (Benzing, 1990). Epiphytes play a significant role in influencing the microclimate of their nearby area, particularly the canopy (Stuntz et al., 2002). Moreover, according to Rosdi and Ainuddin (2004), the microclimate of the surrounding area appears to be affected by the presence of plants. For example, the surrounding microclimatic state at the forest floor, balanced within a much drier and warmer atmosphere, reflects the contribution of canopy epiphytes and their accumulated dead organic matter (Freiberg, 2001; Freiberg and Turton, 2007). Both vascular and non-vascular epiphytes contribute to forest biomass: according to Nadkarni (1984), these epiphytic plants contribute more than a few tons of biomass per hectare. Epiphytic plants also elevate the carbon uptake and productivity of the forest (Diaz et al., 2010). For instance, a study of the epiphyte Asplenium nidus showed that it can accumulate dry mass of up to one tonne per hectare (Ellwood et al., 2002). Epiphytism occurs among both vascular and non-vascular plants, comprising pteridophytes and flowering plants in the former and bryophytes, algae and lichens in the latter (Reinert, 1998). Generally, vascular epiphyte varieties are mostly localized in tropical regions (Dubuisson et al., 2009). Furthermore, Otto et al. (2009) reported that many epiphytes, mainly ferns, are fully acclimatized to the aerial environment. According to Lorenzo et al. (2010), almost 80% of epiphyte species fit into just four families: Bromeliaceae, Orchidaceae, Polypodiaceae and Araceae.

PLANT STRESS

Climate change is a present-day issue that manipulates the ecological features of an ecosystem.
Changes in the environment have huge effects, especially on plant growth and distribution. Interaction between plants and the environment influences plant physiological functions, thereby affecting some ecological processes (Hegland et al., 2009). When the environment of a plant is affected by the climate, the natural processes of the plant are directly influenced. Plant stress is always driven by adverse environmental conditions, such as insufficient water supply, disease, lack of nutrients or insect damage. There are many types of stress caused by changes in the surrounding area, whether minor or major. These abiotic stresses affect growth and, in the long run, threaten plant survival. The canopy acts as a filter for the plants beneath it and reduces the solar energy received at the ground (Ainuddin and Lili, 2005). Environmental stresses such as water deficit and high light intensity may affect epiphytes that live beneath the forest canopy. A species of Platycerium bifurcatum develops in diverse habitats that are frequently exposed to drought and high-irradiance stress (Rut et al., 2008). The surrounding microclimate of the epiphyte's area therefore affects mortality and, as a result, the distribution of different species (Fayle et al., 2009). Epiphytes have also been reported to be the first populations to decline when their ecosystem is at risk and disturbed (Dubuisson et al., 2009). However, epiphytes also exhibit ecophysiological, morphological and anatomical adaptations to survive under these harsh canopy conditions (Lorenzo et al., 2010). According to Suzuki et al. (2005), a complex system of transcription factors and other tightly regulated genes controls the multiple defenses of enzymes, proteins and specific pathways when a plant acclimatizes to abiotic stresses. Each plant uses a different mechanism to deal with abiotic stress. For instance, in Mimosa strigillosa under the environmental stress of soil pH, more growth is allocated to the shoot under optimum conditions, while under stress more growth is allocated to the roots (Nuruddin and Chang, 1999). This reflects that plants use different approaches to adjust to environmental stress.

LIGHT STRESS

Light is one of the stress factors that has a huge influence on plants. Light is the energy source for photosynthesis and a vital requirement for plant life (Long et al., 1994). Fluctuations in light intensity and light quality affect biochemical, physiological and developmental processes: under low light, growth and development are disturbed owing to insufficient energy, while under high light intensity photodamage may occur owing to overload of the plant's systems (Humby and Durnford, 2006). Epiphytes can grow under light conditions ranging from nearly full sun on open branches to deep shade at the base of the stem (Hietz and Briones, 2001). The epiphyte environment is constantly exposed to such stress, and the regulation of plant growth and development is affected by changing environmental conditions. Epiphytes that live under the plant canopy are typically exposed to the elements of a harsh atmosphere, where light stress may arise among the epiphytic plants.
Light stress occurs when more light is absorbed than is needed for photosynthesis, excess light being defined by a high ratio of photon flux density (PFD) to photosynthesis (Demmig-Adams and Adams, 1992). High light intensity leads to high temperature, which influences plant growth through a hotter and drier surrounding microclimate. Different mechanisms shield the photosynthetic apparatus from over-energization under excess light but, if a plant's photoprotective mechanisms are insufficient, the process of photoinhibition sets in (He et al., 1996). Photoinhibition is triggered by overexposure of the plant to irradiance higher than it usually receives (Stancato et al., 2002). Photoinhibition involves photochemical inactivation mostly of photosystem II (PSII) (Sarvikas et al., 2006). Therefore, every organism that conducts photosynthesis is potentially vulnerable to injury from radiation, but the level of susceptibility depends on environmental, genotypic, phenotypic and physiological factors (Alves et al., 2002). Plant growth is closely related to photosynthesis: without light, photosynthesis does not occur and growth is retarded. According to Winter et al. (1983), many CAM plants can survive not only in areas of high light intensity but also under shaded conditions. The capability of some species to survive over a certain light range probably depends on how far they can adapt to new light levels by changing their photosynthetic response (Heschel et al., 2004).

WATER STRESS

The meteorological term drought implies a restriction of water over a long period of time, while water stress signifies the complex progression of effects triggered by drought (Lombardini, 2006). Drought stress has also become more severe in some areas because of global climate change (Elsheery and Cao, 2008). Living in canopy environments, epiphytes are likely to experience water limitation. The widespread environmental conditions of drought and salinity converge to lower the water accessibility of plants, leading to limitation of photosynthesis (Flexas et al., 2004), growth and worldwide yield (Chaves et al., 2003). The rate of CO2 assimilation of plants may be reduced under conditions of water insufficiency (Stancato et al., 2001). Closure of stomata and reduction of photosynthetic rates are also stimulated by limited water availability (Angelopoulos et al., 1996). For instance, in Mimosa strigillosa the stomata close and the leaves fold to reduce water loss (Chang et al., 1997). In addition, both stomatal closure and inhibition of leaf growth are among the earliest reactions to drought, defending the plant from excessive water loss, which might otherwise lead to leaf cell dehydration and runaway xylem cavitation and, ultimately, mortality (Chaves et al., 2003). Water stress is the main limitation on plant survival and growth, and acclimatization of plants to water stress involves several physiological and antioxidative mechanisms (Upadhyaya et al., 2008). Stress tolerance therefore indicates the capability of plants to survive in an adverse environment through adaptation and acclimatization to the state of stress (Lombardini, 2006). Epiphytes must tolerate such stress to survive in their harsh environment.
Accordingly, many defenses are available to epiphytic plants (Benzing, 1990). For instance, the adjustment of conductance can defend a plant against drought or water stress. Moreover, Benzing (1990) also stated that osmotic adjustment and carbon dioxide fixation are other means that plants can use to overcome this situation. According to Luvaha et al. (2008), in a changing climate it is beneficial for species to apply drought-avoidance mechanisms and adaptation through active osmoregulation. In addition, plants tolerate drought by avoiding tissue dryness while sustaining water potential, or by enduring low water potential (Chaves et al., 2003). Generally, thick cuticles, succulence, sunken stomata and a thicker boundary layer on the leaf surface are some of the adaptations plants apply to conserve their water status (Hsu et al., 2006). C4 and CAM plants use other methods to cope with stress, fixing carbon dioxide for sugar production with minimal water loss (Xoconostle-Cazares et al., 2010). Water insufficiency in soil and plant tissue during drought leads to adjustment of plant photosynthetic processes and affects plant growth, development and survival in harsh environments (Lombardini, 2006).

GROWTH RESPONSE TO LIGHT AND WATER STRESS

Growth is a mechanism achieved by cell division, cell enlargement and differentiation, and it is associated with physiological, ecological, genetic and morphological measures and their complex interactions (Farooq et al., 2009). Light is one of the important elements for plant growth and development (Saifuddin et al., 2010). Nearly all plants use solar radiation not only as an energy source for photosynthesis but also to regulate their growth and development processes (Lombardini, 2006). Cervantes et al. (2005) theorized that individual epiphytes in extreme-light-intensity microhabitats within the canopy of a tropical dry forest in Yucatan, Mexico, may be restricted in their growth and reproduction. A study by Singh and Srivastava (1985) conformed to this finding, in which the fern Azolla pinnata R. Brown recorded the lowest mean leaf area under both the shade and the highest light intensity treatments (Table 1). In terms of biomass, unlike C3 plants that grow in high-light areas, plants grown under shade distribute more biomass to their photosynthetic tissues, creating thin, horizontally oriented foliage with little intra-canopy shading; they thus have relatively higher chlorophyll concentrations and higher coefficients of light absorption, but smaller root biomass, indicating low transpiration rates and low light-saturated rates of photosynthesis (Skillman et al., 2005). Decline in vegetative growth is one of the initial consequences of drought, as water deficit stimulates changes in plant cell structure and metabolism (Khaled, 2010). Another consequence of water stress is turgor loss, which reduces cell size, leading to reductions in leaf expansion and shoot extension; the reduced leaf area lessens the surface available for transpiration, and smaller leaves also decrease light absorption and photosynthesis (Lombardini, 2006). In a study by Ainuddin and Nur Najwa (2009), water restriction reduced the length and leaf area of Asplenium nidus.
(2011), a decline in leaf area is the initial defense of plants against drought; in their study, water stress reduced leaf area so that less water was transpired. Reductions in leaf water potential and stomatal closure are the immediate reactions to water insufficiency and point towards a decline in CO2 uptake and photosynthesis (Li et al., 2008). Reduced photosynthesis results in slower growth and lower plant biomass (Du et al., 2010), because plants need water to create biomass (Benzing, 1990). Stomatal closure and slower plant growth are, moreover, desirable strategies for limiting further water loss (Sinclair and Purcell, 2005). However, only a few studies have directly observed the growth of epiphytic plants over the long term (Hietz et al., 2002).

PHYSIOLOGICAL RESPONSE TO LIGHT AND WATER STRESS

Generally, plant physiology describes how plants grow and respond to surrounding factors and cultural treatments (Kramer and Kozlowski, 1979). In extreme environments, especially under high light intensity and water stress, plants respond to changes through their physiological processes, such as the rates of photosynthesis and transpiration and stomatal conductance. In leaves, plant nutrition, light regime, leaf age, water stress and other physiological parameters can all affect photosynthetic CO2 assimilation (Von Caemmerer and Farquhar, 1981). Although light is essential for photosynthesis, light intensity, whether low or high, can affect plant growth (Valladares and Niinemets, 2008). Light intensity determines the degree of stomatal and guard cell opening, controlling water balance and influencing the photosynthetic rate of a plant via the light receptors that drive CO2 fixation and lower intercellular CO2 concentration (Yu et al., 2004). The environment also plays an important role in determining photosynthetic efficiency: for example, the leaves of plants growing in a low light regime absorb less photon energy and depend on photon supply for their photosynthetic rate (Miyake et al., 2009). In a tropical dry forest of Yucatan, Mexico, plants growing at the lower level of the canopy had a lower photosynthetic rate, while plants growing at the higher level of the canopy experienced photoinhibition (Cervantes et al., 2005). Roberts et al. (1998) found similar results, with maximum photosynthetic rates in shaded leaves lower than in exposed ones, whereas Schafer and Luttge (1988) found the opposite (Table 2). Epiphytes usually live in places with variable light and high PFD; their photosynthetic apparatus is affected accordingly, and the resulting high evaporation rates alter the plant's water relations (Schafer and Luttge, 1988). Physiological and biochemical processes of photosynthesis are likewise affected by water stress (Ramanjulu et al., 1998). According to Chang et al. (1995), two physiological characteristics vital to plants adjusting to water stress are the efficiency of the water transport system and the regulatory system for water loss. As water availability declines, stomatal closure has been noted as the earliest physiological response of plants to drought (Flexas and Medrano, 2002).
Since stomatal conductance (gs) and net CO2 assimilation (Anet) are known to be closely correlated, stomatal closure under drought is classified as a limitation on CO2 uptake in the leaves (Flexas et al., 2004). Both stomatal and non-stomatal factors may limit the net photosynthetic rate, the relative contribution depending on the harshness and persistence of the stress and on the genetic response of the plant species (Ramanjulu et al., 1998). The capability of a plant to survive severe drought rests on limiting water loss through minimal stomatal opening (Sanusan et al., 2010). Although stomatal closure helps sustain high leaf water content and water potential, it is also responsible for the decline in photosynthesis (Ohashi et al., 2006). Moreover, under water deficit the decline in CO2 assimilation rate reflects reduced electron transfer in photosystem II, which affects the quantum yield of its photochemical apparatus (Stancato et al., 2001). In a study of drought tolerance in cereal species, the quantum yield of PS II under water deficit fell as the stress level increased (Flagella et al., 1998). Compared with other climatic elements in nature, light varies greatly in amplitude and in the quantity and quality of radiation received by plants (Alves et al., 2002). When plants are subjected to strong light, their physiological features respond. The decline in photosynthetic capability induced by exposing photosynthetic organisms, structures or organelles to visible light has been denoted by various terms, including photoinhibition, photooxidation, photoinactivation, photolability, solarization and photodynamic reactions (Powles, 1984). Photoinhibition describes a light-dependent inhibition of photosynthetic capacity (Demmig-Adams and Adams, 1992) that is independent of gross adjustment in pigment concentration caused by excessive light (Powles, 1984); Long et al. (1994) likewise considered photoinhibition a light-dependent, gradual reduction of photosynthetic rate (Demmig-Adams and Adams, 1992) independent of any developmental change. Exposing green plants to excessive light is known to damage the photosynthetic apparatus (Powles, 1984), and extreme excess light may damage photosynthetic pigments (Powles, 1984) and plant structures, leading to photodamage (Larcher, 2003). In tropical regions, high irradiance and high temperature arising concurrently with drought increase the photon energy load in the chloroplast, decreasing photochemical efficiency and thus damaging the photosynthetic apparatus (Elsheery and Cao, 2008). The chlorophyll fluorescence technique is a good indicator of responses to such stresses: it assists in understanding plant adaptation and resistance to environmental stress (Siam et al., 2008), and it is handy and used broadly by plant physiologists and ecophysiologists (Maxwell and Johnson, 2000). Chlorophyll fluorescence measuring techniques have improved markedly and are now a vital instrument in basic and applied plant physiology (Krause and Weis, 1991).
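For reference, the parameter most commonly derived from such measurements is the maximum quantum yield of PS II photochemistry of a dark-adapted leaf, computed from the minimal fluorescence (Fo) and maximal fluorescence (Fm) levels as

Fv/Fm = (Fm − Fo) / Fm,

where Fv = Fm − Fo is the variable fluorescence. Values near 0.8 are typical of healthy, unstressed leaves, and a sustained decline in this ratio is widely read as a sign of photoinhibition.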
Using this technique, changes in photosystem II (PS II) activity are quantified through the stress-induced changes in chlorophyll a fluorescence (Percival, 2005). There is ample evidence that PS II is the main site of lesion in photoinhibition (Powles, 1984). The photosynthetic apparatus may be affected transiently by environmental stresses before permanent morphological injury appears (Naumann et al., 2008; Percival and Sheriffs, 2002). Under various conditions and at various times, chlorophyll fluorescence measurement can estimate parameters of the actual photosynthetic efficiency of the leaf (Maxwell and Johnson, 2000), as well as the potential maximum quantum efficiency, Fv/Fm (Duraes et al., 2001). Plant responses to stress can be quantified with this technique by measuring the fluorescence signal from leaves dark-adapted for a certain period. As reported by Baker and Rosenqvist (2004), when such a leaf is exposed to weak measuring light, fluorescence rises to the minimal fluorescence level (Fo), at which state the PS II reaction centres are open.

CRASSULACEAN ACID METABOLISM: A DEFENSE MECHANISM AGAINST STRESS FOR EPIPHYTES

Plants react to the numerous stresses imposed by climate change, such as high and low carbon dioxide concentrations, high light intensity and high temperature, all of which affect carbon fixation and its anatomical pathway (Holtum and Winter, 1999). There are three photosynthetic pathways, C3, C4 and Crassulacean Acid Metabolism (CAM), and many dynamic defense systems exist for epiphytes (Benzing, 1990). Indeed, plants growing on arid land modify their physiological metabolism via CAM (Grams and Thiel, 2002). CAM is a CO2-concentrating mechanism that activates the phosphoenolpyruvate carboxylase (PEPC) enzyme at night to capture respiratory and atmospheric CO2. Physiologically, CAM conserves carbon and water in plants growing in surroundings where access to these two resources is limited, whether on a short- or long-term basis (Borland and Taybi, 2004). In CAM photosynthesis, the plant uses a metabolic strategy in which the stomata open during the cooler night to allow nocturnal uptake of carbon dioxide (CO2) and close during the day to prevent water loss (Dodd et al., 2002; Rut et al., 2008). Through these day and night processes, CAM plants are considered able to adapt to water stress by reducing water loss (Luttge, 2004). Many studies have examined the occurrence of this pathway in plants, especially epiphytes (Borland et al., 1998; Herrera et al., 2000; Hsu et al., 2006; Rut et al., 2008). Holtum and Winter (1999) found that tropical epiphytic and lithophytic ferns undergo CAM more commonly than presently recognized. In that study, even though Polypodium crassifolium and Polypodium veitchii did not show strong CAM features, CAM activity still occurred, as shown by the increase in titratable acidity; this indicates an indirect occurrence of CAM. In a study by Wanek et al. (2002), CAM cycling was active at all developmental stages of the Clusia species studied, namely Clusia osaensis Hammel-ined., Clusia peninsulae Hammel-ined. and Clusia valerii Standl.
That study showed a significant day-night flux cycle in titratable protons and in malic and citric acid; the authors therefore proposed that water accessibility and light intensity shaped the expression of CAM. Among epiphytes, the widespread occurrence of CAM is expected to result not from short but probably from frequent phases of drought stress (Hsu et al., 2006). According to Rabas and Martin (2003), succulent plants regularly take up CO2 through CAM in addition to the C3 mechanism. Ecological stresses can alter the 13C isotope composition in several predictable ways, ultimately via their effects on the balance between stomatal conductance and carboxylation (Robinson et al., 2000). Leavitt and Long (1982) reported that light intensity affects the photosynthetic rate and in turn manipulates the plant's carbon isotopic composition, and that water stress can likewise influence stomatal conductance and the availability of water for photosynthesis. Epiphytes may thus survive via CAM, a mechanism that allows plants to defend themselves against several stresses such as high light and drought.

CONCLUSION

Climate change has been altering the ecological value of ecosystems. Light and water play important roles in epiphyte growth and physiological performance, and many growth and physiological parameters are affected, including declines in biomass and photochemical efficiency. Plants, and epiphytes especially, have many ways of adapting to stressful conditions; in such cases CAM is vital for epiphytes surviving harsh environments. Nocturnal CO2 fixation and daytime stomatal closure are the essential mechanisms for these epiphytes. Through these adjustments of their physiological functions, epiphytes markedly improve their chances of survival by protecting themselves from lethal water loss and rising temperature. Other mechanisms that plants deploy against such stresses should be investigated further to give a better picture of plant stress responses. To obtain a larger picture of which other parameters are affected by light and water stress, variations in leaf anatomy, leaf structure and biochemical components of epiphytes need to be explored in the future.

ACKNOWLEDGMENT

We thank everyone involved in this project and are most grateful to each individual who gave valuable suggestions and reviewed the manuscript. Financial support from the Research University Grant Scheme (RUGS) No. 03/01/07/0035RU is gratefully acknowledged.
https://scialert.net/fulltext/?doi=ajps.2011.97.107&org=11
Research concerning endophytic fungi has recently received a remarkable boost, following a general trend to investigate and exploit biodiversity in all its forms, and because easier access to equipment and methods enables quicker identification procedures. The available data highlight that, besides their plant hosts, endophytes consistently interact with the other components of the biocoenosis, and that the assortment of the microbial consortium must also be considered on account of the reciprocal influence between the several species that are part of it. Unravelling these complex ecological relationships is fundamental because of possible translational applications, particularly regarding crop management. However, this requires that the available information concerning plant species, ecological contexts or functional categories of endophytes is examined fully. To this aim, a coordinated effort appears necessary to organise the current knowledge and increase the significance and practical impact of new findings.
Keywords: crop protection - defensive mutualism - endophytes - plant fitness - plant microbiome

Research Article - Open access
Parameters of radish phytomass (Raphanus sativus L.) determined by vermicompost and earthworms (Eisenia fetida)
Pages: 217-233
Abstract: In a 2-year outdoor pot experiment, conducted in a vegetation cage on the campus of the Slovak University of Agriculture in Nitra, the impact of different doses of vermicompost (Vc) (0%, 10%, 20%, 25% and 50%) and of the number of earthworms (EW) (0, 10 and 20 individuals per pot) in the soil substrate on the quantitative and qualitative parameters of radish yield was studied. The results show that the total chlorophyll content increased proportionally with the quantity of Vc. The vitamin C content declined and the nitrate content increased in both the aboveground and underground biomass. The weight of radish roots and leaves increased as long as the Vc content of the substrate did not exceed 20%; Vc shares above 20% led to a decline in root and leaf biomass formation, although root yield and leaf biomass at 50% Vc were still higher than in the control. The EW mostly had a negative impact on radish phytomass formation, particularly on root weight and diameter. The greatest percentage decline in the weight of roots cultivated with EW occurred at the lowest dose of Vc (10%), i.e. with the least quantity of fodder for the EW. The impact of EW on the total chlorophyll, vitamin C and nitrate contents of roots and leaves was non-significant. The number of EW did not influence root diameter or vitamin C content; however, it affected root weight.
Keywords: earthworms - nitrates - root weight - total chlorophyll content - vermicompost - vitamin C

Open access
Effects of supplemental lighting using HPS and LED lamps with different light spectra on growth and yield of the cucumber (Cucumis sativus L.) during winter cultivation in greenhouse
Pages: 9-15
Abstract: The aim of the experiment was to assess the effects of supplemental lighting of cucumber grown in a greenhouse using lamps with differentiated light spectra: high-pressure sodium (HPS) lamps and light-emitting diodes (LEDs).
Plants (cucumber 'Pacto' F1) were grown in two greenhouse compartments under five light treatments: I - HPS + LED (HPS top light with LEDs as interlighting), II - HPS only as top light, III - LED R (LED chips-on-board (COB) type with an increased level of the red band), IV - LED W (LED COB type, white), V - LED B (LED COB type with an increased level of the blue spectrum). The HPS + LED and HPS treatments were grown in one greenhouse compartment, and the other three treatments, LED R (red light supplementation), LED W (without additional supplementation) and LED B (blue light supplementation), in the second compartment under analogous climatic conditions. LED lamps using COB technology are known to be a very efficient light source. Plants were cultivated from December 2018 to March 2019 in Grotop Master mineral wool slabs (100 × 15 × 10) with four plants per mat. They were illuminated for 18 h (from 5 am to 11 pm), with the threshold value (on and off) set at 130 W. The plants were drip-irrigated with a complete nutrient solution, and irrigation was controlled by a weighing system. The assessment of the effect of lighting on the early yield and quality of cucumbers was completed after 8 weeks of cropping. Illuminating plants with sodium lamps (HPS) gave 3.59 kg per plant over the 8-week evaluation period, while HPS top light with LED interlighting gave 3.89 kg. The yield of plants illuminated by LED lamps varied with the spectral variant used and was 3.30 kg, 3.90 kg and 3.25 kg for LED R, LED W and LED B, respectively. The results indicated that the yield of cucumber 'Pacto' F1 grown with HPS top lighting combined with LED interlighting was similar to that under LED W lamps (i.e. without additional supplementation in the red (LED R) or blue (LED B) range). Given the good results of LED lamps (COB type, for top lighting or as interlighting) used for supplemental lighting of cucumber, and the high energy efficiency of LEDs, a promising future for this type of lamp compared with the traditionally used HPS during winter greenhouse cultivation was demonstrated.
Keywords: Cucurbitaceae - greenhouse cultivation - growth parameters - high-pressure sodium lamps - LEDs COB - light spectra

Open access
Biochemical variances through metabolomic profile analysis of Capsicum chinense Jacq. during fruit development
Pages: 17-26
Keywords: Jacq. - chili pepper - GC-MS - LC-MS - pepper fruit diversity - pepper fruit morphology - untargeted metabolomics

Open access
Exploring wild edible flowers as a source of bioactive compounds: New perspectives in horticulture
Pages: 27-48
Abstract: The increasing interest in healthy and natural foods has raised attention towards uncommon or unexplored ingredients, such as edible flowers. These products are proven to be a rich source of bioactive compounds, for example, vitamins or polyphenols, that play an important role in health promotion and disease prevention. However, plant species with edible flowers are numerous and most of them still need to be studied with this aim. The high species richness of North-Western Italy provides interesting perspectives on the use of wild edible flowers, which are currently underutilized but can be a valuable food source or food supplement for healthy diets.
In this framework, the phytochemical composition of 22 wild edible flowers was analysed and compared with that of four cultivated species (
Keywords: antioxidant activity - edible flowers - functional food - polyphenols - vitamin C

Open access
Genetic characterisation and population structure analysis of Anatolian figs (Ficus carica L.) by SSR markers
Pages: 49-78
Abstract: The common fig (
Keywords: Anatolia germplasm - L. - genetic structure analysis - microsatellite

Open access
Morphological and biochemical variations induced by synergy of salicylic acid and zinc in cockscomb
Pages: 79-90
Keywords: biplot - foliar application - heat map - micronutrient - reproductive indices - synergism

Open access
Silicon dioxide-nanoparticle nutrition mitigates salinity in gerbera by modulating ion accumulation and antioxidants
Pages: 91-105
Abstract: This work aimed to investigate the interaction between salt stress and the application of silicon dioxide nanoparticles (SiO2-NPs). In this study, gerbera plants grown in soilless culture were supplied with nutrient solutions of different NaCl concentrations (0, 5, 10, 20 and 30 mM) in combination with SiO2-NP sprays (0, 25 and 50 mg · L−1). Exposure of gerbera to salinity increased the sodium concentration but decreased the potassium and calcium concentrations in the leaf, as well as stem length/diameter, fresh/dry weight, leaf/flower number, flower diameter and leaf area. It also increased the activities of antioxidant enzymes and electrolyte leakage. The results indicated that SiO2-NPs could improve growth and biochemical and physiological traits: they increased stem thickness but only slightly affected stem length. Flower diameter was not affected by salinity up to 10 mM NaCl; however, a significant difference was observed between controls and plants treated with 30 mM NaCl. Salinity increased electrolyte leakage (32.5%), malondialdehyde (83.8%), hydrogen peroxide (113.5%) and antioxidant enzyme activities such as ascorbate peroxidase (3.4-fold) and guaiacol peroxidase (6-fold), and SiO2-NPs activated them further, except for superoxide dismutase. Under salinity (30 mM), increasing SiO2-NPs (especially at 25 mg · L−1) increased the uptake of Ca2+ (25.3%) and K+ (27.1%) and decreased the absorption of Na+ (6.3%). SiO2-NPs thus have potential for improving salinity tolerance in gerbera. The sensitivity threshold of gerbera to salinity appears to be 10 mM, and SiO2-NPs are also effective under non-saline conditions.
Keywords: antioxidant defense - biostimulants - elemental status - nano-SiO - salt stress

Open access
Relationship between salicylic acid and resistance to mite in strawberry
Pages: 107-119
Abstract: The two-spotted spider mite (TSSM)
Keywords: × - morpho-anatomical

Open access
Gibberellic acid and 6-benzyladenine reduce time to flowering and improve flower quality of Laelia anceps
Pages: 121-133
Abstract: The efficacy of plant growth regulators (PGRs) has been demonstrated in the flowering of economically significant orchid hybrids, but studies of their effects in wild species with commercial potential are scarce.
The effects of three doses of gibberellic acid (GA3) and 6-benzyladenine (BA), individually or in combination, and a control without PGRs, were evaluated during three flowering periods in
Keywords: BA - GA - Orchidaceae - orchids - plant growth regulators - seasonal behavior - survival analysis

Open access
Sensitivity of quinoa cv. 'Titicaca' to low salinity conditions
Pages: 135-145
Abstract: Quinoa (
Keywords: biomass production - photosynthetic rate - salt glands - salt stress - water use efficiency

Open access
Foliar application of polyamines improve some morphological and physiological characteristics of rose
Pages: 147-156
Abstract: This experiment was conducted to investigate the effects of foliar sprays of polyamines on some morphological and physiological characteristics of rose. The experimental variants involved the type (putrescine, spermidine and spermine) and concentration (0 mM, 1 mM, 2 mM and 4 mM) of polyamines. Plant height, the number of leaves and shoots, leaf area and thickness, fresh and dry weight of leaf and stem, the contents of anthocyanin, soluble sugar and phenol, and antioxidant capacity were measured 2 weeks after the end of the experiment. The results indicated that among the polyamine types, putrescine had the greatest effect on the morphological characteristics. Among the concentrations, 1 mM produced the highest increase in shoot fresh and dry weight. Putrescine applied at 2 mM and 4 mM increased the soluble sugar content. In the present study, polyamine treatment reduced the anthocyanin and phenol contents and the antioxidant capacity. It can be concluded that the application of polyamines improved some morphological and physiological traits in various ways.
Keywords: antioxidant capacity - anthocyanin - growth characteristics - putrescine - spermidine - spermine

Open access
Genetic diversity and structure analysis of Croatian garlic collection assessed by SSR markers
Pages: 157-171
Abstract: This study examines the genetic diversity and structure of a Croatian garlic germplasm collection using 13 simple sequence repeat (SSR) markers. A total of 71 alleles were observed across 64 accessions representing 3 Croatian regions (Istria, Dalmatia and continental Croatia) and 16 foreign landraces, with an average of 5.46 alleles per locus. Among the 80 accessions analysed, 61 distinct multilocus genotypes (MLG) were identified, of which 51 represented unique genotypes; the remaining accessions fell into 10 MLG groups comprising potential duplicates or redundant genotypes. Model-based Bayesian and hierarchical UPGMA clustering approaches revealed five major groups within the collection, which partially correlated with geographical origin. The analysis of molecular variance (AMOVA) showed that the majority (87.71%) of the total molecular diversity lies within the Croatian groups of accessions, although a significant share (12.29%) derived from genetic diversity among groups. These results support regional structuring, as well as the existence of significant diversity within local populations. This study is the first comprehensive report on an extensive evaluation of the genetic resources of garlic maintained in Croatia, with the aim of setting the course for future preservation strategies, with particular emphasis on the value of diversity in the context of climate change at both macro and micro levels.
Keywords: collections management - garlic - genetic diversity - genetic structure - plant genetic resources - SSR

Open access
Meiotic behaviour and pollen fertility of F1, F2 and BC1 progenies of Iris dichotoma and I. domestica
Pages: 173-183
Abstract: Pollen characteristics are very important for
Keywords: 2n gametes - abnormal meiosis - interspecific hybrid - male

Open access
Evaluation of the possibility of obtaining viable seeds from the cross-breeding Hippeastrum × chmielii Chm. with selected cultivars of Hippeastrum hybridum Hort.
Pages: 185-194
Keywords: amaryllis - fertilisation - germination - pollen - pollen tube - stigma

Open access
Influences of girdling and potassium treatments on fruit quality and some physiological characters of 'Fremont' mandarin variety
Pages: 195-202
Abstract: Growing citrus involves cultural treatments such as girdling and foliar potassium application to increase fruit size, yield and quality. The aim of the study was to evaluate the effects of single and double stem girdling, potassium nitrate (KNO3) applied to leaves, and combinations of these treatments on the fruit yield, size and quality characteristics, leaf chlorophyll concentration, leaf nitrogen content, leaf fluorescence (PSII) and leaf sugar content of the 'Fremont' mandarin variety. Girdling was applied to the stem by removing a 4 mm wide ring of bark at the end of anthesis and after the June fruit drop. Foliar KNO3 was applied at a concentration of 4% twice (90 and 120 days after full anthesis). The single girdling (SG) and double girdling (DG) treatments increased fruit yield (kg · tree−1) by approximately 40% relative to the control (C). The treatments did not significantly affect the internal fruit quality of the 'Fremont' mandarin except for fruit colour and appearance. The greatest effect on fruit size was found in the DG + KNO3 treatment. Across treatments and periods, SPAD values varied between 62.08 and 70.67, whereas PSII values varied between 0.698 and 0.756. The treatments significantly increased leaf nitrogen (N) concentration and the fructose, glucose and sucrose concentrations relative to the control. The highest N concentration was detected in the foliar potassium treatment, and the highest total sugar content in the SG treatment.
Keywords: chlorophyll - citrus - girdling - quality - sugar content

Review - Open access
Application of plant natural products for the management of postharvest diseases in fruits
Pages: 203-215
Abstract: Prevention of postharvest losses has been a very important concern in the scientific world for many centuries, since the adoption of effective means to curtail such losses is believed to help in reaching sustainability in horticultural production and preventing hunger around the world. The main means of deterioration in fruits after harvest include physiological changes/losses, physical losses, biochemical changes, changes in enzymatic activities and pathological deterioration. Among these, diseases account for the most important part: losses due to diseases range from 5% to 20%, and this figure may extend beyond 50% for certain susceptible cultivars. Fungicides have been the most important tool for the management of postharvest diseases for many years, together with hygiene, cold storage and packaging.
However, owing to the scientifically confirmed hazards of agro-chemicals to the environment and human health, the acceptability of agro-chemicals has decreased and scientists have turned their attention towards natural alternatives. Most tropical and subtropical fruits carry a superficial cuticle, which helps them regulate respiration and transpiration and protects against microbial decay. However, the waxy cuticle is generally removed or damaged during washing or other handling practices. Therefore, the application of protective coatings (including wax) has been used in the fruit industry since the twelfth century, against microbial decay and for maintaining an acceptable standard of postharvest quality. This review aims to summarise and discuss the main natural products used for this purpose, to provide a broad-in-scope guide to farmers and the fruit storage sector.
https://sciendo.com/es/issue/FHORT/33/1
Effects of CO2 application and endophytic bacterial inoculation on morphological properties, photosynthetic characteristics and cadmium uptake of two ecotypes of Sedum alfredii Hance
Source: Environmental Science and Pollution Research International 2019 v.26 no.2 pp. 1809-1820
ISSN: 0944-1344
Subject: Sedum, bacteria, biofortification, cadmium, carbon dioxide, carbon dioxide enrichment, chlorophyll, ecotypes, endophytes, environmental factors, gas exchange, hydroponics, hyperaccumulators, leaves, photosynthesis, phytoremediation, pigments, polluted soils, roots, shoots, toxicity
Abstract: Plant uptake of cadmium (Cd) is affected by soil and environmental conditions. In this study, hydroponic experiments were conducted to investigate the effects of elevated CO₂ coupled with inoculation with the endophytic bacterium M002 on the morphological properties, gas exchange, photosynthetic pigments, chlorophyll fluorescence and Cd uptake of S. alfredii. The results showed that the biofortification treatments (elevated CO₂ and/or inoculation with endophytic bacteria) significantly (p < 0.05) promoted growth, improved photosynthetic characteristics and increased Cd tolerance in both ecotypes of S. alfredii, compared with normal conditions. The net photosynthetic rate (Pn) in intact leaves of the hyperaccumulating ecotype (HE) and the non-hyperaccumulating ecotype (NHE) increased by 73.93% and 32.90%, respectively, at the low Cd level (2 μM), and by 84.41% and 57.65%, respectively, at the high Cd level (10 μM). The combined treatment increased the Cd concentration in shoots and roots of HE by 50.87% and 82.12%, respectively, at the low Cd level, and by 46.75% and 88.92%, respectively, at the high Cd level. The combined treatment also decreased the Cd transfer factor of NHE, by 0.85% at the zero-Cd rate, 17.22% at the low Cd level and 22.26% at the high Cd level. These results indicate that elevated CO₂ coupled with endophytic bacterial inoculation may effectively improve the phytoremediation efficiency of Cd-contaminated soils by the hyperaccumulator, and alleviate Cd toxicity in the non-hyperaccumulating ecotype of Sedum alfredii.
https://pubag.nal.usda.gov/catalog/6275333
Current methods of in-house plant phenotyping are providing a powerful new tool for plant biology studies. The self-constructed and commercial platforms established in the last few years employ non-destructive methods and measurements on a large and high-throughput scale. The platforms offer, to a certain extent, automated measurements, using either simple single-sensor analysis or advanced integrative simultaneous analysis by multiple sensors. However, due to the complexity of the approaches used, it is not always clear what such forms of plant phenotyping can offer the potential end-user, i.e. the plant biologist. This review focuses on imaging methods used in the phenotyping of plant shoots, including a brief survey of the sensors used. To open this topic to a broader audience, we provide a simple introduction to the principles of automated non-destructive analysis, namely RGB, chlorophyll fluorescence, thermal and hyperspectral imaging. We further present an overview of how, and to what extent, automated integrative in-house phenotyping platforms have recently been used to study the responses of plants to various changing environments. Recently, a large number of reviews have been published on the advantages and possibilities of high-throughput plant phenotyping approaches [1-5]. Most focus on the potential of these approaches, which use precise and sophisticated tools and methodologies to study plant growth and development. To review the state of the art of phenotyping platforms, we present a list of recent publications in Table 1. Interestingly, in about half of these, only one measuring tool, mostly RGB imaging, was used for plant phenotyping. In the other papers, integrative phenotyping, meaning two or more measuring tools, which however are rarely automated, was used (Table 1). This illustrates that integrative automated high-throughput phenotyping measurements and platforms are still rather rare. Greenhouse- and growth chamber-based plant phenotyping platforms are publicly available and offer services and collaborative projects. Descriptions, methodological background and focus can be found at http://www.plant-phenotyping-network.eu/eppn/select_installation. As an example of an integrative automated high-throughput phenotyping platform, a growth chamber-based phenotyping facility installed at Palacký University in Olomouc, Czech Republic, is presented in Figure 1.

Figure 1: Scheme of the growth chamber-based automated high-throughput phenotyping platform PlantScreen™ (Photon Systems Instruments, Brno, Czech Republic), installed at Palacký University in Olomouc, Czech Republic. The system is located in a growth chamber with white LED illumination (max. 1000 μmol photons m−2 s−1) and a controlled environment (10-40°C, 30-99% relative humidity). The growth area with a roller conveyor has a capacity of up to 640 standardized pots with Arabidopsis, cereals and other crops. The measuring cabinet contains an acclimation chamber for dark adaptation of plants, coupled with an automated weighing and watering area. The cabinet is equipped with KCFIM and RGB imaging (top and two side views), thermoimaging (IR) to measure stomatal openness, and SWIR hyperspectral imaging to determine water content. The platform can be controlled either on site or via remote-control software, and the operating software enables automatic data evaluation.
High-throughput integrative phenotyping facilities provide an opportunity to combine various methods of automated, simultaneous, non-destructive analysis of plant growth, morphology and physiology, providing a complex picture of plant growth and vigour in one run, and repeatedly during the plant's life-span. The particular methods used in integrative plant phenotyping are often not new; they usually represent methods that have already been used for a number of years in basic research, e.g. non-invasive methods that employ visible or fluorescence imaging (described in more detail further in the text). High-throughput operation then allows analysis of the plants on a large scale, enabling users to apply statistics to discover subtle but significant differences between the studied genotypes and treatment variants. The potential users of such facilities, mostly biologists, are often not very familiar with the physical methods applied in integrative plant phenotyping. Thus, in this mini-review, we present a simple introduction to the basis of the various non-invasive sensors used in high-throughput phenotyping platforms, namely visible red-green-blue (RGB) imaging, chlorophyll fluorescence imaging (CFIM), thermoimaging and hyperspectral imaging. Further, we describe potential applications of some of the phenotyping methods that have been used to study the responses of different plant species to various stresses. The methods for automated phenotyping and their aims have been reviewed in a number of recent reports [3,6,7]. In the following text we describe the basis of the automated non-invasive analysis of plant shoots and the appropriate sensors that have been used for studies of plant stress responses. Apart from the importance of root-growth analysis, a key descriptive parameter in plant physiology is the growth of plant shoots. Although there are numerous secondary traits describing the morphology of shoots in particular species and their developmental stages, the primary and universal trait is biomass formation. Shoot biomass is defined as the total mass of all the aboveground plant parts at a given point in a plant's life. This trait can be easily assessed by simple weighing of the fresh (FW) and dry (DW) masses. However, this involves the destruction of the measured plant, allowing only end-point analyses. Similarly, leaf area, and consequently the plant growth rate, are usually determined by manual measurements of the dimensions of plant leaves [9-11]. Such measurements are highly time-consuming and thus cannot be used for large-scale experiments. For this reason, plant phenotyping facilities prefer to evaluate the growth rate using imaging methods which employ digital cameras with subsequent software image analysis. This enables a faster and more precise determination of the leaf area [12-14] and of other parameters such as the projected area (Figure 2), or the hull area in the case of monocots [15,16]. In general, non-invasive techniques of shoot growth determination have proven very reliable, and high correlations between the digital area and the shoot fresh or dry weights, respectively, have been reported in Arabidopsis, tobacco, cereals [18,19], and pea. An example of a general shoot phenotyping protocol based on biomass estimation was reported by Berger et al.
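To make the digital-area idea concrete, the following minimal sketch (not taken from any of the platforms reviewed here; the file name, HSV thresholds and pixel calibration are illustrative assumptions) estimates the projected shoot area from a single top-view RGB image by counting green pixels:

```python
# Minimal sketch: projected shoot area from a top-view RGB image.
# Requires OpenCV (cv2) and NumPy; 'rosette.png', the HSV bounds and
# the mm-per-pixel calibration are hypothetical and would need tuning.
import cv2
import numpy as np

img = cv2.imread("rosette.png")               # load image (BGR order)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # convert to HSV space

# Keep pixels whose hue falls in a green band typical of leaf tissue.
lower = np.array([35, 60, 40], dtype=np.uint8)
upper = np.array([85, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)         # binary plant mask

# Morphological opening removes isolated noise pixels.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

area_px = int(np.count_nonzero(mask))         # projected area in pixels
mm_per_px = 0.25                              # hypothetical calibration
print(f"area: {area_px} px = {area_px * mm_per_px**2:.1f} mm^2")
```

Repeating such a measurement over consecutive days yields growth curves, and regressing the digital area against destructively determined fresh or dry weight gives the kind of correlation the studies cited above rely on.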
Similarly, other common morphometric parameters such as stem length, number of tillers and inflorescence architecture can be assessed non-destructively and manually, but again the time requirements limit the number of plants analysed. High-throughput approaches for analyses of these rather species-specific traits would be very valuable; however, with the exception of Arabidopsis, the range of accessible solutions is still limited (for some emerging methods see [23-26]).

Figure 2: Illustrative outcome of simultaneous analysis of control and salt-stressed Arabidopsis plants using RGB, hyperspectral and Chl fluorescence imaging. The 18 DAG old soil-grown Arabidopsis plants were treated with 250 mM NaCl (salt-stressed) or water (control) and after 48 hours were analysed by different sensors for comparison of: morphology (top-view RGB imaging can be used for computation of rosette area or shape parameters), the spatial distribution of a vegetation index reflecting changes in chlorophyll content (NDVI), provided by a VIS/NIR hyperspectral camera, and changes in the maximal quantum yield of PSII photochemistry for the dark-adapted state (ΦPo, also referred to as FV/FM), reflecting the photosynthetic activity of the plants, obtained from KCFIM.

Correct determination of the digital plant growth area can be distorted by overlapping leaves, leaf twisting and curling, and circadian movement, especially when the RGB image is taken from only one view (e.g. the top view). A new approach developed for Arabidopsis, consisting of plant area estimation (which takes leaf overlap into account), growth modelling and analysis, followed by application of a nonlinear growth model to generate growth curves and subsequent functional data analysis, was shown to analyse plant growth in high-throughput experiments more precisely. However, because it uses only top-view RGB imaging, this approach cannot be applied to analyses of most of the agronomically important plants with vertical growth. A set-up that introduces more projections (e.g. side views) into the phenotyping platforms can thus partially solve this problem. Three-view RGB imaging together with linear mathematical modelling was used for accurate estimation of plant shoot dry weight of wheat and barley from two-dimensional images. The accuracy of the three-view approach has recently been validated in species with challenging shoot morphology such as field pea. One of the chlorophyll (Chl) fluorescence methods is chlorophyll fluorescence induction (CFIN), i.e., the measurement of the Chl fluorescence signal during illumination of the sample following prior dark adaptation. Since the first paper on CFIN by Kautsky and Hirsch, CFIN has been one of the most common methods used in photosynthesis and plant physiology research: it is inexpensive, non-destructive, and above all, provides a great deal of information about the photosynthetic function of the sample (reviewed, e.g., by Lazár [28,29]). The use of pulse amplitude modulation (PAM) techniques for the measurement of CFIN, together with application of the saturation pulse (SP) method, enables the separation of photochemical and non-photochemical events occurring in the sample. Chl fluorescence is excited and measured with the help of weak measuring flashes, whereas photosynthesis is maintained by actinic illumination and saturation of photosynthesis is achieved by the SPs.
Since Chls absorb in the blue (Chl a at 436 nm and Chl b at 470 nm, respectively) and red (at about 650 nm for both Chls a and b) regions of the visible spectrum, the measuring and actinic light has one of the above wavelengths, usually 650 nm. The SPs are usually generated by white light. On the other hand, the Chl fluorescence emission spectrum at room temperature shows two peaks centred at about 680 and 735 nm. To avoid possible overlap of the 650-nm excitation light with the Chl fluorescence emission, the Chl fluorescence signal is detected at wavelengths longer than 700 nm. To reveal the spatial heterogeneity of the fluorescence signal during CFIN, imaging Chl fluorometers were developed [31,32]. In the images (for illustration see Figure 2), different colours are used to show different fluorescence intensities according to a chosen false-colour scale (as mentioned above, fluorescence emission is always above 700 nm, red light). An additional advantage of CFIM is that it provides a huge amount of data which can be thoroughly analysed and used for early detection of plant stress, as shown, e.g., by Lazár et al. At present, modern CFIM instruments adopt the PAM and SP methods/techniques and are thus highly suitable for high-throughput plant phenotyping (reviewed, e.g., by Gorbe and Calatayud, Harbinson et al.). However, over the course of time, too many Chl fluorescence parameters have been defined and claimed to reflect particular functions of the photosynthetic apparatus. Hence, there is a problem over which parameter should be measured, evaluated and presented. The values of most of the parameters cannot be mutually compared; it is only possible to compare relative changes (caused, e.g., by a stress treatment) of a given parameter. The parameters of so-called energy partitioning, i.e., the quantum yields of the processes responsible for the use of the absorbed light energy, are the best choice (reviewed by Lazár), as they are all defined on the same basis and can be directly compared. Since all the quantum yields sum to unity, they express the fractions of absorbed excitation light used for the given processes (photochemistry and various types of non-photochemical energy dissipation). It is also worth mentioning here that the kinetic types of CFIM (KCFIM), which measure the whole CFIN, apply the SPs (thus allowing computation of the various Chl fluorescence parameters) and integrate the signal from the whole leaf or shoot, are the most valuable for physiological studies. However, integration of KCFIM into high-throughput systems [20,37] is not very common, and in the majority of recent reports imaging systems measuring either a single Chl fluorescence level (SLCFIM) or two Chl fluorescence levels (usually the minimal and maximal Chl fluorescence levels for the dark-adapted state; TLCFIM) were used (see Table 1). As the intensity of Chl fluorescence depends on the amount of chlorophylls, the SLCFIM might be used, e.g., to distinguish between non-stressed and senescent leaves (when the amount of Chls is decreased) at the later stages of stress progression, but it does not provide any information about early processes in photosystem II (PSII) that are not necessarily linked to the later senescence events.
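As a concrete reference for these parameters, a minimal sketch follows (the fluorescence levels are hypothetical example numbers, not measured data) computing the dark-adapted maximum quantum yield, the effective PS II quantum yield in light, and the Stern-Volmer non-photochemical quenching from the levels a PAM instrument records:

```python
# Minimal sketch of common PAM chlorophyll fluorescence parameters.
# Fo/Fm are dark-adapted levels; Fs/Fm_p (Fm') are light-adapted levels
# at steady state and during a saturation pulse. Values are made up.
Fo, Fm = 300.0, 1500.0
Fs, Fm_p = 450.0, 900.0

fv_fm = (Fm - Fo) / Fm          # max. quantum yield of PS II photochemistry
phi_psii = (Fm_p - Fs) / Fm_p   # effective quantum yield of PS II in light
npq = Fm / Fm_p - 1.0           # non-photochemical quenching (Stern-Volmer)

print(f"Fv/Fm = {fv_fm:.3f}, PhiPSII = {phi_psii:.3f}, NPQ = {npq:.3f}")
```

Note that Fv/Fm needs only the two dark-adapted levels (so a TLCFIM can supply it), whereas the light-adapted yields require the saturation pulses applied during the induction, i.e. a KCFIM-type measurement.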
Further, the usual output of the TLCFIM, the FV/FM ratio, which estimates the maximum quantum yield of photosystem II photochemistry, provides only limited information about photosynthetic function compared with the outputs of the KCFIMs, which also allow determination of the other quantum yields and parameters (see for a review). Plants are cooled by transpiration, and when the stomata are closed, plant temperature increases. Based on this principle, thermal imaging was used for the first time to detect the changes in the temperature of sunflower leaves caused by water deficiency. In addition to transpiration, stomata also drive water vapour exchange, both parameters being typically determined by leaf gas-exchange measurements. However, leaf gasometry involves contact with the leaves, which often interferes with their function; it is also time-consuming and limited by sample size and/or the large number of samples required. In addition to heat emission, plants can lose heat by conduction and convection, which in fact represent mechanisms of non-photochemical quenching of excited states. For this reason, it is not unexpected that an increased thermal signal correlates with an increase in non-photochemical quenching, as shown by Kaňa and Vass. Given the foregoing, thermoimaging is a very suitable method for plant phenotyping [19,40,41]. Like CFIM, it uses cameras to measure the spatial heterogeneity of heat emission, usually from leaves; the heat is electromagnetic radiation in the infrared region, usually between 8 and 13 μm. Generally, thermal imaging has been successfully used in a wide range of conditions and with diverse plant species. The technique can be applied at different scales, e.g. from single seedlings/leaves through whole trees or field crops to regions. However, researchers have to keep in mind that environmental variability, e.g. in light intensity, temperature, relative humidity, wind speed, etc., affects the accuracy of thermal imaging measurements, and therefore the measurements and their interpretation must be done with care. Although thermal imaging sensors have been integrated into in-house phenotyping platforms with controlled environments (see the section The use of phenotyping methods to study plant stress responses), the majority of studies have so far been performed in field conditions [42-44]. All aspects of thermal imaging used for the exploration of plant-environment interactions, as well as an overview of the application of thermoimaging in field phenotyping, were recently reviewed by Costa et al. The absorption of light by endogenous plant compounds is used for the calculation of many indices which reflect the composition and function of a plant. Such indices are, for example, the normalized difference vegetation index (NDVI), an estimator of the Chl content, and the photochemical reflectance index (PRI), an estimator of photosynthetic efficiency. The absorption of a compound (e.g., water) at a given wavelength can also be used for direct estimation of the compound's content in the plant. For practical reasons, measurement of absorbance is replaced here by measurement of reflectance. Depending on the measured wavelengths of the reflected signal, various detectors are used, usually VIS-NIR (visible to near-infrared region, 400-750 nm and 750-1400 nm) and SWIR (short-wavelength infrared region; 1400-3000 nm). Measurements of the reflectance signal in the VIS-NIR and SWIR regions originate from methods of remote sensing [49-51].
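Both indices mentioned above are simple normalized ratios of reflectance (R) at two wavebands; a minimal sketch follows, with hypothetical per-pixel reflectance values standing in for real hyperspectral image data:

```python
# Minimal sketch: NDVI and PRI from reflectance values; the inputs
# below are hypothetical placeholders for hyperspectral pixel data.
def ndvi(r_nir: float, r_red: float) -> float:
    """Normalized difference vegetation index (Chl content estimator)."""
    return (r_nir - r_red) / (r_nir + r_red)

def pri(r_531: float, r_570: float) -> float:
    """Photochemical reflectance index (photosynthetic efficiency)."""
    return (r_531 - r_570) / (r_531 + r_570)

print(ndvi(0.45, 0.08))  # dense green tissue gives high positive values
print(pri(0.12, 0.13))   # PRI is a small signed number around zero
```

Applied pixel by pixel over a hyperspectral image, such ratios produce the spatial index maps illustrated in Figure 2.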
Because of the high information value these reflectance measurements carry, they are very suitable methods for plant phenotyping [52-54]. The reflectance signal can be detected at selected wavelengths or in separated spectral bands (so-called multispectral detection). The whole spectral region can also be measured, even for each pixel, when cameras are applied and hyperspectral imaging is carried out (Figure 2). Whereas hyperspectral imaging in the VIS-NIR spectral region is used for the evaluation of several indices, as mentioned above, the SWIR spectral region is mainly used for estimation of the plant's water content. Several aspects of plant reflectance were recently reviewed by Ollinger. Despite the many indices that have been defined so far on the basis of reflectance measurements, it is difficult to assess them accurately, similarly to the situation with the CFIN parameters (see above). For this reason, a critical revision of all the reflectance indices is needed to evaluate which of them provide the required information in the best way. One of the most important applications of automated plant phenotyping methods is in studies of plants' responses to various types of environmental stress. In Table 1 we list recent reports describing phenotyping protocols developed for indoor automated shoot phenotyping used in stress-related studies. Since integrative approaches are a logical but rather new step in the development of phenotyping platforms, there are limited reports on the use of simultaneous analysis by multiple sensors. For this reason, we also include "single-sensor" experiments performed in automated platforms. Perhaps the most widely used application of high-throughput phenotyping is the search for drought-tolerant varieties. Objectives, traits and approaches related to automated plant selection for drought stress resistance were recently reviewed in Mir et al. and Berger et al. Here, we add information from examples of the use of non-invasive plant phenotyping in this field. One of the early reports on the use of a high-throughput phenotyping platform describes the employment of a commercial prototype system for the evaluation of drought tolerance in nine Arabidopsis accessions. The screening was based on RGB imaging, estimating rosette-leaf area, and automated pot weighing and watering to assess transpiration rates. A very similar approach was later used by Skirycz et al., also in Arabidopsis. The same platform was further used in a recent physiological study by Clauw and co-authors in which the impact of mild drought on various Arabidopsis thaliana accessions was evaluated. Another study on Arabidopsis, employing top-view RGB imaging, pot weighing and automated rotation of pots, was performed by Tisné et al. That phenotyping platform was designed to prevent position effects on water evaporation, and the authors demonstrated an important improvement in evaporation homogeneity. Although these studies represent an important contribution to the development of automated phenotyping, the top-view design of the platforms has limited their use to analyses of plants with a leaf rosette. Further progress thus lay in the development of platforms allowing RGB imaging from multiple positions. The most recent advances in the use of multiple-view RGB imaging followed by software analysis were demonstrated in a study by Neumann et al.
The authors were able to automatically extract from the images of barley plants the plant height and width, and also leaf colours, to evaluate the impact of drought on the degradation of chlorophyll. Earlier, Pereyra-Irujo et al. reported a study that employed a self-constructed high-throughput platform for RGB screening of growth and water-use efficiency (WUE) in two soybean (Glycine max L.) genotypes. The system, with automated weighing and watering, placed in a greenhouse, was used to analyse the projected area of the shoots and the mass of the pots. An impressive number of plants was analysed for similar traits in the study by Honsdorf et al. These authors searched for drought-tolerance QTLs in 48 wild barley introgression lines, using a commercial greenhouse-based platform with multiple-view RGB imaging and automated weighing and watering. A similar approach utilizing estimation of shoot biomass based on RGB imaging was used by Coupel-Ledru et al. to screen thousands of grapevine plants for drought tolerance. In these studies, plant water management was automatically analysed by simple weighing of the pots. This approach, however, raises several questions about the homogeneity of evaporation from the soil of pots placed at different positions in the growing area. Resolving this issue usually requires an exhaustive validation process with numerous control pots and artificial plant-like objects randomly distributed throughout the growing area (Mark Tester, personal communication). A more elegant solution could be the use of specific sensors directly monitoring the water content or transpiration of each plant. Even this approach, however, requires appropriate validation. An integrative mode of analysis was employed in the study of Petrozza et al. Here, the effect of Megafol treatment on drought-stressed tomatoes was assessed using RGB imaging to distinguish shoot area, SLCFIM measurement to calculate a "stress index", and a NIR camera for water content estimation. Repeated measurements with the NIR camera throughout the experiment allowed visualization of the drop in the high water content index that precedes the growth limitation caused by drought stress. A combination of RGB and NIR imaging techniques was also used by Harshavardhan et al. for analysis of the drought tolerance of transgenic Arabidopsis plants. RGB imaging was employed by Bresson et al. to study the effect of plant-bacteria interactions on plant tolerance to drought stress. The integration of FV/FM measurement by TLCFIM provided information complementary to the growth rate and WUE analysis obtained by pot weighing. A combination of RGB, SLCFIM and NIR imaging techniques was used by Chen et al. to study different phenotypic traits of 18 barley genotypes; the authors used sophisticated statistics and mathematical modelling to classify the genotypes based on their responses to drought stress. Another important trait in drought studies is the leaf surface temperature, which reflects the transpiration rate of the plant (as discussed above in the section Thermoimaging). A combination of shoot digital imaging, thermoimaging and automated weighing and watering to study WUE was used by Fehér-Juhász et al. These authors employed a self-constructed greenhouse-based platform for the selection of drought-tolerant transgenic wheat plants.
Another important trait in drought studies is the leaf surface temperature, which reflects the transpiration rate of the plant (as discussed above in the section Thermoimaging). A combination of shoot digital imaging, thermoimaging and automated weighing and watering to study WUE was used by Fehér-Juhász et al. These authors employed a self-constructed greenhouse-based platform for the selection of drought-tolerant transgenic wheat plants. The platform allows the growth of mature cereal plants to be monitored by multiple-view RGB imaging, and the leaf surface temperature to be assessed by a side-view thermal camera recording the differences in the temperatures of the plant shoots. The same platform and a similar experimental design were used for the evaluation of drought tolerance in barley. The system provides an integrative analysis of plant growth and physiology, but its use for large-scale analyses is limited by a semi-automated regime requiring manual loading of the plants into the system.

Given that physiological responses to drought and high-temperature stresses are tightly connected, similar approaches can be used to study plant tolerance to both. The use of high-throughput phenotyping for high-temperature tolerance, together with a description of the appropriate sensors, can be found in a review by Gupta et al. More recently, the effects of high temperature on Arabidopsis plants were studied by Vasseur et al. The authors used a commercial prototype platform allowing top-view RGB imaging and WUE analysis, followed by a highly sophisticated statistical approach, to reveal contrasting adaptive strategies to high-temperature and drought stresses.

The salinization of soil is another phenomenon often associated with drought and high-temperature stress. An example of a protocol for salt-stress studies in various cereals, combining RGB imaging with destructive leaf sampling to measure Na+ concentration, was described by Berger et al. The effect of salt stress was studied by Rajendran et al. using digital RGB imaging in a greenhouse-based commercial system. This study provided deep insight into the physiological processes connected with salinity in wheat. The authors used multiple-view RGB imaging to estimate the digital area of the shoot and to visualize changes in leaf colour for the quantification of the senescent area. Using non-invasive plant phenotyping together with the analysis of the Na+ concentration in the 4th leaf, the authors derived a plant salinity-tolerance index that showed a good correlation with the results of conventional salt-tolerance measurements. Simple RGB imaging in wheat and barley was carried out in the physiological study of Harris et al. and described in the methodological report of Golzarian et al. Recently, Schilling et al. applied a similar approach to select a salt-tolerant line of transgenic barley. The combination of digital RGB imaging (used to measure the shoot growth rate) with SLCFIM (used for the assessment of senescent areas) was employed for the selection of salt-tolerant rice cultivars by Hairmansis et al. These salt-stress studies were performed using the same commercial platform involving an SLCFIM sensor. As mentioned in the section Chlorophyll fluorescence imaging (CFIM), this type of CFIM in fact provides only an estimation of the senescent area, which can also be obtained by the older approach based on colour detection in RGB imaging. Thus, to increase the value of the physiological evaluation, the use of KCFIM is necessary for the quantification of the quantum yield of photochemistry and of the other competitive processes. A combination of RGB imaging, thermoimaging and TLCFIM was used in the pioneering work of Chaerle et al., who evaluated the effects of mild mottle virus infection on tobacco and bean plants.
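For reference, the quantum yields that KCFIM quantifies are computed from standard fluorescence levels; the summary below uses the conventional notation (individual instruments may report further derived parameters):

```latex
% Maximum quantum yield of PSII photochemistry (dark-adapted state),
% where F_0 and F_M are the minimal and maximal fluorescence levels:
\[
\frac{F_V}{F_M} = \frac{F_M - F_0}{F_M}
\]

% Effective PSII quantum yield in the light-adapted state, where F is
% the steady-state and F_M' the maximal fluorescence in the light:
\[
\Phi_{PSII} = \frac{F_M' - F}{F_M'}
\]
```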
The use of high-throughput techniques in nutrient-starvation stress studies has also been reported. The principle of a method based on RGB imaging of leaf expansion was described by Moreau et al. A comprehensive study of the phenotypic effects of nitrogen and phosphorus nutrient status in Brachypodium was carried out by Poire et al., employing RGB imaging to estimate the growth rate. A similar approach was used in the study of Neilson et al., in which the responses to nitrogen deficiency and drought were evaluated by RGB imaging, NIR imaging and automated weighing. The authors also developed software that extracted additive traits from the images, such as the projected plant height and the height to the ligule of the youngest fully expanded leaf, which showed very good correlations with standard, manually measured agronomic parameters. A multiple-sensor approach was described earlier in beans by Chaerle et al., who used RGB imaging, thermoimaging and TLCFIM to evaluate the phenotypes related to magnesium deficiency and biotic stress.

The impact of cold stress on plant growth and physiology is routinely studied by non-invasive methods through the analysis of Chl fluorescence, but not with fluorescence sensors integrated into complex growth-analysing platforms [82-84]. Jansen et al. studied the effects of chilling stress in Arabidopsis and tobacco plants using a growth-chamber-based system equipped with digital top-view RGB screening and KCFIM. Very recently, an automated screening approach based on RGB imaging and KCFIM analysis for the selection of pea cultivars with different cold sensitivities was developed by Humplík et al. The reported study was intended not only for the selection of cold-sensitive/tolerant varieties of pea but also for studies of plant cold-response strategies in general. Since the CFIM analysis is not limited by plant morphology, and the image analysis was sensitive enough to detect even the tiny tendrils of pea, the described procedure could theoretically be employed for shoot analyses of other plant species.

This mini-review focuses on recent advances in the development of integrative automated platforms for high-throughput plant phenotyping that employ multiple sensors for the simultaneous analysis of plant shoots. In both basic and applied science, these recently emerging approaches have become important tools for unravelling complex questions of plant growth, development and responses to the environment, as well as for the selection of appropriate genotypes in molecular breeding strategies. Insofar as the phenotype is an interactive network of responses by the plant to its environment, which in turn affects the expression of the genotype, it is worth emphasizing that attention must be paid to how the analyses are performed: under precisely controlled conditions that allow the huge amount of complex phenotyping data to be linked directly to the particular conditions. This would also help the end user (the biologist) to narrow down the range of parameters and indices available from the specialized measurements (specifically CFIM and reflectance measurements) and to evaluate which of them provide the required information in the best way and are hence the most suitable for high-throughput plant phenotyping. Such information, together with standardized protocols applicable to the particular phenotyping methodologies, should become available in the near future thanks to the efforts of the phenotyping community.

Jan F Humplík and Dušan Lazár contributed equally to this work.
This work was supported by the grant No LO1204 (Sustainable development of research in the Centre of the Region Haná) from the National Program of Sustainability I, Ministry of Education, Youth and Sports, Czech Republic. JFH, DL, AH and LS drafted the manuscript. All authors read and approved the final manuscript.
https://plantmethods.biomedcentral.com/articles/10.1186/s13007-015-0072-8
This page covers what I think is the most important thing to understand about space: the basic structure of the Universe. It's a long page but it's not actually very difficult. If you can understand everything on this page, you'll understand more about the Universe than most people.

Before we start there are a few things you need to know:

- The Sun is a star, similar to all the other stars you see in the night sky. There are different types of stars, some smaller than the Sun and some much larger.
- Celestial objects (e.g. stars, planets, moons) tend to rotate on their axis like a spinning top. Earth does it, the Moon does it and even the Sun does it.
- Celestial objects tend to orbit (travel around) other celestial objects. Smaller objects orbit larger (more massive) objects. Many objects can orbit the same object. One object can orbit another object which in turn is orbiting an even more massive object (and so on).

Now, let's begin...

We live on a planet. It's called Earth. Like all planets it's roughly spherical in shape. Earth rotates on its axis, and we call the time it takes to complete one rotation a day.

Earth has one moon, called The Moon. The Moon orbits around the Earth once every 27.32 days. We approximate this orbital period into units of time we call months (month = moonth). When you have a celestial object with at least one other thing orbiting it, you can call that whole thing a system. The Earth and Moon together, along with any other objects orbiting Earth, can be called the Earth system (or sometimes the Earth/Moon system). The Earth system travels as one unit through space.

The Earth system orbits around the Sun. It takes 365.25 days to orbit the Sun and we call that a year. There are eight major planets (that we know of) and many smaller objects orbiting the Sun. The Sun and everything that orbits it make up the Solar System, i.e. the system that is centered around the Sun (solar system = sun system). This means that...

Our planet is part of the Solar System.

All the planets orbit anti-clockwise as seen from "above" the Solar System (i.e. above Earth's North Pole). All planets orbit in roughly the same plane, forming an overall disc shape. Mercury and Venus orbit alone with no moons. Earth has one moon, Mars has two and the outer gas giant planets all have lots of moons. There are also countless smaller objects orbiting the Sun including Pluto, other dwarf planets, asteroids, meteoroids and comets.

REMEMBER: The Solar System consists of the Sun and everything that orbits around the Sun.

We call it the Solar System as if it's the only one, but in fact it's not. Far out in deep space there are billions more "solar systems". Astronomers call them planetary systems, i.e. a set of planets orbiting a star. You can actually see other planetary systems (sort of), because when you look at stars in the night sky you're looking at other "suns". Most of those stars have their own families of planets orbiting them, just like our star has its family of eight planets. You can't see the planets in other systems because they're too small and faint, but you can see their suns. On a clear night you can see several thousand stars/planetary systems. Some are young, many are middle-aged and some are old. Most of the stars in this photo are suns of other planetary systems.

Planetary systems are basic building blocks of the Universe.
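If you like code, here's a purely illustrative sketch of the nesting described so far (a toy model, not an astronomy library; the numbers are just the approximate periods mentioned above):

```python
class Body:
    """A celestial object that smaller objects can orbit."""

    def __init__(self, name, orbital_period_days=None):
        self.name = name
        self.orbital_period_days = orbital_period_days  # period around its parent
        self.satellites = []  # smaller objects orbiting this one

    def add(self, body):
        self.satellites.append(body)
        return body

# The nesting so far: Moon -> Earth -> Sun (-> galactic centre, see below).
sun = Body("Sun")
earth = sun.add(Body("Earth", orbital_period_days=365.25))  # a year
moon = earth.add(Body("Moon", orbital_period_days=27.32))   # a "moonth"
```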
Each star and its family of planets is a unit that is born together, lives together and eventually dies together (when a star dies it's pretty much a death sentence for anything orbiting it). Almost everything you can see in the sky is far outside our Solar System, but in the big scheme of things it's still relatively local. Along with the Solar System, all the individual stars and planetary systems you can see are part of a bigger structure...

The Solar System is part of the Milky Way Galaxy.

The Solar System and all the neighbouring planetary systems are part of a galaxy we call the Milky Way. This galaxy is made up of an estimated 200 billion stars, most of which have planetary systems. Remember that you can only see a few thousand of these stars with your own eyes, so you're only looking at a very small part of the Milky Way. The Milky Way is shaped like a disc with a bright bulge in the middle surrounding the galactic centre. The bulge is caused by a higher density of stars near the middle of the disc. The images below show the galaxy from "above" and on its side. Our entire Solar System is a tiny dot at this scale.

Everything in the Milky Way, including our Solar System, is orbiting around the galactic centre. At the very centre of the galaxy lies a super-massive black hole. This black hole is to the galaxy what the Sun is to our Solar System. In other words, the Earth is orbiting a star that is orbiting a black hole¹. It takes our Solar System about 250 million years to orbit the galactic centre once. The last time we were at our current position in the galaxy, dinosaurs had not yet evolved.

REMEMBER: The Moon orbits Earth, which orbits the Sun, which orbits the galactic centre.

We're almost there but there's one last big jump in scale to make...

The Milky Way Galaxy is part of the Universe.

The Milky Way is one of at least 100 billion galaxies in the known Universe. Each galaxy is made of billions of stars and planetary systems. The Milky Way, a fairly typical galaxy, is surrounded by a number of dwarf galaxies that appear to orbit our galaxy. The closest similar galaxy to ours is Andromeda, about 2.5 million light-years away and also surrounded by dwarf galaxies. The galaxies shown below, along with a few others, make up a set of galaxies called the Local Group. At this scale galaxies can be moving in any direction relative to each other. In our case the Andromeda galaxy happens to be on a collision course with the Milky Way. Don't worry, it's not your problem.

As the name "Local Group" implies, these are just a few of the closest galaxies to us. The entire Universe is unimaginably larger and may indeed be infinitely big (we don't really know). The image below shows a much larger section of the Universe in which the Local Group is a tiny dot in the middle (part of the "Virgo Cluster"). Each tiny white dot represents a group of galaxies. At this point you start seeing the large-scale structure of the Universe, in which clusters and super-clusters of galaxies are arranged in a pattern of filament structures. This is pretty much how the Universe looks as far into the distance as we can see.

At this scale there is also a distinct pattern of motion. It appears that, with some local exceptions, all galaxies tend to be moving away from all other galaxies. In other words, the Universe appears to be expanding. This was the first clue that eventually led to the Big Bang theory, but that's another story. We'll finish with one last image... the visible Universe.
This isn't the entire Universe because we can't see that far, but we assume there's more of the same beyond. This raises questions about how big it all is, what it's expanding into, etc. Sorry, but those questions will have to wait for another tutorial (spoiler: we don't really know).

So that's it. Ideally you should be able to pass the following tests, but don't worry if you can't or if it doesn't come naturally. Most people find that it takes time to master this stuff, but most people also find that it's worth it.

Test yourself

- You should be able to answer the following question confidently: "What's the difference between the Solar System, the Milky Way galaxy and the Universe?"
- You should be able to imagine each of the following objects in their correct orbits relative to each other: moons, planets, stars, galactic centres.
- If you see a news story about astronomy, you should be able to visualize where in the Universe the story is set. Is it somewhere "local" in the Solar System, outside the Solar System but within our galaxy, or is it far across the Universe?

¹ Technically it's not the black hole we're orbiting, it's the barycenter of the Milky Way, i.e. the centre of the combined mass of the galaxy. However, for simplicity's sake it's okay to think of everything in the galaxy as orbiting the black hole.
https://www.spacecentre.nz/resources/learn/universe/