On May 27, 2016, the FDA will publish final rules that represent the first major update in 20 years to the Nutrition Facts Label and the related serving size requirements. The changes to the current nutrition labeling requirements incorporate the most recent nutrition and public health research and attempt to improve how nutrition information is presented to consumers. The areas for revision are: setting Daily Values for nutrition labeling; definitions of carbohydrates and dietary fiber; prominence of calories; and mandatory and optional labeling of individual nutrients. These changes reflect current dietary intakes and recommendations and will trigger changes in most food labels (estimated to be over 800,000 labels), impacting nutrition claims on packaging and the consumer's understanding of foods, using a new format that now includes a declaration of "Added Sugar." A clear understanding of the changes and how manufacturers' current labels are affected is a necessity. Accommodating the changes requires a comprehensive process: analyzing the label format and nutrition information against the new rules to help consumers understand the nutritional contribution of foods, and creating the best strategy to effectively communicate the changes to nutrition information on your labels. This webcast will provide an overview of the required changes and of the opportunities and challenges related to food product formulation and reformulation and to consumer messaging and education. Speakers: Rob C. Post, Ph.D., MEd., MSc.; Bill Layden; Julie Meyer, RD (moderator). Length: 1 hour. Contact Hours: 1.00. Date Recorded: June 3, 2016. You will have access to the content for one year from the date of purchase.
https://www6.ift.org/Ecommerce/Meetings/MeetingDetail?productId=27299476
Storage and transport: store and transport at a temperature between 0 and 7 ºC. Shelf life: 12 months after the date of production. Packaging: Primary packaging: Secondary packaging: micro-corrugated carton with identification tag. Sales location: retail shops equipped for chilled storage. Recommendations: the product should be soaked and cooked before being consumed. Labeling: name of the product, species, catch area and fishing gear; ingredients and net weight; date of minimum durability and batch identification; storage temperature, recommendations, method of use and nutritional declaration; veterinary control number, producer's name and address, and bar code.
http://pascoal.pt/Psc/dry-salted-cod/dry-salted-cod-bits/?lang=en
Fernández Franco, Alicia. Tampereen ammattikorkeakoulu, 2014. All rights reserved. The permanent address of the publication is http://urn.fi/URN:NBN:fi:amk-2014121219580 Abstract: The importance of packaging design as a communication channel and branding tool is growing in competitive markets for packaged products, and it is a crucial parameter to consider when designing a product or selecting a target audience. This thesis uses surveys, interviews, literature-based research and other methods to understand consumer behavior towards products and how packaging elements can affect buying decisions. Visual package elements play a major role in representing the product, especially in low-involvement purchases and when consumers are in a rush. It was during my exchange studies in Asia that I noticed that cultural values clearly affect visuals and, as a result, the market and ultimately the way consumers buy. The challenge for designers is to integrate packaging into an effective purchasing-decision model by understanding packaging elements as important marketing communication tools and by considering side parameters such as cultural differences, tendencies, or psychological values. This paper is meant to help package designers analyze the general cues in package design and to spot cultural differences and similarities in a practical way, supporting their work for global brands or in different cultural environments. Propositions for future research are presented, which can help develop a better understanding of consumer response to packaging elements.
https://www.theseus.fi/handle/10024/85014
China is set to enforce new food packaging regulations from 1 January 2013. The regulations will help consumers identify a product's nutritional structure and ingredients straight away and will provide standard nutritional information on food packaging labels. Under the regulations, packaging must carry labels stating the content of four nutritional elements, namely protein, fat, carbohydrate and sodium, as well as the calorie count. As the country faces a severe situation with chronic non-communicable diseases, the information on packages will help consumers avoid the intake of unhealthy elements, such as saturated fat and cholesterol, and increase their ingestion of dietary fiber. Food manufacturers are encouraged to follow the regulations before 2013, but the entire food industry must comply with the food packaging regulations after 1 January 2013. The regulations were initially devised in 2007 and will now lay down specific rules on labeling other nutritional content and its names and functions.
https://www.compelo.com/packaging/company-news/china-to-get-new-regulations-on-food-packaging-from-january-2013/
St-Hyacinthe cheesemaking facility, Canada. Intensity: mild. Milk: cow's milk. Texture: soft cheese. Usages: melting, platter. Description: Camembert Chevalier is a soft cheese which features a pronounced cream and hazelnut taste, making us want to indulge. Milk fat: 23%. Moisture: 55%. Formats: 450 g. Allergens: dairy. Treatment: pasteurized. Characteristics: non-organic, with lactose. Ingredients: pasteurized milk, modified milk ingredients, pasteurized cream, bacterial cultures, salt, calcium chloride, microbial enzyme. Nutritional values are provided for information purposes only and may vary according to the product format. Please refer to the product packaging for exact nutritional values. In the event of any discrepancy between the nutritional values indicated on this website and those indicated on the packaging, the latter shall prevail.
https://www.cheesebar.ca/cheese/chevalier-camembert
1) Size – For a product that has to be shipped to a distant distribution point, heavy or bulky packaging could cost too much in transportation expenses. 2) Labeling – It may be necessary to include specific information on a product's label if it is to be distributed in a particular way. One example is the labels of food products displayed for sale in retail outlets. These labels must contain the necessary information pertaining to their nutritional value and ingredients. 3) Durability – Many products undergo rough handling on their way from the production point to the consumer. If the distribution system cannot be trusted to protect the product, the packaging must be done in such a manner as to provide the necessary protection. 4) Opening – If the product has to be distributed in such a way that a potential customer can examine it before deciding whether to buy it, the packaging must be easy to open as well as to close again. Alternatively, if the packaging should only be opened by the purchaser (such as in the case of over-the-counter medication), the packaging would have to be designed so as to be tamper-proof. In many countries, packaging is completely integrated into government, industrial, institutional, business and personal use.
https://www.cleverism.com/lexicon/packaging/
Japan’s Consumer Affairs Agency (CAA) will add walnuts to the list of allergens that product manufacturers and importers must include on the label of packaged products containing walnuts. Currently, CAA strongly recommends including walnuts on the label but does not require their inclusion. CAA will hold a public comment period prior to making this modification to the Food Labeling Standards, but it has not yet announced dates for the comment period. Exports of U.S. walnuts to Japan have grown from 10,604 metric tons to 21,944 metric tons over the last decade. General Information: Japan’s Consumer Affairs Agency (CAA) announced that it will add walnuts to the list of allergens that food manufacturers and importers must include on product packaging labels. CAA has not yet scheduled a date for the public comment period. Japan’s Food Labeling Standard currently includes walnuts on the list of allergens that it highly recommends food manufacturers and importers include on packaging. Since February 2021, CAA has held three Advisor Meetings on Food Allergy Labeling to determine whether walnuts should be added to the list of allergens with mandated labeling requirements. The advisory panel found that walnuts should be added to the list of allergens required for labeling, based on the results of a new longitudinal study on food allergies in Japan. On June 6, during the 67th Food Labeling Subcommittee meeting, the Cabinet Office’s Consumer Committee reviewed this issue. Food Allergy Labeling: The Ministry of Health, Labour and Welfare (MHLW) established food allergen labeling standards in 2001, before oversight of the Food Labeling Standard shifted to CAA, a division of the Prime Minister’s Cabinet Office, in 2009. Cabinet Order No. 10 currently requires allergy labeling for seven “required ingredients” and highly recommends allergy labeling for 21 “recommended ingredients”; see Table 1.
If CAA classifies an allergen as a required ingredient, and it is present in a food product, then the product manufacturer or importer must include the name of the individual allergen on the packaging. If a required ingredient is included as part of a processed ingredient within the final product, the label must also include the name of the allergen, for example, “cake mix (including wheat, egg, milk).” For recommended ingredients, CAA highly recommends that the label include the name of the allergen within the ingredient list. The labeling standard allows labels to list recommended ingredients individually or collectively. The Food Labeling Standard does not require that labels indicate which ingredients are allergens, but product manufacturers may choose to do so. See Table 2 for examples of approved labeling of allergens. Please see JA7078 for more details on the Food Labeling Standard. CAA periodically revises the allergy labeling requirements. To do so, CAA relies on the “Reports on Food Labeling Related to Food Allergies Investigation Project,” which it publishes approximately every three years. This series of longitudinal surveys is a compilation of food allergy case reports. Japanese importers and manufacturers bear sole responsibility for the development of labels compliant with Japanese food labeling regulations, including allergy labeling. There is no legal obligation for U.S. exporters to affix Japanese labels to their products prior to export. Please see Food and Agricultural Import Regulations and Standards for the current Japanese food labeling requirements.
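The compound-ingredient pattern described above ("cake mix (including wheat, egg, milk)") is mechanical enough to sketch in code. The Python helper below is purely illustrative, not an official CAA tool; the function name and the example ingredient names are assumptions.

```python
# Illustrative helper (not an official CAA tool): format a processed
# ingredient together with the required allergens it contains, in the
# "name (including a, b, c)" style shown in the labeling standard.

def compound_ingredient_label(name: str, allergens: list[str]) -> str:
    """Return the ingredient name, appending its allergens if any."""
    if not allergens:
        return name
    return f"{name} (including {', '.join(allergens)})"

print(compound_ingredient_label("cake mix", ["wheat", "egg", "milk"]))
# cake mix (including wheat, egg, milk)
```

A label generator built this way degrades gracefully: ingredients with no required allergens are passed through unchanged.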
https://pacificnutproducer.com/2022/06/17/japan-to-require-allergy-labeling-for-walnuts/
Pasteurized milk, modified milk ingredients, salt, cumin seeds, calcium chloride, microbial enzyme, bacterial culture (potassium sorbate and natamycin in coating). Nutritional value accuracy Nutritional values provided here are for information purposes only and may differ from those found on products in store. Please refer to product packaging for nutritional values. In the event of a discrepancy between the nutritional values on this site and those on the product itself, the latter will prevail.
https://fromagesbergeron.com/en/cheeses/products-fine-cheeses-coureur-des-bois/
Legal Requirements for providing food information to consumers Other than in Northern Ireland (NI), any references to European Union (EU) Regulations in this guidance should be read as meaning retained EU law. You can access retained EU law via the HM Government EU Exit web archive. This should be read alongside the relevant EU Exit legislation that was made to ensure retained EU law operates correctly in a United Kingdom (UK) context. EU Exit legislation can be accessed on legislation.gov.uk. In NI, EU food law will continue to apply, as listed in the Northern Ireland Protocol. Retained EU law will not apply in these circumstances. This page highlights the requirements of Regulation No. 1169/2011, the Food Information to Consumers (FIC) Regulation, and associated legal standards for labelling and composition of food products such as bottled water, milk, fish and meat. The Food Information to Consumers (FIC) Regulation 1169/2011 on the provision of food information to consumers brings together EU rules on general food labelling and nutrition labelling into one piece of legislation. The retained version of Regulation 1169/2011 applies to food businesses in Great Britain (GB), whereas EU food law, including Regulation (EU) No 1169/2011, applies in NI. Labelling of prepacked food All prepacked food requires a food label that displays certain mandatory information. All food is subject to general food labelling requirements and any labelling provided must be accurate and not misleading.
Certain foods are controlled by product-specific regulations and they include: - bread and flour - cocoa and chocolate products - soluble coffee - milk products - honey - fruit juices and nectars - infant formula - jams and marmalade - meat products - sausages, burgers and pies - fish - natural mineral waters - spreadable fats - sugars - irradiated food - foods containing genetic modification (GM) What must be included The following information must appear by law on food labels and packaging: Name of the food The name of the food must be clearly stated on the packaging and not be misleading. If there is a name prescribed in law, this must be used. In the absence of a legal name, a customary name can be used. This might be a name that has become commonly understood by consumers and established over time, such as ‘BLT’ for a bacon, lettuce and tomato sandwich. If there is no customary name or it is not used, a descriptive name of the food must be provided. This must be sufficiently descriptive to inform the consumer of the true nature of the food and to enable it to be distinguished from products with which it might otherwise be confused. Most products will fall into this category and require a descriptive name. If the food has been processed in some way, the process must be included in the title, for example ‘smoked bacon’, ‘salted peanuts’ or ‘dried fruit’. A processed food is any food that has been altered in some way during preparation. List of ingredients If your food product has two or more ingredients (including water and additives), you must list them all under the heading ‘Ingredients’ or a suitable heading which includes the word ‘ingredients’. Ingredients must be listed in descending order of weight, with the main ingredient first, according to the amounts that were used to make the food.
Some foods are exempt from the need to display an ingredient list, for example fresh fruit and vegetables, carbonated water and foods consisting of a single ingredient. Allergen information Where a food product contains as ingredients any of the 14 allergens required to be declared by law, these allergens must be listed and emphasised within the ingredients list. You must emphasise allergens on the label using a different font, style, background colour or by bolding the text. This enables consumers to understand more about the ingredients in packaged foods and is helpful for people with food allergies and intolerances who need to avoid certain foods. We provide free online allergen training for businesses. Quantitative declaration of ingredients (QUID) The QUID tells a consumer the percentage of particular ingredients contained in a food product. This is required where the ingredient or category of ingredients concerned: (a) appears in the name of the food or is usually associated with that name by the consumer; (b) is emphasised on the labelling in words, pictures or graphics; or (c) is essential to characterise a food and to distinguish it from products with which it might be confused because of its name or appearance. The indication of quantity of an ingredient or category of ingredients must: - be displayed as a percentage, which corresponds to the quantity of the ingredient or ingredients at the time of its/their use; and - appear either in or immediately next to the name of the food or in the list of ingredients in connection with the ingredient or category of ingredients in question. Net quantity All packaged foods above 5g or 5ml must show the net quantity on the label to comply with the Food Information Regulations. Foods that are packaged in liquid (or an ice glaze) must show the drained net weight.
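Two of the rules above are mechanical: ingredients are declared in descending order of weight, and a QUID percentage corresponds to the ingredient's weight at the time of use relative to the whole recipe. The Python sketch below is a minimal illustration under assumed data; the recipe, the weights and the upper-casing (standing in for typographic emphasis of allergens) are all hypothetical.

```python
# Minimal illustration of two rules from the FIC guidance, on an assumed
# recipe: declare ingredients in descending order of weight, and compute
# a QUID percentage as weight-at-time-of-use over total recipe weight.
# Upper-casing stands in for the required typographic allergen emphasis.

def quid_percentage(ingredient_weight_g: float, total_recipe_g: float) -> float:
    """Percentage of one ingredient relative to the whole recipe."""
    return round(100.0 * ingredient_weight_g / total_recipe_g, 1)

def ingredient_list(ingredients: dict[str, float], allergens: set[str]) -> str:
    """Build a declaration in descending order of weight, emphasising allergens."""
    ordered = sorted(ingredients.items(), key=lambda kv: kv[1], reverse=True)
    names = [name.upper() if name in allergens else name for name, _ in ordered]
    return "Ingredients: " + ", ".join(names)

recipe = {"chicken": 250.0, "wheat flour": 120.0, "milk": 80.0, "salt": 5.0}
print(ingredient_list(recipe, allergens={"wheat flour", "milk"}))
# Ingredients: chicken, WHEAT FLOUR, MILK, salt

# "Chicken" appears in the hypothetical product name, so QUID applies:
print(f"chicken ({quid_percentage(250.0, sum(recipe.values()))}%)")
# chicken (54.9%)
```

The emphasis mechanism is a placeholder; on a real label the regulation allows a different font, style, background colour or bold text.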
The net quantity declaration is not mandatory in the case of foods: (a) which are subject to considerable losses in their volume or mass and which are sold by number or weighed in the presence of the purchaser; (b) the net quantity of which is less than 5 g or 5 ml, unless these are herbs or spices; (c) normally sold by number, provided that the number of items can clearly be seen and easily counted from the outside or, if not, is indicated on the labelling. Storage conditions and date labelling Food labels must be marked with either a ‘best before’ or ‘use by’ date so that it is clear how long foods can be kept and how to store them. Further information can be found in the guide on date marking on the Waste & Resources Action Plan (WRAP) website. Name and address of manufacturer Food businesses must include a business name and address on the packaging or food label. This must be either: - the name of the business whose name the food is marketed under; or - the address of the business that has imported the food Food products sold in NI must include a NI or EU address for the food business. If the food business is not in NI or EU, they must include the address of the importer, based in NI or the EU. Food businesses can continue to use an EU, GB or NI address for the FBO on food products sold in GB until 30 September 2022. From 1 October 2022, food products sold in GB must include a UK, Channel Islands or the Isle of Man address for the food business. If the food business is not in GB, they must include the address of the importer, based in the UK, Channel Islands or the Isle of Man. The address provided needs to be a physical address where your business can be contacted by mail. You can’t use an e-mail address or phone number. Providing an address gives consumers the opportunity to contact the manufacturer if they have a complaint about the product or if they want to know more about it. 
Country of origin or place of provenance In accordance with the FIC Regulations, the indication of the country of origin or place of provenance of a food shall be mandatory where failure to indicate this might mislead the consumer as to the true country of origin or place of provenance of the food. Consumers might be misled without this information, for example a Melton Mowbray Pork pie which was made in Italy. Under the FIC Regulations there are specific origin rules which must be adhered to, including the Country of Origin for Primary Ingredients and Country of Origin for Certain Meats. Find out when you must label your meat, fish or seafood product with its country of origin. In NI, EU Country of Origin rules, as applied by the Northern Ireland Protocol (NIP), are applicable for food placed on the NI market. Where EU Law requires an indication of a Member State in respect to country of origin, food businesses must ensure that where food has originated in Northern Ireland, such indications should be in the form “UK(NI)” or “United Kingdom (Northern Ireland)". Preparation instructions Instructions on how to prepare and cook the food appropriately, including for heating in a microwave oven, must be given on the label if they are needed. If the food must be heated, the temperature of the oven and the cooking time will usually be stated. Nutritional declaration The mandatory nutrition declaration must be clearly presented in a specific format and give values for energy and six nutrients. The values must be given in the units (including both kJ and kcal for energy) per 100g/ml, and the nutrition declaration must meet the minimum font size requirements. Further information is available at Nutrition Labelling. Additional labelling requirements There are additional labelling requirements for certain food and drink products. 
You must tell the consumer if your products contain: - sweeteners or sugars - aspartame and colourings - liquorice - caffeine - polyols Food labelling training We provide free online food labelling training for businesses. How to display mandatory information on packaging and labels A minimum font size applies to mandatory information, which you must print using a font with a minimum x-height of 1.2mm. If the largest surface area of the packaging is less than 80cm squared, you can use a minimum x-height of 0.9mm. Mandatory details must be indicated with words and numbers. They can also be shown using pictograms and symbols. Mandatory food information must: - be easy to see - be clearly legible and difficult to remove, where appropriate - not be in any way hidden, obscured, detracted from or interrupted by any other written or pictorial matter - not require consumers to open the product to access the information Food labelling - non-prepacked foods Non-prepacked food is any food presented to the final consumer or mass caterer that does not fall within the definition of ‘prepacked food’. Non-prepacked foods include: - foods sold loose in retail outlets - foods which are not sold prepacked, such as meals served in a restaurant and food from a takeaway - prepacked for direct sale (PPDS) food, such as sandwiches placed into packaging by the food business and sold from the same premises - food packed on the sales premises at the consumer’s request, such as a sandwich prepared in front of the consumer. Labelling requirements For non-prepacked food, the name of the food, the presence of any of the 14 allergens, and a QUID declaration (for products containing meat) must be provided to consumers. This can be done: (a) on a label attached to the food, or (b) on a notice, ticket or label that is readily discernible by an intending purchaser at the place where the intending purchaser chooses that food.
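The minimum font-size rule above reduces to a simple threshold. The sketch below encodes it in Python; the function names are assumptions, while the 1.2mm and 0.9mm x-heights and the 80cm² cut-off come from the text.

```python
# Sketch of the FIC minimum font-size rule described above: mandatory
# information needs an x-height of at least 1.2 mm, relaxed to 0.9 mm
# when the largest surface of the pack is under 80 cm². Function names
# are illustrative, not part of the regulation.

def minimum_x_height_mm(largest_surface_cm2: float) -> float:
    """Return the minimum x-height (mm) for mandatory label text."""
    return 0.9 if largest_surface_cm2 < 80.0 else 1.2

def is_font_compliant(x_height_mm: float, largest_surface_cm2: float) -> bool:
    """Check a chosen x-height against the applicable minimum."""
    return x_height_mm >= minimum_x_height_mm(largest_surface_cm2)

print(minimum_x_height_mm(120.0))        # 1.2
print(is_font_compliant(1.0, 50.0))      # True  (small pack: 0.9 mm minimum)
print(is_font_compliant(1.0, 100.0))     # False (standard pack: 1.2 mm minimum)
```

A pre-press check like this is only a first filter; legibility requirements (contrast, not being obscured) still need human review.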
In the case of irradiated food, one of the following statements must appear near the name of the food: • 'irradiated' or • 'treated with ionising radiation' Currently, food businesses are not required by law to provide a full ingredients list. The requirement is to provide information about the use of allergenic ingredients in a food. If a food business chooses not to provide this information upfront in a written format, (for example allergen information on the menu), they must use clear signposting to direct the consumer to where this information can be found, such as asking a member of staff. In such situations, a statement must be included on food menus, chalkboards, food order tickets or food labels. From 1 October 2021 foods which are pre-packed for direct sale (PPDS) will require the name of the food and a full ingredients list with the allergens emphasised within it. For further information on PPDS visit Introduction to allergen labelling changes. Packaging wrappers (Vacuum packing) If you vacuum pack (VP) or modified atmosphere pack (MAP) food as part of your business then you must: - use material that will not be a source of contamination for wrapping and packaging - store wrapping materials so they are not at risk of contamination - wrap and package the food in a way that avoids contamination of products - make sure that any containers are clean and not damaged, particularly if you use cans or glass jars - be able to keep the wrapping or packaging material clean Food authenticity Food authenticity is when food matches its description. Labelling is regulated to protect consumers who should have the correct information to make confident and informed food choices based on diet, allergies, personal taste or cost. Mislabelled food deceives the consumer and creates unfair competition with manufacturers or traders. Everyone has the right to know that the food they have bought matches the description given on the label. 
Part of our role is to help prevent mislabelling or misleading descriptions of foods. The description of food refers to the information given about its: - name - ingredients - origin - processing If you think that a food product is not authentic, please see information on food crime. Falsely describing, advertising or presenting food is an offence and there are many laws that help protect consumers against dishonest labelling and misleading descriptions.
https://www.food.gov.uk/business-guidance/packaging-and-labelling?navref=quicklink
GNC General Nutrition Center GNC sets the standard in the nutritional supplement industry by demanding truth in labeling, ingredient safety and product potency, all while remaining on the cutting-edge of nutritional science. As our company has grown over the years, so has our commitment to Living Well. In fact, GNC is the world’s largest company of its kind devoted exclusively to helping its customers improve the quality of their lives. From scientific research and new product discovery to the manufacturing and packaging processes, GNC takes pride in our rigorous approach to ensuring quality. Our commitment to quality extends to our interactions with you in our stores and after you buy our products. GNC, 60 El Camino, Fresno, CA 93720, 559.437.3994
http://www.shopriverpark.com/gnc-2/
Principles of quality management as they pertain to manufacturing, processing, and/or testing of foods, with a major emphasis on food regulations, food plant sanitation and Hazard Analysis and Critical Control Points (HACCP). Food quality assessment methods, good manufacturing practices and statistical process controls are discussed. Prerequisite(s): FSM1165 or approved sanitation certificate, junior status. Offered at Charlotte 3 Semester Credits FSC3020 Food Chemistry Food chemistry applies basic scientific principles to food systems and practical applications. Chemical/biochemical reactions of carbohydrates, lipids, proteins and other constituents in fresh and processed foods are discussed with respect to food quality. Reaction conditions and processes that affect color, flavor, texture, nutrition and safety of food are emphasized. Laboratory experiments reinforce class discussions. These include activation and control of non-enzymatic browning and food emulsions. This course is conducted within both a lecture and laboratory environment. Prerequisite(s): CHM2040 (or concurrent), junior status. Offered at Charlotte 3 Semester Credits FSC3040 Food Ingredients & Formulations This course applies food science principles to ingredient substitutions in food products. Students explore practical applications of various carbohydrate, lipid and protein food ingredients and their impact in food systems. Legal and regulatory restrictions with respect to ingredients, package materials, processes and labeling statements are introduced. Laboratory procedures for standard formulations and instrumental evaluation, with an emphasis on problem solving and critical thinking, are studied in a laboratory setting. Prerequisite(s): Junior status. Offered at Charlotte 3 Semester Credits FSC3050 Fermentation Science & Functional Foods This course explores various fermented food systems with particular emphasis on their development and continued manufacturing.
Additionally, this course covers a range of functional foods and food components, their health conferring benefits, mechanisms of actions, and possible applications in the food industry. Prerequisite(s): Junior status. Offered at Charlotte 3 Semester Credits FSC3060 Principles of Food Microbiology This course introduces students to various aspects of food microbiology, organisms associated naturally with foods and those responsible for spoilage. The role and significance of food microorganisms including food pathogens are discussed. Additionally, students investigate various sources of contamination and the influence of food formulation and processing on microbial growth. Control techniques and methodology to detect and enumerate microorganisms in food products are studied. Prerequisite(s): BIO2201, BIO2206, Corequisite: FSC3065, junior status. Offered at Charlotte 3 Semester Credits FSC3065 Principles of Food Microbiology Laboratory This course is the laboratory companion for Principles of Food Microbiology. The laboratory focuses on practical application of microbiological principles to food and food ingredients. Students develop skills in using commonly employed microbiological techniques in research laboratories and quality control. Emphasis is on investigating food contamination, the techniques and methods to detect and enumerate microorganisms, and evaluating the efficacy of control efforts. Prerequisite(s): BIO2201, BIO2206, Corequisite: FSC3060, junior status. Offered at Charlotte 1.5 Semester Credits FSC4010 Sensory Analysis Application of sensory science principles and practices to food and beverage systems including an understanding of consumer sensory techniques and the use of various instrumental testing methods. Prerequisite(s): FSC3020, MATH2001, junior status. 
Offered at Charlotte 3 Semester Credits FSC4020 Principles of Food Processing Principles and practices of food processing, including extrusion, canning, freezing, dehydration, aseptic packaging, and fresh ready-to-eat and specialty food manufacturing. Understanding of various preparation, processing and packaging techniques, including the use of additives. The course exposes students to various manufacturing equipment and explores raw material control, disposal of waste products and the use of rework in a manufacturing setting. Prerequisite(s): FSC3020, FSC3040, senior status. Offered at Charlotte 3 Semester Credits FSC4040 Product Research & Development This senior-level capstone class builds on and applies knowledge learned in previous food science & technology major courses. This laboratory-based class exposes students to the product development process from concept through product optimization. Students learn the importance of teamwork in an R&D laboratory classroom. They will develop a consumer food product which meets predefined nutritional, performance, regulatory and shelf-life expectations. ESHA Genesis R&D software will be used to enter formulations and design nutritional and ingredient labels. Prerequisite(s): FSC3020, FSC4020, senior status.
https://catalog.jwu.edu/courses/fsc/
Job Description: - Product Planning, Development and Management - Assist in the formulation and development of the product strategy, roadmap and development calendar by providing support in eliciting, analyzing and validating company, channel & customer requirements. - Document and deliver market & product functional requirement specifications and other related documents as required to manage scope and for use in the product development process, through coordination and liaison with the necessary functions. - Analyze necessary market research studies for assessment and superior understanding of consumer usage & attitudes and industry & product performance trends, to define target segments & needs and evaluate their impact on product strategy & plans. - Conduct ongoing analysis of existing product suites and associated target markets/competitors to provide in-depth industry, market and competitive analysis, assisting in the identification and evaluation of new product opportunities. - Responsible for reporting and assessing a log of proposed enhancements, major complaints and identified gaps for existing products to ensure continuous product portfolio improvements. - Product Communication and Support - Assist in the development of packaging strategies and implement ongoing innovative product marketing programs by conducting market research, competitive analysis and business planning, supported by ongoing visits to and feedback from customers and non-customers, to ensure that the product positioning and key benefits are clearly understood. - Work closely with all departments as necessary (sales, training, customer service) to assist in and drive the development of sales collateral & advice tools, customer presentations, sales training materials and other product marketing collateral that optimally positions and communicates the strengths of our products.
https://www.iimjobs.com/j/product-manager-insuretech-bfsi-3-8-yrs-881813.html?ref=kp
As more countries begin to find their new normal after the temporary shutdowns brought on by the global coronavirus pandemic, the U.S. Food and Drug Administration (FDA) announced a temporary flexibility policy regarding certain labeling requirements for foods for humans during the COVID-19 pandemic. The guidance document provides additional temporary flexibility in food labeling requirements to both manufacturers and vending machine operators, with the goal of providing regulatory flexibility, where appropriate, to help minimize the impact of supply chain disruptions on product availability during the COVID-19 pandemic. Entitled “Temporary Policy Regarding Certain Food Labeling Requirements During the COVID-19 Public Health Emergency: Minor Formulation Changes and Vending Machines,” this guidance is one of several the FDA issued to provide temporary flexibility and help the food supply chain meet consumer demand while supplies are short. What does the new policy entail? According to the FDA, it is first “providing flexibility for manufacturers to make minor formulation changes in certain circumstances without making conforming label changes,” such as changing a product ingredient without updating the ingredient list on the packaged food when such a minor change is made. 
According to its announcement on the new policy, the FDA states that for purposes of this guidance, minor formulation changes should be consistent with the general factors listed below, as appropriate: - Safety: the ingredient being substituted for the labeled ingredient does not cause any adverse health effect (including food allergens, gluten, sulfites, or other foods known to cause sensitivities in some people, for example, glutamates); - Quantity: generally present at 2 percent or less by weight of the finished food; - Prominence: the ingredient being omitted or substituted for the labeled ingredient is not a major ingredient in the product; - Characterizing Ingredient: the ingredient being omitted or substituted for the labeled ingredient is not a characterizing ingredient; for example, omitting raisins, a characterizing ingredient in raisin bread; - Claims: an omission or substitution of the ingredient does not affect any voluntary nutrient content or health claims on the label; and - Nutrition/Function: an omission or substitution of the labeled ingredient does not have a significant impact on the finished product, including nutritional differences or functionality. Specific examples are contained in the guidance. For example, an ingredient could be temporarily reduced or omitted (e.g. green peppers) from a vegetable quiche that contains small amounts of multiple vegetables without a change in the ingredient list on the label. Substitution of certain oils may temporarily be appropriate without a label change, such as canola oil for sunflower oil, because they contain similar types of fats. Another formulation change for which the FDA is providing temporary flexibility is the substitution of bleached flour. Some flours require the word “bleached” wherever the name of the food appears on the label. 
Given that there is a shortage of the bleaching agent used to bleach flour, creating supply chain disruptions for this specific ingredient, the FDA is providing temporary flexibility for the substitution of unbleached flour for bleached flour without a corresponding label change. Second, the FDA is providing temporary flexibility to the vending machine industry and will not object if covered operators do not meet vending machine labeling requirements to provide calorie information for foods sold in the vending machines at this time. Other temporary flexibilities that FDA has issued address nutrition labeling on food packages, menu labeling, packaging and labeling of shell eggs and the distribution of eggs to retail locations.
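The general factors listed above read like a screening checklist, so they can be sketched as a simple rule check. This is an illustrative sketch only, not an official FDA tool; the function name and dictionary fields are assumptions made for the example.

```python
# Hypothetical sketch: screening a proposed minor formulation change against
# the general factors in FDA's temporary labeling-flexibility guidance
# (safety, quantity, prominence, characterizing ingredient, claims,
# nutrition/function). Field names are illustrative, not from any FDA tool.

def qualifies_for_label_flexibility(change):
    """Return (ok, reasons): ok is True when no general factor is violated."""
    reasons = []
    if change["introduces_allergen_or_sensitivity"]:
        reasons.append("Safety: substitute may cause an adverse health effect")
    if change["percent_by_weight"] > 2.0:
        reasons.append("Quantity: ingredient exceeds 2 percent by weight")
    if change["is_major_ingredient"]:
        reasons.append("Prominence: ingredient is a major ingredient")
    if change["is_characterizing"]:
        reasons.append("Characterizing: ingredient characterizes the food")
    if change["affects_claims"]:
        reasons.append("Claims: change affects nutrient content or health claims")
    if change["significant_nutrition_or_function_impact"]:
        reasons.append("Nutrition/Function: significant impact on finished product")
    return (len(reasons) == 0, reasons)

# Example from the guidance: omitting green peppers, a minor vegetable in a
# mixed-vegetable quiche (all values below are assumed for illustration).
ok, why = qualifies_for_label_flexibility({
    "introduces_allergen_or_sensitivity": False,
    "percent_by_weight": 1.5,
    "is_major_ingredient": False,
    "is_characterizing": False,
    "affects_claims": False,
    "significant_nutrition_or_function_impact": False,
})
print(ok)  # True
```

A change that fails any factor, such as an ingredient above 2 percent by weight, would return False with the violated factors listed, mirroring how the guidance expects each factor to be considered "as appropriate."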
https://www.packagingtechtoday.com/materials/labels/a-label-update-or-not/
- NEWS (Jan 05, 2021) Japan Updates Food Composition Standard Tables: On December 25, 2020, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan released the amendment to the Standard Tables of Food Composition in Japan (Eighth Revised Version), which mainly subdivided the carbohydrates, changed the energy calculation method, and enriched the information on prepared food. In addition, the number of listed foods has increased by 287, now amounting to 2,478.
- NEWS (Dec 17, 2020) Recycling Mark of Food Package/Containers in Japan: Food labeling is required for containers and packaging; similarly, when using containers and packaging of an applicable material, it is necessary to display the "identification mark," the so-called recycling mark.
- ANALYSIS (Aug 26, 2020) Labeling of Food Containing Designated Ingredients etc. Enforced by Japan on June 1, 2020: "Food containing designated ingredients, etc." refers to foods that contain ingredients designated by the Minister of Health, Labour and Welfare as ingredients requiring special precautions.
- NEWS (Jul 28, 2020) Japan Revises Food Labeling Standards: The wording "artificial" and "synthetic" is removed from the indication names of sweeteners, colorants, preservatives and flavoring & fragrance. Organic livestock products are officially included as special raw material.
- NEWS (May 13, 2020) Japan Permits Sale of Label-less Products: A new policy in Japan allows certain products to be sold without an over-label. Label-less products help reduce waste, environmental load, and labor costs and are becoming popular in Japan.
- NEWS (Apr 17, 2020) Japan Relaxes Food Labeling Regulatory Requirements during COVID-19 Pandemic: COVID-19 has severely impacted the global food supply chain. On Apr. 10, 2020, Japan's CAA and MAFF issued statements outlining plans to reduce the stringency of enforcement of food labeling rules.
- ANALYSIS (Apr 14, 2020) Japan Publishes the Guidelines' Draft for the Ex-Post Review on Foods with Function Claims: The Consumer Affairs Agency (CAA) published "The guidelines' draft to ensure transparency of the ex-post regulations (ex-post review) of Foods with Function Claims based on related legislation on food labeling, etc." and started receiving public comments on January 16, 2020.
- NEWS (Apr 10, 2020) Japanese New Food Labeling System Effective Since April 1, 2020: In accordance with the "partial revision of the Food Labeling Standard (2015)," Japan released a new labeling system. The new system has a five-year transition period and came into effect on April 1, 2020. The revision mainly involves nutrition labeling and allergen labeling.
- ANALYSIS (Mar 05, 2020) The Labeling Requirements of "-Free" and "Non-Use" for Food Additives in Japan: The 6th "Discussion about the Food Additives Labeling System" was held at the Consumer Affairs Agency (CAA) on November 1, 2019, which mainly discussed the use of "-free" and "non-use" for food additive labeling.
- NEWS (Nov 25, 2019) The Submission Deadline of New Manufacturer Identification Codes for This Year is Approaching in Japan: The Consumer Affairs Agency (CAA) called for manufacturers and related business operators to submit a notification in the case of needing Manufacturer Identification Codes under the new system no later than Friday, December 27, 2019.
https://food.chemlinked.com/news/?tag=113&category=&country=65
Which federal agencies regulate food labeling? The U.S. Food and Drug Administration (FDA), operating under the Federal Food, Drug and Cosmetic Act, regulates the labeling of all foods other than meat and poultry. Meat and poultry products are regulated by the U.S. Department of Agriculture (USDA) under the Federal Meat Inspection Act. What is "food labeling"? Food labels for most food products sold in the United States must carry the product name, the manufacturer's name and address, the amount of product in the package and the product ingredients. The ingredients are listed in descending order, based on their weight. Under the current laws, fresh fruits, vegetables and meat are exempt from these labeling requirements. In 1973 the Food and Drug Administration (FDA) established "nutrition labeling," guidelines for labeling the nutrient and caloric content of food products. Nutrition labeling is mandatory only for those foods that have nutrients added or make a nutritional claim. Manufacturers are encouraged, but not required, to provide nutrition labeling of other food products. The current FDA nutrition labeling regulations require labels to list the percentage of the U.S. Recommended Daily Allowances (U.S. RDA). These standards are based on the 1968 edition of the Recommended Dietary Allowances (RDA), but the RDA and the U.S. RDA are not the same! For each nutrient, the U.S. RDA is the highest RDA for any of the RDA age and sex groups. The U.S. RDA usually apply to people four years of age and older. FDA nutrition labels must show the serving size; servings per container; calories per serving; grams of protein, carbohydrate and fat per serving; and the percent of the U.S. RDA for protein, five vitamins and two minerals. There is less nutrition information on labels regulated by the USDA. USDA labels list only the serving size; servings per container; calories per serving; and grams of protein, carbohydrate and fat per serving. 
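The descending-order-by-weight rule for ingredient statements is mechanical enough to illustrate with a short sketch. The function name and the recipe weights below are made up for the example; they are not from any FDA or USDA tool.

```python
# Illustrative sketch of the rule that a label's ingredient statement lists
# ingredients in descending order of weight. Recipe data is hypothetical.

def ingredient_statement(ingredients):
    """Render an ingredient statement, heaviest ingredient first."""
    ordered = sorted(ingredients, key=lambda item: item[1], reverse=True)
    return ", ".join(name for name, _weight in ordered)

# (ingredient name, weight in grams per batch) pairs, in recipe order
recipe = [("sugar", 20.0), ("enriched flour", 55.0), ("palm oil", 15.0), ("salt", 1.0)]
print(ingredient_statement(recipe))
# enriched flour, sugar, palm oil, salt
```

The point of the rule is visible in the output: the recipe's entry order does not matter, only each ingredient's share by weight.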
Why is food labeling important? Food labeling provides basic information about the ingredients in, and the nutritional value of, food products so that consumers can make informed choices in the marketplace. What are the trends in food labeling? In the 1990 Food Marketing Institute (FMI) survey, over 70% of food shoppers identified taste, nutrition and product safety as very important factors in making food purchases. In the same survey, 36% of shoppers reported they always read the ingredient and nutrition labels, and another 45% said they sometimes read nutrition labels. The growing importance of the role of nutrition in promoting health and preventing disease, and consumer demand for clearer and easier-to-understand information, led to the passage of the Nutrition Labeling and Education Act (NLEA) of 1990. The federal regulations, detailing the format and content of food labels, are now in effect.
http://www.ift.org/Knowledge-Center/Learn-About-Food-Science/Become-a-Food-Scientist/Introduction-to-the-Food-Industry/Lesson-3/Food-Labeling.aspx
Reading an ingredients label can get confusing quickly. Some brands describe the same ingredient using different names, such as its botanical name. Label formats, dosage levels and the level of detail provided per ingredient are additional variables. This can make a like-for-like comparison between products challenging. At vital.ly we have started to standardise some things, like naming conventions, to help make such comparisons easier. This post is intended to clarify some of the details around the legislative frameworks governing ingredient labels, and how these frameworks are applied to the labels on your favourite products. Australian regulations - an overview In Australia, food and medicines are regulated under separate legislative frameworks, corresponding with the intended use and potential risks those products pose to public health and safety. Within the regulatory frameworks, there are different requirements for foods and medicines in relation to their manufacturing, labeling, advertising, and the evidence required to substantiate any claims made for the products (1). Let's walk through some of the basics, including:
- Practitioner only products (and what makes them different to retail supplements)
- AUST L and AUST R numbers
- Ingredients list vs nutrition panels
- Herbal ingredients listings
- Equivalent values in ingredient listing
Products that are classed as therapeutic goods (which include prescription and complementary medicines) are regulated by the Therapeutic Goods Administration (TGA) at a federal level, while foods (including many that make health claims) are predominantly regulated by state and territory regulatory bodies (8). State and territory food agencies regulate food in accordance with both state/territory and national food legislation, including food standards set by the statutory body Food Standards Australia New Zealand (FSANZ). 
A product is classified as a medicine if it is represented to be a therapeutic good, likely to be taken for therapeutic use, or declared to be a therapeutic good. The TGA regulates medicines at the federal level (9). Tablets, capsules, and pills generally provide a concentrated version of an ingredient compared to forms traditionally associated with foods, such as powders or bars. The manufacturing requirements for foods are not as stringent as for therapeutic goods, which need to adhere to good manufacturing principles (GMP) (1).

Table 1. Key differences between foods and medicines

Regulatory body
- Food: FSANZ is the Commonwealth statutory authority responsible for developing the food standards that make up the Australia New Zealand Food Standards Code, which is enforced by the states and territories regulating the sale and supply of food. The Code lists safety, composition and labeling requirements for food labels and product advertising, and regulates the use of ingredients, processing aids, colourings, additives, vitamins, and minerals (2).
- Medicine: The TGA regulates medicines under the Therapeutic Goods Act 1989. These include medicinal products containing herbs, vitamins, minerals, nutritional supplements, homoeopathic and certain aromatherapy preparations, which are referred to as 'Complementary Medicines' (4). A complementary medicine is a therapeutic good consisting wholly or principally of one or more designated active ingredients, each of which has a clearly established identity and a traditional use (5). Australia has a three-tiered system for the regulation of medicines (listed, assessed listed and registered medicines), which is overseen by the TGA.

Claims
- Food: All health claims are now required to be supported by scientific evidence, whether they are pre-approved by FSANZ or self-substantiated (3). General level claims refer to a relationship between particular nutrients or substances in a food and their effect on health, such as 'calcium for healthy bones and teeth'. General level health claims are prohibited from referring to a serious disease or a biomarker of a serious disease. There are currently 201 pre-approved general level health claims available for use by food businesses across 39 categories. High level claims refer to a relationship between particular nutrients or substances and a serious disease or biomarker, such as 'phytosterols may reduce blood cholesterol'. High level health claims must be based on one of 13 pre-approved FSANZ health claims, across 10 categories.
- Medicine: Medicines must meet evidence requirements proportionate to their health claims.

Manufacturing
- Food: Produced according to food safety standards.
- Medicine: Medicines, including complementary medicines, must be manufactured according to Good Manufacturing Practice (GMP), a set of principles and procedures that ensures therapeutic goods are of high quality (7).

Labeling
- Food: Must include nutrition information (10).
- Medicine: Medicine labels, including those of complementary medicines, must display the active ingredient (6).

Examples
- Food: Energy drinks, cereal products, nutritional bars, meal replacement shakes.

Availability
- Medicine: Medicines have tighter regulatory restrictions than foods on where and how they can be sold and who can access them.

Figure 1. Example of food label (e.g. 180 Nutrition Vegan Protein Bars)
Figure 2. Example of complementary medicine label (e.g. BioCeuticals Calm Bursts)

Listed medicines – AUST L (e.g. Metagenics AdrenoTone) 'AUST L' medicines contain pre-approved low-risk ingredients. Most complementary medicines are 'listed', with products clearly marked with a unique AUST L number. They are classified as unscheduled medicines with well-known, low-risk ingredients, usually with a long history of use, such as vitamin and mineral products. 
The TGA assesses such products for quality and safety but not efficacy. This is not an indication that they do not work, rather that the suppliers hold the evidence to support TGA indications and claims made for their medicine (11). Assessed listed medicine – AUST L(A) (e.g. Hydralyte Effervescent Electrolyte Tablets) 'AUST L(A)' assessed listed medicines may only use low-risk ingredients permitted for use in listed medicines. They must have at least one intermediate indication (i.e. indications that are above those available for AUST L listed medicines), and may also include lower-level indications. Assessed listed medicines have an AUST L(A) number on their medicine label (11). Registered medicines – AUST R (e.g. Nurofen liquid capsules) AUST R medicines are assessed by the TGA for safety, quality and effectiveness. They include prescription-only medicines and some over-the-counter products such as those for colds and pain relief (11). AUST R registered complementary medicines are considered to be relatively higher risk than listed medicines, based on the ingredients they contain or the indications made for the medicine. All registered medicines are fully assessed by the TGA for quality, safety and efficacy prior to being available in the Australian market. Registered medicines have an AUST R number on the medicine’s label. Therapeutic goods that are labeled for ‘practitioner dispensing only’ or words to that effect, are only to be supplied to and dispensed by a healthcare professional as described in section 42AA of the Therapeutic Goods Act 1989. The difference between ‘for practitioner dispensing only’ products and other listed or registered complementary medicines, is that the former does not need to include a statement of their purpose/therapeutic indication on the label. 
These medicines should only be supplied to an individual after consultation with a healthcare practitioner, at which time the healthcare practitioner attaches a label to the medicine providing individualised instructions for use (5). Health professionals who are found to be in breach of supply conditions may be subject to a voluntary restriction of supply. This is a commercial decision made by suppliers of complementary medicine products, a practice which the TGA supports. It aims to ensure only qualified health professionals are supplied practitioner-only products. The TGA recognises the following healthcare practitioners (12): - Medical practitioners, psychologists, dentists, pharmacists, optometrists, chiropractors, physiotherapists, nurses, midwives, dental hygienists, dental prosthetists, dental therapists or osteopaths; or - Herbalists, homoeopathic practitioners, naturopaths, nutritionists, practitioners of traditional Chinese medicine or podiatrists registered under a law of a State or Territory. Exempt goods Some medicines do not need to be registered or listed on the ARTG as a result of a specific exemption or determination under the Therapeutic Goods Act 1989. These include (13): - Medicines (excluding those used for gene therapy) that are extemporaneously compounded or dispensed by a practitioner for use by a particular person - Certain homoeopathic preparations - Certain shampoos for the treatment/prevention of dandruff - Starting materials used in the manufacture of therapeutic goods, except when formulated as a dosage form or pre-packaged for supply for other therapeutic purposes Ingredients list vs nutrition panels Sometimes confusion arises over the presentation of ingredients contained in a product compared to the nutrition panel values listed for the product. In general, an ingredient is a physical item that has been added to make up the product, whereas the nutrition panel displays properties of the ingredients. 
Nutrition panels typically contain the macronutrient content of the product (protein, carbohydrate, including sugars, and fat content), as well as other commonly used values when assessing the nutritional qualities of a product (e.g. sodium or fibre content). In addition, other properties of the ingredients might be listed, e.g. vitamin or mineral content. For example, a product might contain orange juice, which by its nature contains vitamin C. The ingredient in this instance is the orange juice, whereas the vitamin C is a nutritional value of the ingredient, which might be listed on the nutrition panel. In instances where the product contains vitamin C as an ingredient (usually synthetic vitamin C added to supplements), this is listed, with its amount, under ingredients. In order to compare vitamin C content with other products that do not explicitly have vitamin C added as an ingredient, the nutrition panel values can be useful. At vital.ly, the difference between the ingredients and nutrition panel info is clearly displayed under different headings (Formulation/Nutritional information). Figure 3. Ingredients vs nutrition panel data (e.g. GelPro Kakadu Plum Powder) Herbal ingredients are most commonly listed on labels with varying degree of detail and amounts relating to the raw herbal material used in the preparations, and the extracted herbal components. At vital.ly, we standardise the ingredients listings to make it easier to compare products. Typically, 3 levels of herbal detail are given (where the information is available): - The original herbal content, including the part of the plant used. This is the raw material/content used to produce the herbal extract. - The amount of the herb extracted from the raw materials. This is shown as the “ext.” amount. - When a product contains information relating to the active constituents of the herbal extract, these are listed as “standardised” amounts. 
Where this information is available, it is displayed first for the ingredient listing. Figure 4. Three levels of herbal ingredients data on display (e.g. Herbs of Gold Berberine-ImmunoPlex) Equivalent values in ingredient listing Some vitamin and mineral ingredients also contain active constituents. In these cases, the ingredient is listed, with the active ingredient given as an “equivalent” value. For example, a zinc amino acid chelate ingredient of 100 mg might contain 10 mg of zinc. At vital.ly, we have standardised the entry of the ingredients, making it easier to compare the amounts of vitamins and minerals in products, by displaying the equivalent amounts of these ingredients where they are available.
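The "equivalent" value arithmetic described above is a simple proportion, and can be sketched as follows. The function name and the elemental fractions are illustrative assumptions; real products state the equivalent amount directly on the label.

```python
# Sketch of normalising "equivalent" mineral values so products can be
# compared on elemental content. All numbers here are illustrative.

def elemental_amount(compound_mg, elemental_fraction):
    """Elemental mineral content (mg) of a compound ingredient.

    compound_mg: labelled amount of the compound, e.g. a chelate
    elemental_fraction: share of the compound that is the mineral itself
    """
    return compound_mg * elemental_fraction

# A 100 mg zinc amino acid chelate that provides 10 mg of elemental zinc
# implies an elemental fraction of 0.10.
print(elemental_amount(100, 0.10))  # 10.0
```

Comparing products on this elemental figure, rather than the compound weight, is what makes a like-for-like comparison possible when two supplements use different zinc salts or chelates.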
https://www.vital.ly/commons/blog/2021/03/02/How-to-read-an-ingredients-label/post=136/
The American Bakers Association (ABA) has joined the newly formed Coalition for Accurate Product Labels (CAPL), a broad-based coalition advocating for clear, concise and scientifically accurate labeling for consumer products. CAPL will advocate for the Accurate Labels Act, which amends the Fair Packaging and Labeling Act of 1966 by establishing science- and risk-based criteria for all additional state and local labeling requirements. "Consumers have a right to know exactly what is in the products they buy," said ABA President & CEO Robb MacKie. "Labels not based on sound science and legitimate risk only hinder consumers' ability to shop smart and make healthy choices for their families." The bill also allows state-mandated product information to be provided through smartphone-enabled "smart labels" and on websites, where consumers can find up-to-date, relevant ingredients and warnings. If passed, the legislation would also provide bakers with clearer ground rules regarding food labeling.
https://prolabel-inc.com/markets/consumer-goods/iddba-preview-aba-joins-coalition-for-accurate-product-labels/
Products on this site contain a value of 0.3% or less THC. These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure or prevent any disease. This nutritional and supplemental facts information is theoretically calculated based on data gathered from the FDA labeling guide and approximate values based on ingredients contained in formulation. The information provided is based on our own testing and is to the best of our knowledge true and accurate. It does not relieve you from carrying out your own precautions and tests. All recommendations or suggestions pertaining to product labeling, product use or production procedures are made without warranty or guarantee. Customers should conduct their own 3rd party tests to determine the applicability or suitability for their own particular purposes. FDA Disclosure: This product is not for use by or sale to persons under the age of 21. This product should be used only as directed on the label.
https://magiccityorganics.com/product/hf-dispensary-broad-spectrum-tincture-orange-1000mg/
Session VIII: Site Power: Renewable Energy Opportunities The ultimate goal of the 2030 Challenge is fossil fuel free buildings by the year 2030. As buildings approach zero for their carbon footprint, on-site renewable energy sources become a key element to realizing that goal. As the lower-up-front-cost conservation and efficiency measures are exhausted, renewable energy emerges as the final step to reaching aggressive carbon elimination goals. This session will explore the relationship between conservation and renewable energy, and investigate current renewable energy opportunities, both onsite and offsite systems, such as combined heat and power and local district energy (valuable for load sharing). AIA+2030 Learning Objectives - Identify the major on-site renewable energy strategies for buildings. - Propose an appropriate renewable energy strategy based on site characteristics and resources. - Enumerate the life cycle costs and benefits of on-site renewable energy. - Understand how district energy can provide thermal and electric services and balance neighborhood loads. About the Presenter - Violeta Archer Violeta Archer is an entrepreneur, a consummate designer and consistent performer. An eclectic career path exposed her to industry best practices in various sectors (healthcare, IT, construction), which she now employs in her business operations. She owns a rep agency in Texas that opens markets and develops business opportunities for a curated portfolio of foreign manufacturers, while operating an interior design firm focused on sustainability best practices and LEED certification. She is a LEED Green Associate, an expert in building integrated photovoltaic (BIPV) solutions for the built environment and recently developed an innovative product line for office interiors that is poised for Cradle 2 Cradle certification. 
Additionally, she heads the Houston Renewable Energy Group, a 501c3 nonprofit volunteer organization with the mission to educate and promote renewable energy solutions as well as to create a renewable energy 'voice' in the national and international energy dialog. Education - Undergraduate – Haverford College - BA - Philosophy & French - Graduate – Fletcher School of Law & Diplomacy - MA - International Trade & - Environmental Policy - Post Graduate - Rhodec School of Design - BA - Interior Design
https://aiahouston.org/v/event-detail/AIA-2030-Session-VIII/ns/
June 5, 2017 — Northfield Mount Hermon is taking a major step toward its goal of becoming a carbon-neutral campus. Starting this December, NMH will achieve net zero electricity use for at least the following three years. (NMH currently gets at least 12 percent of its power from renewable sources.) The school has signed a contract with Renewable Choice Energy, a subsidiary of the Schneider Electric corporation, to purchase renewable energy credits that support production of energy from renewable sources such as wind, according to Rick Couture, NMH director of plant facilities. By offsetting its emissions in this way (and in others, such as planting carbon-dioxide-absorbing trees), the school’s net release of carbon dioxide into the atmosphere — for 100 percent of its electricity use — will be zero. NMH’s strategic plan for 2020 pledges to “devote resources to sustainability projects and initiatives,” and the new electrical power agreement is just the latest step toward reducing its environmental impact. “Recognizing the importance of combatting the impacts of global warming, the school has made a significant commitment to supporting renewable power,” said Couture. Between 2015 and 2016, NMH converted from fuel oil to biofuel to heat its buildings, which eliminated carbon emissions from that source. There will still be some carbon impact from vehicles driven for school business, coolants used for refrigeration, and gasses produced by animals on the school’s farm. But Becca Malloy, NMH science teacher and director of sustainability, says that — even including those — the school’s overall carbon footprint will be “negligible for the size of our institutional operations.” Although a few other schools, including Phillips Exeter and Lawrenceville, also have net zero electricity programs, Malloy says that NMH is closer to achieving full carbon neutrality than many of its peer institutions. —By Emily Weir Learn about other sustainability initiatives at NMH.
https://www.nmhschool.org/news-details/~board/news-and-events-board/post/nmh-closing-in-on-carbon-neutrality-goal
by DR IZYAN MUNIRAH MOHD ZAIDEEN & CAPTAIN MOHD FAIZAL RAMLI CARBON neutrality is becoming increasingly prevalent and Malaysia is at a crossroads in its climate change mitigation pathway, aiming to achieve carbon neutrality by 2050. The target stems from the Paris Agreement, of which Malaysia is one of 195 signatories, and must be met cooperatively in order to limit global warming to 1.5°C. Carbon neutrality refers to the balancing of greenhouse gas emissions and absorption by carbon sinks like plants, seas and soil, and it is a commitment we can make not just to the planet, but also to humanity. It implies that Malaysia will need to make significant adaptations over the next decade in order to avert the worst consequences of the climate change already under way. The effort is not merely a political move; rather, it is an attempt to guarantee that the global commitment to limit warming for human survival is fulfilled. Our country has a remarkable opportunity to protect the environment by reducing its carbon footprint. However, specific timeframes and action plans have yet to be established. Hence, developing carbon neutrality roadmaps is the optimal course of action to tackle global warming as part of the Paris Agreement. To meet the goal of carbon neutrality by 2050, the government will need to work on a plan that can be segmented into short-, medium- and long-term actions. For long-term action, a policy that is feasible, comprehensive and practicable should be adopted. This initiative will need considerable financial input from a variety of funding sources. 
An attempt to become carbon-neutral will be a major demand-driver in our economy, requiring technical advancements and socio-economic transitions like switching to renewable energy (RE), such as solar power, instead of coal and investing in carbon-absorbing initiatives such as reforestation programmes. The government will also need to establish a cost and carbon impact scoring system to offer an idea of the magnitude of investment required and carbon savings obtained for each action taken. Carbon offsetting can also involve compensating for carbon emissions by financing a reduction in CO2 emissions elsewhere. Commitment by the government and corporate sectors to reduce carbon emissions by investing in ecologically-friendly technology should also be considered to meet the aim of nearly no coal usage, significant reductions in the use of other fossil fuels and over 70% of energy output from renewable resources. Even though Malaysia’s chances of pursuing marine RE are currently limited, its proximity to a vast ocean region makes it advantageous to have access to marine energy sources including offshore wind, tidal, underwater current and solar. Achieving carbon neutrality before 2050 is extremely challenging and will necessitate unwavering efforts and the cooperation of the entire community. This will be the impetus for achieving the goal of carbon neutrality in the quest for sustainability. In order to sustain transformation over the long term, it is essential to instil a carbon-neutral culture within practices and norms. This requires a critical awareness of carbon footprint reduction among the public at large. The effective action plan must be structured in terms of sustainability policies and regarded as a dynamic piece that must be regularly updated to account for further improvement from time to time. 
Without a doubt, it will take time and trial and error to uncover roadmaps to carbon neutrality and the commercial opportunities that route will offer; the process therefore demands a thorough grasp of what carbon neutrality entails. For the next 30 years, carbon neutrality will be a primary investment opportunity in the RE, energy-saving and environmental-protection sectors. If we are too slow to act, our country may lose its allure as a destination for international investments and businesses. Dr Izyan Munirah Mohd Zaideen is a senior lecturer at the Faculty of Maritime Studies, Universiti Malaysia Terengganu, and Captain Mohd Faizal Ramli is an environmental, health and safety marine specialist in the oil and gas sector.
https://themalaysianreserve.com/2022/07/18/the-need-for-carbon-neutrality-action-plan-in-malaysia/
KPMG has announced its intention to become a net-zero carbon organisation by 2030, as part of its continued focus on delivering growth in a sustainable way and providing climate solutions for member firms, clients and communities. To underpin this goal, the global organisation has signed up to a series of new climate actions, including a 1.5°C science-based target which will focus on achieving a 50 percent reduction of KPMG’s direct and indirect greenhouse gas (GHG) emissions by 2030. Additionally, KPMG firms have collectively committed to: - 100 percent Renewable Electricity by 2022 in its 22 Board Countries, and by 2030 for the wider network. - Offsetting any remaining GHG emissions through externally accredited voluntary carbon offsets to mitigate the remainder it cannot remove from its operations and supply chain. KPMG has taken a rigorous approach, using in-house experts to project KPMG’s path to decarbonisation and, from this, has developed a carbon forecasting model for KPMG firms that enables bottom-up target setting. This model maps the impact and sources of emissions and how a change in policy, for example on business travel, can have a significant impact on GHG emissions. To remain confident that KPMG firms’ climate actions are creating impact, the global organisation will also track progress against the new commitments by measuring and reporting to CDP (formerly the Carbon Disclosure Project) and the Science Based Targets initiative. The organisation will be working closely with its people to educate colleagues on the new commitments and mobilise teams to support the journey towards a more sustainable future. Global Chairman & CEO of KPMG International, Bill Thomas said: “We have made real, and valuable, progress in our efforts to help KPMG member firms and clients grow sustainably, but the size of the challenge we all face globally on climate means we must go further. 
I am pleased that the new commitments we have announced will help to accelerate our ambitions to deliver on a more sustainable future, inspiring confidence among our teams and stakeholders and empowering them to change how we shape our future. Our carbon reduction plan will aid not only our own progress towards reducing the effects of the climate on tomorrow’s world, but it will also contribute to our clients’ efforts to reduce their end to end carbon footprint. With this new set of global commitments across KPMG, I am confident that we are making the right decisions today to make a difference tomorrow.” This announcement builds on previous progress from KPMG in reducing its collective impact on the planet. Prior to the COVID-19 pandemic, the global organisation had reduced its net carbon emissions per FTE by around one-third over the last decade, performing better than its targets. It is also currently on track to meet its 2020 renewable energy target of 60%. “Bermuda represents the perfect ecosystem to become carbon neutral and to test and grow renewable energy sources. KPMG in Bermuda supports the Regulatory Authority’s Integrated Resource Plan, a journey towards a sustainable and secure future where the majority of our energy will be procured from renewable sources.” said Mike Morrison, CEO, KPMG in Bermuda, “Investing in renewable energy can be a catalyst for the economy, while helping to protect the future for our community. Our firm is committed to help bring this plan to life. At KPMG in Bermuda we focus on the goal of lower consumption and higher awareness. We actively work to reduce our firm’s overall electricity usage and have seen major progress over the last six years, cutting our usage by over one-third. 
During our recent renovations we kept the environment top of mind, replacing all lightbulbs with LEDs, placing sensor lights throughout the building, donating used furniture to local organisations, and choosing workstations and carpets made of recycled materials. Operating during COVID-19 shifted the reality of the way we conduct business, forcing us to reconsider our amount of international travel. How we do business going forward will change, with increased consideration of virtual alternatives and our overall carbon footprint, positively impacting GHG emissions. We are excited about KPMG’s global ambition and the opportunities for Bermuda to be a leader in sustainability.” KPMG firms are working with clients across the world to support them in decarbonising their businesses and supply chains, and embedding ESG in everything they do. Launched earlier this year, KPMG IMPACT brings together KPMG firms’ expertise in supporting clients to address the biggest challenges facing our planet, with the aim of delivering growth with purpose and achieving progress against the United Nations Sustainable Development Goals (SDGs).
https://home.kpmg/bm/en/home/media/press-releases/2020/11/net-zero-carbon-by-2030.html
Policy GESP3: Net-Zero Carbon Development To ensure that developments within the Greater Exeter area contribute to meeting the overarching net-zero target set out in draft policy GESP2, applicants for all developments which propose the construction of new home(s) or non-residential floorspace or change of use will be required to submit to the local planning authority a carbon statement for approval and implementation. The carbon statement will demonstrate that proposals are designed, constructed and will perform to deliver net-zero carbon emissions, taking account of emissions from primary energy use and transport, broadly in compliance with the energy hierarchy. In meeting the above requirement, proposals will demonstrate that they meet the sustainable and active transport targets which apply to the site and: - Minimise energy demand across the development and avoid temperature discomfort through: - Passive design, solar masterplanning and effective use of on-site landscaping and green infrastructure - The “fabric first” approach to reduce energy demand and minimise carbon emissions necessary for the operation of the building - Low carbon solutions where additional energy is required for building services such as heating, ventilation and air conditioning - Maximise the proportion of energy from renewable or low carbon sources through: - Ensuring that opportunities for on-site or nearby renewable energy generation or connection to a local decentralised energy scheme are exploited - Ensuring that the ability to install future solar PV or vehicle-to-grid connections is not precluded - Storage of on-site renewable energy generation - Ensure in-use performance is as close as possible to designed intent through: - Use of a recognised building quality regime and consistent approach to calculating both the designed and in-use performance - Ensuring that at least 10% of buildings on major developments deliver in-use energy performance and generation and carbon emissions 
data to homeowners, occupiers, developers and the local planning authority for a period of 5 years, clearly identifying regulated and unregulated energy use and any performance gap. Where a performance gap is identified in the regulated use, appropriate remedial action will be required. Where it is not feasible or viable to deliver carbon reduction requirements on-site, methods such as offsetting elsewhere will be considered. This will need to be through a specific deliverable proposal or financial contributions to an accredited carbon offsetting fund. Development proposals should calculate whole life-cycle carbon emissions through a nationally recognised Whole Life Cycle Carbon Assessment and demonstrate actions taken to reduce life-cycle carbon emissions. 5.8 Various proposed policies of the GESP and other development plans are designed to work together to reduce the carbon impact of new development in line with the draft vision and the net-zero target set out in draft policy GESP2. Draft policy GESP3 concentrates on development-specific requirements and seeks to ensure that proposals are designed, constructed and perform to deliver net-zero carbon emissions. To evidence this, the policy requires a “mock” Standard Assessment Procedure (SAP), Simplified Building Energy Model (SBEM) or Dynamic Simulation Model (DSM) test to be submitted as part of a Carbon Statement, and subsequently through the “real” SAP, SBEM or DSM test as the development passes through Building Control. To achieve net-zero carbon emissions, the draft policy proposes following an energy hierarchy of interventions, as set out below. The policy allows flexibility as to how that overarching target is met, but, through the energy hierarchy, advocates a “fabric first” approach before considering on-site renewable generation or off-site contributions. The hierarchy gives a sensible structure to the Carbon Statement required by the policy. 
The Greater Exeter councils will publish further guidance on the production of Carbon Statements in due course. The Energy Hierarchy 5.9 Development location and sustainable transport investment is the most significant way to reduce carbon emissions from new development. By ensuring easy access to jobs and basic services/facilities by active travel and high quality public transport links, the need to travel by private car can be reduced. We propose this should be reflected in the GESP spatial development strategy and the location of its allocations for major development. Digital connectivity is also key to reducing the need to travel by enabling home working and access to online services. The draft policies in this chapter deal with this element of the hierarchy. Carbon emissions arising from travel associated with development will be minimised by applying the policies in the Movement and Communication chapter including draft policy GESP23: Sustainable travel in new developments and draft policy GESP24: Travel planning. Any residual carbon emissions from transport will then be taken into account in the submitted Carbon Statement and the delivery of net zero carbon emissions. The remaining elements of the hierarchy are set out below. Priority 1 - Use masterplanning to minimise energy demand through passive design. - Effective use of landscaping and green/blue infrastructure. - Adopt a ‘fabric first’ approach. - Development should be designed to be climate resilient. Priority 2 - On-site renewable energy generation should reduce unavoidable carbon emissions associated with any residual energy use. - Enable electric vehicles to discharge to the grid (vehicle to grid) and help meet the power needs of the building. - Off-site measures are a potential option for developments where on-site measures are not practical/viable. 
- Carbon offsetting could be used to fund a large scale energy efficiency programme in existing buildings, large scale renewable energy installations, community energy projects and heat network expansions for instance. Priority 3 - Use a recognised building quality regime and monitor in-use data to ensure the in-use performance of buildings is as close as possible to the way they were expected to perform - Performance monitoring and evaluation will need to ensure that the sample data is representative of the development as a whole. - Where a performance gap is identified corrective action should be taken.
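As a minimal sketch of the in-use versus designed comparison the policy calls for, the "performance gap" can be expressed as the shortfall of measured consumption against the designed (SAP/SBEM/DSM) figure. The function and all numbers below are illustrative assumptions, not part of the draft policy:

```python
# Illustrative check of the "performance gap" described in the draft policy:
# in-use energy consumption compared against the designed figure.
# All numbers here are made-up examples.

def performance_gap(designed_kwh_m2: float, in_use_kwh_m2: float) -> float:
    """Gap as a fraction of designed performance (positive = underperforming)."""
    return (in_use_kwh_m2 - designed_kwh_m2) / designed_kwh_m2

# A dwelling designed for 55 kWh/m2/yr that actually uses 66 kWh/m2/yr:
gap = performance_gap(designed_kwh_m2=55.0, in_use_kwh_m2=66.0)
print(f"{gap:.0%}")  # 20%
```

A positive result would trigger the remedial action the policy describes for regulated energy use.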
https://www.gesp.org.uk/consultation/draft/part-3/policy-gesp3/
Formalise Your Commitment Toward Sustainability While international coalitions and regulations have mandated adoption of sustainable practices for business, organisations often find it difficult to formalise this commitment towards sustainability as part of their day-to-day operations. Today, the economy is dynamic and has spread beyond national and international boundaries. With operations spread across more locations, businesses today must operate at a greater environmental and social cost. This has brought about a need to understand what and how one contributes to the economy and what the exact cost of doing business is. For a long time, this social and environmental cost was ignored, which led to over-exploitation of resources with life-altering consequences. In our previous article, we analysed how environmental risks lead to severe economic implications. Like every other risk that is taken into consideration, it is imperative to understand the “How” and “Why” behind addressing environmental risks and formalise the commitment towards sustainability by inclusion in all business processes. How can Businesses Approach Environmental Risks? - Know Your Footprint The Cambridge dictionary defines environmental footprint as ‘the effect that a person, company or activity has on the environment, for example the amount of natural resources that they use which leads to the production of a certain amount of harmful gases.’ A few common footprints are the Water Footprint, Carbon Footprint and Land Footprint. Organisations need to ascertain the effect of their business processes on all relevant parameters to be able to assess their footprint. - Set Realistic Targets Once the organisation is aware of its footprint, it must identify its problem areas and set realistic targets and processes to overcome inefficiencies. For example, existing lighting infrastructure can be replaced with energy-efficient LED lights to reduce energy consumption and electricity costs. 
- Monitor Progress Once the organisation has successfully set targets, the next step towards ensuring sustainability in business operations is to establish a robust monitoring mechanism for implementation of targets. Businesses must initiate a fair and transparent procedure for monitoring progress on set timelines. Commonly Adopted Measures for Mitigating Environmental Risks - Waste Management- Know your waste streams. Segregate waste at source and manage waste considering its chemical composition, source of production, future utility and options for disposal. - Optimal Utilisation of Resources- Incorporate lean processes to ensure optimal utilisation of resources and avoid any waste. Adopt 3M Classification of inefficiencies in allocation of resources. - Cradle to Grave Approach- The cradle to grave approach suggests an alternative way of looking at an organisation’s economic offering. The concept looks at the complete lifecycle of products and follows the impact the organisation’s product has on all stakeholders, from its conception till its disposal. The Cradle to Grave approach calls for the organisation’s complete ownership of their product and its implications even beyond the point of sale. - Circular Economy- A circular economy is a system of closed loops in which raw materials, resources and products lose as little value as possible via its continuous use and re-use. This relies heavily on renewable energy sources and systems thinking. - Water & Energy Conservation- Water and electricity are daily consumables which are a part of any core as well as non-core business activity. Efficient and responsible consumption and utilisation of water and electricity can help cut down operating costs by significant margins. Simple measures like switching to energy efficient lighting solutions, renewable energy sources, installing motion-sensor faucets in washrooms, can prove to be significantly beneficial. 
- Switch to digital forms of communication- Digital forms of communication provide a perfect alternative to paper. Businesses should encourage employees to use paper only when necessary. Paper and paper-based products used in offices can be recycled and re-used in various forms. - Carbon Offsetting- Businesses can invest in carbon-offsetting projects certified by Gold Standard, the United Nations or other recognised authorising agencies. Carbon offsetting allows businesses to offset their own carbon emissions by funding ‘green’ projects that aim to eventually reduce dependency on carbon-emitting operations. Carbon offsets are programmes designed to counterbalance an unavoidable footprint by buying carbon credits from carbon-credit trading exchanges. Simply stated, carbon offsets are credits for greenhouse-gas reductions achieved by one individual or entity that can be purchased and used to offset (compensate for) the emissions contributed by another individual or entity. Carbon offsets are normally measured in tons of CO2-equivalents, commonly abbreviated as CO2e. In the 21st century it is essential for businesses to focus on more than just products, sales and profits to succeed. Sustainability of any business depends on whether the organisation has the ability to maintain existing practices without placing future resources at risk. From an economic standpoint, long-term strategies of business development and growth must consider sustainability as a larger goal incorporating all aspects of day-to-day business. Written by Akhyata Akhyata is an inquisitive and passionate Sustainability Professional who works at Goodera assisting delivery of strategic CSR and Sustainability solutions.
https://goodera.com/blog/sustainability/sustainability-commitment/
The SEA Group's primary objective is to combine the essential values of respect and protection of our environment and ecological heritage with business development. Action principles - Full compliance with rules and regulations; - On-going improvement in its environment-related performance; - Increased awareness and involvement of all players in the airport system; - Constant levels of monitoring and verification; - Framework based on integration, transparency and sharing. Next priorities - Maintaining neutrality within the scope of "Carbon footprint", through a reduction of energy consumption, the streamlining of the methods of transport to/from the airports, increased use of renewable energy; - Offsetting emissions resulting from activities under the direct control of the company (objective 1 and 2) through the acquisition of Carbon Credits; - Reduction of water footprint by streamlining pumping and consumption and reviewing discharged water-related processes.
http://www.seamilano.eu/en/sustainability/environmental-sustainability
Organizations worldwide have been inspired to pursue a resilient recovery and accelerate decarbonization. Whether your organization is new to carbon reduction or far along its sustainability journey, it’s essential to have a thorough understanding of the emissions scopes to effectively reduce your company’s carbon footprint. As your company navigates its strategy for long-term success, consider how reducing emissions from each scope can uniquely contribute to a resilient and sustainable future. Emissions Scopes Defined The Greenhouse Gas (GHG) Protocol Corporate Standard—developed jointly by the World Resources Institute (WRI) and the World Business Council on Sustainable Development (WBCSD)—is the customary tool for the worldwide accounting of GHG emissions produced by organizations. The collective emissions of a company, generally known as its carbon footprint, are determined by applying a variety of emission factors (multipliers based on the global warming potentials of GHGs) to the organization’s emissions-producing activities. All emissions that contribute to an organization's carbon footprint are categorized based on their source. These categorizations are known as the emissions 'scopes'. The GHG Protocol recognizes an organization’s emissions-producing activities as either direct or indirect. Direct emissions originate from sources owned or controlled by the reporting entity, such as a headquarters facility. Indirect emissions result from a company’s activities but occur at sources that the company neither owns nor ultimately controls, such as a utility or an airline. The Protocol further categorizes direct and indirect emissions into three different scopes: - Scope 1: All direct emissions. - Scope 2: Indirect emissions from purchased electricity, heat, or steam. - Scope 3: Other indirect emission sources. Let’s take a closer look at each scope… Scope 1 Emissions Scope 1 encompasses all direct emissions that result from a company’s operations. 
These include emissions under a company’s control such as onsite fuel combustion, company-owned vehicles and in-house processing equipment. Because these emissions are within a company’s direct purview, Scope 1 can be a good place to start when beginning a carbon-reduction journey. Scope 1 can be addressed using a variety of activities. Efficiency upgrades and optimization are a great first step for reducing a company’s controlled emissions on the demand side. Upgrading old equipment to more efficient models, performing LED lighting retrofits and using data-management software that helps monitor and improve the efficiency of operations can reduce the total volume of Scope 1 emissions. Switching fuels can also be an appropriate mitigation measure, as there may be lower carbon fuels available to replace those currently used. For instance, coal emits more than 200 pounds (90.7 kg) of carbon dioxide per million BTUs compared to natural gas’ 117 pounds (53 kg). Biofuels emit even less and renewable natural gas is an emerging option for a carbon-neutral energy source. With today's technologies, no company can completely avoid direct emissions. To address those direct emissions that cannot be avoided, companies can use carbon offsets (also known as verified emissions reductions). Carbon offsets are generated from a variety of activities that reduce the volume of GHG emissions entering the atmosphere, prevent emissions from entering the atmosphere in the first place or remove GHG emissions from the atmosphere entirely. Common project types include landfill gas capture, forestry and fuel switching in favor of less carbon-intensive alternatives. When paired in a 1:1 ratio with direct Scope 1 emissions, the carbon offset effectively neutralizes the environmental impact of the GHG. For detail on carbon offsets, check out our whitepaper Moving Organizations to Carbon Neutrality: The Role of Carbon Offsets. 
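The fuel-switching arithmetic above can be sketched in a few lines of Python, using the roughly 200 lb and 117 lb CO2 per MMBtu figures quoted for coal and natural gas. The boiler size is a hypothetical example:

```python
# Back-of-envelope fuel-switching estimate using the figures quoted above:
# roughly 200 lb CO2 per MMBtu for coal versus 117 lb for natural gas.

COAL_LB_CO2_PER_MMBTU = 200
GAS_LB_CO2_PER_MMBTU = 117

def annual_savings_lb(mmbtu_per_year: float) -> float:
    """CO2 avoided per year by moving the same heat load from coal to gas."""
    return mmbtu_per_year * (COAL_LB_CO2_PER_MMBTU - GAS_LB_CO2_PER_MMBTU)

# A hypothetical boiler burning 10,000 MMBtu per year:
print(f"{annual_savings_lb(10_000):,.0f} lb CO2 avoided")  # 830,000 lb CO2 avoided
```

Estimates like this are what make fuel switching easy to screen as a Scope 1 mitigation measure before commissioning a detailed study.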
Scope 2 Emissions Scope 2 includes all indirect GHG emissions that result from purchased and consumed electricity, heat, steam or cooling. Though these emissions physically occur at the third-party facility where the electricity is generated, they are driven by (and therefore attributable to) the end user that consumes the energy. Purchased indirect energy consumption is a distinct category from other indirect emissions because it often represents a considerable portion of a company’s footprint. Although indirect, Scope 2 emissions are becoming increasingly controllable by companies via renewable energy procurement and therefore represent a significant opportunity to make reductions. Organizations can use a combination of energy attribute certificates (EACs), onsite renewable energy (sometimes known as distributed generation), power purchase agreements (PPAs), green tariffs, and even carbon offsets in some cases to address Scope 2 emissions. A record-breaking number of companies worldwide have set aggressive SBTs and RE100 renewable energy procurement targets in order to fully mitigate the emissions impact from their purchased, indirect energy. For more guidance on clean energy and emissions reductions, as well as what you need to know about making claims, see this whitepaper. Scope 3 Emissions Scope 3 emissions are more nuanced, as they broadly represent all other indirect emissions from value chain activities. These emissions occur as a result of a company’s operations but are produced from sources neither owned nor controlled by the company. Examples include emissions generated by suppliers, employee commute and business travel, and landfill waste disposal. 
Scope 3 reductions can take many forms: replacing business travel with virtual meetings or investing in less GHG-intensive travel options; reducing materials and resources consumed, and sustainably sourcing those that are consumed; implementing zero-waste policies and practices; favoring more sustainable suppliers; and engaging key suppliers on improving their own carbon footprint through efficiency measures, resource reductions, and renewable and clean energy procurement. Like Scope 1, unavoidable Scope 3 emissions may be remedied through the purchase of carbon offsets. Companies that successfully engage their stakeholders and take action to address Scope 3 emissions are well-positioned to assert leadership in GHG management. See our blogs How to Calculate Scope 3 Emissions and How to Reduce Scope 3 Emissions for more detailed guidance. Managing your organization’s carbon footprint is not only good for the environment; it is also a way to build back with resilience while reducing risks and creating new business opportunities. Corporate carbon accounting gives companies a full view of their operations, creates transparency along the corporate value chain, and allows for informed decision making on sustainability matters, while clean energy supply can help companies improve their resiliency and address unavoidable emissions. Are you looking to calculate, reduce, or report your scope emissions? Contact us to learn more and accelerate your decarbonization journey.
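The accounting this article describes (activity data multiplied by emission factors, with totals grouped into the three scopes) can be sketched as a short Python example. The activities and factors below are illustrative assumptions, not official GHG Protocol values:

```python
# Minimal sketch of GHG Protocol-style accounting: each activity's quantity is
# multiplied by an emission factor, and the results are grouped by scope.
# Activities and factors are illustrative assumptions, not official values.

activities = [
    # (description, scope, quantity, unit, kg CO2e per unit)
    ("natural gas burned on site", 1, 50_000,  "kWh",          0.18),
    ("company fleet diesel",       1, 10_000,  "litre",        2.68),
    ("purchased electricity",      2, 200_000, "kWh",          0.23),
    ("employee air travel",        3, 150_000, "passenger-km", 0.15),
]

footprint = {1: 0.0, 2: 0.0, 3: 0.0}
for _desc, scope, qty, _unit, factor in activities:
    footprint[scope] += qty * factor / 1000  # convert kg to tonnes CO2e

for scope in sorted(footprint):
    print(f"Scope {scope}: {footprint[scope]:,.1f} t CO2e")
print(f"Total: {sum(footprint.values()):,.1f} t CO2e")
```

In a real inventory the factor table would come from a published source rather than being hard-coded, but the shape of the calculation is the same.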
https://perspectives.se.com/latest-perspectives/what-are-the-3-emissions-scopes-and-how-do-you-manage-them
In an ongoing effort to reduce our carbon footprint, we are also members of the United States Environmental Protection Agency’s (EPA) Green Power Partnership Program, which recognizes us for looking to alternative renewable energy sources for our electricity. At the winery we source 36% of our electricity from renewable sources like geothermal, solar and wind. That is equivalent to taking 266 cars off the road, the annual energy use of 115 homes or planting 32,354 trees. A significant portion of the renewable electricity we use is generated locally. In addition, we source over 30% of our electricity from clean hydropower. We regularly conduct energy audits at the winery to look for opportunities to improve our process and reduce energy use. Around the winery we incorporate energy efficiency into all our process operations and are continually pursuing opportunities to reduce our energy footprint.
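Equivalence figures like "266 cars off the road" are typically derived by converting avoided grid emissions into per-vehicle annual emissions. A hedged sketch follows; both factors are assumptions chosen for illustration, not the EPA's published equivalency values:

```python
# Hedged sketch of how "equivalent to taking N cars off the road" figures are
# usually derived: avoided grid emissions divided by average per-vehicle
# annual emissions. Both factors below are assumptions for illustration.

GRID_KG_CO2_PER_KWH = 0.4   # assumed grid emission factor
CAR_T_CO2_PER_YEAR = 4.6    # assumed average passenger-vehicle emissions

def cars_equivalent(renewable_kwh: float) -> float:
    avoided_tonnes = renewable_kwh * GRID_KG_CO2_PER_KWH / 1000
    return avoided_tonnes / CAR_T_CO2_PER_YEAR

# Roughly 3 GWh of renewable electricity:
print(round(cars_equivalent(3_000_000)))  # 261
```

The published figures would use the EPA's own regional grid and vehicle factors, so exact numbers will differ.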
https://www.francisfordcoppolawinery.com/en/behind-the-scenes/sustainability/energy
There are many renewable energy sources available on earth today. Each one is used by different industries for different reasons, but all of them have one thing in common: they can be used indefinitely. This article explores the different types of renewable energy sources that are present on earth. A renewable energy source is a resource that replenishes itself on a human timescale, through biological reproduction or other natural processes, replacing what is depleted through consumption and use. Bioprocesses are one such type of renewable energy resource. They occur naturally in the environment and can act as "carbon sinks," meaning that carbon dioxide emissions are offset by the absorption of carbon dioxide from the atmosphere. Biogas is one renewable source tied to these bioprocesses: it is produced from organic feedstocks such as manure and captures methane and other greenhouse gases that would otherwise escape into the atmosphere. Solar panels, wind turbines and geothermal power are other renewable sources we can use for power. Solar panels and wind turbines harness the power of the sun and the wind, respectively, to generate electricity, while geothermal power draws heat from underground reservoirs. These sources are considered more sustainable because they do not consume a finite fuel. Fossil fuels, such as coal, petroleum and natural gas, remain the primary sources of energy that we use on a regular basis. They are non-renewable resources that will eventually run out: they formed over geological time and cannot be replaced on a human timescale. (Nuclear energy, while not a fossil fuel, likewise depends on a finite fuel, uranium.) 
In short, they are finite resources. The renewable sources themselves are worth distinguishing from one another. Geothermal energy can be used for both domestic and industrial applications. Hydropower, including hydroelectricity, generates power from the flow of water held behind dams, while tidal energy harnesses the moving water of tides and rivers. As you can see, there are many types of renewable energy sources, and their uses are limited only by a few factors such as the type of resource and the degree of human control over it. Although fossil fuels, such as coal and petroleum, are still the dominant source of energy, it may be possible to substitute them with alternative energy sources such as wind turbines or geothermal power. Hydropower is one such option. The future of renewable energy sources remains uncertain, but their benefits are too numerous to ignore. Renewable energy sources are a great benefit to human health, well-being and the environment. Some environmental groups are skeptical of particular renewable energy projects, believing they pose risks of their own to the environment, while others support renewable energy as a way to reduce our carbon footprint and help the planet. It is important to remember that fossil fuels are also finite, and if used up too fast they will become even harder to come by. Fossil fuels are not the only source of energy, but they are currently the most abundant, the most easily obtainable, and usable for almost any application at any time. One notable exception is nuclear power, which the average person cannot use to produce his or her own electricity because of its high costs. 
Although renewable energy sources are the wave of the future, the technology behind many of them is still maturing and their up-front price tags remain high. This means it will take some time before they can compete with fossil fuels as an energy source, and some of these sources are not yet economically feasible for the majority of people. However, some methods, such as solar and wind, are becoming more affordable and accessible to everyone. We have to face the fact that today's conventional energy sources will most likely remain our main supply for many years to come, so we cannot abandon them overnight. Therefore, we should explore every available avenue for reducing our carbon footprint. By using renewable energy sources, we can curb the effects of global warming and help protect the environment.
https://power-save.com/guide-to-renewable-energy-sources/
12 December 2019, Istanbul – The United Nations Development Programme (UNDP) in Kazakhstan will be joining efforts with the Bitfury Group, a leading emerging technologies company, to preserve and replenish large swathes of forest land across Kazakhstan. As part of its Biodiversity Finance Initiative (BIOFIN), UNDP and Bitfury will work to increase forest areas and enhance forest management practices, as well as raise public awareness of climate change and offset carbon emissions attributable to the company by more than 100 percent. The project will aim to provide a model of how to offset carbon through the protection of forests, flora and fauna. It will also guide Kazakhstan’s legislation in this regard – paving the way for scaling up offsetting initiatives over the next decade – and raise public awareness on how to reduce emissions. BIOFIN is helping countries around the world to measure their level of spending on biodiversity, evaluate how much financing is needed to accomplish their goals and advise on how to mobilize new funding from a diversity of sources. “This is a first in terms of carbon offsetting for Kazakhstan and we hope it will create a foundation for accelerating the country’s efforts to reduce its carbon footprint in partnership with the private sector. These are practical solutions that will require institutionalization and scale-up,” said Yakup Beris, UNDP Resident Representative in Kazakhstan. “We fully support UNDP’s sustainable development goals to protect our planet, and we are inspired by Kazakhstan’s national initiative to create offset mechanisms to neutralize greenhouse gas emissions,” said Valerijs Vavilovs, CEO, and co-founder of Bitfury. Kazakhstan currently has 29 million hectares of forests but much of that land is endangered as a result of illegal logging, forest fires, and land use changes. The country says it will aim to meet 50 percent of its energy needs from alternative and renewable sources by the middle of the century. 
With support from the UN, Kazakhstan already achieved a 45 percent annual reduction in energy consumption for heating pilot municipal buildings over the past five years. The Paris Agreement and the UN Sustainable Development Goals (SDG) state that a full transition to a sustainable, low-carbon and climate-resilient world will require significant investment and innovation both by the state and the private sector. Kazakhstan ratified the Paris Agreement in 2016 and pledged to reduce greenhouse gas emissions by 15% by 2030 through mobilizing innovative solutions with the participation of the private sector. According to experts, the total volume of forest carbon in Kazakhstan is estimated at more than 718.3 million tons of CO2 equivalent. The increase of forest cover from 4.6% to 5% will help increase carbon absorption by forests additionally to 2.9 million tons of CO2 equivalent annually. Furthermore, healthy forests provide essential services including water regulation, biodiversity, food, medicines and ecotourism. UNDP is working to help the government and partners in Kazakhstan achieve a quick transition to renewable energy. For instance, the agency created a solar energy atlas and is working with the Global Environment Facility (GEF) to boost investments across the country in utility-scale and small-scale renewable energy. UNDP Media Contact Meruyert Sadvakassova Bitfury Media Contact Rachel Pipan *** UNDP partners with people at all levels of society to help build nations that can withstand crisis, and drive and sustain the kind of growth that improves the quality of life for everyone. On the ground in nearly 170 countries and territories, we offer global perspective and local insight to help empower lives and build resilient nations.
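The release's forest-carbon figures can be sanity-checked with a short sketch. The cover percentages and the 2.9 million tonne absorption figure come from the article; the land area of Kazakhstan and the derived per-hectare rate are illustrative assumptions, not UNDP statistics.

```python
# Sketch: sanity-check the forest-cover figures quoted in the release.
# The cover percentages and extra absorption come from the article; the
# land area and the derived per-hectare rate are illustrative assumptions.

KZ_LAND_AREA_HA = 272.5e6        # ~2.725 million km2, assumed for the % math

current_cover = 0.046            # 4.6 % forest cover (from the article)
target_cover = 0.050             # 5.0 % forest cover (from the article)
extra_absorption_t = 2.9e6       # +2.9 Mt CO2e/yr for that increase (article)

added_ha = (target_cover - current_cover) * KZ_LAND_AREA_HA
rate_t_per_ha = extra_absorption_t / added_ha  # implied absorption per new hectare

print(f"new forest area: {added_ha / 1e6:.2f} M ha")
print(f"implied absorption: {rate_t_per_ha:.2f} t CO2e/ha/yr")
```

Under these assumptions the 0.4-point increase corresponds to roughly a million new hectares absorbing on the order of 2-3 tonnes of CO2e per hectare each year, which is in the plausible range for young temperate forest.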
https://www.kz.undp.org/content/kazakhstan/en/home/presscenter/pressreleases/2019/december/undp-and-bitfury-partner-to-protect-forests-in-kazakhstan.html
In the spring of 2020, S Group published ambitious climate goals. According to the goals, S Group aims to be carbon negative by the end of 2025. In addition, we will reduce the amount of our greenhouse gas (GHG) emissions by 90 per cent, compared to the level of 2015, by the end of 2030. There are many new and difficult-to-understand terms related to climate change and GHG emissions. Below, you will find a short introduction to key words and practices. We hope that it clarifies what climate work is all about. Key terms: 1. Carbon negativity In practice, being carbon negative means that we capture more carbon from the air than we release into the atmosphere. Thus, our operations reduce the amount of carbon in the atmosphere, and our net impact on the amount of carbon dioxide is subtractive, i.e. negative. To achieve this goal, it is generally necessary both to heavily reduce emissions from our own operations and to use carbon offsetting, such as nature’s own carbon capture or, alternatively, technologies that remove carbon dioxide from the atmosphere. 2. Carbon neutral, zero-emission or emission-free When a company creates no carbon dioxide emissions, it is considered carbon neutral or zero-emission. The operations of the company do not change the carbon content of the atmosphere, as the amount of carbon dioxide emissions is zero. In other words, the operations are emission-free. A company can also be carbon neutral if it offsets its carbon emissions; in this case, it is no longer zero-emission, but its net effect on the carbon content of the atmosphere is still zero. 3. Carbon emissions/greenhouse gas emissions These terms are used when talking about greenhouse gases that cause climate change. In addition to carbon dioxide, greenhouse gases also include methane and ozone. Refrigerants in refrigerators are also strong greenhouse gases. 
As different greenhouse gases have a different impact on the climate, we have to convert the other greenhouse gases to carbon dioxide equivalents (CO2e) in order to find out their total impact. 4. Carbon offset / Compensation a: Carbon offset carried out by the entire S Group or an individual company Carbon offset is related to our goal of being carbon negative. After we have reduced our emissions, there may still be a remaining carbon load from, for example, the production of district heating. A carbon offset is a reduction in emissions made to compensate for emissions made elsewhere. S Group always aims for carbon negativity in its compensation, which means that emissions are overcompensated. In practice, carbon offsetting is used to increase carbon storage (e.g., through land restoration or the planting of trees) elsewhere to compensate for the emissions that our operations have created. b: S Group offers compensation to consumers In the future, we might offer a service that enables the consumer to offset the carbon dioxide emissions created by their consumption, either for one product or for all their consumption. These emissions are not included in the emissions created by S Group’s own operations, which is why we might offer the carbon offset as a voluntary service that the consumer pays for. Some of our suppliers have already compensated for the carbon footprint of their products, so the product is carbon neutral already when the consumer buys it. 5. Overcompensation We talk about overcompensation when we want the compensated amount to be higher than required by carbon neutrality. Overcompensation helps us ensure that the results of our operations are carbon negative, i.e. that we remove more carbon from the atmosphere than we produce. 6. National or international compensation? For now, the carbon sinks in the Finnish forests are included in the Finnish government’s national calculation and carbon balance. 
Therefore, companies cannot use them to offset their emissions, because this would mean that the created emission unit is used twice (so-called double counting), and not enough carbon dioxide would be captured from the atmosphere. Companies can nevertheless commit to a climate act of increasing the amount of domestic carbon sinks and support Finland’s national emission goal in this manner. However, these acts cannot be used to offset carbon emissions in the manner required by S Group’s climate goals. International compensation initiatives are certified and confirmed by third parties, and the emission units they create are counted and used only once. This guarantees that the activities capture the company’s own carbon emissions. S Group’s climate goals state that it must be possible to verify the compensated amount in such a manner that the measures truly capture enough emissions. 7. Carbon handprint, positive carbon handprint A carbon handprint refers to a positive climate impact created by operations or a product. A positive handprint can refer to, for example, a company feeding waste heat from refrigerators into the general district heating network and thereby reducing the need for other heat in the area. 8. Carbon footprint A carbon footprint typically reveals the amount of emissions created by operations or a product, within certain limits. The carbon footprint of, for example, a supermarket includes the emissions created by its construction, its operations and the closing of its operations. 9. Emission-free electricity versus electricity produced with renewable energy sources Emission-free electricity is produced with either renewable or emission-free energy. Solar and wind power are examples of renewable and emission-free electricity, while nuclear power is an example of emission-free electricity. 10. S Group’s goals in the production and consumption of electricity S Group’s goal is to use only renewable electricity by the end of 2030. 
In addition, we have promised that the electricity we use during the transition period is at least emission-free. In practice, we have already reached the goal level: in 2020, all the energy we used was produced with renewable energy, and its origin was verified with a guarantee of origin. S Group produces electricity for its operations with both wind and solar power. Our own renewable energy covers approximately 50 per cent of our electricity consumption; the rest we buy from the market. Our own production will rise significantly this year. 11. Guarantee of origin of electricity Guarantees of origin of electricity verify that the consumed electricity is renewable. The guarantee of origin of electricity refers to an electronic document that is proof for the end user that a certain amount of electricity is produced with a certain source of energy. Mechanisms related to the origin of electricity are regulated by legislation (Act on Verification and Notification of Origin of Electricity). S Group currently acquires a guarantee of origin of electricity for all electricity that is consumed within the group. This means that all electricity that we consume is produced with renewable energy. 12. Carbon sink A carbon sink is any process, activity or mechanism that removes carbon dioxide from the atmosphere. For example, plants capture carbon dioxide as they grow, i.e. they function as a carbon sink for as long as they grow. As they decay, they return carbon into the atmosphere, at which point they become the opposite of a carbon sink, i.e. a source of carbon. 13. Carbon storage Carbon storage is different from a carbon sink. For example, plants create a carbon storage, but the size of the carbon storage can change. As plants grow, they function as a carbon sink and, once the growth ends, the plant is still a carbon storage even though the carbon sink is no longer growing. 14. 
The life cycle of a product or activity, life-cycle assessment A life-cycle assessment is a method that examines a product’s or service’s environmental impact from cradle to grave. A life-cycle assessment acknowledges various types of environmental impact, one of which is the carbon footprint. 15. Carbon intensity, emission intensity? Carbon intensity refers to the greenhouse gas emissions created by the operations in relation to, for example, the company’s turnover (tCO2/€) or the energy produced (tCO2/MWh). S Group’s responsibility review discloses the emission intensity of our operations in relation to sales and the gross area of our properties. In addition to electricity and heating, the emissions also include emissions created by refrigerant leakage. 16. Specific emission Specific emissions are the emissions created per unit of energy produced (electricity and heating). The amount is often expressed as gCO2e/kWh or kgCO2e/MWh. 17. Coverage of S Group’s climate goals? The climate goal for S Group’s own operations includes emission reduction targets for electricity, heating and refrigerant leakage. The goal does not include emissions from, for example, transport, the products we sell, packaging or wastage, as their emissions are counted towards other parties according to internationally agreed calculation models. S Group’s goal is to be carbon negative by the end of 2025. This means that, by then, we will remove more carbon dioxide from the atmosphere than we produce. In addition, we will reduce our GHG emissions by 90 per cent by the end of 2030. 18. What do we hope for from our partners? In our Big Deal campaign, we have challenged our partners – that is, suppliers and service providers – to reduce emissions from their operations by one million tonnes by the end of 2030. Already 109 partners have accepted the challenge. Collaboration with our partners significantly increases the impact of our climate work. 
Our external partners are responsible for S Group’s transports, so transport is not part of our emission reduction goal. Therefore, our logistics partner Inex Partners has set its own emission reduction goal for its contract partners that take care of distribution. According to the goal, emissions from transport will be reduced by 20 per cent by the end of 2025. Reducing food waste is also part of minimising emissions. We are on track to halve food waste by the end of 2030.
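The CO2e conversion (term 3) and the intensity metrics (terms 15 and 16) in the vocabulary above can be illustrated with a short sketch. The GWP factors are standard IPCC AR5 100-year values; the emission and sales figures are invented for the example and are not S Group data.

```python
# Illustrative sketch of the CO2e conversion and emission-intensity metrics
# described in the vocabulary. GWP factors are IPCC AR5 100-year values;
# the site emissions and sales figure below are hypothetical, not S Group data.

GWP = {"CO2": 1, "CH4": 28, "N2O": 265, "R404A": 3922}  # 100-yr warming potentials

def to_co2e(emissions_t: dict) -> float:
    """Convert per-gas emissions (tonnes) to tonnes of CO2 equivalent."""
    return sum(mass * GWP[gas] for gas, mass in emissions_t.items())

# Hypothetical yearly figures for one site: direct CO2, methane, refrigerant leak
site = {"CO2": 1200.0, "CH4": 2.0, "R404A": 0.5}
total_co2e = to_co2e(site)                 # tonnes CO2e

sales_meur = 40.0                          # hypothetical sales, million EUR
intensity = total_co2e / sales_meur        # t CO2e per million EUR of sales

print(f"total: {total_co2e:.0f} t CO2e, intensity: {intensity:.1f} t CO2e/MEUR")
```

The half-tonne refrigerant leak dominates the total here, which is why the vocabulary singles out refrigerants as strong greenhouse gases and why they are included in the group's emission-intensity reporting.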
https://s-ryhma.fi/en/sustainability/climate-and-natural-resources/s-group-s-climate-work/climate-vocabulary
Amidst numerous discussions about reducing greenhouse gases stands a glaring question: what are the greatest sources of household emissions? To answer this, researchers at the University of California, Berkeley created this amazing interactive map that tracks the carbon footprint of all 31,000 U.S. ZIP codes. The map analyzes everything that people consume in a single year including transportation, housing, food, goods and services, and it’s a valuable tool to help cities devise localized, comprehensive climate action plans. Unsurprisingly, the researchers found that suburban sprawl accounts for a whopping 50% of the entire United States’ household carbon footprint. The interactive map is based on a study by UC Berkeley researchers Christopher M. Jones and Daniel M. Kammen, and its findings are fascinating. Population dense cities, with smaller homes and easy access to public transport, have a lower carbon footprint than much of the rest of the country; in some cases these urban households produce 50 percent less greenhouse gases than the national average. But there’s a sizable flip side to this: the most densely populated urban areas tend to sit within the middle of the most extensive suburbs, and these suburbs have a significantly higher carbon footprint than those urban areas. The researchers explain: “The primary drivers of carbon footprints are household income, vehicle ownership and home size, all of which are considerably higher in the suburbs,” with some suburban homes producing twice the emissions of the national average. As a result, a full 50 percent of U.S. emissions come from the suburbs, in spite of the fact that the suburbs house just 143 million of the U.S.’s 313 million residents. Overall, the low household carbon footprint of densely populated cities is, in many cases, mitigated by the footprint of surrounding suburban sprawl. 
There are, of course, factors beyond income, transportation and home size that contribute to household carbon footprint (HCF). When looking at the HCF associated with energy usage, the maps show far lower emissions in the West and Northwest as compared to the Midwest and East Coast. This is due in large part to the use of low-carbon electricity sources and a growing adoption of renewable energy. Ultimately the researchers recommend that “cities understand the size and composition of household carbon footprints in their locations and then develop customized plans that address the largest opportunities to reduce those impacts,” suggesting that “an entirely new approach of highly tailored, community scale carbon management is urgently needed.” But does this call for the end of the suburbs? The researchers don’t think so, rather that the suburbs are “ideal candidates for a combination of energy efficient technologies, including whole home energy upgrades, and solar photovoltaic systems combined with electric vehicles,” to reduce the footprint associated with suburban living.
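The study's point that income, vehicle ownership and home size drive household carbon footprints can be sketched as a toy additive model. Every coefficient below is hypothetical, chosen only to show how the named drivers combine; none of them come from the Jones and Kammen paper.

```python
# Toy additive household-carbon-footprint model in the spirit of the
# ZIP-code study. All coefficients are hypothetical, for illustration only.

def household_footprint(vehicles: int, annual_km: float,
                        home_m2: float, kwh_per_year: float,
                        grid_kg_per_kwh: float) -> float:
    """Rough tonnes CO2e/yr from transport, home size and electricity use."""
    transport = vehicles * 0.5 + annual_km * 0.00021   # ~0.21 kg CO2e per km
    housing = home_m2 * 0.03                           # heating/embodied proxy
    electricity = kwh_per_year * grid_kg_per_kwh / 1000
    return transport + housing + electricity

# Dense-city household vs suburban household (illustrative inputs only)
city = household_footprint(0, 2000, 70, 3000, 0.4)
suburb = household_footprint(2, 25000, 180, 9000, 0.4)
print(f"city: {city:.1f} t, suburb: {suburb:.1f} t")
```

Even with made-up coefficients, the suburban household comes out at more than twice the urban one, mirroring the article's observation that some suburban homes produce twice the emissions of the national average.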
https://inhabitat.com/amazing-interactive-map-shows-carbon-footprint-of-all-31000-u-s-zip-codes/
Introduction and Scope Carbon Neutral Northleach is an independent voluntary organisation, formed with ongoing support from the Town Council and in liaison with Cotswold District Council and Gloucestershire County Council. Climate change is a broad term for many issues, and as a small group of volunteers it would be easy to be too ambitious. To simplify things, we are looking at Carbon Reduction, which simply means reducing the amount of carbon dioxide each of us produces. Our project plan focuses on what we can realistically achieve if our whole community gets involved. The Aim We have a simple aim – to work together to reduce our Parish’s carbon footprint - Reducing our CO2 emissions (e.g. by using green energy, insulation, LED lighting, reducing car journeys, buying local) - Offsetting what we produce (e.g. by planting trees) How are we going to do this? Carbon Neutral Northleach is a long term project. Our project plan nominally spans 10 years, with the aim to make incremental improvements every year. The plan can be viewed at the bottom of this page. The plan focuses on research, measurement, education and practical steps to reduce our carbon footprint. To succeed we are dependent upon engaging the community in this very important project – many hands make light work. We need volunteers and supporters to implement our plans. If you would like to be involved please see the links at the bottom of the page. Our carbon footprint We have made some initial calculations of our carbon footprint as a parish. The measurement currently covers only household emissions from direct energy consumption (electricity and heating) and vehicle use. Delving into the realms of calculating the footprint of manufacturing our cars or clothing is too much at this stage. However, we would like to encourage everyone to buy local, reduce car journeys, buy second-hand and pass things on once finished. Local businesses are not included at this point. What is our current position? 
So far our household survey (covering direct energy consumption and vehicle use) estimates that we emit over 8,000 tCO2e (tonnes of CO2 equivalent) a year, which is equal to 4,790 single-passenger return flights from London to New York. As part of our carbon reduction programme we will also seek to work with local businesses. Join the calculation We need more people to fill in the questionnaire here to help us accurately calculate the parish’s current carbon footprint. The more households that complete the survey, the more accurate the calculation will be. Fill out and send back our questionnaire What is the carbon reduction plan? In the first few years we are focusing primarily on mainstream carbon footprint reduction projects, which directly reduce our footprint. Our Project Plan sets out the proposed programme, which was originally agreed at the working group meeting of 9th January 2020 and receives regular updates. In summary There are five main objectives to the plan: - Learning – research, specialist advice - Measurement – current position, actual savings - Education/Communication – community, schools, PR & promotion - Reducing carbon emissions – green energy, energy efficiency/using less/insulation, buying local produce - Offsetting carbon emissions – tree planting, supporting worldwide programmes. And further down the line: - Waste and recycling – improved recycling, reduced landfill Click below to read our Carbon Reduction Plan 2020-2030 CNN Carbon Offset Plan This document shows progress on various tree planting initiatives in the parish. Actions you can take to help now! Join the team Membership is currently free. If you want to be part of delivering this exciting project or simply to receive updates please click here to receive CNN updates Use green energy A simple way to reduce your carbon footprint is to buy electricity from a green energy supplier who provides energy from 100% renewable sources. 
Households – We are registered as an affiliate with Bulb energy (often highly rated for low prices on Uswitch). You can sign up to Bulb Energy Click here for Bulb green energy quote By switching to Bulb using this link, Bulb will make a £20 contribution to the Carbon Neutral Northleach fund. Businesses – If you would like to reduce your carbon (and hopefully costs) by switching to a green supplier and need assistance to switch to a cost-effective green energy supplier, email us at [email protected] and we will contact you to provide practical assistance.
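The survey's flight equivalence can be reproduced directly from the figures above: 8,000 tCO2e divided by 4,790 return flights implies roughly 1.67 tonnes of CO2e per London to New York return trip. The per-flight factor here is back-derived from the article's own numbers, not an official aviation statistic.

```python
# Sketch of the flight-equivalence conversion used in the survey summary.
# The per-flight factor is back-derived from the article's own numbers
# (8,000 tCO2e ~ 4,790 London-New York returns), not an official figure.

parish_emissions_t = 8000
return_flights = 4790

t_per_return_flight = parish_emissions_t / return_flights   # ~1.67 t CO2e

def emissions_as_flights(tonnes_co2e: float) -> float:
    """Express a CO2e total as equivalent London-New York return flights."""
    return tonnes_co2e / t_per_return_flight

print(f"{t_per_return_flight:.2f} t CO2e per return flight")
print(f"a 10 t household = {emissions_as_flights(10):.1f} return flights")
```

The same conversion can be reused as households submit their questionnaires, turning an abstract tonnage into a figure residents can picture.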
https://www.northleach.gov.uk/carbon-neutral-northleach/
Man Bites Dog has teamed up with the World Land Trust to make the company carbon balanced. Following a year long programme to reduce Man Bites Dog’s carbon footprint, the company is now working with the Trust to offset its remaining carbon emissions. World Land Trust’s carbon offsetting programme involves restoration ecology: the restoration of degraded natural habitat. The Trust’s main offset project is in Buenaventura, Ecuador, where local partners are planting native species of tree to extend an existing wildlife reserve. As these trees grow they sequester CO2 at a rate in excess of 6 tonnes per hectare per year for 20 years or more. As part of its environmental programme Man Bites Dog has reduced energy consumption and switched to a renewable electricity provider; reviewed its supply chain; undertaken an intensive recycling programme; increased use of public transport and introduced a cycle to work scheme.
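The sequestration rate quoted above (in excess of 6 tonnes of CO2 per hectare per year for 20 years or more) lets you estimate how much restored habitat is needed to balance a given annual footprint. The company footprint in the example below is hypothetical; only the per-hectare rate comes from the article.

```python
# Sketch: hectares of restored forest needed to balance a yearly footprint,
# using the >6 t CO2/ha/yr rate quoted for the Buenaventura project.
# The company footprint below is hypothetical.

SEQUESTRATION_T_PER_HA_YR = 6.0     # conservative end of the quoted rate

def hectares_needed(annual_footprint_t: float) -> float:
    """Hectares that must be under restoration to absorb a yearly footprint."""
    return annual_footprint_t / SEQUESTRATION_T_PER_HA_YR

footprint_t = 90.0                   # hypothetical small-agency footprint, t CO2e
print(f"{hectares_needed(footprint_t):.1f} ha of restoration")  # 15.0 ha
```

Using the conservative end of the quoted range gives a margin of safety, in the same spirit as the overcompensation approach described elsewhere in this collection.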
https://www.manbitesdog.com/news-insights/news/barking-up-the-right-tree/
As the COVID-19 pandemic progresses, we cheer ourselves by thinking of future socializing in person. We also think about returning to work or activities we love. These hopes help us through the challenges of physical distancing. Moreover, these challenges show us that we can be more flexible or more creative than we thought we could. For instance, transportation providers have adapted to new ways of serving the public during the pandemic. In the post-COVID-19 future, more transportation providers may recognize the value of adapting their vehicles and services to meet citizens’ diverse needs. Consequently, more service providers may offer accessible public transportation after the COVID-19 pandemic. Accessible Public Transportation After the COVID-19 Pandemic As physical distancing continues, transportation providers have made changes to the services they offer. For instance, buses have changed their schedules and seating arrangements. Similarly, many bus companies have waived fees in order to allow riders to board through back doors. All these changes make vehicles safer for essential workers and other people who need to travel on public transit, such as buses and trains. In the same way, transportation companies can adapt just as proactively to better serve travellers with disabilities. Current AODA Requirements for Conventional Transportation Providers Currently, the Transportation Standards of the AODA only mandate accessibility in public transit vehicles if: - The vehicles were made on or after January 1st, 2013 - The vehicles were purchased on or after July 1st, 2011 In addition, if companies update one feature of their vehicles, such as signage, the updated feature must be accessible. However, remaining features may continue to be inaccessible. This limitation to the standards means that older vehicles may not be welcoming to passengers with disabilities. 
Some individuals responsible for vehicle oversight at public transit companies may feel that they do not need to worry about making older vehicles accessible because the AODA does not require them to do so. They may also fear that installing accessible features will be costly, time-consuming, or inconvenient. However, companies with accessible vehicles better serve both drivers and passengers. Vehicle Accessibility For example, different vehicle set-ups offer passengers different levels of independence. The wheelchair-accessible seats on some vehicles allow many people to secure their own assistive devices. In contrast, other vehicles require drivers to secure passengers’ wheelchairs, scooters, and other devices. During the COVID-19 pandemic, these differences in vehicle accessibility impact drivers and passengers in new ways. The Transportation Standards require drivers to provide assistance securing passengers, upon request. However, some drivers feel that providing this assistance during the COVID-19 pandemic is not safe. Like workers in all essential services, bus drivers deserve to be safe and supported as they do their important work. Nonetheless, serving passengers with disabilities, including securing passengers, is part of that essential work. People of all abilities need to travel to their jobs and essential services, like stores or doctors. When public transit companies invest in vehicles with more accessibility features, their drivers and passengers will be less likely to face this dilemma. In other words, the more accessible vehicles are, the safer they are for drivers and passengers. When transportation companies choose to improve their vehicle accessibility, the changes they make may later bring benefits they do not expect.
https://aoda.ca/accessible-public-transportation-after-the-covid-19-pandemic/
The global pandemic created an urgent need to both safely access and deliver health care services. Telemedicine instantly met that need. Although many think of telemedicine services... Large-Scale Energy Customers and the Potential Value of Becoming an Accelerated Renewable Energy Buyer The Virginia Clean Economy Act (VCEA), passed by the General Assembly in 2020, in part incentivizes commercial, industrial, educational, health care, governmental and other large-scale energy customers... 2021 Notables Leadership Michael C. Guanzon has been appointed to a three-year term to the Board of Managers (Directors) of the University of Virginia Alumni Association. Bar Leadership Several firm... The Copyright Claims Board: Key Considerations in this Alternative Venue for Small Copyright Disputes Individuals involved in relatively small or specialized copyright infringement disputes now have an alternative to potentially costly and time-consuming lawsuits in federal district court. It is anticipated... Mandatory Employee COVID-19 Vaccination Required for Certain Medicare- and Medicaid-certified Providers The Centers for Medicare and Medicaid Services (CMS) published an interim final rule with comment period (IFR or Rule) effective Nov. 5, 2021 that requires certain Medicare-... New Laws and Regulations on Surprise Billing Practices In recent years, balance billing from medical providers has attracted the attention of legislatures at both the state and federal level. On Jan. 1, 2021, Virginia’s balance...
https://www.cblaw.com/news/page/3/
By Gary Austin, Payer/Provider Interoperability Lead *CMS/ONC Rules Timeline was updated on 11/16/2020 to reflect changes to the deadlines. Changes in regulations, technological innovation and the healthcare landscape are propelling application programming interfaces (APIs) forward as the preferred way to exchange both patient clinical and administrative data. In fact, APIs are rapidly beginning to replace or augment legacy electronic data interchange (EDI) transactions in healthcare. This changeover gives organizations who are parties to value-based care (VBC) contracts the opportunity to re-engineer underlying administrative and clinical processes, such as electronic prior authorization (ePA). They can be enabled to create competitive advantage and significantly reduce costs—not to mention improve the member’s/patient’s experience. EDI vs API. So, what’s the difference between EDI and APIs? Let’s level set. - EDI. EDI is the historical way for healthcare organizations to securely collect and share claims and patient data. EDI has several drawbacks. EDI is based on often immutable standards which take years to update or replace. It is not nimble in adapting to changing technologies, shifting business needs, incremental data requirements and regulatory mandates. - APIs. APIs are a key part of the digital transformation of healthcare. An API serves as a software interface between two endpoints (such as a mobile app or electronic health record and a payer’s back-office system) to enable information sharing in real-time. An API can be configured to send or retrieve data that can update an individual’s record or provide aggregated data that can be used to create reports for diagnosis and treatment. APIs can be implemented quickly and inexpensively once the API technical framework is in place. Some legacy EDI systems are not fully interoperable with API technology and may require retrofitting or bolt-on technology. Adoption accelerators. 
The accelerated adoption of APIs for patient data exchange is coming sooner rather than later, under provisions of the 21st Century Cures Act. Payers and providers must comply. Two implementing regulations (click here to read more about them) mandate quick creation and adoption of APIs to enable data sharing among providers, patients and payers. As required by the new rules, APIs will be based on HL7's FHIR (Fast Healthcare Interoperability Resources) standard, Release 4, which is web-based. Many payer systems have yet to convert to the new standard. The new rules give patients access to their health information and move the healthcare system toward greater interoperability. As shown on the timeline below, payers have short deadlines to implement APIs to support patient access to their own data, provider directories and payer-to-payer data exchange.

CMS/ONC Rules Timeline

The API Opportunity

As APIs rapidly become part of the healthcare scene, payers and other entities that support VBC contracts are faced with the prospect of integrating them with their current EDI system or replacing certain transactions altogether. Business process change will be paramount. An example is prior authorization, which is one of the most complex and costly healthcare transactions. Despite many advances in standards and technology, it remains a largely manual process. According to the CAQH 2019 Index, the industry savings opportunity with electronic prior authorization (ePA) is $10 per authorization, or $454 million in potential annual savings for the medical industry. The good news is that much of the frustration and churning involved in completing PA transactions can be reduced today, from eliminating manual review processes to cutting down on claims kicked back to providers for missing or incomplete information.
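To make the FHIR-based exchange concrete, here is a minimal sketch of the RESTful request URL and resource shape a FHIR R4 Patient Access API deals in. The endpoint, patient identifier, and field values are hypothetical illustrations, not details from the rules or the article; a real integration would use the payer's published FHIR base URL and OAuth-secured requests.

```python
# Hypothetical FHIR R4 server base URL -- an assumption for illustration only.
BASE_URL = "https://fhir.payer.example.com/r4"


def patient_request_url(patient_id: str) -> str:
    """Build the RESTful 'read' URL for a single FHIR Patient resource."""
    return f"{BASE_URL}/Patient/{patient_id}"


def summarize_patient(resource: dict) -> str:
    """Extract a display name and birth date from a FHIR Patient resource."""
    name = resource["name"][0]
    display = f'{" ".join(name["given"])} {name["family"]}'
    return f'{display} (born {resource["birthDate"]})'


# A trimmed FHIR R4 Patient resource, shaped as a server might return it:
patient = {
    "resourceType": "Patient",
    "id": "pat-001",
    "name": [{"family": "Rivera", "given": ["Ana"]}],
    "birthDate": "1980-04-12",
}

print(patient_request_url("pat-001"))  # https://fhir.payer.example.com/r4/Patient/pat-001
print(summarize_patient(patient))      # Ana Rivera (born 1980-04-12)
```

The contrast with EDI is the point: the API exchanges self-describing JSON resources over simple HTTP URLs, so new data elements can be added without renegotiating a fixed transaction format.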
The potential benefits of fully automating the PA process are significant:

- Improved patient outcomes: More than nine in 10 physicians (92 percent) say that prior authorization programs have a negative impact on patient clinical outcomes, according to an AMA physician survey. In another survey of insured Americans through the Doctor-Patient Rights Project (2017), respondents whose payer denied coverage of a prescribed treatment reported a median wait time of more than one month to seek approval and be denied. Almost a third (28%) shared that the approval process took three months or longer. While waiting for a payer to consider an authorization request, another third (29%) experienced a worsening of their condition due to delayed treatment. (Source: eHealth Initiative)
- Improved medication adherence: The manual PA process is inefficient and time-consuming and leads patients to abandon their prescriptions 37 percent of the time. Turnaround times for ePA determinations average 35 percent shorter, as reported by a major pharmacy benefit manager, and faster turnaround has been shown to increase medication adherence. (Source: 2019 ePA Adoption Scorecard)
- Reduced administrative costs: Manual (phone/fax) prior authorization is the most costly, time-consuming administrative transaction for providers. On average, providers spend almost $11 per transaction to conduct a prior authorization manually, and payers must often reject claims for missing or incomplete information. According to CoverMyMeds, approximately 11 percent of prescription claims are rejected at the pharmacy, and, on average, 66 percent of those prescriptions require PA. CAQH reported that the improved efficiencies of ePA can directly save plans over $3 per medical prior authorization, an estimated $99 million annually. Another analysis points out that use of ePA could reduce plans' call center volume by as much as 22% in some cases.
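The per-transaction and aggregate savings figures above can be cross-checked with a quick back-of-envelope calculation. The implied transaction volumes below are derived estimates, not numbers taken from the CAQH reports:

```python
# Derived estimates only: dividing the aggregate annual savings quoted in the
# text by the per-transaction savings gives the implied number of
# authorizations that would need to shift from manual to electronic.

industry_savings = 454_000_000      # ePA savings opportunity, CAQH 2019 Index ($)
savings_per_auth = 10               # dollars saved per authorization, industry-wide

medical_plan_savings = 99_000_000   # plans' estimated annual ePA savings ($)
savings_per_medical_pa = 3          # dollars saved per medical prior authorization

implied_total_auths = industry_savings / savings_per_auth
implied_medical_pas = medical_plan_savings / savings_per_medical_pa

print(f"Implied annual authorizations (industry): {implied_total_auths:,.0f}")  # 45,400,000
print(f"Implied annual medical PAs (plan side): {implied_medical_pas:,.0f}")    # 33,000,000
```

In other words, the cited dollar figures presuppose tens of millions of manual authorizations each year, which is why even small per-transaction savings aggregate into hundreds of millions of dollars.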
How APIs Will Benefit Payers

Reengineering administrative processes to facilitate data exchange through APIs can benefit payers in several ways. For example:

- Improved member stickiness. The widespread use of the internet, social media and online retail has changed consumer behavior and expectations. Whether the vendor is an online retailer, an airline, or their health plan, consumers now demand a high level of customer service and expect a seamless, hassle-free experience. The emerging ability to shop for healthcare services puts providers in a new competitive mode, in which patient recruitment and retention will be influenced by how easily patients can access their data and how readily those data are exchanged.
- Improved provider satisfaction. Less hassle and happier patients translate into improved provider satisfaction and morale. Not only does this have implications for retention, but it also is a metric on which payers may be judged.
- Improved population health. Population health is taking on greater importance in the post-COVID-19 world, both to identify populations at risk for the virus and chronic illnesses and to find more cost-effective ways to manage those conditions. Using APIs to share patient data will be key. A 360-degree view of the patient can create unparalleled insights into the patient's condition and gaps in care.
- Compliance with regulatory requirements. The use of APIs by health plans to exchange patient data is now a government mandate. While compliance penalties have been temporarily suspended due to the COVID-19 crisis, they will be back in full force at some point. In the meantime, competitors will be moving full steam ahead, not only meeting the rules but also leveraging the new capabilities for marketplace leadership.
- Steerage.
As price and quality performance data become more transparent, VBC organizations that can provide data to members/patients and steer them to quality- and price-appropriate care delivery will be able to significantly alter their marketplaces and gain competitive advantage.

For payers to capitalize on APIs, it is critical to identify underlying technology gaps related to patients' data access, patient-to-provider/caregiver data sharing and payer-to-payer data transfer. Many VBC organizations do not have the underlying technology, or the revamped administrative processes, in place to accomplish the lift needed for each of these. If they don't start building this capacity as soon as possible, their competitors will leave them in the dust. Point-of-Care Partners is uniquely positioned to guide your organization in the transition to the growing, patient-facing health IT economy. Want to know more about APIs and VBC? Reach out to me at Gary "Lumpy" Austin ([email protected]).
https://blog.pocp.com/blog/value-based-payers-get-on-the-api-train-or-competitors-will-leave-you-at-the-station
The Antitrust Division and the FTC (together, the Agencies) have investigated and litigated antitrust cases in markets across the country involving hospitals, physicians, ambulatory surgery centers, stand-alone radiology programs, medical equipment, pharmaceuticals, and other health care goods and services. In addition to this enforcement, we have conducted hearings and undertaken research on various issues in health care competition. For example, in 2003, we conducted 27 days of hearings on competition and policy concerns in the health care industry, hearing from approximately 250 panelists, eliciting 62 written submissions, and generating almost 6,000 pages of transcripts.(3) As a result of that effort, the Agencies jointly published an extensive report in July 2004 entitled, Improving Health Care: A Dose of Competition.(4) We regularly issue informal advisory letters on the application of the antitrust laws to health care markets, and periodically issue reports and general guidance to the health care community. Through this work, we have developed a substantial understanding of the competitive forces that drive innovation, costs, and prices in health care. The Agencies' experience and expertise have taught us that Certificate-of-Need (CON) laws impede the efficient performance of health care markets. By their very nature, CON laws create barriers to entry and expansion to the detriment of health care competition and consumers. They undercut consumer choice, stifle innovation, and weaken markets' ability to contain health care costs. Together, we support the repeal of such laws, as well as steps that reduce their scope. We have also examined historical and current arguments for CON laws, and conclude that such arguments provide inadequate economic justification for depriving health care consumers of the benefits of competition.
To the extent that CONs are used to further non-economic goals, they impose substantial costs, and such goals can likely be more efficiently achieved through other mechanisms. We hope you will carefully consider the substantial costs that CON laws may impose on consumers as you consider eliminating or otherwise amending Illinois's CON requirements. Although we do not intend to focus on specific aspects of the CON program in Illinois, we are generally familiar with the issues before you and recognize them as issues that CON laws present in other states and markets. Also, please note that it is not the intent of the Agencies to "favor any particular procompetitive organization or structure of health care delivery over other forms that consumers may desire. Rather, [our] goal is to ensure a competitive marketplace in which consumers will have the benefit of high quality, cost-effective health care and a wide range of choices . . ."(5) Our mission is to preserve and promote consumer access to the benefits of competition, rather than any particular marketplace rival or group of rivals. Our concerns about the harm from CON laws are informed by one fundamental principle: market forces tend to improve the quality and lower the costs of health care goods and services. They drive innovation and ultimately lead to the delivery of better health care. In contrast, over-restrictive government intervention can undermine market forces to the detriment of health care consumers and may facilitate anticompetitive private behavior. In our antitrust investigations we often hear the argument that health care is "different" and that competition principles do not apply to the provision of health care services. However, the proposition that competition cannot work in health care is not supported by the evidence or the law. 
Similar arguments made by engineers and lawyers – that competition fundamentally does not work and, in fact, is harmful to public policy goals – have been rejected by the courts, and private restraints on competition have long been condemned.(6) Beginning with the seminal 1943 decision in American Medical Association v. United States, the Supreme Court has come to recognize the importance of competition and the application of antitrust principles to health care.(7) The Antitrust Division and the Federal Trade Commission have worked diligently to make sure that barriers to that competition do not arise. During our extensive health care hearings in 2003, we obtained substantial evidence about the role of competition in our health care delivery system and reached the conclusion that vigorous competition among health care providers "promotes the delivery of high-quality, cost-effective health care."(8) Specifically, competition results in lower prices and broader access to health care and health insurance, while non-price competition can promote higher quality.(9) Competition has also brought consumers important innovations in health care delivery. For example, health plan demand for lower costs and "patient demand for a non-institutional, friendly, convenient setting for their surgical care" drove the growth of Ambulatory Surgery Centers.(10) Ambulatory Surgery Centers offered patients more convenient locations, shorter wait times, and lower coinsurance than hospital departments.(11) Technological innovations, such as endoscopic surgery and advanced anesthetic agents, were a central factor in this success.(12) Many traditional acute care hospitals have responded to these market innovations by improving the quality, variety, and value of their own surgical services, often developing on- or off-site ambulatory surgery centers of their own.
This type of competitive success story has occurred often in health care in the areas of pharmaceuticals, urgent care centers, limited service or "retail" clinics, and the development of elective surgeries such as LASIK, to name just a few. Without private or governmental impediments to their performance, we can expect health care markets to continue to deliver such benefits. CON laws are a regulatory barrier to entry, which, by their nature, are an impediment to health care competition. Accordingly, in A Dose of Competition, we urged states to rethink their CON laws.(13) We made that recommendation in part because the original reason for the adoption of CON laws is no longer valid. Many CON programs trace their origin to a repealed federal mandate, the National Health Planning and Resources Development Act of 1974, which offered incentives for states to implement CON programs. At that time, the federal government and private insurance reimbursed health care charges predominantly on a "cost-plus" basis, which provided incentives for over-investment. There was concern that, because patients are usually not price-sensitive, providers engaged in a "medical arms race" by unnecessarily expanding their services to offer the perceived highest-quality services, allegedly driving up health care costs.(14) The hope was that CON laws would provide a counterweight against that skewed incentive. Thus, it is important to note that: Since the 1970s, the reimbursement methodologies that may in theory have justified CON laws initially have significantly changed. The federal government, as well as private third-party payors, no longer reimburse on a cost-plus basis. In 1986, Congress repealed the National Health Planning and Resources Development Act of 1974. And health plans and other purchasers now routinely bargain with health care providers over prices. 
Essentially, government regulations have changed in a way that eliminates the original justification for CON programs.(15) CON laws also appear to have generally failed in their intended purpose of containing costs. Numerous studies have examined the effects of CON laws on health care costs,(16) and the best empirical evidence shows that "on balance . . . CON has no effect or actually increases both hospital spending per capita and total spending per capita."(17) A recent study conducted by the Lewin Group for the state of Illinois confirms this finding, concluding that "the evidence on cost containment is weak," and that using "the CON process to reduce overall expenditures is unrealistic."(18) Not only have CON laws been generally unsuccessful at reducing health care costs, but they also impose additional costs of their own. First, like any barrier to entry, CON laws interfere with the entry of firms that could otherwise provide higher-quality services than those offered by incumbents.(19) This may tend to depress consumer choice between different types of treatment options or settings,(20) and it may reduce the pressure on incumbents to improve their own offerings.(21) Second, CON laws can be subject to various types of abuse, creating additional barriers to entry, as well as opportunities for anticompetitive behavior by private parties. In some instances, existing competitors have exploited the CON process to thwart or delay new competition to protect their own supra-competitive revenues. Such behavior, commonly called "rent seeking," is a well-recognized consequence of certain regulatory interventions in the market.(22) For example, incumbent providers may use the hearing and appeals process to cause substantial delays in the development of new health care services and facilities. 
Such delays can lead both the incumbent providers and potential competitors to divert substantial funds from investments in such facilities and services to legal, consulting, and lobbying expenditures; and such expenditures, in turn, have the potential to raise costs, delay, or – in some instances – prevent the establishment of new facilities and programs.(23) Moreover, much of this conduct, even if exclusionary and anticompetitive, may be shielded from federal antitrust scrutiny, because it involves protected petitioning of the state government.(24) During our hearings, we gathered evidence of the widespread recognition that existing competitors use the CON process "to forestall competitors from entering an incumbent's market."(25) In addition, incumbent providers have sometimes entered into anticompetitive agreements that were facilitated by the CON process, albeit outside the CON laws themselves; the enforcement matters cited in notes 26 through 32 below, involving West Virginia hospitals and a Vermont home health investigation, are examples. Finally, the CON process itself may sometimes be susceptible to corruption. For example, as the task force is probably aware, in 2004, a member of the Illinois Health Facilities Planning Board was convicted for using his position on the Board to secure the approval of a CON application for Mercy Hospital. In exchange for his help, the Board member agreed to accept a kickback from the owner of the construction company that had been hired to work on the new hospital.(33) Incumbent hospitals often argue that they should be protected against additional competition so that they can continue to cross-subsidize care provided to uninsured or under-insured patients. Under this rationale, CON laws should impede the entry of new health care providers that consumers might enjoy (such as independent ambulatory surgery centers, free-standing radiology or radiation-therapy providers, and single- or multi-specialty physician-owned hospitals) for the express purpose of preserving the market power of incumbent providers.
The providers argue that without CON laws, they would be deprived of revenue that otherwise could be put to charitable use.(34) We fully appreciate the laudable public-policy goal of providing sufficient funding for the provision of important health care services – at community hospitals and elsewhere – to those who cannot afford them, and for whom government payments are either unavailable or too low to cover the cost of care. But at the same time, we want to be clear that the imposition of regulatory barriers to entry as an indirect means of funding indigent care may impose significant costs on all health care consumers – consumers who might otherwise benefit from additional competition in health care markets. First, as noted above, CON laws stifle new competition that might otherwise encourage community hospitals to improve their performance. For example, in studying the effects of new single-specialty hospitals, the Medicare Payment Advisory Commission (MedPAC) found that certain community hospitals responded to competition by improving efficiency, adjusting their pricing, and expanding profitable lines of business.(35) In addition to administrative and operational efficiencies, the MedPAC Report identified several examples of improvements sparked by the entrance of a specialty hospital into a market, including extended service hours, improved operating room scheduling, standardized supplies in the operating room, and upgraded equipment.(36) Second, we note that general CON requirements such as those imposed under Illinois law sweep very broadly, instead of targeting specific, documented social needs (such as indigent care).
Although the Agencies do not suggest to Illinois policy makers any particular mechanism for funding indigent care, we note that solutions more narrowly tailored to the state's recognized policy goals may be substantially less costly to Illinois consumers than the current CON regime, and that the Lewin Group report commissioned by the state identifies various alternatives that may be more efficient in advancing such goals.(37) Third, it is possible that CON laws do not actually advance the goal of maintaining indigent care at general community hospitals. Recently the federal government studied just this issue in connection with the emergence of single-specialty hospitals around the country. That study found that, for several reasons, specialty hospitals did not undercut the financial viability of rival community hospitals.(38) One substantial reason for this was that specialty hospitals generally locate in areas that have above-average population growth. Thus, they are competing for a new and growing patient population, not just siphoning off the existing customer base of the community hospitals. This is consistent with the Lewin Group study showing that safety-net hospitals in non-CON states actually had higher profit margins than safety-net hospitals in CON states.(39) The Agencies believe that CON laws impose substantial costs on consumers and health care markets and that their costs as well as their purported benefits ought to be considered with care. CON laws were adopted in most states under particular market and regulatory conditions substantially different from those that predominate today. They were intended to help contain health care spending, but the best available research does not support the conclusion that CON laws reduce such expenditures. As the Agencies have said, "[O]n balance, CON programs are not successful in containing health care costs, and . . . 
they pose serious anticompetitive risks that usually outweigh their purported economic benefits."(40) CON laws tend to create barriers to entry for health care providers who may otherwise contribute to competition and provide consumers with important choices in the market, but they do not, on balance, tend to suppress health care spending. Moreover, CON laws may be especially subject to abuse by incumbent providers, who can seek to exploit a state's CON process to forestall the entry of competitors in their markets. For these reasons, the Agencies encourage the task force to seriously consider whether Illinois's CON law does more harm than good.

FOOTNOTES

1. This statement draws from testimony delivered on behalf of the Antitrust Division to the General Assembly and Senate of the State of Georgia on February 23, 2007; to the Committee on Health of the Alaska House of Representatives on January 31, 2008; and to the Florida Senate Committee on Health and Human Services Appropriations on March 25, 2008. It also draws from testimony delivered on behalf of the Federal Trade Commission to the Committee on Health of the Alaska House of Representatives on February 15, 2008 and to the Florida State Senate on April 2, 2008.

2. This statement responds to an invitation from Illinois State Senator Susan Garrett, co-chair of the Illinois Task Force on Health Planning Reform, dated June 30, 2008.

3. This extensive hearing record is largely available at http://www.ftc.gov/bc/healthcare/research/

4. Federal Trade Commission and the Department of Justice, Improving Health Care: A Dose of Competition (July 2004), available at http://www.usdoj.gov/atr/public/health_care/204694.htm (hereinafter A Dose of Competition).

5. U.S. Department of Justice and Federal Trade Commission, Statements of Antitrust Enforcement Policy in Health Care, August 1996, Introduction, at 3, available at http://www.usdoj.gov/atr/public/

6. See, e.g., F.T.C. v. Superior Court Trial Lawyers Ass'n, 493 U.S. 411 (1990); National Society of Professional Engineers v. U.S., 435 U.S. 679 (1978).

7. 317 U.S. 519, 528, 536 (1943) (holding that a group of physicians and a medical association were not exempted by the Clayton Act and the Norris-LaGuardia Act from the operation of the Sherman Act, although declining to reach the question whether a physician's practice of his or her profession constitutes "trade" under the meaning of Section 3 of the Sherman Act).

8. A Dose of Competition, Executive Summary, at 4.

9. Id.; see also id., Ch. 3, §VIII.

10. Id., Ch. 3 at 25.

11. Medicare Payment Advisory Commission, Report to the Congress: Medicare Payment Policy § 2F, at 140 (2003), available at http://www.medpac.gov/publications/congressional_reports/Mar03_Entire_report.pdf.

12. A Dose of Competition, Ch. 3 at 24.

13. A Dose of Competition, Executive Summary at 22.

14. See A Dose of Competition, Ch. 8 at 1-2.

15. A Dose of Competition, Ch. 8 at 1-6.

16. A Dose of Competition, Ch. 8 at 1-6; Christopher J. Conover & Frank A. Sloan, Evaluation of Certificate of Need in Michigan, Center for Health Policy, Law and Management, Terry Sanford Institute of Public Policy, Duke University, A Report to the Michigan Dept. of Community Health, 30 (May 2003); David S. Salkever, Regulation of Prices and Investment in Hospitals in the United States, in 1B Handbook of Health Economics, 1489-90 (A.J. Culyer & J.P. Newhouse eds., 2000) ("there is little evidence that [1970's era] investment controls reduced the rate of cost growth."); Washington State Joint Legislative Audit and Review Committee (JLARC), Effects of Certificate of Need and Its Possible Repeal, I (Jan. 8, 1999) ("CON has not controlled overall health care spending or hospital costs."); Daniel Sherman, Federal Trade Commission, The Effect of State Certificate-of-Need Laws on Hospital Costs: An Economic Policy Analysis, iv, 58-60 (1988) (concluding, after empirical study of CON programs' effects on hospital costs using 1983-84 data on 3,708 hospitals, that strong CON programs do not lead to lower costs but may actually increase costs); Monica Noether, Federal Trade Commission, Competition Among Hospitals 82 (1987) (empirical study concluding that CON regulation led to higher prices and expenditures); Keith B. Anderson & David I. Kass, Federal Trade Commission, Certificate of Need Regulation of Entry into Home Health Care: A Multi-Product Cost Function Analysis (1986) (economic study finding that CON regulation led to higher costs, and that CON regulation did little to further economies of scale).

17. See Conover & Sloan, Report to Michigan, supra note 16, at 30.

18. The Lewin Group, An Evaluation of Illinois' Certificate of Need Program, prepared for the Illinois Commission on Government Forecasting and Accountability (February 15, 2007), at 31 (hereinafter Lewin Group).

19. A Dose of Competition, Ch. 8 at 4 (citing Hosp. Corp. of Am., 106 F.T.C. 361, 495 (1985) (Opinion of the Commission) (stating that "CON laws pose a very substantial obstacle to both new entry and expansion of bed capacity in the Chattanooga market" and that "the very purpose of the CON laws is to restrict entry")).

20. With regard to hospital markets, see, e.g., United States Dept. of Health and Human Services, Final Report to the Congress and Strategic Implementing Plan Required under Section 5006 of the Deficit Reduction Act of 2005 (2006), available at http://www.cms.hhs.gov/PhysicianSelfReferral/06a_DRA_Reports.asp (reporting at specialty hospitals a "quality of care at least as good as, and in some cases better than, care provided at local competitor hospitals" for cardiac care, as well as "very high" patient satisfaction in cardiac hospitals and orthopedic specialty hospitals) (citations omitted). In addition, specialty hospitals appear to offer shorter lengths of stay, per procedure, than peer hospitals. See also Medicare Payment Advisory Commission, Report to the Congress: Physician-Owned Specialty Hospitals, vii (Mar. 2005), available at http://www.medpac.gov/documents/Mar05_SpecHospitals.pdf (hereinafter MedPAC).

21. See, e.g., MedPAC, supra note 20, at 10 (observing both administrative improvements ("Some community hospital administrators admit that competition with specialty hospitals has had some positive effects on community hospitals' operations") and other qualitative improvements ("We heard several examples of constructive improvements sparked by the entrance of a specialty hospital into a market, including extending service hours, improving operating room scheduling, standardizing the supplies in the operating room, and upgrading equipment.")).

22. Paul Joskow and Nancy Rose, The Effects of Economic Regulation, in 2 Handbook of Industrial Organization (Schmalensee and Willig, eds., 1989).

23. See, e.g., Armstrong Surgical Ctr., Inc. v. Armstrong County Mem'l Hosp., 185 F.3d 154, 158 (3rd Cir. 1999) (an ambulatory surgery center alleged that a competing hospital had conspired with nineteen of its physicians to make factual misrepresentations as well as boycott threats to the state board, allegedly causing the board to deny the center its CON); St. Joseph's Hosp., Inc. v. Hosp. Corp. of America, 795 F.2d 948 (11th Cir. 1986) (a new hospital applying for a CON alleged that an existing competitor submitted false information to the CON board; that the board relied on that information in denying the CON; and that the defendants also acted in bad faith to obstruct, delay, and prevent the hospital from obtaining a hearing and later a review of the adverse decision).

24. Eastern Rail. Pres. Conf. v. Noerr Motor Frgt., Inc., 365 U.S. 127 (1961).

25. A Dose of Competition, Executive Summary at 22.

26. U.S. v. Charleston Area Med. Ctr., Inc., Civil Action 2:06-0091 (S.D. W.Va. 2006), available at http://www.usdoj.gov/atr/cases/f214400/214477.htm.

27. Justice Department Requires West Virginia Medical Center to End Illegal Agreement (Feb. 6, 2006), available at http://home.atrnet.gov/subdocs/214463.htm.

28. U.S. v. Bluefield Regional Medical Center, Inc., 2005-2 Trade Cases ¶ 74,916 (S.D. W.Va. 2005).

29. See id. at 2-3 (referring to the prohibited conduct).

30. Id.

31. Department of Justice Statement on the Closing of the Vermont Home Health Investigation (Nov. 23, 2005), available at http://www.usdoj.gov/atr/public/press_releases/2005/213248.htm.

32. Id.

33. Plea Agreement at 20-23, U.S. v. Levine (D. Ill. 2005) (No. 05-691).

34. There is an ironic element to this argument: What started as laws intended to control costs have become laws intended to inflate costs. Proponents of CON laws now would use these barriers to entry to stifle competition, protect incumbent market power, frustrate consumer choice, and keep prices and profits high.

35. See, e.g., MedPAC, supra note 20, at 10 ("Some community hospital administrators admit that competition with specialty hospitals has had some positive effects on community hospitals' operations"). Other studies have found that the presence of for-profit competitors leads to increased efficiency at nonprofit hospitals. Kessler, D. and McClellan, M., "The Effects of Hospital Ownership on Medical Productivity," RAND Journal of Economics 33 (3), 488-506 (2002).

36. MedPAC, supra note 20, at 10; see also Greenwald, L. et al., "Specialty Versus Community Hospitals: Referrals, Quality, and Community Benefits," Health Affairs 25, no. 1 (2006): 116-117; Stensland, J. and Winter, A., "Do Physician-Owned Cardiac Hospitals Increase Utilization?" Health Affairs 25, no. 1 (2006): 128 (some community hospitals have responded to the presence of specialty hospitals by recruiting physicians and adding new cardiac catheterization labs).

37. See Lewin Group, at 29 (discussing various financing options for charity care in Illinois).

38. MedPAC, supra note 20, at 23-24; see also MedPAC, Report to the Congress: Physician-Owned Specialty Hospitals Revisited, at 21-25 (August 2006), available at http://www.medpac.gov/documents/Aug06_specialtyhospital_mandated_report.pdf.

39. Lewin Group, at 28.

40. A Dose of Competition, Executive Summary at 22.
https://www.justice.gov/atr/competition-health-care-and-certificates-need-joint-statement-antitrust-division-us-department
The prevalence of audits is soaring after the expiration of COVID-related regulatory relaxation. Despite the strain that hospitals and health systems are facing with the COVID-19 pandemic, compliance audits for reimbursement are on the rise. The Centers for Medicare & Medicaid Services (CMS) suspended audits from March 30 to Aug. 3, 2020; since the suspension was lifted, audit activity has continued to climb into 2021. During last year's audit downtime, Recovery Audit Contractors (RACs) began to data-mine claims, resulting in an increase in medical record requests and overpayment demand letters sent to providers. Experts expect a continued increase in audit activity in the coming months, potentially to previously unseen levels. As such, providers need a well-designed, automated approach to respond proactively and effectively to RAC requests.

Rule Changes and Error Risks

The onset of COVID brought rapid changes to the regulations governing services such as telehealth, inpatient rehab, nursing home care, and more. The frequency of these changes has increased the likelihood of audit errors and misapplication of rules. Claims adjudicated within the first 60 days of the pandemic are at particular risk for errors. With rule changes occurring almost daily, auditors may face difficulty applying the appropriate guidelines to the applicable time frame of the claim. Things have been further complicated by federal work plans that have increased audits, evaluations, and inspections by the U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG). Work plans bring a higher level of scrutiny to areas such as Patient-Driven Payment Model (PDPM) coding and supporting documentation, proper use of Skilled Nursing Facility (SNF) waivers, and appropriate access to and accounting for Provider Relief Funds. Another area of focus is billing patterns for Medicare telehealth services during the pandemic, with a close look at provider characteristics that could potentially pose a risk to the program.
An Increase in Audits In conjunction with these activities, CMS has also been working to resolve provider complaints related to the backlog of Medicare appeals. In 2018, a federal court ruling in favor of the American Hospital Association (AHA) and its member hospitals established deadlines for CMS to reduce its backlog of Medicare appeals at the Administrative Law Judge (ALJ) level. The ruling was accompanied by a $182.3 million increase in funding, which enabled the hiring of an additional 70 ALJs dedicated to adjudicating appeals. With the addition of ALJs, the Office of Medicare Hearings and Appeals (OMHA) estimates that the agency will be able to adjudicate more than 300,000 appeals annually, compared to its previous capacity of approximately 75,000 annually. Healthcare legal experts say this increased capacity creates a double-edged sword for providers. As of March 2021, CMS had already reduced its backlog by nearly 70 percent, tracking toward its goal of a 75-percent reduction by the end of the 2021 fiscal year (FY). With the backlog coming to an end, CMS will likely loosen restrictions previously placed on contractors to slow down audit activity, and providers could see a substantial increase in audits as a result. A Proactive Approach With the increased likelihood that providers will face a RAC audit, the benefit of having a well-defined process and proactive approach to audit responses is apparent. The approach should involve clear audit defense protocols, including thorough documentation of billing compliance. Playing an active role in the audit process helps providers identify any potential errors that may inadvertently be included in auditors’ methodologies and calculations, which will help prevent unfounded determinations and unwarranted recoupment demands. Providers should pay close attention to all overpayment or audit letters, whether from government or commercial payors.
When an Additional Documentation Request (ADR) is received, it is important that everyone involved in billing the claim documents its receipt and acts quickly in response. By taking a proactive approach, collecting documentation of billing compliance and documenting auditors’ mistakes, providers may be able to address unfavorable determinations before RACs initiate the appeals process. Data from the AHA shows that appealing RAC denials is often favorable for providers, with 27 percent of providers saying that RACs reversed a denial during the discussion period, before the formal appeals process began. Of the RAC denials that went through the formal appeals process, 62 percent were overturned. While the results can be favorable for providers, the appeals process is often costly and time-consuming. Data from 2016 shows that 43 percent of hospitals spent more than $10,000 managing the RAC process, and another 24 percent spent more than $25,000. Proactively addressing RAC denials before they enter the appeals process can help providers reduce these costs. The Importance of Documentation According to CMS’s Comprehensive Error Rate Testing (CERT) Research and Statistics Data, the top reason for a RAC denial is a lack of documentation. This occurs when a provider fails to respond appropriately to an ADR, either by neglecting to respond or by stating that they do not have the documentation requested. The second-most common error category is insufficient documentation, meaning that the medical records provided for review are inadequate to support the services billed for payment. Documentation is a critical lifeline for providers to win the battle against RAC denials. A clear workflow is needed to collect necessary documentation and ensure that all materials are included for the dates under review. This includes documents required for payment, such as a signed physician order, to show that billed services were actually provided and medically necessary.
Electronic Submission of Medical Documentation CMS launched the Electronic Submission of Medical Documentation (esMD) initiative to assist providers in the submission and tracking of audit documentation. The process gives providers the ability to electronically receive and respond to documentation requests through electronic medical documentation requests, or eMDR. By digitally transmitting request and response data, providers can reduce the risk of losing audit notifications and ADRs, as well as show timely filing with digital proof of receipt by RACs and CMS. Electronic filing is shown to improve payment response times for audited claims by eliminating time-consuming processes such as screen scraping, printing, faxing, and mailing paper records. The process also reduces the risk of paper copies being delayed, lost in transit, or delivered to the wrong location. A fully digital process helps ensure timely responses, resulting in fewer missed deadlines, fewer discrepancies about submission timelines, and ultimately, fewer denials.
https://racmonitor.com/taking-a-proactive-approach-to-audit-defense/
Try as you might, and I’m sure many of you have, it’s been impossible to avoid the endless discussions about the Brexit deal and the implications for industries across the UK. With so little certainty, it is far from clear how, and to what extent, the finance industry will be affected. The impact will undoubtedly depend on the nature of the deal, if there is one, and the arrangements that are put in place post-Brexit. However, it is still possible to predict what some of the industry’s biggest challenges are likely to be. Passporting is undoubtedly the issue that will have the biggest potential impact on the financial services sector. Passporting is the process by which UK-based financial institutions sell their products and services overseas without having to obtain a licence or get regulatory approval to do so. That includes everyone from insurance providers and banks to fintech firms and asset management companies. Data from the Financial Conduct Authority (FCA) shows that there are nearly 5,500 UK-registered firms that hold at least one passport to do business in another EU member state. With Brexit just a couple of months away, it seems extremely unlikely that passporting will continue. There are a number of deals that have been mentioned, such as the Norwegian deal and the Swiss deal, that could see the continuation of passporting, but neither of those outcomes seems likely. Without passporting in place, UK finance firms will have to seek authorisation to sell their products and services in countries they’ve traded in successfully for years. The process of obtaining that authorisation is likely to be time-consuming, expensive and heavily administrative, with authorisation to trade in each EU market sought separately. That could be hugely damaging for the industry and could be enough to tempt UK-headquartered firms to migrate overseas.
The next crucial problem the finance industry is expected to face is the uncertainty surrounding the regulation of the industry. Progressive regulation has historically been one of the major strengths of the UK finance industry, and more recently, it has been the driving force behind London’s position as the European fintech capital. However, Brexit threatens to complicate matters considerably. The UK will find itself in the position of having to renegotiate more than 40 years of EU regulations, which will undoubtedly take time. Many firms may not be prepared to endure another prolonged period of uncertainty and may instead choose to move their operations elsewhere. Secondly, although the bureaucracy from Brussels was one of the arguments touted in favour of leaving the EU, in recent years the UK has shown a greater appetite for stringent financial regulations than its continental peers. The likelihood is that a tightening of regulations could have a detrimental short-term impact, although greater regulatory autonomy could potentially prove beneficial over the longer term. The third key way Brexit could do lasting damage to the UK’s finance industry is by provoking a mass exodus of the world-class talent the industry currently relies on. At present, London, which is very much at the centre of the financial services industry, benefits from having a highly skilled, industry-specific talent pool on its doorstep. Brexit could bring serious disruptions to that talent pool, with issues such as visa uncertainty and potential job losses, certainly in the near term, coming to the fore. The result could be that the top talent chooses to go elsewhere. On the visa issue, a recent report by the City of London suggested that if the current visa system were rolled out to EU migrants, only a quarter of those living and working in London would meet the requirements. That would be a big issue for the finance industry, which relies heavily on talent from the EU.
https://www.bizepic.com/2019/02/02/how-is-brexit-likely-to-affect-the-finance-industry-in-the-uk/
Washington, D.C., August 6, 2018 – The Satellite Industry Association (SIA) today announced it was encouraged by Senate Committee approval of proposed legislation that would streamline the rules governing commercial satellite earth observation licensing, as well as launch and reentry licensing regulations. On July 25th, Senators Ted Cruz (R-Texas), Bill Nelson (D-Fla.), and Ed Markey (D-Mass.), members of the U.S. Senate Committee on Commerce, Science, and Transportation, introduced the Space Frontier Act. The bill is designed to strengthen the commercial space sector and includes a number of provisions, including reform of the regulatory framework for Earth observation operations. On Wednesday, the proposed legislation was approved by the Committee on Commerce, Science, and Transportation. “SIA is pleased by the legislative proposal to overhaul the framework of regulations that currently govern the licensing process for commercial earth observation and can often lead to approval delays,” said Tom Stroup, President of SIA. “Earth observation and remote sensing is one of the fastest growing segments of the commercial satellite industry. Reforming the rules governing licensing will streamline the regulatory approval process, help enable earth observation innovation and ensure American leadership in commercial space and satellite remote sensing. SIA applauds the steps already taken by the Department of Commerce to both decrease the average review time for remote sensing licenses from 210 days in 2015 to 91 days in 2017 and to seek industry input via the Advance Notice of Proposed Rulemaking on Licensing Private Remote Sensing Space Systems. SIA is further encouraged by specific proposals intended to decrease costly delays, including the introduction of reduced timelines and accountability to industry by the heads of those agencies or departments reviewing license applications.
SIA also supports the modernization and streamlining of FAA commercial launch and reentry regulations to reduce the regulatory burden on industry.” SIA is a U.S.-based trade association providing representation of the leading satellite operators, service providers, manufacturers, launch services providers, and ground equipment suppliers. For more than two decades, SIA has advocated on behalf of the U.S. satellite industry on policy, regulatory, and legislative issues affecting the satellite business. For more information, visit www.sia.org.
https://www.sia.org/satellite-legislation/
Planning for subdivision With the increasing number of homes being built in existing suburbs, subdividing lots is something we hear of often. The zoning of our property outlines the permissible uses of the land. Our homes are often in residential areas so if you want to do something different with your lot, such as opening a shop, you will need to work through what can be an extensive process with your local government. With residential lots, the ‘R Code’ for the site (for example R20 or R80) indicates the size of the lot permitted. These are set by the local government in consultation with the local community and state government. Traditionally, the ‘R’ number indicated the size of the lot relative to the number of homes per hectare. These values have moved slightly over time and the R codes also link to the size and shape of any building that can be constructed on a lot – more on that next week. Once you understand the basics of lot size and coding you will begin to get an idea of what you want to do with the property, including whether you want to keep existing houses. A surveyor will help you understand the conditions of the site and a designer can help you plan the lots effectively. This plan will need to consider the type of subdivision you are trying to achieve. A key consideration is whether you need to share services (such as water, power and sewerage) through the lot. You may also need to provide some shared areas and if this is the case, a strata subdivision will need to be considered. Once you have a plan for your subdivision, planning approval from the Western Australian Planning Commission (WAPC) needs to be sought. The WAPC will work with the local government and key service providers to make the assessment, a process that can take around three months. The subdivision approval issued will be subject to certain conditions that need to be complied with. The work to prepare the site is carried out once this approval is received. 
This could include providing new services such as extending a sewer or adding a new power dome, which can be costly exercises. Fees may also need to be paid to the local government to contribute to the development of the local area. Once the site is ready, a surveyor will plot the lots and boundaries. When all the approval conditions are met the WAPC can endorse the plan. Plans are then lodged with Landgate after which you can apply for new titles, which a solicitor or settlement agent often assists with. The subdivision process can be complex, costly and present a range of challenges beyond what we have outlined here. If you are looking at subdividing or building, HIA members can help you understand the process. Find a local member by visiting www.tradebuild.com.au. CONTACT Housing Industry Association, 9492 9200, www.hia.com.au.
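The traditional R-code arithmetic mentioned earlier, with the ‘R’ number read as homes per hectare, can be sketched as a rough rule of thumb. This is an illustrative calculation only: the function name below is made up, and actual minimum and average lot sizes are set by each local planning scheme and have shifted over time.

```python
def avg_lot_area_m2(r_code: int) -> float:
    """Approximate average lot area implied by a residential R code.

    Traditionally the R number indicated dwellings per hectare, so an
    R20 coding implies roughly 10,000 / 20 = 500 square metres per lot.
    Actual permitted sizes come from the planning scheme, not this formula.
    """
    if r_code <= 0:
        raise ValueError("R code must be a positive dwellings-per-hectare figure")
    return 10_000 / r_code  # one hectare is 10,000 square metres
```

On this rough reading, an R20 lot averages about 500 square metres while an R80 lot averages about 125 square metres, which is why higher R codes permit denser subdivision.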
https://thewest.com.au/lifestyle/new-homes/planning-for-subdivision-ng-b881118092z
On May 5, 2021, after several years of extensive debate and negotiations, including a 2020 legislative session ending in an impasse, the 112th Tennessee General Assembly passed a highly anticipated certificate of need (CON) reform bill. The legislation, Public Chapter No. 557, HB0948/SB1281 (new law), was sponsored by Representative Clark Boyd (R – Lebanon) and Senator Shane Reeves (R – Murfreesboro). The new law makes significant changes to CON requirements for various healthcare providers and the Health Services and Development Agency’s (HSDA) ongoing administration of the CON program and builds upon substantial modifications made to the state’s CON program in 2016. Governor Lee signed the legislation into law on May 26. While debate surrounding the efficacy of CON laws continues, the new law evidences a compromise among various lawmakers and stakeholders. It maintains CON requirements in many places, eases requirements in others and, most importantly, responds to the need for immediate improvement in: access to healthcare in rural communities, mental health and substance abuse treatment and healthcare delivery and infrastructure shortcomings exposed during the COVID-19 pandemic. The summary below outlines many of the new law’s critical changes to the state’s CON program. For a detailed comparison of the existing CON law and the new law, see the summary chart. Healthcare Institutions, Services or Other Actions Exempt from CON Requirements The new law creates several new exemptions to CON requirements for providers serving particular subsets of patients or geographic locations. These exemptions include the following: Mental Health and Substance Abuse Treatment Providers Under the new law, mental health hospitals are no longer considered “healthcare institutions” subject to CON requirements.
And, while a CON will still be required to establish a nonresidential substitution-based treatment center for opiate addiction, the new law permits a licensed hospital to operate nonresidential substitution-based treatment centers located on the hospital’s campus without first obtaining a CON. A CON is also no longer necessary to initiate psychiatric services. Projects in Economically Distressed Counties The new law eliminates CON requirements for projects initiated in counties qualifying as economically distressed and lacking a hospital actively licensed under Title 68 (general acute care hospitals). As of July 1, 2020, the Tennessee Department of Economic & Community Development identified the following eleven counties as economically distressed: Bledsoe, Clay, Cocke, Grundy, Hancock, Hardeman, Lake, Lauderdale, Perry, Scott and Wayne (see Tennessee Distressed Counties Map). The Tennessee Department of Health (TDOH) 2019 Hospital Summary Report, showing licensed hospitals by county, can be found here. Home Health Agencies and Hospices Targeting Specific Patient Populations The new law establishes three exemptions to CON requirements for home health agencies. A CON will no longer be required for home health agencies furnishing home health services exclusively to patients under the federal Energy Employees Occupational Illness Compensation Program Act of 2000 or to patients less than 18 years of age. In addition, a CON is not required to establish a home health agency or a residential hospice limited to providing hospice services to patients under the care of a healthcare research institution. Hospitals in Rural or Distressed Counties Under the new law, a CON will not be required to restart a previously licensed Title 68 hospital (general acute care hospitals) located in a tier 2, tier 3 or tier 4 enhancement county or a county with a population less than 49,000. 
The new law empowers the TDOH to renew the hospital’s license upon finding the hospital will operate in substantially the same manner as it did before closure. But, the party seeking to establish the hospital must still apply for a CON within 12 months of submitting licensure renewal information to TDOH. Thus, it remains to be seen how these changes will be applied in practice. For a map of enhancement counties by tier, see here, and for census data on population by county, see here. Healthcare Institutions, Services or Other Actions Subject to Modified CON Requirements The new law also modifies several existing CON requirements to reduce the types of projects subject to CON requirements and to offer healthcare providers greater flexibility within the existing framework of CON laws. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) Services The new law makes several changes to CON requirements for MRI and PET services. Generally, a CON will be necessary to initiate MRI or PET services or to increase the number of MRI machines only in counties with populations of 175,000 or less. This change reflects a significant shift in policy and, for the first time, treats MRI and PET services more equally under the CON program. For county-level census data, see here. Nursing Homes The new law makes several changes to CON requirements for nursing homes, including an extension of the moratorium on CONs for new nursing home beds to June 30, 2025, and the elimination of several provisions related to the relocation of nursing home beds. The new law also extends the default implementation period for a nursing home CON from two to three years. Changes in Bed Complement The new law vastly simplifies adjustments in licensed bed complement. 
Previously, a provider needed CON approval to adjust bed complements from one category to another (e.g., from acute to long-term); under the new law, a provider will be able to make such adjustments largely outside of the CON program. With the new law, a CON will be required for such adjustments only if the provider lacks a license for the bed category at issue. In addition, a CON will be required only in instances where a provider seeks to increase the number of nursing home beds. Changes in Location of Existing Facilities The new law maintains a CON requirement for most location changes of existing healthcare facilities but authorizes the Executive Director to exempt certain relocation projects. The new law also expands home health agencies’ and hospices’ ability to relocate their principal offices without a CON. Addition of Annual Reporting Requirements The new law adds annual reporting requirements for several health services, including cardiac catheterization, open heart surgery, organ transplantation, home health, hospice, burn units and neonatal intensive care units. Changes of Ownership and Control/CON Transfers While the new law eliminates the requirement for providers to notify the HSDA of changes of ownership occurring within two years of the initial date of licensure, it both maintains and alters restrictions related to changes of control and transfers of CONs. It remains to be seen how these changes will work in practice. Going forward, the HSDA must approve the transfer of CONs occurring as part of the change of control of a licensed healthcare institution and will consider various factors, such as satisfaction of quality standards and maintenance of access to at-risk and underserved communities. Changes to HSDA Procedures and Administration In addition to extending the HSDA’s sunset provision to June 30, 2024, the new law makes various changes to the CON application and review process and the HSDA’s administration and oversight of the CON program. 
Those modifications include the following: CON Application and Review Process The new law revises various provisions prescribing requirements for the CON application and review process, including changing the filing deadlines for letters of intent and CON applications, expediting the CON application review cycle from approximately 60 days to 30 days and formalizing procedures for issuing emergency CONs and for qualifying for consent calendar review. CON Opposition Procedures The new law dramatically alters who can oppose a CON project application. Whereas before opposition to a CON project application could come from virtually anywhere, the new law requires an opponent to be located within 35 miles of the proposed project. For home care organization-related projects, the opponent must now demonstrate that it has served patients in the proposed service area within two years (730 days) of the filing of the CON application being opposed. CON Implementation Period The new law contains several provisions affecting the length of time a provider has to implement a CON project once approved. Previously, a provider was required to demonstrate good cause for an extension of time and “substantial progress” toward implementation; the new law eliminates the substantial progress requirement. The new law also voids a CON if the service authorized by the CON has not been performed for a continuous one-year period after the date of implementation. Notably, for home care organizations, this provision applies to each county in which the home care organization is licensed. If CON authorization is voided, state licensing agencies are prohibited from issuing or renewing licenses. Annual Licensure Fees The new law imposes more substantial annual fees on most healthcare providers. Previously, annual fees ranged from $50-$300 per license, depending on the license type. 
Now, fees range from $50-$5,000 per license, with facilities like hospitals, nursing homes, ambulatory surgery centers and outpatient diagnostic centers seeing the steepest increases. For example, before, hospitals faced a maximum annual fee of $300 per license; now, hospitals will face fees up to $5,000 per license. Expansion of HSDA Duties to Promote Healthcare Quality The new law includes numerous provisions focused on improving healthcare quality and enhancing the HSDA’s role in ensuring that it appropriately evaluates and approves projects that further access to necessary, high quality and cost-effective healthcare services across the state. For example, the HSDA will now be required to conduct studies related to healthcare, including annual needs assessments measuring access to healthcare, identifying access gaps and informing the criteria and standards for issuing CONs. Similarly, at least once every five years, the HSDA must evaluate and update its criteria and standards for issuing CONs based on input received from relevant state agencies and governing bodies, such as the TDOH and the General Assembly’s committees on healthcare. Changes to HSDA Executive Director Duties Overall, the new law increases the Executive Director’s responsibilities for overseeing the administration of the CON program and the standards under which CONs are evaluated. In particular, the Executive Director’s modified duties include submitting an annual report to the General Assembly comparing the actual payer mix and uncompensated care of CON holders to the projections submitted in CON applications and, by January 1, 2023, a plan to consolidate the HSDA and the Board for Licensing Health Care Facilities into a single health facilities commission. While the new law expands some of the Executive Director’s duties and responsibilities, it also limits certain responsibilities that members of the HSDA can delegate to the Executive Director. 
Conclusion The majority of changes under the new law take effect on October 1, 2021. A few, however – particularly those related to nursing homes – were effective once the legislation became law. While the new law makes comprehensive changes to the state’s CON program that generally reduce or eliminate CON requirements, for most healthcare providers, the CON framework remains largely intact. For example, the new law preserves existing CON requirements for two heavily debated healthcare provider types: ambulatory surgery centers and outpatient diagnostic centers. Thus, healthcare providers must continue to evaluate carefully whether their projects are subject to the CON program. The HSDA is expected to provide additional insights on the impact of the new law at its June 23, 2021, meeting. If you have questions about the requirements of the new CON legislation, please contact the authors.
https://www.bassberry.com/news/tennessee-con-reform-bill/
The web developer's complete reference for developing dynamic, data-driven Web sites and applications with Dreamweaver. Master the in-depth knowledge of Dreamweaver to get the most out of this versatile design and development program.

Applied ADO.NET: Building Data-Driven Solutions provides extensive coverage of ADO.NET technology including ADO.NET internals, namespaces, classes, and interfaces. Where most books cover only SQL and OLE DB data providers, Mahesh Chand and David Talbot cover SQL, OLE DB, ODBC data providers and the latest additions to ADO.NET:...

SOA (Service Oriented Architecture) is the next big step in evolution for Web services. This book is the first to educate people about SOA, and to introduce them to the technologies that they can use today, prior to the release of Indigo. It will introduce them to a new architecture and will help them realize why Web services are such a big deal...

Portlet development traditionally has been difficult and time-consuming, requiring costly resources and specialized expertise in multiple technologies. IBM®...

The Java Fundamental Classes Reference provides complete reference documentation on the core Java 1.1 classes. These classes contain architecture-independent methods that serve as Java's gateway to the real world, by providing access to resources such as the network and the host filesystem. The core classes also include...
https://www.pdfchm.net/tag/net/181/
Public procurement is the purchase of supplies, works or services by European governments and public bodies, including GP consortia. The procurement regime is designed to ensure free trade across the EU and prevent state-owned or state-related bodies from succumbing to pressure to ‘buy national’. The EC Treaty lays down obligations to ensure free movement of supplies, services and establishments across the EU. Treaty principles – non-discrimination, equal treatment of potential providers and transparency – apply to all public-sector procurement. These obligations have been transposed into UK law by way of the Public Contracts Regulations 2006. These regulations set out rules for procurement that govern how public bodies in the UK purchase supplies, works or services. Understanding how the rules apply to commissioning The regulations specify that only supplies, works or services that are valued above a certain financial threshold need to be procured in accordance with the rules set out in them – but the rules are specific as to when related or repeat contracts must be aggregated together for the purposes of valuation. Values must reflect a genuine estimate of the contract being tendered (excluding VAT), including the value of any options to extend, even if it is uncertain whether such options will be exercised. There are rules on aggregation of contracts and valuing long-term contracts which consortia should familiarise themselves with. The regulations do not apply to contracts valued below the financial threshold, although treaty principles will still apply. A key issue for consortia will arise when services valued above the financial threshold are being commissioned. Unlike supplies and works, the regulations group services into two separate categories (part A and part B), and apply different rules to each category. Part A services are treated in the same way as supplies and works – the full set of rules apply.
These include services such as IT support, accountancy and most management consultant services. Part B services have only limited rules that apply – these are deemed less likely to attract interest, and therefore competition, from suppliers in other EU member states. Part B will include community healthcare, acute healthcare and mental health services, catering and legal services. Getting started Before starting any procurement, you should be asking the following. Establish what is being commissioned: is it supplies, works or services? If it’s a service, does it fall within part A or part B of the service categories? Work out the contract value: the current threshold values (net of VAT) that apply to the NHS are:
• supplies – £101,323
• works – £3,927,260
• part A services – £101,323
• part B services – £156,442
If financial thresholds are met, then either the full rules set out in the regulations will apply (for supplies, works and part A services contracts) or a limited number of rules will apply (for part B services contracts). Applying the full rules When the full rules apply, procurements by GP consortia will need to begin with an advert that must be placed in the Official Journal of the European Union (OJEU). This alerts all suppliers throughout Europe of the opportunity and gives them the chance to express interest in providing the contract. The opportunity should also, in accordance with principle 2 of the NHS Principles and Rules for Co-operation and Competition (PRCC), be advertised on the NHS Supply2Health procurement portal. The regulations set out a choice of procedures, one of which must be followed by consortia when carrying out a procurement. The most popular procedure used is the ‘restricted procedure’, which is a two-stage tender process. The first stage is a short-listing stage for all suppliers who responded to the advert to assess their general suitability to contract with the consortium in question.
Only the shortlist of top-ranked suppliers from the first stage is invited to the second stage, a competitive tender between those suppliers to identify the best offering. Tenders are evaluated against pre-published criteria and weightings (with no negotiation allowed) to choose who will enter into contracts with consortia. The decision should then be posted on the OJEU website and Supply2Health.

Applying the limited rules

When healthcare services and other part B services are being procured, there is no requirement to place an advert on the OJEU website or to undertake a formal process. However, other rules apply – such as the treaty requirement to demonstrate transparency when procuring contracts. Further, a specific duty set out in the Health and Social Care Bill (section 63) places an obligation on consortia to follow good practice and promote competition when commissioning services. It is unclear exactly what the impact of this will be, but it will probably reflect the current requirements of the PRCC. In practice, it is likely that consortia will have to undertake a proportionate level of advertising and observe the overarching treaty principles when carrying out the process.

Understanding where any willing provider fits in

Any willing provider (AWP) is an accreditation process that results in a list of suitable healthcare providers to whom patients may be referred by their GP. One of the aims of the AWP model is to remove the need for a traditional procurement process, thereby saving tendering costs. Although the exact impact of AWP on the procurement of healthcare services will remain unknown until the Department of Health releases its policy, it is likely to mean consortia will not need to procure healthcare services that are subject to AWP – as there will be no restriction on the providers who may become accredited, and accordingly no restriction on competition.
Patient choice, rather than the commissioner, will determine the amount of services purchased from a particular provider. The recent letter from Sir David Nicholson makes it clear that services subject to tariff will compete only on quality, not price – although negotiation on price will be allowed in certain exceptional circumstances. Under AWP, however, it will be the patient who judges the quality on offer (above the level required for accreditation) rather than the consortium paying for the services.

Getting results

Focus on good design
Before undertaking any procurement, it is important to plan your commissioning and be clear about what is being bought and when you need it, what budget is available and the current offerings in the market in which you are going out to tender.

Be transparent
Not only is transparency required by the regulations and the treaty principles, it is also key to getting exactly what you want. If cost is your main driver rather than quality, design your process around that by weighting your cost criteria higher than any other criteria you use. If quality is your main driver, design your process in a way that allows you to measure the quality being offered and apply weightings to your criteria accordingly.

Be pragmatic in your approach to part B services
When deciding whether to advertise a part B service and undertake a process to deal with the subsequent interest received, take into account the value of the contract and the market in which the subject matter of the contract sits. If a service is small scale and the contract is of low value, it may not be worth advertising.

Know where you stand if it all goes wrong
If the procurement rules are not followed in accordance with the regulations, consortia can be challenged in the courts by aggrieved suppliers. However, it is important to remember that a challenge under the regulations can only be brought by an aggrieved supplier.
In respect of part B healthcare services, if all suppliers are offered the opportunity to provide services under AWP, the chance of a successful challenge is reduced.
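The threshold test described under ‘Getting started’ can be sketched as a small decision function. This is a hypothetical illustration using the NHS figures quoted above (2006 Regulations era), not legal advice; the category names are my own.

```python
# Hedged sketch of the threshold test from the article. The figures are the
# NHS values quoted above (net of VAT); category names are illustrative.

THRESHOLDS_GBP = {
    "supplies": 101_323,
    "works": 3_927_260,
    "part_a_services": 101_323,
    "part_b_services": 156_442,
}

def applicable_rules(category: str, estimated_value_gbp: float) -> str:
    """Classify which procurement regime applies to a proposed contract.

    estimated_value_gbp should be a genuine estimate excluding VAT,
    including the value of any options to extend.
    """
    if estimated_value_gbp < THRESHOLDS_GBP[category]:
        return "below threshold: regulations do not apply, treaty principles still do"
    if category == "part_b_services":
        return "limited rules: no OJEU advert required, treaty principles apply"
    return "full rules: OJEU advert and a prescribed procedure required"
```

For example, a £200,000 community healthcare contract (part B) exceeds its £156,442 threshold but still attracts only the limited rules, whereas a £200,000 IT support contract (part A) triggers the full OJEU process.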
https://www.pulsetoday.co.uk/news/practice-personal-finance/survival-guide-procuring-services/
Abell S1063, the final frontier

Abell S1063, a galaxy cluster, was observed by the NASA/ESA Hubble Space Telescope as part of the Frontier Fields programme. The huge mass of the cluster acts as a cosmic magnifying glass and enlarges even more distant galaxies, so they become bright enough for Hubble to see. Credit: NASA, ESA, and J. Lotz (STScI)

Hubble discovers “wobbling galaxies”

Observations may hint at nature of dark matter

Using the NASA/ESA Hubble Space Telescope, astronomers have discovered that the brightest galaxies within galaxy clusters “wobble” relative to the cluster’s centre of mass. This unexpected result is inconsistent with predictions made by the current standard model of dark matter. With further analysis it may provide insights into the nature of dark matter, perhaps even indicating that new physics is at work.

Dark matter constitutes just over 25 percent of the mass-energy content of the Universe but cannot be directly observed, making it one of the biggest mysteries in modern astronomy. Invisible halos of elusive dark matter enclose galaxies and galaxy clusters alike. The latter are massive groupings of up to a thousand galaxies immersed in hot intergalactic gas. Such clusters have very dense cores, each containing a massive galaxy called the “brightest cluster galaxy” (BCG).

The standard model of dark matter (the cold dark matter model) predicts that once a galaxy cluster has returned to a “relaxed” state after experiencing the turbulence of a merging event, the BCG does not move from the cluster’s centre. It is held in place by the enormous gravitational influence of dark matter.

Hubble image of galaxy cluster MACS J1206

This image from the NASA/ESA Hubble Space Telescope shows the galaxy cluster MACS J1206. Galaxy clusters like these have enormous mass, and their gravity is powerful enough to visibly bend the path of light, somewhat like a magnifying glass.
These clusters are useful tools for studying very distant objects, because this lens-like behaviour amplifies the light from faraway galaxies in the background. They also contribute to a range of topics in cosmology, as the precise nature of the lensed images encapsulates information about the properties of spacetime, the expansion of the cosmos and the distribution of dark matter within the cluster. This is one of 25 clusters being studied as part of the CLASH (Cluster Lensing and Supernova survey with Hubble) programme, a major project to build a library of scientific data on lensing clusters. Credit: NASA, ESA, M. Postman (STScI) and the CLASH Team

But now, a team of Swiss, French and British astronomers have analysed ten galaxy clusters observed with the NASA/ESA Hubble Space Telescope, and found that their BCGs are not fixed at the centre as expected. The Hubble data indicate that they are “wobbling” around the centre of mass of each cluster long after the galaxy cluster has returned to a relaxed state following a merger. In other words, the centre of the visible parts of each galaxy cluster and the centre of the total mass of the cluster — including its dark matter halo — are offset by as much as 40 000 light-years.

Lensing cluster Abell 383

The giant galaxy cluster in the centre of this image contains so much dark matter mass that its gravity bends the light of more distant objects. This means that for very distant galaxies in the background, the cluster’s gravitational field acts as a sort of magnifying glass, bending and concentrating the distant object’s light towards Hubble. These gravitational lenses are one tool astronomers can use to extend Hubble’s vision beyond what it would normally be capable of observing. This way some of the very first galaxies in the Universe can be studied by astronomers. The lensing effect can also be used to determine the distribution of matter — both ordinary and dark matter — within the cluster. Credit: NASA, ESA, J.
Richard (CRAL) and J.-P. Kneib (LAM). Acknowledgement: Marc Postman (STScI)

“We found that the BCGs wobble around the centre of the halos,” explains David Harvey, astronomer at EPFL, Switzerland, and lead author of the paper. “This indicates that, rather than a dense region in the centre of the galaxy cluster, as predicted by the cold dark matter model, there is a much shallower central density. This is a striking signal of exotic forms of dark matter right at the heart of galaxy clusters.”

The wobbling of the BCGs could only be analysed because the galaxy clusters studied also act as gravitational lenses. They are so massive that they warp spacetime enough to distort light from more distant objects behind them. This effect, called strong gravitational lensing, can be used to make a map of the dark matter associated with the cluster, enabling astronomers to work out the exact position of the centre of mass and then measure the offset of the BCG from this centre.

Brightest galaxy in Abell 2261

The giant elliptical galaxy in the centre of this image, taken by the NASA/ESA Hubble Space Telescope, is the most massive and brightest member of the galaxy cluster Abell 2261. Astronomers refer to it as the brightest cluster galaxy (BCG). Spanning a little over one million light-years, the galaxy is about 20 times the diameter of our Milky Way galaxy. Astronomers used Hubble’s Advanced Camera for Surveys and Wide Field Camera 3 to measure the amount of starlight across the galaxy, catalogued as 2MASX J17222717+3207571 but more commonly called A2261-BCG (short for Abell 2261 brightest cluster galaxy). Abell 2261 is located three billion light-years away. The observations were taken between March and May 2011. The Abell 2261 cluster is part of a multi-wavelength survey called the Cluster Lensing And Supernova survey with Hubble (CLASH). Credit: NASA, ESA, M. Postman (Space Telescope Science Institute, USA), T. Lauer (National Optical Astronomy Observatory, USA), and the CLASH team.
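To get a feel for the offsets being measured, the small-angle relation between an angular offset on the sky and a projected physical separation can be sketched as follows. The example angle and distance are illustrative assumptions, not values from the study.

```python
import math

# Small-angle conversion: angular offset (arcsec) at a given distance (Mpc)
# -> projected physical offset in light-years. Example numbers below are
# assumptions for illustration only.

ARCSEC_PER_RADIAN = 180 * 3600 / math.pi      # ~206,265
LIGHT_YEARS_PER_MPC = 3.262e6                 # 1 Mpc ~ 3.26 million light-years

def projected_offset_ly(offset_arcsec: float, distance_mpc: float) -> float:
    """Projected separation in light-years for a small angular offset."""
    return (offset_arcsec / ARCSEC_PER_RADIAN) * distance_mpc * LIGHT_YEARS_PER_MPC

# A ~2.5 arcsecond offset for a cluster ~1000 Mpc away works out to roughly
# 40,000 light-years, the scale of offset quoted in the article.
```

This is why measuring such small wobbles requires the precise mass maps that strong lensing provides: at cluster distances, tens of thousands of light-years subtend only a few arcseconds.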
If this “wobbling” is not an unknown astrophysical phenomenon and is in fact the result of the behaviour of dark matter, then it is inconsistent with the standard model of dark matter and can only be explained if dark matter particles can interact with each other — a strong contradiction to the current understanding of dark matter. This may indicate that new fundamental physics is required to solve the mystery of dark matter.

Galaxy cluster MACS J1720+35

The heart of a vast cluster of galaxies called MACSJ1720+35 is shown in this image, taken in visible and near-infrared light by the NASA/ESA Hubble Space Telescope. The galaxy cluster is so massive that its gravity distorts, brightens, and magnifies light from more distant objects behind it, an effect called gravitational lensing. In the top right an exploding star nicknamed Caracalla and located behind the cluster can just be made out. The supernova is a member of a special class of exploding star called Type Ia, prized by astronomers because it provides a consistent level of peak brightness that makes it reliable for making distance estimates. Finding a gravitationally lensed Type Ia supernova gives astronomers a unique opportunity to check the optical “prescription” of the foreground lensing cluster. The supernova is one of three exploding stars discovered in the Cluster Lensing And Supernova survey with Hubble (CLASH), and was followed up as part of a Supernova Cosmology Project HST program. CLASH is a Hubble census that probed the distribution of dark matter in 25 galaxy clusters. Dark matter cannot be seen directly but is believed to make up most of the universe’s matter. The image of the galaxy cluster was taken between March and July 2012 by Hubble’s Wide Field Camera 3 and Advanced Camera for Surveys. Credit: NASA, ESA, S. Perlmutter (UC Berkeley, LBNL), A. Koekemoer (STScI), M. Postman (STScI), A. Riess (STScI/JHU), J. Nordin (LBNL, UC Berkeley), D. Rubin (Florida State), and C.
McCully (Rutgers University)

Co-author Frederic Courbin, also at EPFL, concludes: “We’re looking forward to larger surveys — such as the Euclid survey — that will extend our dataset. Then we can determine whether the wobbling of BCGs is the result of a novel astrophysical phenomenon or new fundamental physics. Both of which would be exciting!”

Notes

The study was performed using archive data from Hubble. The observations were originally made for the CLASH and LoCuSS surveys.
https://parsseh.com/138126/hubble-discovers-wobbling-galaxies.html
Matter Adds Twist to Cosmic Microwave Background

The cosmic microwave background (CMB) is the oldest discernible light in the Universe. It provides us with a photograph of an infant Universe—the Universe 13.8 billion years ago. But this ancient snapshot has been slightly distorted by intervening matter. As CMB light rays propagated through the Universe to us, they encountered countless lumps of matter that slightly deflected their direction through the effect called “gravitational lensing.” Some aspects of this lensing have been observed before, but now a team of astronomers using the 10-meter South Pole Telescope (SPT) have detected for the first time a subtle twisting in the polarization of the CMB due to gravitational lensing. The achievement, described in Physical Review Letters, could lead to a map of the distribution of matter in the Universe, including the invisible dark matter.

The CMB is often called the “afterglow of the big bang” because it originated from the hot ionized plasma that filled the early Universe. Primordial density fluctuations in this plasma were recorded in the hot and cold spots that have been observed in the CMB. The fluctuations also left their mark in the polarization of the light. The last interaction that CMB photons had with the plasma was elastic (Thomson) scattering, and this would have imprinted a particular polarization pattern. The physics of Thomson scattering tells us that if we look at a ring of light around a hot or cold spot in the CMB, the light will be polarized along radial or tangential lines, respectively (see Fig. 1). However, as the CMB light propagates through a lumpy Universe, gravitational lensing slightly distorts these radial/tangential patterns, creating patterns that look more like vortices. Cosmologists use an analogy with the spatial properties of electromagnetic fields and call the tangential/radial patterns “electric” or E-mode polarization, and the vortices “magnetic” or B-mode polarization.
The E-mode polarization dominates, with only a small fraction of it converted to B mode through gravitational lensing. The DASI Collaboration was the first to detect the primeval E-mode polarization in 2002. Subsequent experiments, including NASA’s Wilkinson Microwave Anisotropy Probe and the European Space Agency’s (ESA) Planck satellite, have measured E-mode polarization with increasing precision. However, B-mode polarization has until now remained undetected.

The SPT Collaboration (Hanson et al.) has managed to extract the faint signal of B-mode polarization in the CMB using arrays of polarization-sensitive bolometers working at millimeter wavelengths. To reduce contamination due to instrumental polarization and various systematics, they compared their B-mode polarization signal with a prediction of the lensing effect based on galaxy counts. Galaxies form in dense regions, so they can tell us where the matter density is high, but they only give part of the story, since most of the matter in the Universe is invisible dark matter. Astronomers therefore require complicated models to convert galaxy observations into a total matter distribution over the sky. The SPT Collaboration focuses on a particular class of galaxies, which contain a lot of warm dust grains emitting light at submillimeter wavelengths. In previous work, the researchers used images from ESA’s Herschel satellite to identify these dusty galaxies and then showed that they could estimate the total matter along a line of sight using the galaxy data. They now use this “matter map” to predict the amount of gravitational lensing and its effect on the B-mode polarization that they measure. This process yields a map of B-mode polarization that would be expected based on the galaxy counts.
When the Collaboration cross-correlated the predicted and measured B-mode polarization maps, they detected a high level of correlation between these two maps at a statistical significance of nearly eight standard deviations. The advantage of B-mode polarization over the traditional method of counting galaxies is that it provides us with a high-fidelity map of the total matter, including dark matter, rather than an indirect estimate based only upon the visible matter such as galaxies. The B-mode maps can complement other methods of detecting dark matter, which tend to measure the amount of matter in a particular galaxy cluster or along particular lines of sight. The SPT Collaboration has thus opened up a new window into the era in which we can finally “see” dark matter filling the intergalactic space via gravitational lensing.

With this new method, cosmologists are hoping to measure, among other things, the mass of neutrinos. Since neutrinos have very small masses (billions of times smaller than the proton mass), they are generally moving too fast to be held in the gravitational potentials of galaxy clusters. They therefore spread out more uniformly, leading to a smoother matter distribution that produces less gravitational lensing, and therefore less B-mode polarization, than might normally be predicted. Full-sky measurements of B-mode polarization could characterize the level of smoothing and thereby estimate the neutrino mass.

What is next?

Gravitational lensing is not the only mechanism that produces B-mode polarization. Primeval ripples in space, called gravitational waves, could have been produced during the earliest moments in the Universe, and they can also produce B-mode polarization in the CMB. Gravitational-wave-induced B modes can be distinguished from lensing-induced B modes in that the former should fluctuate on much larger angular scales than the latter.
Detection of B-mode polarization from the primeval gravitational waves is thought to provide definitive evidence for the cosmic inflation paradigm, which states that the early Universe underwent a period of rapid, accelerating expansion right after its birth, and that the structures we see in the Universe, such as stars, galaxies, and ourselves, originate from quantum fluctuations produced during this inflation. B-mode polarization thus offers a clue to the fundamental question of the origin of our own Universe. The detection of nonprimeval, lensing-induced B-mode polarization by the SPT Collaboration is a significant step toward the ultimate detection of signatures of the primeval gravitational waves from inflation.
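The cross-correlation step, comparing a B-mode map predicted from the galaxy counts with the measured one, can be illustrated with synthetic maps. Nothing here is real SPT data; the map sizes, noise level, and seed are arbitrary assumptions.

```python
import numpy as np

# Toy version of the cross-correlation test: a "measured" map built from the
# "predicted" map plus noise correlates strongly, while an unrelated
# noise-only map does not. All maps here are synthetic assumptions.

rng = np.random.default_rng(42)

def correlation(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Normalised cross-correlation coefficient of two maps."""
    a = map_a.ravel() - map_a.mean()
    b = map_b.ravel() - map_b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

predicted = rng.normal(size=(64, 64))                    # B-modes predicted from the matter map
measured = predicted + 0.5 * rng.normal(size=(64, 64))   # prediction plus instrument noise
control = rng.normal(size=(64, 64))                      # unrelated noise-only map
```

With these settings `correlation(predicted, measured)` comes out near 0.9, while the control map correlates only at the level expected of pure noise, mimicking in miniature the highly significant correlation the SPT team reported.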
https://physics.aps.org/articles/v6/107
A dramatic collision between galaxy clusters captured by the Hubble and Chandra space telescopes provides striking evidence for the existence of dark matter as it separates from ordinary matter. MACSJ0025 formed after an enormously energetic collision between two large galaxy clusters, each one thousand million million times the mass of our Sun, crashing together at millions of kilometres per hour. After the smash, the stars and hot gas in the two clusters slowed down, but the dark matter component sailed right through, allowing astronomers to study the behaviour of the different components. MACSJ0025.4-1222, a cluster showing a clear separation between dark and ordinary matter. The blue cloud-shaped parts flanking the centre show the position of dark matter, mapped by the Advanced Camera for Surveys onboard the NASA/ESA Hubble Space Telescope. The pink middle indicates ordinary matter, charted by NASA’s Chandra X-Ray Observatory. Image: X-ray(NASA/CXC/Stanford/S.Allen); Optical/ Lensing(NASA/ STScI/UC Santa Barbara/M.Bradac). Using visible-light images from Hubble and X-ray images from Chandra, astronomers were able to infer the total mass distribution of dark matter (coloured in blue in the image) and ordinary matter (coloured in pink), which, mostly in the form of hot gas, glows brightly in X-rays. The dark matter was detected indirectly by the gravitational lensing technique, a method that uses the distortion that mass causes as light passes by another object between an observer and background objects. Dark matter cannot be directly seen but it has mass and thus gravitational pull on the clusters’ galaxies. The separation between the ordinary and dark material not only provides observational evidence for dark matter, but supports the idea that dark matter particles interact with each other only very weakly or not at all, apart from the pull of gravity. 
The observations also provide independent confirmation of a similar effect detected two years ago in a target nicknamed the Bullet Cluster, showing that this original observation was not an exception or the product of some unknown error. The collision of galaxies in the Bullet cluster created a bow-shaped shock wave near the right side of the cluster as 70 million degree Celsius gas in the sub-cluster plowed through 100 million degree Celsius gas in the main cluster. Technically the name Bullet cluster refers to the smaller subcluster, which is moving away from the larger one. Image credit: X-ray: NASA/CXC/CfA/M.Markevitch et al.; Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/ U.Arizona/D.Clowe et al. There is one major difference between the Bullet Cluster and MACSJ0025, however, in that MACSJ0025 does not contain a ‘bullet’ of X-ray bright gas powering through the cluster. Nonetheless, the energies of the two collisions are comparable. One of the great accomplishments of modern astronomy has been to establish a complete inventory of the matter and energy content of the Universe. The so-called dark matter makes up approximately 23 percent of this content, five times more than the ordinary matter that can be detected by telescopes. The latest results with MACSJ0025 once again support the case for the existence of dark matter.
http://www.astronomynow.com/080828Clashofclustersprovidesnewdarkmatterclue.html
The Dark Energy Spectroscopic Instrument (DESI) consortium is conducting a five-year survey to map the large-scale structure of the Universe over one-third of the sky and 11 billion years of cosmic history, aiming to study the physics of dark energy.

James Webb Space Telescope Advanced Deep Extragalactic Survey (JADES)

JADES will use guaranteed time in James Webb Space Telescope (JWST) cycle 1 to produce infrared imaging and spectroscopy of unprecedented depth in the two premier extragalactic deep fields, GOODS-South (CDF-S) and GOODS-North (HDF). These data will reveal the early phases of galaxy formation, probing the rest-frame optical spectroscopy and morphology of galaxies from redshifts 2-3 out to z>10. JADES expects to collect data on about 100,000 galaxies, adding to the extensive legacy of these well-studied fields.

Sloan Digital Sky Survey (SDSS)

The Sloan Digital Sky Survey continues its twenty-year legacy of wide-field optical/infrared imaging and spectroscopy, which has led astronomy into the era of large archives and data science. Harvard and Smithsonian are both full institutional members of the latest epoch of the survey, SDSS-V, which started observations in 2020.

The H3 Stellar Spectroscopic Survey

The H3 Survey is answering the question: how did the Milky Way Galaxy grow and assemble over cosmic time? To answer it, a group of scientists at the CfA and the University of Arizona are mapping the outer limits of the Galaxy with more than 200,000 stars observed with the 6.5m MMT telescope in Arizona.

CASTLES Survey

Large galaxies and galaxy clusters sometimes act like lenses. Their gravity distorts the structure of spacetime, magnifying light from more distant objects. This effect, known as strong gravitational lensing, allows astronomers to study galaxies that would ordinarily be too far away to see, map the distribution of mass in the galaxies doing the lensing, and measure the expansion rate of the universe. The CASTLeS (CfA-Arizona Space Telescope Lens Survey) is a program jointly managed by the Center for Astrophysics | Harvard & Smithsonian and the University of Arizona, which used NASA’s Hubble Space Telescope to study multiple aspects of strong gravitational lensing as caused by galaxies. Today, CASTLeS team members maintain a catalog of these lenses, updated with new observational data.

CfA Redshift Catalog

The universe is expanding, carrying galaxies with it like flotsam on a fast-flowing river. This expansion also stretches the wavelength of light, which astronomers call cosmological redshift, since it pushes visible light colors toward the red end of the spectrum. That means astronomers can determine the distance to far-away galaxies by measuring the redshift of the light they produce. The CfA Redshift Catalog (ZCAT), created by researchers at the Center for Astrophysics | Harvard & Smithsonian, is a clearinghouse for historical redshift data from a number of observatories, including the 1.5-Meter Tillinghast Telescope and the MMT Observatory, both CfA-operated telescopes located at the Fred Lawrence Whipple Observatory (FLWO) in Arizona. This data provides a map of galaxies in three dimensions, allowing astronomers to piece together how galaxies group on the largest scales in the universe. ZCAT is an essential resource for data on redshift surveys up to 2008, carrying on the legacy of the original CfA Redshift Surveys conducted in the 1970s and ’80s.

2MASS Redshift Survey

Galaxies are distributed in long filaments, huge “walls”, and large clusters, which astronomers call the large-scale structure of the cosmos. This structure is a tracer of dark matter, and is a way to understand how the universe has evolved. The 2MASS Redshift Survey (2MRS) is an ambitious map of the galaxies relatively close to the Milky Way. Led by astronomers at the Center for Astrophysics | Harvard & Smithsonian, 2MRS used data collected from the Two Micron All-Sky Survey (2MASS), an atlas of the entire sky in infrared light. The completed 2MRS project resulted in a three-dimensional view of the distribution of nearby galaxies, providing a way to understand the structure of the modern universe and the distribution of dark matter.

The Star Formation Reference Survey

Astronomers study star formation as a way of understanding our own origins, as well as the structure of galaxies and the evolution of the cosmos as a whole. For more distant galaxies, however, astronomers often must rely on a single measurement type per galaxy to estimate star-formation rates. The Star Formation Reference Survey (SFRS) is designed to improve and assess the reliability of these measurements by cataloging nearby star formation, using NASA’s Spitzer Space Telescope and other observatories. The survey provides reference data across a wide range of wavelengths, which can be applied to surveys of star formation in both nearby and distant galaxies. The SFRS observational effort is led by astronomers at the Center for Astrophysics | Harvard & Smithsonian, in collaboration with other researchers around the world.
https://cfa.harvard.edu/news/farthest-stars-milky-way-might-be-ripped-another-galaxy
When two objects, a distant light source and a lensing mass in front of it, line up with an observer, light from the source is bent and deflected by the gravity of the lensing mass. This allows scientists to measure the mass of the lens as well as that of its surrounding dark matter (which does not interact with light and can be detected only through its gravitational effects). Much as an object viewed through the base of a wine glass is distorted, the lensed light can be bent into a circle called an Einstein ring. The lens also magnifies the background light source, acting as a "natural telescope" that allows astronomers a more detailed look at distant galaxies than is normally possible. In the captured image, the alignment of the lens is extremely rare: it is accurate to the equivalent of a one-millimeter separation seen at a distance of 20 kilometers - a near-perfect alignment.

Extremely Rare Starbursting Dwarf Galaxy

An international team of astronomers has found the most distant gravitational lens yet — a galaxy that, as predicted by Albert Einstein’s general theory of relativity, deflects and intensifies the light of an even more distant object. The discovery provides a rare opportunity to directly measure the mass of a distant galaxy. But it also poses a mystery: lenses of this kind should be exceedingly rare. Given this and other recent finds, astronomers either have been phenomenally lucky — or, more likely, they have substantially underestimated the number of small, very young galaxies in the early Universe.

Light is affected by gravity, and light passing a distant galaxy will be deflected as a result. Since the first find in 1979, numerous such gravitational lenses have been discovered. In addition to providing tests of Einstein's theory of general relativity, gravitational lenses have proved to be valuable tools.
Notably, one can determine the mass of the matter that is bending the light — including the mass of the still-enigmatic dark matter, which does not emit or absorb light and can only be detected via its gravitational effects. The lens also magnifies the background light source, acting as a "natural telescope" that allows astronomers a more detailed look at distant galaxies than is normally possible.

Gravitational lenses consist of two objects: one further away that supplies the light, and the other, the lensing mass or gravitational lens, which sits between us and the distant light source and whose gravity deflects the light. When the observer, the lens, and the distant light source are precisely aligned, the observer sees an Einstein ring: a perfect circle of light that is the projected and greatly magnified image of the distant light source.

Now, astronomers have found the most distant gravitational lens yet. Lead author Arjen van der Wel (Max Planck Institute for Astronomy, Heidelberg, Germany) explains: "The discovery was completely by chance. I had been reviewing observations from an earlier project when I noticed a galaxy that was decidedly odd. It looked like an extremely young galaxy, but it seemed to be at a much larger distance than expected. It shouldn't even have been part of our observing programme!"

Van der Wel wanted to find out more and started to study images taken with the Hubble Space Telescope as part of the CANDELS and COSMOS surveys. In these pictures the mystery object looked like an old galaxy, a plausible target for the original observing programme, but with some irregular features which, he suspected, meant that he was looking at a gravitational lens.
Combining the available images and removing the haze of the lensing galaxy's collection of stars, the result was very clear: an almost perfect Einstein ring, indicating a gravitational lens with very precise alignment of the lens and the background light source. The lensing mass is so distant that the light, after deflection, has travelled 9.4 billion years to reach us.

Not only is this a new record; the object also serves an important purpose: the amount of distortion caused by the lensing galaxy allows a direct measurement of its mass. This provides an independent test of astronomers' usual methods of estimating distant galaxy masses — which rely on extrapolation from their nearby cousins. Fortunately for astronomers, their usual methods pass the test.

But the discovery also poses a puzzle. Gravitational lenses are the result of a chance alignment. In this case, the alignment is very precise. To make matters worse, the magnified object is a starbursting dwarf galaxy: a comparatively light galaxy (it has only about 100 million solar masses in the form of stars), but extremely young (about 10-40 million years old) and producing new stars at an enormous rate. The chances that such a peculiar galaxy would be gravitationally lensed are very small. Yet this is the second starbursting dwarf galaxy found to be lensed. Either astronomers have been phenomenally lucky, or starbursting dwarf galaxies are much more common than previously thought, forcing astronomers to re-think their models of galaxy evolution.

Van der Wel concludes: "This has been a weird and interesting discovery. It was a completely serendipitous find, but it has the potential to start a new chapter in our description of galaxy evolution in the early Universe."
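The "direct measurement" works because the angular radius of an Einstein ring fixes the mass enclosed within it, via the standard lensing relation M = theta_E^2 c^2 D_l D_s / (4 G D_ls). A minimal sketch, with illustrative (not measured) inputs:

```python
import math

# Mass enclosed within an Einstein ring: M = theta_E^2 c^2 D_l D_s / (4 G D_ls),
# where the D's are angular-diameter distances to the lens, to the source,
# and between them. The example inputs below are illustrative assumptions,
# not the measured values for this particular lens.

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                        # speed of light, m/s
M_SUN = 1.989e30                   # solar mass, kg
MPC = 3.086e22                     # metres per megaparsec
RAD_PER_ARCSEC = math.pi / (180 * 3600)

def einstein_mass_msun(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Mass (in solar masses) enclosed within the Einstein radius."""
    theta = theta_e_arcsec * RAD_PER_ARCSEC
    m_kg = theta**2 * C**2 * (d_l_mpc * d_s_mpc / d_ls_mpc) * MPC / (4 * G)
    return m_kg / M_SUN

# For a ~1 arcsecond ring with lens and source roughly 1000 and 1700 Mpc
# away (and D_ls ~ 1000 Mpc), the enclosed mass is of order 10^11 solar
# masses, i.e. galaxy-scale.
```

Because the mass scales as the square of the ring radius, even a modest uncertainty in the measured angle translates directly into the mass estimate, which is why a clean, near-perfect ring makes the measurement so powerful.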
http://www.quantumday.com/2013/10/extremely-rare-gravitational-lensed.html
In ordinary visible light, this cluster of galaxies doesn’t look like much. There are bigger clusters with larger and more dramatic-looking galaxies in them. But there’s more to this image than galaxies, even in visible light. The gravity from the cluster magnifies and distorts light passing near it, and mapping that distortion reveals something about a substance ordinarily hidden from us: dark matter. This collection of galaxies is famously called the “Bullet Cluster,” and the dark matter inside it was detected through a method called “weak gravitational lensing.” By tracking distortions in light as it passes through the cluster, astronomers can create a sort of topographical map of the mass in the cluster, where the “hills” are places of strong gravity and “valleys” are places of weak gravity. The reason dark matter—the mysterious substance that makes up most of the mass in the universe—is so hard to study is because it doesn’t emit or absorb light. But it does have gravity, and thus it shows up in a topographical map of this kind. The Bullet Cluster is one of the best places to see the effects of dark matter, but it’s only one object. Much of the real power of weak gravitational lensing involves looking at thousands or millions of galaxies covering large patches of the sky. To do that, we need big telescopes capable of mapping the cosmos in detail. One of these is the Large Synoptic Survey Telescope (LSST), which is under construction in Chile, and should begin operations in 2022 and run until 2032. It’s an ambitious project that will ultimately create a topographical map of the universe. “[LSST] is going to observe roughly half of the sky over a ten-year period,” says LSST deputy director Beth Willman. 
The observatory has “a broad range of science goals, from dark energy and weak [gravitational] lensing, to studying the solar system, to studying the Milky Way, to studying how the night sky changes with time.” To study the structure of the universe, astronomers employ two basic strategies: going deep, and going wide. The Hubble Space Telescope, for example, is good at going deep: its design lets it look for some of the faintest galaxies in the cosmos. LSST, on the other hand, will go wide. “The size of the telescope itself isn't remarkable,” says Willman. LSST will be 27 feet in diameter, which puts it in the middle range of existing telescopes. “The unique part of LSST's instrumentation is the field of view of [its] camera that's going to be put on it, which is roughly 40 times the size of the full moon.” By contrast, a normal telescope the same size as LSST would view a patch of the sky less than one-quarter of the moon’s size. In other words, LSST will combine the kind of big-picture image of the sky you’d get by using a normal digital camera, with the depth of vision provided by a big telescope. The combination will be breathtaking, and it’s all due to the telescope’s unique design. LSST will employ three large mirrors, where most other large telescopes use two mirrors. (It’s impossible to make lenses as large as astronomers need, so most observatories use mirrors, which can technically be built to any size.) Those mirrors are designed to focus as much light as possible onto the camera, which will be a whopping 63 inches across, with 3.2 billion pixels. Willman says, “Once it's put together and deployed onto the sky, it will be the largest camera being used for astronomical optical observations.” While ordinary cameras are designed to recreate the colors and light levels that can be perceived by the human eye, LSST’s camera will “see” five colors. 
Some of those colors overlap those seen by the retinal cells in our eyes, but they also include light in the infrared and ultraviolet part of the spectrum. After the Big Bang, the universe was a hot mess of particles. Soon, that quagmire cooled and expanded to the point where the particles could begin attracting each other, sticking together to form the first stars and galaxies and forming a huge cosmic web, the junctions of which grew into large galaxy clusters, linked by long thin filaments and separated by mostly-empty voids. At least that’s our best guess, according to computer simulations that show how dark matter should clump together under the pull of gravity. Weak gravitational lensing turns out to be a really good way to test these simulations. Albert Einstein showed mathematically that gravity affects the path of light, pulling it slightly out of its straight-line motion. In 1919, British astronomer Arthur Eddington and his colleagues successfully measured this effect, in what was the first major triumph for Einstein’s theory of general relativity. The amount light bends depends on the strength of the gravitational field it encounters, which is governed by the source’s mass, size and shape. In cosmic terms, the sun is small and low in mass, so it nudges light by only a small amount. But galaxies have billions and billions of stars, and galaxy clusters like the Bullet Cluster consist of hundreds or thousands of galaxies, along with plenty of hot plasma and extra dark matter holding them all together, and the cumulative effect on light can be quite significant. (Fun fact: Einstein didn’t think lensing would actually be useful, since he only thought of it in terms of stars, not galaxies.) Strong gravitational lensing is produced by very massive objects that take up relatively little space; an object with the same mass but spread out over a larger volume will still deflect light, but not as dramatically.
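The numbers behind Eddington's measurement are easy to check: general relativity predicts a deflection angle of α = 4GM/(c²b) for light grazing a point mass M at impact parameter b. A minimal sketch with rounded physical constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m (impact parameter at the limb)

def deflection_arcsec(mass_kg, impact_m):
    """Light-bending angle alpha = 4GM/(c^2 b), in arcseconds."""
    alpha_rad = 4 * G * mass_kg / (C**2 * impact_m)
    return math.degrees(alpha_rad) * 3600.0

# Starlight grazing the solar limb, as measured during the 1919 eclipse
alpha_sun = deflection_arcsec(M_SUN, R_SUN)
print(alpha_sun)  # ~1.75 arcsec
```

The ~1.75 arcsecond result for the Sun makes the article's point numerically: a single star nudges light by a hair, while a cluster a thousand trillion times more massive bends it far more strongly.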
That’s weak gravitational lensing—usually just called “weak lensing”—in essence. Every direction you look in the universe, you see lots of galaxies. The most distant galaxies may be too faint to see, but we still see some of their light filtering through as background light. When that light reaches a closer galaxy or galaxy cluster on its way to Earth, weak lensing will make that light a little brighter. This is a small effect (that’s why we say “weak”, after all), but astronomers can use it to map the mass in the universe. The 100 billion or so galaxies in the observable universe provide a lot of opportunities for weak lensing, and that’s where observatories like LSST come in. Unlike most other observatories, LSST will survey large patches of the sky in a set pattern, rather than letting individual astronomers dictate where the telescope points. In this way it resembles the Sloan Digital Sky Survey (SDSS), the pioneering observatory that has been a boon to astronomers for nearly 20 years. A major goal of projects like SDSS and LSST is a census of the galactic population. How many galaxies are out there, and how massive are they? Are they randomly scattered across the sky, or do they fall into patterns? Are the apparent voids real—that is, places with few or no galaxies at all? The number and distribution of galaxies gives information about the biggest cosmic mysteries. For example, the same computer simulations that describe the cosmic web tell us we should be seeing more small galaxies than show up in our telescopes, and weak lensing can help us find them. Additionally, mapping galaxies is one guide to dark energy, the name we give the accelerating expansion of the universe. If dark energy has been constant all the time, or if it has different strengths in different places and times, the cosmic web should reflect that. In other words, the topographical map from weak lensing may help us answer one of the biggest questions of all: just what is dark energy? 
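Why weak lensing needs thousands or millions of galaxies can be shown with a toy simulation: any single galaxy's observed shape is dominated by its random intrinsic ellipticity, but averaging many galaxies in a patch of sky beats the noise down and recovers the tiny coherent shear. A sketch with purely illustrative numbers:

```python
import random

random.seed(42)  # reproducible toy example

TRUE_SHEAR = 0.02     # tiny coherent distortion from lensing (assumed)
N_GALAXIES = 100_000  # a weak-lensing patch contains many galaxies

# Observed ellipticity = random intrinsic shape (rms ~0.2) + shear
observed = [random.gauss(0.0, 0.2) + TRUE_SHEAR for _ in range(N_GALAXIES)]

# One galaxy is useless: its intrinsic shape swamps the signal...
print(observed[0])
# ...but the patch average recovers the shear to ~0.2/sqrt(N) precision
estimate = sum(observed) / N_GALAXIES
print(estimate)  # close to 0.02
```

The error on the average shrinks as the square root of the number of galaxies, which is exactly why a wide survey like LSST, with billions of galaxy images, is so powerful for mass mapping.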
Finally, weak lensing could help us with the lowest-mass particles we know: neutrinos. These fast-moving particles don’t stick around in galaxies as they form, but they carry away energy and mass as they go. If they take away too much, galaxies don’t grow as big, so weak lensing surveys could help us figure out how much mass neutrinos have. Like SDSS, LSST will release its data to astronomers regardless of whether they’re members of the collaboration, enabling any interested scientist to use it in their research. “Running the telescope in survey mode, and then getting those extensive high-level calibrated data products out to the entire scientific community are really gonna combine to make LSST be the most productive facility in the history of astronomy,” says Willman. “That's what I'm aiming for anyway.” The power of astronomy is using interesting ideas—even ones we once thought wouldn’t be useful—in unexpected ways. Weak lensing gives us an indirect way to see invisible or very tiny things. For something called “weak,” weak lensing is a strong ally in our quest to understand the universe.
https://www.smithsonianmag.com/science-nature/weak-lensing-helps-astronomers-map-mass-universe-180959572/
Highlights of my recent and current research collaborations: ALMA Lensing Cluster Survey (ALCS) is an extensive survey with the ALMA radio observatory to study the faint sub-mm emission galaxy population. The key goals of the ALCS are to understand the origin of the extragalactic background light, measure the [CII] luminosity functions near the Epoch of Reionization, and constrain the evolution of the molecular gas mass density up to the peak epoch of cosmic star formation. The survey accomplishes this by obtaining 30-GHz-wide spectral scans of 21 clusters to a depth of 0.05 mJy (1.2 mm continuum, 1 sigma). The sample comes from some of the best-studied gravitational lensing clusters observed with HST treasury programs, i.e., CLASH, HFF, and RELICS. ALCS observations map the high-magnification regions of these clusters, allowing us to study both the faint background galaxies that are magnified as well as the galaxy populations in the clusters. CLASH: Cluster Lensing And Supernova survey with Hubble. By observing 25 massive galaxy clusters with HST's panchromatic imaging capabilities (Wide-field Camera 3, WFC3, and the Advanced Camera for Surveys, ACS), CLASH accomplishes its four primary science goals: (1) Map, with unprecedented accuracy, the distribution of dark matter in galaxy clusters using strong and weak gravitational lensing; (2) Detect Type Ia supernovae out to redshift z ~ 2, allowing us to test the constancy of dark energy's repulsive force over time and look for any evolutionary effects in the supernovae themselves; (3) Detect and characterize some of the most distant galaxies yet discovered at z > 7 (when the Universe was younger than 800 million years old - or less than 6% of its current age); and (4) Study the internal structure and evolution of the galaxies in and behind these clusters. LUVOIR: Space Telescope Concept for the 2030s. LUVOIR's broad wavelength coverage, large aperture, and powerful instruments will revolutionize much of astronomy. 
LUVOIR will be the first telescope capable of performing a census of the exoplanets in the Habitable Zones of hundreds of stars like the Sun. I serve on the LUVOIR Science and Technology Definition Team and was the lead scientist on the design of LUVOIR's VIS-NIR imager instrument, HDI. See the LUVOIR final report to NASA and the Astro2020 Decadal Review committee for a thorough description of LUVOIR's capabilities. Local Volume Complete Cluster Survey (LoVoCCS): We are conducting a complete survey of all massive galaxy clusters in the local (z < 0.12) Universe accessible to the Dark Energy Camera, DECam, in the southern hemisphere (104 clusters) and the Hyper Suprime-Cam, HSC, in the northern hemisphere (41 clusters). We obtain u,g,r,i,z images reaching at least to LSST Year-1 depth for the entire virial region of these clusters, which are close enough for detailed spectroscopic, X-ray, and radio analysis. Our data will reveal the full range of cluster galaxy populations, map the dark matter distribution on sub-cluster scales, test cluster scaling relations, reveal lensed sources and serve as the first epoch for the study of supernovae and other transients in and behind clusters with LSST. The full cluster database, including dark matter maps and ancillary data, will be the largest uniform resource for the study of nearby galaxy clusters available before the release of the LSST Year 2 data (depending on the chosen cadence for LSST).
https://www.stsci.edu/~postman/
Astronomers using the NASA/ESA Hubble Space Telescope have studied a giant filament of dark matter in 3D for the first time. Extending 60 million light-years from one of the most massive galaxy clusters known, the filament is part of the cosmic web that constitutes the large-scale structure of the Universe, and is a leftover of the very first moments after the Big Bang. If the high mass measured for the filament is representative of the rest of the Universe, then these structures may contain more than half of all the mass in the Universe. This image shows Hubble’s view of massive galaxy cluster MACS J0717.5+3745. The large field of view is a combination of 18 separate Hubble images. Studying the distorting effects of gravity on light from background galaxies, a team of astronomers has uncovered the presence of a filament of dark matter extending from the core of the cluster. The location of the dark matter is revealed in a map of the mass in the cluster and surrounding region, shown here in blue. The filament visibly extends out and to the left of the cluster core. Using additional observations from ground-based telescopes, the team was able to map the filament’s structure in three dimensions, the first time this has ever been done. The filament was discovered to extend back from the cluster core, meaning we are looking along it. Image Credit: NASA, ESA, Harald Ebeling (University of Hawaii at Manoa) & Jean-Paul Kneib (LAM) The theory of the Big Bang predicts that variations in the density of matter in the very first moments of the Universe led the bulk of the matter in the cosmos to condense into a web of tangled filaments. This view is supported by computer simulations of cosmic evolution, which suggest that the Universe is structured like a web, with long filaments that connect to each other at the locations of massive galaxy clusters. However, these filaments, although vast, are made mainly of dark matter, which is incredibly difficult to observe. 
The first convincing identification of a section of one of these filaments was made earlier this year. Now a team of astronomers has gone further by probing a filament’s structure in three dimensions. Seeing a filament in 3D eliminates many of the pitfalls that come from studying the flat image of such a structure. “Filaments of the cosmic web are hugely extended and very diffuse, which makes them extremely difficult to detect, let alone study in 3D,” says Mathilde Jauzac (LAM, France and University of KwaZulu-Natal, South Africa), lead author of the study. The team combined high resolution images of the region around the massive galaxy cluster MACS J0717.5+3745 (or MACS J0717 for short), taken using Hubble, NAOJ’s Subaru Telescope and the Canada-France-Hawaii Telescope, with spectroscopic data on the galaxies within it from the WM Keck Observatory and the Gemini Observatory. Analysing these observations together gives a complete view of the shape of the filament as it extends out from the galaxy cluster almost along our line of sight. The team’s recipe for studying the vast but diffuse filament combines several crucial ingredients. Joe Liske (aka Dr J) shows how a team of astronomers has used Hubble and a battery of other telescopes to discover the secrets of massive galaxy cluster MACS J0717. They have found that an invisible filament of dark matter extends out of the cluster. This is our first direct glimpse of the shape of the scaffolding that gives the Universe its structure. Video Credit: ESA/Hubble First ingredient: A promising target. Theories of cosmic evolution suggest that galaxy clusters form where filaments of the cosmic web meet, with the filaments slowly funnelling matter into the clusters.
“From our earlier work on MACS J0717, we knew that this cluster is actively growing, and thus a prime target for a detailed study of the cosmic web,” explains co-author Harald Ebeling (University of Hawaii at Manoa, USA), who led the team that discovered MACS J0717 almost a decade ago. Second ingredient: Advanced gravitational lensing techniques. Albert Einstein’s famous theory of general relativity says that the path of light is bent when it passes through or near objects with a large mass. Filaments of the cosmic web are largely made up of dark matter which cannot be seen directly, but their mass is enough to bend the light and distort the images of galaxies in the background, in a process called gravitational lensing. The team has developed new tools to convert the image distortions into a mass map. Third ingredient: High resolution images. Gravitational lensing is a subtle phenomenon, and studying it needs detailed images. Hubble observations let the team study the precise deformation in the shapes of numerous lensed galaxies. This in turn reveals where the hidden dark matter filament is located. “The challenge,” explains co-author Jean-Paul Kneib (LAM, France), “was to find a model of the cluster’s shape which fitted all the lensing features that we observed.” Finally: Measurements of distances and motions. Hubble’s observations of the cluster give the best two-dimensional map yet of a filament, but to see its shape in 3D required additional observations. Colour images, as well as galaxy velocities measured with spectrometers, using data from the Subaru, CFHT, WM Keck, and Gemini North telescopes (all on Mauna Kea, Hawaii), allowed the team to locate thousands of galaxies within the filament and to detect the motions of many of them. A model that combined positional and velocity information for all these galaxies was constructed, and this then revealed the 3D shape and orientation of the filamentary structure.
As a result, the team was able to measure the true properties of this elusive filamentary structure without the uncertainties and biases that come from projecting the structure onto two dimensions, as is common in such analyses. This video shows a computer simulation of the dark matter filament’s shape as it extends back from the cluster into the background. Video Credit: NASA, ESA, L. Calçada The results obtained push the limits of predictions made by theoretical work and numerical simulations of the cosmic web. With a length of at least 60 million light-years, the MACS J0717 filament is extreme even on astronomical scales. And if its mass content as measured by the team can be taken to be representative of filaments near giant clusters, then these diffuse links between the nodes of the cosmic web may contain even more mass (in the form of dark matter) than theorists predicted: so much that more than half of all the mass in the Universe may be hidden in these structures. The forthcoming NASA/ESA/CSA James Webb Space Telescope, scheduled for launch in 2018, will be a powerful tool for detecting filaments in the cosmic web, thanks to its greatly increased sensitivity. The research is presented in a paper entitled “A Weak-Lensing Mass Reconstruction of the Large-Scale Filament Feeding the Massive Galaxy Cluster MACSJ0717.5+3745”, to be published in the 1 November 2012 issue of Monthly Notices of the Royal Astronomical Society. The paper will be published online this week. The first identification of a dark matter filament was published in J. Dietrich et al, “A filament of dark matter between two clusters of galaxies” published in Nature on 4 July 2012. Dark matter, which makes up around three quarters of all matter in the Universe, cannot be seen directly as it does not emit or reflect any light, and can pass through other matter without friction (it is collisionless).
It interacts only by gravity, and its presence must be deduced from its gravitational effects, for example its effect on the rotation rate of galaxies and its ability to deflect light according to the theory of general relativity. The light captured by telescopes encapsulates information about the object that emitted it. One important application of this is to study the redshift of an object (the extent to which its light is reddened by the expansion of the Universe) which can be used to measure distances. Estimating distances based on the relative brightnesses of colours that galaxies appear in images is done using a technique called photometric redshift. Although the precision of the distance estimate is limited, it is a relatively straightforward technique to use on large numbers of galaxies, and it works well even for faint objects. Spectrometers analyse the detailed properties of the light coming from an object. In this study, the subset of galaxies observed with spectrometers provided detailed information on the motion of the objects within the filament.
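The two redshift techniques described here boil down to simple arithmetic: a spectrometer pins down z from a single identified line via z = λ_obs/λ_rest − 1, while a photometric estimate only brackets z by noting which broad filter band a strong spectral feature (such as the 4000 Å break) has been shifted into. A toy sketch, with hypothetical filter edges chosen for illustration:

```python
def redshift(lam_obs_nm, lam_rest_nm):
    """Spectroscopic redshift from one identified spectral line."""
    return lam_obs_nm / lam_rest_nm - 1.0

# H-alpha emitted at 656.3 nm, observed at 984.5 nm -> z = 0.5
z_spec = redshift(984.5, 656.3)
print(round(z_spec, 2))

def photo_z_bounds(band_edges_nm, band_index, break_rest_nm=400.0):
    """Crude photometric redshift: the 4000 A break is only localised
    to one filter band, so z is bracketed rather than pinned down."""
    lo, hi = band_edges_nm[band_index], band_edges_nm[band_index + 1]
    return lo / break_rest_nm - 1.0, hi / break_rest_nm - 1.0

# Hypothetical filter edges (nm); the break is detected in the third band
z_min, z_max = photo_z_bounds([350, 475, 600, 750, 900], 2)
print(z_min, z_max)  # 0.5 0.875
```

The contrast between the two functions mirrors the article's point: photometric redshifts are cheap and work for huge, faint samples, but only spectroscopy delivers the precise velocities used in this study.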
http://annesastronomynews.com/dark-matter-filament-studied-in-3d-for-the-first-time/
During its 25-year-long mission the Hubble Space Telescope has changed our view of the Universe significantly. Some of the most ground-breaking discoveries made in astronomy in the 20th century were made by Hubble, allowing astronomers to better understand the world we live in and to investigate its mysteries even further. One of the main scientific justifications for building Hubble was to measure the size and age of the Universe and test theories about its origin. Images of faint galaxies give “fossil” clues as to how the Universe looked in the remote past and how it may have evolved with time. The Deep Fields gave astronomers the first really clear look back to the time when galaxies were forming. The first deep fields — Hubble Deep Field North and South — gave astronomers a peephole to the ancient Universe for the first time, and caused a real revolution in modern astronomy. Subsequent deep imagery from Hubble, including the Hubble Ultra Deep Field, has revealed the most distant galaxies ever observed. Because of the time it has taken their light to reach us, we see some of these galaxies as they were just half a billion years after the Big Bang. Deep field observations are long-lasting observations of a particular region of the sky intended to reveal faint objects by collecting the light from them for an appropriately long time. The ‘deeper’ the observation is (i.e. longer exposure time), the fainter are the objects that become visible on the images. Astronomical objects can either look faint because their natural brightness is low, or because of their distance. In the case of the Hubble Deep and Ultra Deep Fields, it is the extreme distances involved which make them faint, and hence make observations challenging. Using the different Hubble Deep Fields, astronomers were able to study young galaxies in the early Universe and the most distant primeval galaxies.
The different deep fields are also good hunting grounds for the most distant objects ever observed. Between 2012 and 2014, Hubble created two new deep fields: The Hubble eXtreme Deep Field is the deepest image of the sky taken so far and combines the light of one million seconds of observation. The last Hubble Ultra Deep Field, released in 2014, was observed in the ultraviolet. This image allowed astronomers to study star formation in regions 5 to 10 billion light-years away from us. The top-ranked scientific justification for building Hubble was to determine the size and age of the Universe through observations of Cepheid variables. The periodic brightness variations of these stars depend on physical properties of the stars such as their mass and true brightness. This means that astronomers, just by looking at the variability of their light, can find out about the Cepheids’ physical nature, which then can be used to determine their distance. Astronomers have used Hubble to observe Cepheids with extraordinary results. The Cepheids have then been used as stepping-stones to make distance measurements for supernovae, which have, in turn, given a measure for the scale of the Universe. Today we know the age of the Universe to a much higher precision than before Hubble: around 13.7 billion years. Another purpose of Hubble was to determine the rate of expansion of the Universe, known as the Hubble Constant. After eight years of Cepheid observations this work was concluded by finding that the expansion speed increases by 70 km/second for every 3.26 million light-years you look further out into space. For many years cosmologists have discussed whether the expansion of the Universe would stop in some distant future or continue ever more slowly. The observations of distant supernovae made by Hubble indicate that the expansion is nowhere near slowing down. In fact, due to some mysterious property of space itself, called dark energy, the expansion is accelerating.
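These numbers hang together: 3.26 million light-years is one megaparsec, so the quoted rate is the familiar H0 ≈ 70 km/s/Mpc, and its inverse (the Hubble time) lands close to the ~13.7-billion-year age given above. A quick sanity check (the exact age also depends on the Universe's matter and dark-energy content, which this one-liner ignores):

```python
H0_KM_S_MPC = 70.0          # measured expansion rate, km/s per megaparsec
KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

# Convert H0 to inverse seconds; its inverse is the Hubble time
h0_per_second = H0_KM_S_MPC / KM_PER_MPC
hubble_time_gyr = 1.0 / h0_per_second / SECONDS_PER_GYR
print(hubble_time_gyr)  # ~14 Gyr, the right order for the age of the Universe
```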
This surprising conclusion came from combined measurements of remote supernovae with most of the world’s top-class telescopes, including Hubble. The discovery of the accelerating expansion of the Universe led to three astronomers, Saul Perlmutter, Adam Riess and Brian Schmidt, being awarded the 2011 Nobel Prize in Physics. Most of the light and radiation we can observe in the Universe originates in stars — individual stars, clusters of stars, nebulae lit by stars and galaxies composed of billions of stars. Like human beings, stars are born, mature and eventually die. Hubble has gone beyond what can be achieved by other observatories by linking together studies of the births, lives and deaths of individual stars with theories of stellar evolution. In particular, Hubble’s ability to probe stars in other galaxies enables scientists to investigate the influence of different environments on the lives of stars. This is crucial in order to be able to complement our understanding of the Milky Way galaxy with that of other galaxies. Hubble’s work allowed it to link star formation with stellar evolution. Its infrared instruments are capable of looking through the dust clouds surrounding newly born stars. Some of the most surprising discoveries so far have come about by peering through the clouds of dust surrounding the centre of our Milky Way. Astronomers found that this centre, which was thought to be a calm and almost dead region, is in fact populated with massive infant stars gathered into clusters. The last phases of solar-like stars have been investigated through observations of planetary nebulae and proto-planetary nebulae. These are colourful shells of gas expelled into space by dying stars.
The varying shapes and colours of these intricate structures, with different colours tracing different, often newly created, chemical elements, have shown that the final stages of the lives of stars are more complex than once thought; there also seems to exist a bizarre alignment of planetary nebulae. Gamma Ray Bursts emit very intense gamma-ray radiation for short periods and are observed a few times per day by special gamma-ray detectors on observatories in space. Today, partly due to Hubble, we know that these bursts originate in other galaxies — often at very large distances. Their origin has eluded scientists for a long time, but, after Hubble observations of the atypical supernova SN1998bw and the Gamma Ray Burst GRB 980425, a physical connection between the two became probable. An unusual burst of radiation detected in early 2011 may tell a different story: rather than a star ending its life in a supernova, this burst may be evidence of a star being ripped apart as it falls into a supermassive black hole. If confirmed by further observations, this would be the first time this phenomenon has ever been spotted. Hubble’s high resolution images of the planets and moons in our Solar System can only be surpassed by pictures taken from spacecraft that actually visit them. Hubble even has one advantage over these probes: it can look at these objects periodically and so observe them over much longer periods than any passing probe could. Regular monitoring of planetary surfaces is vital in the study of planetary atmospheres and geology, where evolving weather patterns such as dust storms can reveal much about the underlying processes. In comparison with probes that have to travel vast distances and require years of planning to visit the planets, Hubble is also able to react quickly to sudden dramatic events occurring in the Solar System. This allowed it to witness the stunning plunge of comet Shoemaker-Levy 9 into Jupiter’s atmosphere during the period 16-22 July 1994.
Hubble followed the comet fragments on their last journey and delivered incredible high-resolution images of the impact scars. The consequences of the impact could be seen for several days afterwards, and by studying the Hubble data astronomers were able to gain fundamental information about the composition and density of the giant planet’s atmosphere. Since the impact of Shoemaker-Levy 9, Hubble has continued to study impacts and events on Jupiter, improving our understanding of the Solar System’s largest planet. Pluto and its surrounding moons have also been the target of Hubble’s observations. Several new moons have been discovered, as well as a dwarf planet beyond Pluto, which led to the debate over Pluto’s status as a planet. Hubble also observed the spectacular break up of comet 73P/Schwassmann-Wachmann 3 as it visited the inner Solar System, the asteroid collision P2010/A2 and a mysterious disintegrating asteroid. Hubble’s high resolution has been indispensable in the investigation of the gas and dust disks, dubbed proplyds, around the newly born stars in the Orion Nebula. The proplyds may very well be young planetary systems in the early stages of creation. Also thanks to Hubble we have visual proof today that dusty disks around young stars are common. The first detection of an atmosphere around an extrasolar planet was made for a gas-giant planet orbiting the Sun-like star HD 209458, 150 light-years from Earth. The presence of sodium as well as evaporating hydrogen, oxygen and carbon was detected in light filtered through the planet’s atmosphere when it passed in front of its star as seen from Earth. The details revealed by Hubble are superior to anything seen to date with ground-based instruments. In 2012 Hubble even discovered a completely new type of extra-solar planet: a water world enshrouded by a thick, steamy atmosphere. Later Hubble was able to measure the colour of an exoplanet for the first time and to create the most detailed weather map of an exoplanet yet.
Black holes are objects so dense, and with so much mass, that even light cannot escape their gravity. It is in the study of supermassive black holes that Hubble has made its biggest contribution. It is impossible to observe black holes directly, and astronomers had no way to test their theories until Hubble started its work. The high resolution of Hubble made it possible to see the effects of the gravitational attraction of some of these objects on their surroundings. Hubble has also proved that supermassive black holes are most likely present at the centres of most, if not all, large galaxies. This has important implications for the theories of galaxy formation and evolution. As black holes themselves, by definition, cannot be observed, astronomers have to study their effects on their surroundings. These include powerful jets of electrons that travel many thousands of light-years from the centres of the galaxies. Matter falling towards a black hole can also be seen emitting bright light and if the speed of this falling matter can be measured, it is possible to determine the mass of the black hole itself. This is not an easy task and it requires the extraordinary capabilities of Hubble to carry out these sophisticated measurements. Hubble observations have been fundamental in the study of the jets and discs of matter around a number of black holes. Accurate measurements of the masses have been possible for the first time. Hubble has found black holes 3 billion times as massive as our Sun at the centre of some galaxies. While this might have been expected, Hubble has surprised everyone by providing strong evidence that black holes exist at the centres of all large and even small galaxies. Hubble also managed not only to observe the jets created by black holes but also the glowing discs of material surrounding a supermassive black hole. Furthermore, it appears that larger galaxies are the hosts of larger black holes.
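The mass measurement described above rests on simple Keplerian dynamics: material orbiting at speed v at radius r implies an enclosed mass of roughly M = v²r/G. A back-of-the-envelope version with purely illustrative numbers (not taken from any particular Hubble measurement):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # one parsec, m

def enclosed_mass_msun(v_km_s, r_pc):
    """Keplerian mass estimate M = v^2 r / G, in solar masses."""
    v = v_km_s * 1e3
    r = r_pc * PARSEC
    return v**2 * r / G / M_SUN

# Hypothetical nucleus: gas circling at 550 km/s, 18 pc from the centre
m_bh = enclosed_mass_msun(550, 18)
print(f"{m_bh:.1e}")  # ~1e9 solar masses: supermassive territory
```

Hubble's contribution is precisely the ability to measure v that close to the nucleus; from the ground, the innermost gas motions are blurred together and the estimate becomes much weaker.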
There must be some mechanism that links the formation of the galaxy to that of its black hole, and vice versa. This has profound implications for theories of galaxy formation and evolution and is an ongoing area of research in astronomy. Before Hubble, quasars were considered to be isolated star-like objects of a mysterious nature. Hubble has observed several quasars and found that they all reside at galactic centres. Today most scientists believe that supermassive black holes at the galactic centres are the “engines” that power the quasars. They also believe that quasars, radio galaxies and the centres of so-called active galaxies are just different views of more or less the same phenomenon: a black hole with energetic jets beaming out from two sides. When the beam is directed towards us we see the bright lighthouse of a quasar. When the orientation of the system is different we observe it as an active galaxy or a radio galaxy. This unified model has gained considerable support through a number of Hubble observational programmes. Important clues about star formation lie hidden behind the veil of the dusty, and often very beautiful, star-forming molecular clouds. Astronomers turn their eyes to the birth of other stars and stellar systems in neighbouring stellar ‘maternity wards’ and use these to see a replay of the events that created our own Solar System. The large mosaic of 15 Hubble images showing the central part of the Orion complex is one of the most detailed images of a star-forming region ever made. Dust clouds scatter visible light, but let infrared light through unimpeded, meaning infrared observations are often the only way to see young stars. During the servicing mission in 2009 the Wide Field Camera 3 (WFC3) was installed: an instrument designed to make detailed images both in visible light and in the infrared. The WFC3 offers greatly improved capabilities in the infrared compared to what was possible before.
WFC3’s images of the Carina Nebula made in visible light show dense clouds of dust and gas. But the images taken by the camera of the same region in the infrared make the dust fade, leaving just a faint outline of its location. The young stars forming inside the cloud are suddenly revealed. Hubble has also contributed to our understanding of star formation beyond the confines of the Milky Way. Neither Hubble nor any other telescope is able to see individual stars beyond the Milky Way and a handful of nearby galaxies. However, the telescope has contributed to major discoveries about star formation in the far reaches of the Universe. Studying starlight from the most distant objects Hubble has observed gives clues about how stars formed in the early years of the Universe, and how they have changed over time. Hubble discoveries in the field of star formation in the early Universe include the realisation that stars and galaxies formed earlier in cosmic history than previously thought. All over the Universe stars work as giant reprocessing plants, taking light chemical elements and transforming them into heavier ones. The original, primordial composition of the Universe is studied in fine detail because it is one of the keys to our understanding of processes in the very early Universe. Astronomers have investigated the nature of the gaseous matter that fills the vast volume of intergalactic space. By observing ultraviolet light from a distant quasar, which would otherwise have been absorbed by the Earth’s atmosphere, scientists found the long-sought signature of helium in the early Universe. This was an important piece of supporting evidence for the Big Bang theory. It also confirmed scientists’ expectation that, in the very early Universe, matter not yet locked up in stars and galaxies was almost completely ionised (the atoms were stripped of their electrons). This was an important step forward for cosmology.
Today astronomers believe that around three quarters of the mass of the Universe consists of dark matter, a substance quite different from the normal matter that makes up the familiar world around us. Hubble has played an important part in work intended to establish the amount of dark matter in the Universe and to determine where it is. The riddle of what the ghostly dark matter is made of is still far from solved, but Hubble’s incredibly sharp observations of gravitational lenses have provided stepping stones for future work in this area. Dark matter interacts only through gravity, which means it neither reflects, emits nor obstructs light. Because of this, it cannot be observed directly. However, Hubble studies of how clusters of galaxies bend the light that passes through them let astronomers deduce where the hidden mass lies. This means that they are able to make maps of where the dark matter lies in a cluster. One of Hubble’s big breakthroughs in this area is the discovery of how dark matter behaves when clusters collide with each other. Studies of a number of these clusters have shown that the location of dark matter does not match the distribution of hot gas. This strongly supports theories about dark matter: we expect clouds of hot gas to slow down as they hit each other and the pressure increases. Dark matter, on the other hand, should not experience friction or pressure, so we would expect it to pass through the collision relatively unhindered. Hubble and Chandra observations have indeed confirmed that this is the case. In 2007 an international team of astronomers used Hubble to create the first three-dimensional map of the large-scale distribution of dark matter in the Universe. It was constructed by measuring the shapes of half a million galaxies observed by Hubble. On its way to Hubble, the light of these galaxies travelled down a path interrupted by clumps of dark matter, which deformed the appearance of the galaxies.
Astronomers used the observed distortion of the galaxies’ shapes to reconstruct their original shapes, and could therefore also calculate the distribution of the dark matter in between. This map showed that normal matter, largely in the form of galaxies, accumulates along the densest concentrations of dark matter. The resulting map stretches halfway back to the beginning of the Universe and shows how dark matter grew increasingly clumpy as it collapsed under gravity. Mapping the dark matter distribution down to even smaller scales is fundamental for our understanding of how galaxies grew and clustered over billions of years. Tracing the growth of clustering in dark matter may eventually also shed light on dark energy. More intriguing still than dark matter is dark energy. Hubble studies of the expansion rate of the Universe have found that the expansion is actually speeding up. Astronomers have explained this using the theory of dark energy: a sort of negative gravity that pushes the Universe apart ever faster. Studies of the rate of expansion of the cosmos suggest that dark energy is by far the largest part of the Universe’s mass-energy content, far outweighing both normal matter and dark matter. While astronomers have been able to take steps along the path to understanding how dark energy works and what it does, its true nature is still a mystery. Light does not always travel in straight lines. Einstein predicted in his Theory of General Relativity that massive objects deform the fabric of space itself. When light passes one of these objects, such as a cluster of galaxies, its path is changed slightly. This effect, called gravitational lensing, is only visible in rare cases and only the best telescopes can observe the related phenomena. Hubble’s sensitivity and high resolution allow it to see faint and distant gravitational lenses that cannot be detected with ground-based telescopes.
Gravitational lensing results in multiple images of the original galaxy, each distorted into a characteristic banana-like arc or even stretched into a ring. Hubble was the first telescope to resolve details within these multiple banana-shaped arcs. Its sharp vision can reveal the shape and internal structure of the lensed background galaxies directly, and in this way one can easily match by eye the different arcs coming from the same background object, be it a galaxy or even a supernova. Since the amount of lensing depends on the total mass of the cluster, gravitational lensing can be used to “weigh” clusters. This has considerably improved our understanding of the distribution of dark matter in galaxy clusters, and hence in the Universe as a whole. The effect of gravitational lensing has also allowed a first step towards revealing the mystery of dark energy. As gravitational lenses function as magnifying glasses, it is possible to use them to study distant galaxies from the early Universe which would otherwise be impossible to see.
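As a rough illustration of how lensing can "weigh" a cluster: when a lens bends background light into arcs at an angular (Einstein) radius, general relativity relates that radius to the mass enclosed. The sketch below uses typical but invented numbers; the 30-arcsecond arc radius and the three distances are illustrative assumptions, not values from any specific Hubble observation.

```python
# Sketch: estimating a cluster's mass from the Einstein radius of its arcs.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
MPC = 3.086e22       # metres per megaparsec
M_SUN = 1.989e30     # solar mass, kg

def cluster_mass_from_einstein_radius(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Mass enclosed within the Einstein radius of an axisymmetric lens:
    M = theta_E^2 * c^2 / (4G) * D_l * D_s / D_ls,
    with D_l, D_s, D_ls the lens, source and lens-to-source distances."""
    theta_e = theta_e_arcsec * math.pi / (180 * 3600)  # arcseconds -> radians
    d_l, d_s, d_ls = (d * MPC for d in (d_l_mpc, d_s_mpc, d_ls_mpc))
    return theta_e**2 * c**2 / (4 * G) * d_l * d_s / d_ls

# Illustrative cluster lens with ~30 arcsecond arcs:
m = cluster_mass_from_einstein_radius(30.0, 1000.0, 2400.0, 1800.0)
print(f"Enclosed mass ~ {m / M_SUN:.2e} solar masses")  # of order 10^14
```

Numbers of this order, around a hundred trillion solar masses, are indeed what lensing studies find for rich clusters, which is why arc radii are such a direct scale for the mass maps described above.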
https://www.rocketstem.org/2015/04/20/greatest-discoveries-of-hubble-space-telescope/
Astronomers discovered a rotating baby galaxy 1/100th the size of the Milky Way using the Atacama Large Millimeter/submillimeter Array (ALMA) at a time when the universe was only 7 percent of its current age. The team was able to investigate the nature of small and dark “normal galaxies” in the early universe for the first time, representing the main population of the first galaxies, which greatly advances our understanding of the initial phase of galaxy evolution. “Many of the galaxies that existed in the early universe were so small that their brightness is well below the limit of the current largest telescopes on Earth and in Space,” says Nicolas Laporte, a Kavli Senior Fellow at the University of Cambridge. “However, gravitational lensing magnified the light from the galaxy named RXCJ0600-z6, making it an ideal target for studying the properties and structure of typical baby galaxies.” Gravitational lensing is a natural phenomenon in which light emitted from a distant object is bent by the gravity of a massive body in the foreground, such as a galaxy or a galaxy cluster. The term “gravitational lensing” refers to the fact that the massive object’s gravity acts as a lens. The light of distant objects is intensified and their shapes are stretched when we look through a gravitational lens. In other words, it is a floating “natural telescope.” The ALMA Lensing Cluster Survey (ALCS) team used ALMA to look for a large number of galaxies enlarged by gravitational lensing in the early universe. Researchers are able to discover and study fainter galaxies by combining the power of ALMA with the assistance of natural telescopes. Why is it critical to investigate the most distant galaxies in the early universe?
According to theory and simulations, the majority of galaxies formed a few hundred million years after the Big Bang are small and thus faint. Although several galaxies in the early universe have previously been observed, due to telescope capabilities those studied were limited to the most massive objects, which are not representative of the galaxy population in the early universe. Focusing on the fainter and more numerous galaxies is the only way to understand the standard formation of the first galaxies and obtain a complete picture of galaxy formation. The ALCS team carried out a large-scale observation program that lasted 95 hours, which is unusually long for ALMA observations, to observe the central regions of 33 galaxy clusters that could cause gravitational lensing. RXCJ0600-2007, one of these clusters, is located in the direction of the constellation Lepus and has a mass 1000 trillion times that of the Sun. The researchers discovered a single distant galaxy that is being affected by the gravitational lens created by this natural telescope. ALMA detected light from carbon ions and stardust in the galaxy, which, when combined with data from the Gemini telescope, determined that the galaxy is seen as it was about 900 million years after the Big Bang (12.9 billion years ago). According to further analysis of these data, a portion of this source is seen to be 160 times brighter than it is intrinsically. It is possible to “undo” the gravitational lensing effect and restore the magnified object’s original appearance by precisely measuring the mass distribution of the cluster of galaxies. The team was able to reconstruct the actual shape of the distant galaxy RXCJ0600-z6 by combining data from the Hubble Space Telescope and the European Southern Observatory’s Very Large Telescope with a theoretical model. This galaxy’s total mass is about 2 to 3 billion times that of the Sun, or about 1/100th the size of our own Milky Way Galaxy.
The team was taken aback by the fact that RXCJ0600-z6 is rotating. Traditionally, gas in young galaxies was thought to move randomly and chaotically. ALMA has only recently discovered several rotating young galaxies that have thrown the traditional theoretical framework for a loop, but these were several orders of magnitude brighter (larger) than RXCJ0600-z6. “Our study demonstrates for the first time that we can directly measure the internal motion of such faint (less massive) galaxies in the early Universe and compare it to theoretical predictions,” says Kotaro Kohno, a University of Tokyo professor and the ALCS team’s leader. “The fact that the RXCJ0600-z6 has a very high magnification factor raises expectations for future research,” says Seiji Fujimoto, a DAWN fellow at the Niels Bohr Institute. “Among hundreds of candidates, this galaxy has been chosen to be observed by the James Webb Space Telescope (JWST), the next-generation space telescope set to launch this autumn.” We will learn about the properties of gas and stars in a baby galaxy, as well as its internal motions, using ALMA and JWST observations. When the Thirty Meter Telescope and the Extremely Large Telescope are finished, they may be able to detect star clusters in the galaxy and even resolve individual stars.
https://assignmentpoint.com/astronomers-found-a-rotating-baby-galaxy-with-help-of-cosmic-telescope/
Two teams of astronomers using the NASA/ESA Hubble Space Telescope have discovered three distant exploding stars that have been magnified by the immense gravity of foreground galaxy clusters, which act like “cosmic lenses”. These supernovae offer astronomers a powerful tool to check the prescription of these massive lenses. Massive clusters of galaxies act as “gravitational lenses” because their powerful gravity bends light passing through them. This lensing phenomenon makes faraway objects behind the clusters appear bigger and brighter, revealing objects that might otherwise be too faint to see, even with the largest telescopes. The new findings are the first steps towards the most precise prescription, or map, ever made for such a lens. How much a gravitationally lensed object is magnified depends on the amount of matter in a cluster, including dark matter, which we cannot see directly. Astronomers develop maps that estimate the location and amount of dark matter lurking in a cluster. These maps are the lens prescriptions of a galaxy cluster and predict how distant objects behind a cluster will be magnified when their light passes through it. But how do astronomers know this prescription is accurate? Now, two independent teams of astronomers from the Supernova Cosmology Project and the Cluster Lensing And Supernova survey with Hubble (CLASH) have found a new method to check the prescription of a gravitational lens. They analysed three supernovae, nicknamed Tiberius, Didius and Caracalla, which were each lensed by a different massive galaxy cluster: Abell 383, RXJ1532.9+3021 and MACS J1720.2+3536, respectively. Luckily, two and possibly all three of these supernovae appeared to be a special type of exploding star that can be used as a standard candle. “Here we have found Type Ia supernovae that can be used like an eye chart for each lensing cluster,” explained Saurabh Jha of Rutgers University, USA, a member of the CLASH team.
“Because we can estimate the intrinsic brightness of the Type Ia supernovae, we can independently measure the magnification of the lens, which is not possible with other background sources." The teams measured the brightnesses of the lensed supernovae and compared them to each explosion's intrinsic brightness to calculate how much brighter the exploding stars were made due to gravitational lensing. One supernova in particular stood out, appearing to be about twice as bright as would have been expected if not for the cluster's magnification power. The three supernovae were discovered in the CLASH survey, which used Hubble to probe the distribution of dark matter in 25 galaxy clusters. Two of the supernovae were found in 2012; the other during 2010 and 2011. To perform their analyses, both teams used Hubble observations alongside observations from other space- and ground-based telescopes to provide independent estimates of the distances to these exploding stars. In some cases the observations allowed direct confirmation of a Type Ia pedigree. In other cases the supernova spectrum was weak or overwhelmed by the light of its parent galaxy. In those cases the brightening and fading behaviour of the supernovae in different colours was used to help establish the supernova type. Each team compared its results with independent theoretical models of the clusters' dark matter content. They each came to the same conclusion: the measured magnifications matched the models' predictions. “It is encouraging that the two independent studies reach quite similar conclusions,” explained Supernova Cosmology Project team member Jakob Nordin of the E.O. Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California, Berkeley. “These pilot studies provide very good guidelines for making future observations of lensed supernovae even more accurate.” Nordin is the lead author on the team's science paper describing the findings.
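The "eye chart" logic boils down to a few lines of arithmetic: a Type Ia supernova's unlensed apparent magnitude is predictable from its standard-candle luminosity and redshift-derived distance, so any extra brightness is the lens magnification. The magnitudes below are invented for illustration, not the actual CLASH or Supernova Cosmology Project measurements.

```python
# Sketch: measuring a lens's magnification with a standard-candle supernova.
def magnification_from_magnitudes(m_observed, m_expected):
    """Lensing magnification mu = F_observed / F_intrinsic.
    In the astronomical magnitude system, a difference of dm magnitudes
    corresponds to a flux ratio of 10^(-0.4 * dm)."""
    return 10 ** (-0.4 * (m_observed - m_expected))

# A supernova appearing 0.75 mag brighter than a standard candle should
# at its distance is magnified about 2x, like the brightest of the three:
mu = magnification_from_magnitudes(23.25, 24.00)
print(f"measured magnification: {mu:.2f}")  # ~2.0
```

Comparing this directly measured magnification against the value predicted by the cluster's dark matter map is exactly the "prescription check" the two teams performed.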
“Building on our understanding of these lensing models also has implications for a wide range of key cosmological studies,” added Supernova Cosmology Project leader Saul Perlmutter of Berkeley Lab and the University of California, Berkeley. “These lens prescriptions yield measurements of the cluster masses, allowing us to probe the cosmic competition between gravity and dark energy as matter in the Universe gets pulled into galaxy clusters.” Dark energy is the mysterious, invisible energy that is accelerating the Universe's expansion. The astronomers are optimistic that Hubble surveys such as Frontier Fields, and future telescopes including the infrared James Webb Space Telescope, will find more of these unique exploding stars. This is important: if you want to check the prescription of your lens, you really want to check it in more than one place. “Hubble is already hunting for them in the Frontier Fields, a three-year Hubble survey of the distant universe which uses massive galaxy clusters as gravitational lenses, to reveal what lies beyond them,” said CLASH team member Brandon Patel of Rutgers University, the lead author on the science paper announcing the CLASH team's results. The results from the CLASH team will appear in the May 2014 issue of The Astrophysical Journal. The Supernova Cosmology Project's findings will appear in the May 2014 edition of the Monthly Notices of the Royal Astronomical Society. Notes: Albert Einstein predicted the lensing effect in his theory of general relativity. Dark matter is believed to make up the bulk of the Universe's matter, and is therefore the source of most of a cluster's gravity. An astronomical “standard candle” is any type of luminous object whose intrinsic power is so accurately determined that it can be used to make distance measurements based on the rate the light dims over astronomical distances.
The astronomers obtained observations in visible light from Hubble's Advanced Camera for Surveys and in infrared light from the Wide Field Camera 3. The work is published in two papers: the CLASH team's in The Astrophysical Journal and the Supernova Cosmology Project's in Monthly Notices of the Royal Astronomical Society. The CLASH survey is led by Marc Postman of the Space Telescope Science Institute. The CLASH supernova project is co-led by Adam Riess of the Space Telescope Science Institute and Johns Hopkins University and Steven Rodney of Johns Hopkins University. Aiding with the analysis on this Hubble study were Curtis McCully of Rutgers University and Julian Merten and Adi Zitrin of the California Institute of Technology in Pasadena. Lead author of the CLASH paper was Brandon Patel. The Supernova Cosmology Project team included Jakob Nordin and Saul Perlmutter; others who worked on the supernova analysis were David Rubin of Florida State University in Tallahassee and Greg Aldering of Lawrence Berkeley National Lab.
https://www.spacetelescope.org/news/heic1409/
An international team of researchers has found for the first time that the connection between a galaxy cluster and the surrounding dark matter is not characterized solely by the mass of the cluster, but also by its formation history. Galaxy clusters are the biggest celestial objects in the sky, consisting of thousands of galaxies. They form from non-uniformities in the matter distribution established by cosmic inflation at the beginning of the Universe. Their growth is a constant fight between the gathering of dark matter by gravity and the accelerated expansion of the universe due to dark energy. By studying galaxy clusters, researchers can learn more about these biggest and most mysterious building blocks of the Universe. Led by Hironao Miyatake (formerly a JSPS fellow, currently at NASA's Jet Propulsion Laboratory), Surhud More and Masahiro Takada of Japan's Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the research team challenged the conventional idea that the connection between galaxy clusters and the surrounding dark matter environment is solely characterized by their mass. Based on the nature of the non-uniform matter distribution established by cosmic inflation, it was theoretically predicted that other factors should affect the connection. However, no one had succeeded in seeing it in the real Universe until now. The team divided almost 9000 galaxy clusters from the Sloan Digital Sky Survey DR8 galaxy catalog into two samples based on the spatial distribution of galaxies inside each cluster. By using gravitational lensing they confirmed that the two samples have similar masses, but they found that the spatial distribution of the clusters themselves was different: galaxy clusters in which member galaxies bunched up towards the center were less strongly clustered in space than clusters in which member galaxies were more spread out. The difference in distribution is a result of the different dark matter environments in which they form.
Researchers say their findings show that the connection between a galaxy cluster and the surrounding dark matter is not characterized solely by the mass of the cluster, but also by its formation history. The results from this study will need to be taken into account in future large-scale studies of the universe, and in research looking into the nature of dark matter or dark energy, neutrinos, and the early universe. The study will be published on January 25 in Physical Review Letters, and has been selected as an Editor's Suggestion. Comment from Surhud More: “The signal we measure is puzzlingly large compared to naive theoretical estimates. The sheer number of tests for systematics that we had to perform to convince ourselves that the signal is real was the most difficult part of this research.” Comment from Masahiro Takada: “This is a truly exciting finding! We can use the upcoming Subaru Hyper Suprime-Cam (HSC) data to further check and advance our understanding of the assembly history of galaxy clusters.” Comment from Hironao Miyatake: “I am thrilled that we have finally found clear evidence of the connection between the internal structure of clusters and the surrounding dark matter environment. We checked lots of things to make sure of this result, and finally concluded it is real! I am also excited that our findings will give insights on many aspects of the universe, such as large-scale structure, dark matter and dark energy, and inflation physics. It is just the start. We hope we can get more exciting results from the upcoming HSC data.” Comment from David Spergel: “Cosmologists have long held a very simple theory: ‘the properties of a cluster are determined solely by its mass’. These results show that the situation is much more complex: the cluster's environment also plays an important role.
Astronomers have been trying to detect evidence for this more complex picture for many years: this is the first definitive detection.”

Paper details:
Journal: Physical Review Letters, vol. 116 (2016)
Title: Evidence of halo assembly bias in massive clusters
Authors: Hironao Miyatake (1, 2, 3), Surhud More (2), Masahiro Takada (2), David N. Spergel (1, 2), Rachel Mandelbaum (4), Eli S. Rykoff (5, 6), Eduardo Rozo (7)
Author affiliations:
https://www.asiaresearchnews.com/html/article.php/aid/9376/cid/2/research/science/kavli_institute_for_the_physics_and_mathematics_of_the_universe_%28kavli_ipmu%29/galaxy_cluster_environment_not_dictated_by_its_mass_alone.html
Astronomers are always trying to get their hands on bigger and more powerful telescopes. But the most powerful telescopes in the Universe are completely natural, and the size of a galaxy cluster. When you use the gravity of a galaxy as a lens, you can peer right back to the edges of the observable Universe. OGLE 2003-BLG-235/MOA 2003-BLG-53: A planetary microlensing event – I.A. Bond et al. A Jovian-mass Planet in Microlensing Event OGLE-2005-BLG-071 – A. Udalski et al. Observations Supporting the Existence of an Intrinsic Magnetic Moment Inside the Central Compact Object Within the Quasar Q0957+561 – Rudolph E. Schild et al. Fraser Cain: This is such a cool topic: here we go. Astronomers have always searched for larger and more powerful telescopes, but the most powerful telescopes in the Universe are completely natural, turning the mass of an entire galaxy into a lens that astronomers can look through. We’re talking about gravitational lenses, which let astronomers peer back into the earliest moments of the Universe. Pamela, what’s a gravitational lens? Dr. Pamela Gay: It’s basically this really neat way that the gravity of an object (a star, galaxy or cluster of galaxies) can work just like an optical lens to bend light. In this way, they can bend light that would otherwise go off in some other direction toward the Earth and increase the total amount of light from some distant object that we’re able to see. Fraser: What’s the underlying principle that’s bending light here? Pamela: There’s gravity! It’s one of those things that, when you start to realise energy and mass are two sides of the exact same coin, and that light is just energy, and gravity can cause that light to be deflected, to move the same way it can cause you and I to move, it’s possible to start using mass to focus light.
Fraser: So with a gravitational lens, you’ve got light from some more distant object passing some mass like a galaxy, and that mass is warping the space around it so the light follows a different trajectory and bends. Pamela: A good way to think of it is, if you imagine that it goes: your nose, far, far away a galaxy, and even further back than that, a quasar. Light from that quasar is going to be heading off in all directions, filling a sphere. Some of that light would normally not just miss your nose and go above your head, but miss your nose and hit a star somewhere above your head. That light that would normally have gone up above you, as it grazes over the top of that galaxy between you and the quasar, can get bent so that its new path brings it straight to the tip of your nose. This also has the neat effect that if the alignments are just right, we can see two images of the exact same object. One is the straight view, and the other is seen reflected in a mirror, the same way you can use a mirror to look around a corner. Fraser: Let’s see if I understand this: you’ve got a sphere of light coming out of the quasar, and some of that light is going to be passing very close to this galaxy, and what would go in a straight line gets turned in a little as it gets attracted toward the galaxy, and so we here on Earth, far down the path, see this light converging back on us because of this warping. So that’s why we see a magnified version of what’s behind it. Pamela: In fact, the gravity can cause a bunch of different effects. It can distort the light from a background object; this is where you get galaxies that appear as strange arcs around Abell clusters. You can also get what’s called a microlensing event, which is where a background object appears to be a great deal brighter due to an intervening mass.
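The bending described here can be put into numbers. For light grazing a point mass, general relativity predicts a deflection angle of 4GM/(c²b), where b is the impact parameter, i.e. how closely the ray passes the mass. A few lines suffice to reproduce the classic solar value:

```python
# Sketch: point-mass light deflection, alpha = 4GM / (c^2 * b).
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Deflection angle of a light ray grazing a point mass, in arcseconds."""
    alpha_rad = 4 * G * mass_kg / (c**2 * impact_parameter_m)
    return alpha_rad * 180 * 3600 / math.pi

# Classic sanity check: light grazing the Sun's limb is bent by about
# 1.75 arcseconds, the value confirmed by the 1919 eclipse expedition.
print(f"{deflection_arcsec(1.989e30, 6.957e8):.2f} arcsec")  # prints "1.75 arcsec"
```

Swap in the mass of a galaxy or a cluster and the deflection grows from arcseconds to the arc-scale distortions discussed in the rest of the episode.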
You can also get neat effects such as double quasars, quadruple quasars and Einstein rings, where the light from a background object is multiplied into multiple images or twisted into a ring where there once was just a single point-like object. Fraser: You say these are wonderful things to look at, but wouldn’t a telescope manufacturer be trying to grind these mistakes out of the mirror if they saw this kind of stuff? Pamela: In a real telescope, you really don’t want your telescope to produce fun-house images. The reality is that looking at galaxies through gravitational lenses is sometimes just as distorting as looking through the old deformed glass in extremely old houses, or at yourself reflected in carnival glass. But we’re allowed to see things we can’t otherwise see, and sometimes the stuff that we’re seeing is invisible stuff, like when the gravitational lenses are made out of dark matter. Fraser: So I guess the astronomers are going to take what they can get. They don’t have a telescope that powerful, so the fact that there’s one naturally out there that does provide a bit of a distorted image but still allows you to look much further back… how much further back can they see? Pamela: The most distant galaxy that has ever been detected was found using an Abell cluster to gravitationally lens a background galaxy. Fraser: So this is an Abell cluster, an intervening cluster of galaxies that was able to focus the light from this more distant galaxy. Pamela: In this case it was Abell 2218, and back in 2004 they were able to get a redshift to measure the recession rate of a smear of light that they were able to detect because it was being magnified, being lensed by the gravity of this giant cluster of galaxies. Fraser: So theoretically, how far back could astronomers push this technique? Pamela: It’s all a matter of how good we are at taking a spectrum of a smear of light.
If the alignments are right, you can perhaps get multiple galaxy clusters gravitationally lensing an object multiple times so that, with all of this combined lensing, we can look back to objects (that we currently can’t see using existing telescopes) that were formed at the very edge of the Universe, in the moments right after the Cosmic Microwave Background was formed. We haven’t found those things yet, but the potential is there as we look at the smears of light. Fraser: I guess with a more powerful telescope or with a luckier alignment of foreground object and background object, we might find some of those first objects. Pamela: We’re also finding objects within our own galaxy (that we can’t find any other way) using gravitational lensing. There was actually a planet with a truly terrible name: OGLE-2005-BLG-390 (that’s the parent star). It has a planet going around it that we found because the star and the planet gravitationally microlensed a background object, and we were able to measure the mass of both the star and the planet as the background star was lensed. Fraser: So we’re just talking about a star here, not a galaxy. This was two stars lining up in our Milky Way, and we happened to be in the exact right spot to see the line-up. Pamela: What’s neat in this case is, as you have the foreground star passing in front of a background object, we can’t see that star. This was a little red dwarf, too far away for us to be able to see with our telescopes because it just doesn’t give off a lot of light. As it orbited in front of a background object, the background object was something bright enough that we can see it every day. That background object suddenly increased in brightness in a way that isn’t characteristic of a nova or a flare event or any other normal brightening. It increased in brightness in a perfectly symmetrical way that indicated an object was passing in front of it and then moving out from in front of it at a constant velocity.
In the process of doing this, there was a little blip on the side of that increase and decrease in brightness. That blip corresponded to the planet getting in on the act of microlensing the background light. We were able to find what we think was a rocky or an icy planet (one of the smaller mass planets that have been discovered), because of this microlensing event. Fraser: This is a once in a lifetime opportunity, to see this star and its planet, because you need that line-up, so unless it lines up with another star that we know of, we’ll never see it again. Pamela: This was a roughly 13-Earth mass planet that we have one observation of, and one observation of its star, but still it’s cool! Fraser: Right, but there’s no chance for follow up observations. Pamela: Not with current technologies. This is where you wait for the OWL telescopes and the other freakishly large telescopes astronomers are planning to build. Fraser: I recall it was quite far away, it was like tens of thousands of light years away. Pamela: It was a star out on the edge of our galaxy, but it’s a new way to get at data in places that we otherwise can’t observe. Fraser: What’s the process for this then, are astronomers watching stars to see them brighten like that? Pamela: There are two different projects: OGLE and MACHO. These two programs are regularly looking at certain areas of the sky night after night after night waiting for microlensing events. What they do is take picture after picture of the same region, and as they take these pictures they subtract them from a previous night’s images and look to see what is different. In the process of finding these differences, sometimes they’re actually discovering variable stars like the Cepheids and RR Lyraes that I like to study. Sometimes they’re finding novae, sometimes they’re actually finding things like supernova light echoes that are moving through these regions of space. Really cool science. 
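The subtract-and-compare step described here (difference imaging) can be sketched with NumPy; the frame size, pixel values, and detection threshold are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two simulated 64x64 exposures of the same field: a reference night, and a
# later night in which one pixel-sized "star" has brightened.
reference = rng.normal(100.0, 1.0, size=(64, 64))          # sky + noise
tonight = reference + rng.normal(0.0, 1.0, size=(64, 64))  # same field, new noise
tonight[30, 40] += 50.0                                    # a brightening event

# Subtracting the reference cancels everything constant; only variables remain.
difference = tonight - reference
candidates = np.argwhere(difference > 10.0)                # crude threshold cut
```

Real survey pipelines first align the frames and match their point-spread functions before subtracting, but the principle is the same: constant stars cancel, while variables, novae, and microlensing events survive the cut.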
They’re also (unfortunately at much lower rates) finding microlensing events. They’re finding lots and lots of RR Lyrae stars, lots and lots of other variable stars. Occasionally, out of the noise, they find these microlensing events that indicate there’s a dark object (a white dwarf, a brown or red dwarf, a neutron star, something that we otherwise can’t see) out in the outskirts of our galaxy, plugging along, occasionally lensing light from perhaps the Large Magellanic Cloud stars, perhaps background objects. It’s these lensing events that are allowing us to get a sense of how much of the dark matter in the galaxy is made up of perfectly normal stuff that we just otherwise can’t see. All the dark matter in the galaxy could be accounted for if there were roughly one ACME brick per solar system volume of space. If that were true, we wouldn’t be able to see out of the galaxy really well. It’s important to find the actual ACME bricks that are out there (which tend to be shaped more like brown dwarfs) and this is one way of doing that. 
But, if there’s matter between me and those background galaxies, that matter is going to cause all of them to be systematically twisted a little bit, the same way as if all of them were reflected in the same carnival house mirror. So we look for those slight twists, those slight ellipticities, the slight teardrop shapes that crop up in the background galaxies. When we find these irregularities in their shape, the deviations from being little tiny circles, then we know there’s dark matter and we can map the distribution of the dark matter by reverse engineering what was necessary to make these galaxies not average out to a little disk. Fraser: So in this case, we don’t have a galaxy in front of another galaxy, we have this invisible dark matter that’s acting as this gravitational lens, distorting the image from the background galaxy. Pamela: What’s really cool is this dark matter is forming a donut (one of my favourite shapes apparently) around the cluster of visible galaxies. This is the type of thing that can happen when there’s a collision between two systems. You shock the system, one passes through the other and you end up with a ring of material. We’ve seen this in individual galaxies before and after collisions, but now we’re seeing it in an entire cluster and it’s not just the material of the cluster, it’s the dark matter itself that forms the donut. That’s just really cool. We’re not used to thinking of dark matter as actually forming structures (at least not forming structures on this type of scale), and it’s a really neat, really hard-to-do discovery. Every day we’re learning more about the distribution of dark matter. Back in January, after the AAS, we actually reported here on this show about the COSMOS project, and how they’d mapped out the large scale structure of the Universe to find that the structures of the luminous matter generally fell within the structures of dark matter, but didn’t necessarily have precisely the same centres. 
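The averaging argument above can be sketched directly: model each background galaxy's shape as a complex ellipticity with a random orientation, add a small coherent distortion from a foreground mass, and recover that distortion from the mean. The shear value and galaxy count here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Intrinsic ellipticities e = |e| * exp(2i*phi): random orientations average
# toward zero over many background galaxies.
n = 100_000
phi = rng.uniform(0.0, np.pi, n)
e_intrinsic = 0.3 * np.exp(2j * phi)

# A foreground mass adds a small coherent shear (hypothetical value g = 0.05).
g_true = 0.05 + 0.0j
e_observed = e_intrinsic + g_true

# The random intrinsic shapes wash out; the mean estimates the lensing shear.
g_hat = e_observed.mean()
```

Mapping this average over many patches of sky is, in outline, the kind of weak-lensing measurement behind results like the CL0024+1652 ring and the COSMOS dark matter map.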
Fraser: So I guess this was the same technique: they looked everywhere, looking for that distortion, and then carefully mapped it back to figure out where all the dark matter was. Pamela: They mapped a fairly significant area on the sky, and they built a 3-D model of dark matter using gravitational lensing of galaxies at various distances away from us. Again, really hard to do, really good, solid science. We don’t know what dark matter is, but every day we’re getting a better and better map of where it is. We can also sometimes use gravitational lensing to get a second chance at observing specific events. There are quasars out there that have been gravitationally lensed in such a way that when you’re looking at the sky, you see two identical objects that are separated by a few arc seconds to more than a few arc seconds on the sky. This means that you can go out, look directly at the quasar or you can look at the lensed version of the quasar. The first one of these was actually given the name, Old Faithful (or scientifically, Q0957+561). Pamela: Yeah, I like “Old Faithful” too. We can’t name things well in astronomy. I admit to this fully. Fraser: There’s too many objects. Pamela: Yeah, yeah it’s kind of hopeless. But what’s cool about this object is you have two quasars that are far enough apart that any good telescope can clearly resolve them. The two light paths, the one to look directly at the quasar, and the one to look at the gravitationally lensed quasar (where the light has already started to head off somewhere else and then deviates and comes back to us), it’s a difference of over a year. So if you catch the tail end of the quasar doing something cool (and quasars actually flicker and do neat things on short time-scales indicating stuff going on with the supermassive black hole in the centre), if you only catch the tail end of an event, you just go back a year later and watch it occur in the lensed version of the quasar. 
Fraser: You don’t get to repeat observations very often. This is about the only way you can get a second try at getting your data. Pamela: Exactly, it just requires the mass to be in just the right place. Fraser: That’s amazing. Are there any other places where gravitational lenses come into astronomy? Pamela: The primary neat places for them are: looking at these quasars where you get multiple images; mapping dark matter; using them to zoom in on objects at high redshifts and using them to zoom in on little objects (well, not zoom in… using them to detect little tiny objects) in the outer part of the galaxy. These are the main directions, but then there’s also some nifty science that comes out of this just in terms of using theory to do funny things. There was a scientist down the hallway from me at the University of Texas. Hugo, Hugo Martel. Great Canadian from Quebec. He figured out what distribution of matter would be required to create a lensed image that looked like a smiley face. It’s just a great abuse of science, but that’s the neat thing: you can take a perfectly normal quasar, with a perfectly normal, nice, happy, “I’m a disk” light and twist its light with intervening matter in ways that you can create arcs, in ways you can create smiley faces and all sorts of other neat patterns. In the process of figuring out what distribution of mass is necessary to make a smiley face, he was able to also figure out what is needed to reverse engineer the distribution of mass between here and there so that when we do find these things that look like waves on the ocean, when we find these things that look like a three-year-old’s version of drawing a seagull, we know what mass is required to get to that observed image. 
Fraser: I guess that was my question, as you said earlier on, when astronomers look through telescopes they see these distortions, these fun house mirror images, which in some cases is great because you get a chance to see something and not nothing, but are there techniques to try and reverse engineer the light to try and get a better sense of what the object is? Could there be a day either now or in the future when astronomers can use these lenses and actually rebuild a spiral galaxy image as opposed to a smear around the outside of a galaxy? Pamela: We’re already there, at a certain level. Just as we had to figure out how to build a corrector for the Hubble Space Telescope based on the observed distortions in the early images, we have also figured out how to mathematically recover the original shape of these distorted galaxies. What they do is say, “we have these 100+ galaxies that should average out to a nice polite circle on the sky. They don’t.” and then they do the trials. They do the simulations, to figure out where do I need to stick mass in the volume of space between me and these galaxies, to get the perhaps teardrop shape. Once you’ve figured that out, you can reverse engineer the path of the light to get at the original shape. It’s really cool to look at some of these simulations. With the COSMOS team, they can actually trace the pathway for a beam of light that gets lensed multiple times as it passes from high redshift galaxies to the modern epoch. You can see it get bent over and over as it zigs and zags, getting bent by multiple intervening blobs of mass. It’s a maze out there, and the light is forced to run this gauntlet of material because gravity bends light. Fraser: Now does this technique work across the entire spectrum, does it work from radio waves all the way to gamma rays? Pamela: Gravity bends everything. 
There are people, in fact, out there looking to see how gravitational lensing affects our views of the Cosmic Microwave Background. So we’re looking at this in microwaves, we’re looking at this in optical light and infrared light. We’re looking across all the spectrum, trying to understand what we can figure out about our Universe using this really great artifact of mass and energy being the same thing. Fraser: Now there’s one piece of terminology I wanted to talk about. I’ve done a couple of stories on this, which are called Einstein rings. Fraser: I know they have to do with gravitational lenses. Can you explain what those are? Pamela: This is the neat situation where you get a perfect alignment between us and a distant galaxy or let’s use a quasar (because quasars are neat little point sources). Fraser: And quasars are the actively feeding supermassive black holes at the hearts of galaxies, right? Fraser: pouring out tonnes of energy. Pamela: So you basically have the very centre part of a galaxy pouring out gobs and gobs of light such that an active galaxy that is billions of light years away – so far away that the disc of the galaxy is extremely hard to observe with the largest telescopes – the very centre, the active part, is just the brightness of a normal, faint star. They’re really powerful, fascinating things. Now, if you take one of these (and they exist in the largest numbers in the early parts of the Universe, when there was just more stuff for central supermassive black holes to be eating). If you look at one of these in the distant Universe and place a concentration of mass exactly on the line of sight between us and them (so it’s a perfect, straight line: our telescope, the lensing object, the background quasar). 
The lensing object is going to block the light that’s trying to get straight at us from what’s being lensed, but the light that’s trying to go above, below, left, right, diagonals… the light that’s trying to go in a perfect ring off in other directions away from us, all that light is going to get bent toward us. If it doesn’t make it all the way into focus, if it doesn’t make it all the way down to a single point before it reaches us, we’ll see that light that’s getting bent as a ring. Fraser: Is this a temporary situation, will this Einstein ring last for years or could we be anywhere inside the Milky Way and still see it? Pamela: For this type of gravitational lens, made up of quasars at large redshifts and galaxy clusters (or other large-mass objects) at moderate redshift distances, on human life-scales we’re not seeing any motion going on. But on cosmic timescales, everything in the Universe is in motion, everything changes, some day those particular Einsteinian rings are going to lose their alignment, but other ones will step forward to take their place. Fraser: And being an astronomer focussed on this is all about being at the right place at the right time. Pamela: Well, the whole concept is that we’re never really at the exact right place at the right time… this type of thing is always out there, we’re just at the right time for this one particular Einstein ring. Fraser: Right. Great, I think that covers the concept. I think, astronomers who need to go bigger are just going to have to go out and find themselves a galaxy cluster to look through.
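For a simple point lens, the angular radius of the ring described here is the Einstein radius, theta_E = sqrt(4GM/c^2 * D_LS / (D_L * D_S)). A quick sketch with illustrative, deliberately simplified non-cosmological distances shows why stellar lensing within our galaxy is called "microlensing":

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # one parsec, m

def einstein_radius_arcsec(mass_kg, d_lens_m, d_source_m):
    """Einstein radius of a point lens, in arcseconds. Uses the simple
    distance difference D_LS = D_S - D_L, valid only for nearby lenses."""
    d_ls = d_source_m - d_lens_m
    theta_rad = math.sqrt(4 * G * mass_kg / C ** 2 * d_ls / (d_lens_m * d_source_m))
    return math.degrees(theta_rad) * 3600.0

# A solar-mass star halfway to a source 8 kpc away: the ring is around a
# milliarcsecond across, far too small to resolve directly.
theta = einstein_radius_arcsec(M_SUN, 4000 * PC, 8000 * PC)
```

For a whole galaxy cluster acting as the lens, the same formula with cluster-scale masses gives rings of arcseconds, which is why those rings and arcs are resolvable by good telescopes.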
http://www.astronomycast.com/2007/05/episode-37-gravitational-lensing/
Most people would probably agree that this year is a real challenge for everyone’s nervous system, the ability to stay calm and focused, and the level of stress resistance. A significant number of people had to change their lifestyles because of COVID-19, and students are no exception. Because of the virus’s danger, teenagers had to give up the familiar walk to their schools and colleges and spend much more time at the computer. Lessons began to be conducted in a completely unusual format, which now raises many questions and doubts before the start of the new academic year. While many people believe that future lessons will remain online, several compelling arguments support the claim that distance learning can never replace traditional classes. Three significant points suggest that sooner or later, schools and colleges will return to the usual way of teaching. First of all, it is crucial to keep in mind that “online learning is a fundamentally elite concept” (“Can online learning replace traditional education?”, 2020, para. 8). To e-learn effectively for an extended period, a student needs to have all the necessary high-quality equipment, which is rather expensive and which more than half of parents cannot afford. Second of all, some classes require students to participate in spontaneous debates and discussions or ask questions to build confidence, and this cannot be replaced by typing anonymous comments. Finally, e-learning does not work for fields and disciplines that students need to practice and specialize in (“Can online learning replace traditional education?”, 2020). For example, online classes take away valuable resources like studios and lab facilities, which are vital for some disciplines and cannot easily be recreated at home. Despite these strong arguments, it is possible to understand those people who are sure that distance classes can and will replace traditional education. 
According to them, “online learning has transformed the education world, with platforms like Coursera enabling those living in remote parts of the world to access university courses” (“Can online learning replace traditional education?”, 2020, para. 8). Indeed, this is true, but such access is only available to those students who have a comfortable environment and quality equipment at home, which, as mentioned above, is not often the case. Therefore, it is more likely that distance education will never be able to replace the traditional one. Reference: Can online learning replace traditional education? (2020). TC Global Insights. Web.
https://chalkypapers.com/distance-learning-replacing-traditional-classes-essay/
Course Type: Third-level, CAO & PLC courses Via Blended Learning - Mix Of Classroom & Online There are many third-level, full-time and PLC courses available via a blended method – a mix of classroom-based and online. Blended learning is an approach to education that combines online educational materials and opportunities for interaction online with traditional classroom-based methods. It requires the physical presence of both student and teacher, with some elements of student control over time, place, path, or pace. Search below for CAO, third-level and PLC courses available via blended learning in Ireland.
https://www.whichcollege.ie/course-type/blended-learning-elearning-skills-workshops/
Traditional approaches to fighting corruption tend to focus on rules, compliance and enforcement. Regulations are passed and organizations are formed to root out graft. These efforts may provide part of the solution, but are often hamstrung or ineffective in places where opacity and patronage are ingrained, and where there is a gap between legal frameworks and everyday behaviors. To truly build systems and societies with integrity we need to rethink these approaches and close these gaps through reimagining education. This was a critical theme during the World Economic Forum’s Partnering Against Corruption (PACI) Building Foundations for Trust & Integrity project, which we recently co-led through our organizations the Accountability Lab and the eGovlab. Whether brainstorming about primary school civics courses in Mexico; developing creative accountability approaches through a film school in Liberia; or using technology for transparency in Sweden, the project has helped to clarify how to use education to build integrity from the bottom-up. A starting point is moving beyond traditional compliance and ethics courses, which tend to quickly become check-box exercises that students or employees find uninteresting or token. This means looking at approaches like values round-tables, targeted and creative accountability campaigns, and ethical dilemmas. Moreover, traditional classroom-based learning may allow for information transfer, but does not generally support the shifts in behaviors we need to build cultures of accountability. So let’s think about focusing on interactive materials, group-based work and action-learning initiatives. Technology is, of course, transforming the ways that education can be delivered and experienced. Almost 4 billion people or 47% of the global population are now online, which presents an incredible opportunity for mass learning. 
But the content provided has to be tailored to address the specific needs of participants, particularly around issues of integrity and anti-corruption, where dynamics can vary greatly across contexts. This is where co-creation and co-design within e-learning play a critical role: to make the content relevant and realistic to address the local needs of stakeholders. Through PACI, our organizations are now bringing together the very best engineers, anthropologists, design scientists, academics and programmers from around the world to develop new educational approaches and materials for building integrity based on these principles. The Knowledge 4 Trust Initiative will provide practical support for those change-makers across the public, private and civic sectors, who are looking to fight corruption. Using blended learning, interactive tools and an online and offline approach, the curriculum will focus on lessons, practices and ideas from a variety of domains and contexts. Topics will range from public sector reform issues to information systems innovations, and will aim to build a learning agenda that can revolutionize anti-corruption approaches. It will create an active community of responsible leaders that have the relevant skills, policy ideas and strategies to fight corruption in the modern era. The programme will utilize eGovlab/Stockholm University’s online learning platform and combine it with Accountability Lab’s face-to-face workshops and co-creation sessions led by experts in the field. Participants will then work with the ideas they develop through the process to implement real change and validate their learning through practice. As the political space for reformers seems to be shrinking globally, we need to find ways to make sure we are building societies with integrity, supporting values-based leadership and ensuring that advocates for positive change have the skills and knowledge they need. 
Now is the time for us to reimagine education to fight corruption.
https://www.weforum.org/agenda/2017/03/why-we-have-to-reimagine-education-to-fight-corruption/
For more than a century, the most predominant form of instruction in higher education has been classroom-based and instructor-led. Today, this traditional approach to learning is being challenged by new technologies such as multimedia, telecommunications, and the Internet. It has been suggested that the effectiveness of traditional pedagogical methods in alternative learning environments may be resolved through the creation of a new domain for educational interaction referred to as online education (Harasim, 1990). There has been much research focused on the advantages of teaching university courses online (Davis, Odell, Abbitt, & Amos, 1999; Hiltz, 1994; Harasim, 1990). However, there is little research that has focused on the effectiveness of traditional instructional methods when used in an online learning environment. This study examined the effectiveness of traditional classroom teaching methods used in an online learning environment. Academic outcomes of preservice education students who received online instruction were compared with those of preservice education students who received traditional teacher-based instruction. In this quasi-experimental, mixed model study, all students participated in both traditional (control) and online (experimental) interventions. Three different traditional methods of instructional delivery were compared: (a) lecture, (b) guided instruction, and (c) collaborative discussion. Interventions were created in which the intact traditional instruction was delivered through an online learning environment created specifically for this study. The results of this study show that overall, there were no significant differences between experimental and control groups. That is, student performance was the same whether instruction was delivered in a traditional classroom or through an online learning environment. 
Traditional instructional methods, such as those used in this study, produce similar academic outcomes when delivered through online learning environments.
Keywords: Distance Education; Effectiveness; Environment; Instructional; Learning; Learning Environment; Methods; Online; Traditional Instructional Methods
Controlled Subject: Educational technology; Education, Higher; Educational psychology
File Size: 4085.76 KB
Degree Grantor: University of Nevada, Las Vegas
Language: English
Permissions: If you are the rightful copyright holder of this dissertation or thesis and wish to have the full text removed from Digital Scholarship@UNLV, please submit a request to [email protected] and include clear identification of the work, preferably with URL.
Repository Citation: Smith, Steven Bradford, "The effectiveness of traditional instructional methods in an online learning environment" (1998). UNLV Retrospective Theses & Dissertations. 3072.
https://digitalscholarship.unlv.edu/rtds/3072/
- Towards the validation of adaptive educational hypermedia using CAVIAr (ah2008.org, 2009) Migrating from static courseware to Adaptive Educational Hypermedia presents significant risk to the course creator. In this paper we alleviate some of this risk by outlining how the CAVIAr courseware validation framework ...
- The case for cloud service trustmarks and assurance-as-a-service (SciTePress, 2013) Cloud computing represents a significant economic opportunity for Europe. However, this growth is threatened by adoption barriers largely related to trust. This position paper examines trust and confidence issues in cloud ...
- A Domain-Specific Model for Data Quality Constraints in Service Process Adaptations (Springer, 2013) Service processes are often enacted across different boundaries such as organisations, countries or even languages. Specifically, looking at the quality and governance of data or content processed by services in this context ...
- Semi-automatic distribution pattern modeling of web service compositions using semantics (IEEE, 2006) Enterprise systems are frequently built by combining a number of discrete Web services together, a process termed composition. There are a number of architectural configurations or distribution patterns, which express how ...
- PatEvol - A pattern language for evolution in component-based software architectures (École Polytechnique de Montréal, 2013) Modern software systems are prone to a continuous evolution under frequently varying requirements. Architecture-centric software evolution (ACSE) enables change in system structure and behavior while maintaining a global ...
- Personalized quality prediction for dynamic service management based on invocation patterns (Springer, 2013) Recent service management needs, e.g., in the cloud, require services to be managed dynamically. Services might need to be selected or replaced at runtime. For services with similar functionality, one approach is to ...
- A template description framework for services as a utility for cloud brokerage (SCITEPRESS, 2014) Integration and mediation are two core functions that a cloud service broker needs to perform. The description of services involved plays a central role in this endeavour to enable services to be considered as commoditised ...
- A service localisation platform (Curran, 2013) The fundamental purpose of service-oriented computing is the ability to quickly provide software and hardware resources to global users. The main aim of service localisation is to provide a method for facilitating the ...
- An active learning and training environment for database programming (Association for the Advancement of Computers in Education, 2005) Active learning facilitated through interactive, self-controlled learning environments differs substantially from traditional instructor-oriented, classroom-based teaching. We present a tool for database programming that ...
https://bia.unibz.it/handle/10863/122/discover?rpp=10&etal=0&group_by=none&page=4&filtertype_0=has_content_in_original_bundle&filtertype_1=author&filtertype_2=type&filter_relational_operator_1=equals&filter_relational_operator_0=equals&filter_2=Book+chapter&filter_1=Pahl+C&filter_relational_operator_2=equals&filter_0=false
- Visualizing Online Learning to Demonstrate Course Structure and Provide Intelligent Feedback (Georgia Institute of Technology, 2015-12) Proposal of a visual representation for online course structure in order to drive intelligent feedback. The project considers the need to utilize advanced technical features over traditional non-digital educational methods ...
- Reducing Students’ Cognitive Load Using Smartphones (Georgia Institute of Technology, 2016) This paper reviews cognitive load theory and how it plays a role in student attrition, and describes an experiment performed to see if mobile apps will reduce cognitive load for students.
- Reducing Cognitive Load Through Reminders (Georgia Institute of Technology, 2016) Describes an experiment using email to reduce students' cognitive load.
- Duplicate Question Detection Using Online Learning (Georgia Institute of Technology, 2016) An integral part of the learning experiences for both on-campus and off-campus courses are online communities such as Piazza. Piazza is described as a “Q&A” forum which is created for every class. An integral part of many ...
- Online Student Engagement: Problems and Potential Solutions (Georgia Institute of Technology, 2016-02) Students in online-only degree-seeking programs are less likely to complete courses and programs of study than their peers in face-to-face classes. Student disengagement is one of the greatest risk factors for non-completion ...
- Methods to Improve the Field of Intelligent Tutoring Systems using Emotion-based Agents (Georgia Institute of Technology, 2016-02) The aim of this paper is to review select current methods used in the field of Intelligent Tutoring Systems (ITS) with respect to the use of emotion-based agents and how those systems interact with the learner to capture ...
- Apache Spark Performance Compared to a Traditional Relational Database using Open Source Big Data Health Software (Georgia Institute of Technology, 2016-04-24) The author outlines how big data software can be utilized to speed up health analytics software when faced with big data problems. Specific data analytics from the Observational Health Data Sciences and Informatics (OHDSI) ...
- Knowledge Space Framework: An API for Representation, Persistence and Visualization of Knowledge Spaces (Georgia Institute of Technology, 2016-05) This paper discusses the challenges in tooling around the management and utilization of knowledge space structures, via standardized APIs for external Adaptive Learning Systems (ALS) to consume. It then describes how ...
- Perception of Peer Review in the OMSCS Program (Georgia Institute of Technology, 2016-05) This study investigates the perception of OMSCS students on the topic of the current peer review process in the OMSCS program. This includes the discussion of the potential steps that could be taken to improve the peer ...
- ITS Analyzers - Meta-Analysis Research on the Effectiveness of Intelligent Tutoring System (ITS) Products (Georgia Institute of Technology, 2016-12) Evaluation of intelligent tutoring systems (ITS) is an important area of research in current educational practices. Much work has been done to analyse the effectiveness of various ITS products to aid student ...
- Web Application to Teach Elementary School Students Sentence Parts via Diagramming (Georgia Institute of Technology, 2016-12) While many educators avoid the subject of diagramming sentences when teaching English sentence parts and grammar, this paper presents a web application that leverages simplified sentence diagramming to teach or reinforce ...
- Piana High-Performance Data-as-a-Service (DaaS) Microservice Framework (Georgia Institute of Technology, 2016-12) The project “Piazzalytics” for the time being is, 1) to create a set of RESTful APIs to allow users to extract the entire online post information in the same hierarchical manner, 2) to persist the extracted data in the ...
- Geography Education for Pre-School to K-8 Children using Mobile Technologies - A High-Fidelity Prototype (Georgia Institute of Technology, 2017-08) Amalgamation of intelligent tutoring systems, game-based learning and simulation-based learning in a mobile app can be an effective way to engage preschool to K-8 students in geography education. Research on different ...
- Evaluation of Modern Tools for an OMSCS Advisor Chatbot (Georgia Institute of Technology, 2017-08) This paper is a survey of modern chatbot platforms, Natural Language Processing tools, and their application and design. A chatbot is proposed for the GA Tech OMSCS program to answer prospective students’ questions immediately ...
- Learn Together: Collaborative Distance Learning in Immersive Virtual Reality (Georgia Institute of Technology, 2018-07) Distance learning has increased the reach of quality education to more individuals, but at the risk of a loss of a sense of presence and collaboration with peers and instructors. Learn Together is an attempt to address ...
- Mobile Technologies for Frontline Health Workers (Georgia Institute of Technology, 2018-12-18) This student project aims to investigate the appetite for and applicability of mobile technologies for the training and education, coaching, and professional development of frontline health workers. The fundamental research ...
- HelperBot: A Prototype System for Reducing Cognitive Load (Georgia Institute of Technology, 2019) HelperBot is a prototype Slackbot designed to demonstrate how a conversational bot could be used to reduce the high attrition rate for online students by reducing students’ cognitive load.
- Evaluating the Effectiveness of Digital Game Based Learning in Second Language Vocabulary Acquisition (Georgia Institute of Technology, 2019)Success in language learning is largely dependent on the ability to acquire vocabulary in the second language. Sadly, vocabulary acquisition appears to be one of the more challenging aspects of language learning. In recent ... - How Can Technology Combined With Educational Theory Be Used To Improve The Quality Of Catechesis In The Lutheran Church? (Georgia Institute of Technology, 2019-04)Catechesis has been a critical part of Christianity since Biblical times, however the adoption of popular educational theories and technology to aid in catechesis has been slow. This research study shows the result of a ... - Establishing a Data Science 101 Pedagogy: Reimagining the MOOC Learning Experience Through a Case-Based Learning Methodology (Georgia Institute of Technology, 2019-04)This work involved a comparative analysis of randomly selected Data Science Massive Open Online Courses (MOOCs) and master’s degree programs in investigating how effectively interdisciplinary curricula approaches were being ...
https://smartech.gatech.edu/handle/1853/54518/browse?type=dateissued
I'm not a teacher. I've never been a teacher. But this is my 20th year in K-12 education in finance, marketing, and product development, and I've had some thoughts lately about how education has changed in my time and what it will look like in the near future. While there have been some changes... (01 August, 2019)

20:1 Class Size and More Blue Track Enhancements for 2019-20
We're continuously refining our program to better serve students of all backgrounds, abilities and interests. And as we enter our sixth school year, we're more equipped and committed than ever to support growth for each and every student. We have made many enhancements to better support online,... (28 February, 2019)

Online Summer School With Method
For many of us, summer school means sitting in a class for 4 hours every day turning through textbook chapters, completing packets, and taking exams, but it doesn't have to be that way. Method Schools' summer program allows for mobility and flexibility while still earning... (06 August, 2018)

How To Make A Successful Online School
My personal experience in the online learning space: Over 20 years ago, I can remember being one of the first in my professional circle pushing to implement virtual content in a variety of educational settings. I met this push with fierce resistance and many questions that I didn't yet have... (16 July, 2018)

How Online Learning Has Disrupted Traditional Schooling
Whether examining the auto, tech, educational or any other industry, the threat of being outperformed by innovative newcomers is continuously growing. Generally, innovative newcomers outperform others within their industry by introducing more convenient and affordable products or services that... (26 October, 2016)

Creating a Schedule for Online Home School: 5 Questions to Ask
One of the hardest things about transitioning to online home school is learning to adhere to a schedule. You have what initially feels like an endless amount of freedom in front of you, and it's easy to fall into the trap of ignoring those school responsibilities. There are so many other things... (05 October, 2016)

Five Obvious Benefits Of Online High School
As your child enters their high school years, you may begin to question which option will best meet their educational needs. With public schools, private schools, charter schools, and online charter schools available, there are resources available to meet the learning needs of a wide variety of... (14 September, 2016)

Balancing "On and Off Screen Time" In An Independent Study Charter School
At Method Schools we aim to educate the whole student by integrating a variety of modalities into our program. As an independent charter school, we have been afforded the creative space to combine adaptive and personalized online learning with small group instruction along with collaborative and... (01 September, 2016)

Should My Student Attend An Online High School?
For many, online courses offer an attractive solution to the distractions of traditional high school classes. However, despite the benefits of online courses, many students still prefer traditional on-site education, as USA Today has reported. So what are the pros and cons of taking a high school...
https://www.methodschools.org/blog/topic/online-learning
The 4IR marks a new period of rapid transition driven by technology. The amalgamation of automation, connectivity, artificial intelligence and robotics is creating entire industries, sectors and careers, and leading others to vanish entirely. Indeed, it is estimated that automation will threaten more than 800m jobs worldwide by 2030. Since the first industrial revolution over two hundred years ago, each leap in technology has changed the way we live and work. These paradigm shifts have led to higher living standards and productivity, driven economic growth and often helped extend life expectancy. However, to ensure that we gain the most opportunities from the 4IR, deliberate, considered and coordinated action to reinvent how we learn, train and work must be taken. Here are three ways education must evolve to prepare for the digital age:

1. Personalised Learning

Current research shows that workplace skills on average change every 2.5 years. Therefore, learning is key to career sustainability, and yet traditional classroom formats, textbooks and generic testing do not complement this rapid pace of transition. For some, the answer is 'continuous up-training', where all employees are required to devote time to learning and perfecting new skills that have evolved alongside the technology. To do this, courses and learning materials that anticipate and adapt to the needs of the student are crucial, as a one-size-fits-all approach will no longer apply. Many universities are already implementing this approach by offering pioneering online programmes. MIT, for example, offers MicroMasters, providing quality, on-demand, industry-relevant skills that are recognised by leading employers, and at a fraction of the price of traditional education models. In the private sector, forward-thinking organisations are also adapting their approaches to skill enhancement.
For example, Deloitte have been utilising artificial intelligence to curate and customise training content that anticipates the educational needs of employees based on their role, level, and the courses their peers are taking. The programme organises the available content and sets out a bespoke approach in place of the previous 'stock content'.

2. Platforms to Replace Classrooms

As well as reimagining content, we must also change the method of training delivery. Traditional classrooms could become a thing of the past as digital transformation opens up new possibilities for dynamic, online platforms that curate courses that are relevant, convenient and efficient. The rise of artificial intelligence, robotics and intelligent tutoring systems across the educational sector is changing the teaching format, as human lecturers with the required experience and teaching skills may no longer be sufficient in number. As a result, digital tutors may take over. For example, "Yuki", the first robot lecturer, was introduced in Germany in 2019, delivering lectures to university students at the Philipps University of Marburg. The robot acts as a teaching assistant during lectures and is able to analyse how students are doing academically, identify what kind of support they need, and design tests for them. Although Yuki requires significant improvements, his deployment perhaps signals a future path for digital teachers. Whilst there is always likely to be a place for student/teacher interaction, taking the body of the learning online, or automating it using the benefits of AI and machine learning, could save time and costs, leaving the more complex discussions and feedback to meetings where more value can be gained.

3. Adapting to Flexible Working

Alongside the technological development of the 4IR is the societal evolution that is currently happening.
Over the past decade, there have been substantial changes to how we approach work, with flexible working, driven by benefits to both the worker and the employer, becoming more and more commonplace. Long gone is the 9-to-5, five-days-a-week workday, as remote working, contracting, freelancing and global, cohesive teams fast become the norm. Acknowledging this trend, companies must develop training strategies that ensure each employee or student delivers the same approach, quality and objectives regardless of their working structure or location. This may redefine the learning environment, but several studies have demonstrated its benefits. For example, the Evangelical School Berlin Centre (ESBC) in Germany gives no grades until students turn 15, has no timetables and no lecture-style instruction. The pupils decide which subjects they want to study for each lesson and when they want to take an exam. The students are set 'challenges' instead of regular tests to prove their learning; for example, they may demonstrate how to code a computer game instead of sitting a maths exam, or be encouraged to travel to learn a language instead of studying it in a classroom. And it is an approach that is delivering impressive results. The ESBC has consistently gained the best grades among Berlin's Gesamtschulen (comprehensive schools), and in 2017 school leavers achieved an average grade of 2.0, the equivalent of a straight B, even though 40% of the year had been advised not to continue to the Abitur, the German equivalent of A-levels, before they joined the school. Individuals, companies, sectors and government must all place focus on learning, and indeed continuous learning, in order to reach the potential the 4IR presents. Learning should take place within a creative and smart environment that allows workers and students to prove that they are innovators, and that provides them with the skills they need to adapt as rapidly as the technology does, preparing them for the road ahead.
To do this, a total transformation of the global education sector is needed, integrating technology throughout the learning journey and empowering the student to work with the technology, not against it.
https://www.gmisummit.com/gmis-2019/knowledge-hub/how-to-educate-in-the-age-of-4ir/
The OEPS Final Report brings together the learning from the three years of the project. It highlights key successes and challenges encountered by the project and makes recommendations for the future of open education in Scotland. The executive summary is provided here; however, it is useful to read the full document. The availability of free, openly licensed online courses and the ubiquity of digital technology is relevant to learners in the formal and informal learning sectors in Scotland. Openly licensed educational resources are regarded by some educationalists as having the potential to open up new pathways into higher education; however, currently their use is heavily skewed towards existing graduates. Developments in open education have tended to focus on technology. To ensure effective use there is a strong case for reorienting effort on practice, pedagogy and new models of student support. Open licensing and digital platforms open new possibilities for knowledge dissemination and exchange. Digital technology and open licences open new and innovative possibilities for curriculum development. These include greater use of collaborative development and the provision of short courses and micro-credentials that provide flexible pathways for lifelong learning and support transitions into formal education, between further and higher education, between education and employment, and in the workplace. Free, openly licensed, online courses are now part of the educational mainstream. However, the educational practices, organisational and business models to make best use of these resources are not fully developed. The project worked with 68 organisations across Scotland, including universities, colleges, schools, third sector organisations, unions and businesses. It held 79 workshops, gave 44 presentations, organised four one-day open forums and one seminar, and co-organised a two-day symposium. The project website hosts an archive of the project's activity and outputs.
The entire range of project outputs, comprising exemplar courses, reports, briefings and resources, is hosted in the Opening Educational Practices in Scotland collection. The project outputs include fifteen new openly licensed courses co-created with organisations in Scotland. All but one of the OEPS courses offered recognition through Open Badges, and the project established evidence of the use of badges at scale. Awareness of open education among educators and policy makers in the university and college sectors is low. There is a case for including open education and open licensing in initial professional development programmes like the TQFE and PGCert and in subsequent CPD. Co-production of online resources with organisations in the informal learning sector has benefits for academic institutions, their partners and students. There remains a need for a cross-sector approach to supporting development. The informal learning sector in Scotland is leading the way in the use of Open Badges. Policy discussion on open education is too narrowly focused on the use of MOOCs in the university sector. Developments in learning technology are affecting all parts of the education system. The availability of free, openly licensed content poses new challenges for colleges and universities. In this context open practice has the potential to support learning journeys and enhance the quality and reputation of Scottish education. To achieve this, however, systemic change that starts from student-centred pedagogy is necessary. Colleges and universities and the Scottish Government should consider formal adoption of the Scottish Open Educational Declaration. Professional development is critical to the future development of open education in Scotland. The study of open pedagogy should be incorporated as a mainstream part of teacher education, the TQFE and higher education Postgraduate Certificates in Learning and Teaching.
Educational institutions in Scotland should release much more of their content in openly licensed format and should consider adopting an approach of open by default. In addition, the Scottish Funding Council (SFC) should consider encouraging sharing and collaborative initiatives between institutions. To enable widening access to colleges and universities, as well as to support lifelong learning, colleges and universities should work in partnership with the informal learning and third sector to create open resources and open practice. This can include supporting transitions into education and professional development in employment. The creation of open courses and other openly licensed materials should be recognised by the SFC as a component of knowledge exchange, and appropriate funding arrangements established. There should be consideration by the Scottish Government and the SFC of the systems, support mechanisms, and policies required to facilitate and sustain institutional collaborations in open education. Outcome agreements might be one avenue that could be used.
https://oepscotland.org/oeps-final-report/?shared=email&msg=fail
Distance learning, especially online education, has become popular among young and adult learners alike. In 2019, over 7.3 million students were reported as enrolled in distance education courses at degree-granting postsecondary institutions. (NCES, 2019) Come 2020, the pandemic compelled governments to use online education tools while schools were closed in 83% of countries. However, this only allowed for reaching around a fifth of kids globally. (UNICEF, 2020) Online learning is not as effective as classroom-based instruction, according to a 2020 poll of over 2,500 teachers in eight countries (Fleming, 2021), at least for children. For adult learners, however, distance learning is empowering. What follows is an ultimate guide to online learning that could serve as a distance learning guide for parents as well. Distance education or distance learning refers to teaching and learning outside of a brick-and-mortar classroom. More technically, the Integrated Postsecondary Education Data System (IPEDS) defines it as education using one or more types of technology to deliver instruction to students separated from the instructor. Such technologies serve to support regular and substantive interaction between students and instructors, synchronously or asynchronously. Some institutions even leverage a curriculum management system to align their programs both on-campus and off-campus. The technologies for instruction listed by IPEDS are as follows: The United States Distance Learning Association (USDLA) notes that distance learning does not refer only to video conferencing or specific types of technology. The term distance learning encompasses the full array of current and emerging technologies that organizations use to deliver educational experiences and products.
(Flores, 2009) Distance learning, therefore, according to the USDLA, includes "e-learning, texting, social networking, virtual worlds, game-based learning, and webinars." It involves various means of gathering knowledge: "It's the Internet. It's Google. It's broadband and satellite and cable and wireless," and students can use their phones, computers, or whatever communication device might emerge next for learning. (Flores, 2009) Major universities are not the only ones that offer distance learning. There are also boot camps and Massive Open Online Courses (MOOCs). Simply put, distance learning is a means to bring education and training to where students or trainees are, connecting their world to worldwide learning communities. There are different types of distance learning, and they can be classified by the method of delivery and by mode or pacing. Distance learning can be classified according to the method by which it is delivered or made available to the learner. While distance learning is often associated with online learning, it can be done offline as well. The growth of digital technology, especially video conferencing software, has made online distribution the preferred technique for modern distance learning. Essentially, this refers to remote learning that is conducted entirely online. This delivery technique is rising at the expense of more traditional in-person classes. Now, students can learn online even outside campus through online universities, boot camps, or MOOCs. In 2021, the most visited education site in the US was Instructure.com with 391 million visits. (SEMrush, 2021) Enrollment in online courses has increased as overall student enrollment has decreased. (NCES, 2021) Offline distance learning is when instructional materials, assignments, and exams are sent to students and back to schools by courier services. This is actually how the concept of distance education began.
It is now considered outdated, but some academic institutions, such as colleges and universities, still use it. Although offline distance learning is slower than modern online options, it allows access to education even without a stable high-speed internet connection. It is sometimes used in places where learners struggle with internet connections, thus still making distance learning accessible. Distance learning can be classified into two modes of delivery: synchronous or asynchronous. Synchronous mode is also referred to as paced learning, as it requires students to attend regular meetings or lectures. Meanwhile, the asynchronous mode of learning is self-paced in that it allows students to access materials, ask questions, and practice skills whenever they choose. While this can occur in a regular classroom, it is most commonly used for online courses. For synchronous or paced learning, schools, colleges, universities, or training providers set schedules that students follow, allowing learners to know the start and end of a course and the modules it contains. There are meetings or lectures that students must attend, and deadlines for exams, assignments, or projects are fixed. As such, academic institutions or distance learning providers control the pace of students' development so that everyone finishes at around the same time. Examples of synchronous online learning may include, but are not limited to, the following: Synchronous or paced learning benefits educators since they can organize their courses and follow set structures. Just the same, it is ideal for students who have issues working independently and require supervision to accomplish tasks. Asynchronous learning or self-paced learning, on the other hand, allows students to choose when to start school work and how much time to dedicate to each task. The pace at which students finish a lesson or a course relies on their willingness and capacity to devote time to their studies.
That said, students who allocate more hours to a module in a week could progress and finish faster than those who devote less time. Adult learners who have other work or family obligations benefit from self-paced learning since they can adjust their learning activities according to their commitments. Meanwhile, this pacing is also ideal for students who are often held back by the pressure of deadlines and of the faster progress of their peers. Such students are able to learn and grow at their own pace. However, self-paced learning tends to encourage independent work rather than teamwork. Meanwhile, asynchronous learning utilizes other tools and systems, allowing instructors and students to interact on their own schedules. The following are included in asynchronous learning: Asynchronous learning became prevalent especially at the onset of the pandemic. In 2020, 63% of polled learning and development professionals used self-paced virtual training, and a similar percentage used self-paced offline training. Nearly a third of these professionals expect their departments to use custom learning in the next two years, and 18% believe their departments will adopt self-paced virtual training for the first time. (Mimeo, 2020) Moreover, a 2021 poll of K–12 educators by EdTech shows the majority of respondents want the asynchronous learning element to be carried over into the classroom in 2022. Meanwhile, some institutions offer hybrid modes, which combine elements of synchronous and asynchronous learning. Students must convene at a specific time in an online chat room or classroom; beyond that, they work at their own pace. Hybrid courses are frequently offered when educational institutions lack room for all program course loads. Diving into distance learning without preparation may not be a wise decision. One simple guideline for students is to ask themselves certain questions to avoid any rash decisions.
Here are some of the questions to ask. Contemplating these questions helps students decide whether distance learning is for them. It must be clear to them why they are considering distance education, if their time and skills will allow it, and if they have the skills needed to succeed. Some students may encounter difficulty with remote learning or issues with online education. It may not be the ideal fit for everyone. However, other students will find considerable value in distance education. Western Governors University (2021) provides a categorized list of pros and cons as remote learning guidelines for students to make informed decisions; the categories covered include the technical element, credibility, flexibility, and interactions. With the many options of distance education providers, making a choice could be challenging for many students, especially for adult learners seeking to earn a degree online or to further their education. There are certain things to check when evaluating a school or provider, and here are distance learning guidelines for students who are on the hunt for providers. The primary contention about distance learning remains its quality. In the study "Student attitude to distance education: Pros and cons," published in the Cypriot Journal of Educational Sciences, Illarionova et al. (2021) noted that "The transition to online education entails the accessibility and massification of higher education, caused by its significant reduction in price, which, on the one hand, eliminates the problem of unequal access to education, but on the other, inevitably leads to a decrease in the quality of university education." While price and accessibility may be enticing factors, other indicators of education quality should be probed. The easiest way to investigate is to explore every nook and cranny of the school's website to discover if the program content, features, schedule, and other details are the right fit.
If possible, email or call the program coordinator to clarify the information. It may be challenging to succeed in distance learning. As such, here are some tips that could aid learners, young and adult alike, in going through distance learning with ease. The U.S. Department of Education (2010) reported that online students did modestly better than traditional face-to-face students. Online learning is also beneficial across a variety of subjects and learner types. As such, employers are recognizing graduates of online degrees. In fact, a Northeastern University (2018) poll found that 61% of HR leaders believe online learning is of equal quality to traditional learning methods, if not better. Meanwhile, 71% of organizations said they had hired a job applicant with an online degree in the past 12 months. (Gallagher, 2018) In the same study, more than half the organizations believe most advanced degrees in the future will be completed online. Some 33% believe online education will eventually be better than traditional face-to-face instruction given the development of technology. (Gallagher, 2018) Still, the quality of the institution from which a degree is earned is vital, as 83% of business leaders believe an online degree from a "well-known" college equals an on-campus degree. However, if they are unfamiliar with a school or its offerings, they might not value the degree earned highly. Employers are more likely to accept an online degree from a school that also offers traditional on-campus courses. They assume that traditional colleges and universities construct online courses with the same care that they do in-person courses. As such, only 42% of employers would consider a candidate with an online degree from a university that only operates online, according to the Society for Human Resource Management. Distance learning, be it for young or adult learners, has a positive outlook.
Despite claims that distance learning, particularly e-learning, is not as effective among young learners as it is among adults, some elements of it are appreciated by educators. As earlier mentioned, EdTech's poll shows a significant number of educators hoping to incorporate asynchronous learning in the classroom. Even in the learning and development departments of organizations, professionals are expecting self-paced virtual training to become mainstream, and more employers believe that soon, the most advanced degrees can be completed online. This is no surprise, as even now, there are undergraduate engineering degrees offered online. The future of distance learning, online learning most especially, is bright.
https://research.com/education/distance-learning-the-ultimate-guide-to-online-learning
EUROVERSITY is a three-year project co-funded by the EU Lifelong Learning Programme, Key Activity 3 ICT. The project started in December 2011 and will finish in November 2014. Euroversity is a network of 18 partners from 10 European countries (the United Kingdom, France, Italy, the Netherlands, Norway, Sweden, Austria, Portugal, Cyprus, Germany), and a third-country partner from Israel. This network is made up of a collection of organisations that have had significant experience of the use of 2D/3D virtual worlds in learning and teaching contexts (in European projects such as AVALON, AVATAR, START, LANCELOT and NIFLAR), a smaller number of partners with limited experience who wish to apply the experiences of the network to the design and construction of new courses within virtual worlds in their own contexts, and other institutions who have no direct experience of virtual world education but who have a specific interest in education, media, and the transformation of knowledge. The project aims to impact on the following defined target groups:

- the individuals involved in the network and the learners on courses associated with the network;
- the other members of the organisations involved in the network;
- organisations (including learners) who are not initially involved in the network but have experience of working in virtual world environments;
- organisations (including learners) who have little to no experience of learning and teaching in virtual world environments, but are (or are not) interested in establishing courses in these environments.

Since the late 1980s and early 1990s, complex 2D and 3D virtual environments have been investigated to support learning and teaching scenarios in lower and higher educational contexts. Today's online 2D and 3D virtual platforms offer a wealth of opportunities for education that early examples did not, for example, highly detailed 3D models, multi-platform delivery, and large communities.
With these platforms now beginning to mature, the range of courses within them continuing to grow, and a vast range of research describing their use now available, there is a need to bring experienced organisations and individuals together. So far, no advanced networking community exists with the express aim of helping organisations and individual lecturers get started with virtual worlds from a platform of good practice and identified support tools. This network brings together, for the first time, a range of public and private institutions from across Europe and in Israel who have experience in the use and development of online virtual platforms for education across a range of disciplines (i.e. language education, cultural studies, literature, economics, religious studies, media studies, intercultural communication, digital design, computer science and software engineering, and science including physics) and contexts (lower and higher education, educational and business). The aim of the project is to engage a critical mass of users of online 2D/3D virtual environments, so as to generate more users by transferring their newly acquired knowledge to their specific contexts. The duration of the network is such that it will be possible to initiate, train and follow up with new end users during the project. The community resource created by the project, outlining the good practice framework and the experiential video data bank, will provide content that will outlive the project's duration. The website and project content will remain live for a minimum of five years following project completion. The following main activities will be carried out throughout the project:

- Shared experiences and collective knowledge: aimed at creating an opportunity for organisations throughout the network to share experiences of using online virtual platforms to support educational provision. This sharing of experiences will focus on items such as the successes and failures of previous projects, challenges faced, solutions found, learner experiences and the impact of online virtual platforms on learner performance.
- Learning lessons and transferring knowledge: focused on the transformation of the experiential content into guidance on good practice. This guidance will be formulated in the development of a good practice framework for the creation, delivery and evaluation of learning experiences provided within online virtual platforms.
- Exploitation: will look to take advantage of new and existing collective experiences and materials created within the consortium. Licensing strategies for consortium materials will be investigated and applied to enable materials to be more widely shared. Strategies for promoting sustainability and open sharing of resources will be published.
- Measuring impact: has essentially two core elements. The first is the evaluation of the good practice framework produced from within the network; the second is quality planning and the evaluation of the EuroVersity project as a whole.
- Dissemination: focused on managing the promotion and dissemination of the project. It will also provide the means for its maintenance at the end of the three years of funding.
http://euroversity.unimarconi.it/index.php/network/projects
The 2012 Summer Institute, cosponsored by University of Washington Computer Science & Engineering and Microsoft Research, will be held at the Suncadia Resort in Cle Elum, Washington, July 18-20, 2012. Cle Elum is located in the Cascades, ninety minutes southeast of Seattle. Abstract: Strong recent enthusiasm for leveraging online platforms for education has highlighted opportunities to leverage "the crowd" in novel ways. We seek to explore challenges and opportunities at the intersection of online education and crowdsourcing, at a time when ideas and methods in both areas are accelerating. In the arena of crowdsourcing and human computation, Wikipedia, Stackoverflow, Mechanical Turk, Odesk, the ESP Game, FoldIt and Tweak the Tweet illustrate the huge variety of socially mediated collaboration. More broadly, research in crowdsourcing is evolving into a mature discipline, with multiple vibrant workshops and now an international conference (HCOMP). In online education, Khan Academy, the Stanford online courses, and intelligent tutoring systems are the harbingers of adaptive educational interactions that stimulate interpersonal discussions and tutoring across the cloud. To date, efforts on intelligent tutoring systems, tried with small groups of students and evaluated in limited settings, have had mixed results; access to large populations promises to catalyze a renaissance in developing and refining intelligent automated tutoring methods. Overall, the opportunities are broad and extend to leveraging the massive skills, variance in abilities and learning styles, and efforts and insights of the crowd to author and provide unique, personalized experiences to future students. 
We see several reasons why "now" is the time to forge connections across these areas. The symposium will discuss many areas for innovation. Organizers: Dan Weld (UW CSE), Mausam (UW CSE), Eric Horvitz (MSR), and Meredith Ringel Morris (MSR). Descriptions of past summer institutes may be viewed at: www.cs.washington.edu/events/mssi/.
https://www.cs.washington.edu/mssi/2012/
This page aggregates downloadable resources on e-learning and distance-learning formats, including:
- A paper investigating e-learning (distance learning) infrastructure: most traditional courses need extensive work to be delivered in the e-learning format
- SSU Community-Based Learning Resources (Community-Based Learning Teaching Resources)
- Distance learning and service-learning in the accelerated format (Burton, E. M., 2003)
- Implementing e-learning in open and distance learning: e-learning is well suited to up-skilling or re-skilling workers, students, adult learners and employees
- Open Source Learning Management Systems (TOJET: The Turkish Online Journal of Educational Technology, April 2010, volume 9, issue 2)
- Learning Styles in Distance Education Students
- Learning to Program: materials made available and distributed via the web in electronic format to 60 volunteers
- Interaction in distance-learning courses: aspects of the distance-learning format of education compared with traditional "face-to-face" delivery
- Distance Learning Template for Transition of Courses: converting lectures into a distance-learning format and restructuring their sequencing to optimize student learning
- E-Learning Survey Questionnaire (EDUCAUSE, vol. 3, 2003): online distance-learning courses where the instructor conducts all class sessions
- 5-O-A Virtual Learning, Distance Learning, Independent Study (Michigan Department of Education, 2014-15 Pupil Accounting Manual)
- Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies (U.S. Department of Education)
- Distance Learning in Montana: A Survey-Based Assessment, covering distance learning statewide and in major labor markets throughout Montana (Appendix B: senior citizens' interest in distance learning)
- Distance Learning (Framingham State University): the university's eLearning platform, a learning format that has become increasingly popular over the years
- Strategic Plan 2014-2018 (Office of Distance Education and eLearning): distance education, creative learning spaces, and dynamic online learning systems
- Handbook of e-Learning Strategy (The eLearning Coach)
- E-learning: what the literature tells us about distance education (keywords: distance learning, attitudes, communication technologies; a vast array of platforms, formats and delivery mechanisms)
- High School Distance Learning: Online/Technology Enhanced, effective for students entering the ninth grade; incorporates a variety of media and formats to design, develop, publish, and present products
- E-Service-Learning (Volunteer and Service-Learning Center): service-eLearning as an integrative pedagogy that engages learners
https://www.pdfsdownload.com/download-pdf-for-free/e+learning+and+distance+learning+format
The thyroid is a vital gland in the body, responsible for producing a hormone called thyroxine. Thyroxine helps regulate metabolism and protein synthesis along with numerous other functions, including development. When the thyroid produces too much thyroxine, this is referred to as hyperthyroidism, or overactive thyroid; when too little thyroxine is produced, this is known as hypothyroidism, or underactive thyroid. What are the differences between these thyroid disorders? Let’s take a look. Hypothyroidism Hypothyroidism occurs when the thyroid gland cannot produce enough thyroid hormones. As the gland’s hormone production slows, so does metabolism, which is directly controlled by thyroxine. In turn, this can lead to weight gain, fatigue, constipation, cold intolerance, period irregularities, and negative effects on the fetus if a person becomes pregnant while hypothyroid. Hypothyroidism is common, affecting about 4.6 percent of the United States population. There is no cure for hypothyroidism, but medications can treat it by improving thyroid function and restoring hormone levels. Hypothyroidism is most commonly caused by a condition called Hashimoto’s thyroiditis, in which the immune system attacks the thyroid gland and causes it to stop producing hormones the way it should. Hashimoto’s thyroiditis is more common in women than in men. Hyperthyroidism In patients with hyperthyroidism, the thyroid produces too much thyroxine. This often accelerates metabolism, leading to sudden weight loss, a rapid or irregular heartbeat, sweating, and nervousness or irritability. Hyperthyroidism commonly arises in one of three ways: - Thyroiditis, or inflammation of the thyroid: An inflamed thyroid causes too much thyroid hormone to enter the blood, leading to pain and discomfort that is usually short term. 
- A thyroid nodule that produces too much thyroxine: This is common in both hyperthyroidism and hypothyroidism, and nodules are often benign. - An autoimmune condition called Graves’ disease: This condition causes the body to attack itself, making the thyroid gland produce too much thyroxine. Hyperthyroidism can be treated using medications, radioactive iodine or surgery. If left untreated, it can cause bone loss or an irregular heartbeat. The Differences There are a few distinct differences between the two: - Hypothyroidism leads to a decrease in hormones, while hyperthyroidism leads to an increase. - Hypothyroidism leads to slowed metabolism, tiredness and a general slowing of body functions. - Alternatively, hyperthyroidism can lead to weight loss instead of weight gain, and you may feel anxious instead of depressed. - Hypothyroidism is more common in the United States than hyperthyroidism. It’s not uncommon to have an overactive thyroid after an underactive thyroid, or vice versa. If you have symptoms of hyperthyroidism or hypothyroidism, speak to your doctor about treatment options. At Revere Health Endocrinology, our specialists are uniquely trained to diagnose and treat diseases and disorders related to the glands. Sources:
https://reverehealth.com/live-better/hyperthyroidism-vs-hypothyroidism/
How Does the Thyroid Gland Communicate With the Brain? 26 SEP 2017 The thyroid gland communicates with the hypothalamus, a part of the brain. The hypothalamus sends messages to the pituitary, a pea-sized gland connected to it; after receiving these messages, the pituitary produces thyroid-stimulating hormone (TSH), which acts on the thyroid gland to make it produce thyroid hormones. Thyroid hormone determines the metabolic rate of the body and affects nearly all cells, and its production is closely linked to the amount of TSH circulating in the bloodstream. The communication between the thyroid gland and the brain is known as the hypothalamic-pituitary-thyroid axis. 1 Hypothalamus The hypothalamus is located in the lower central part of the brain. Nerve cells in the hypothalamus produce chemicals that stimulate the pituitary gland, also called the "master gland" because it controls the thyroid and other glands in the body. When the thyroid hormone concentration in the blood falls, the hypothalamus responds by making thyrotropin-releasing hormone (TRH), which stimulates the pituitary gland to make thyroid-stimulating hormone (TSH). 2 Pituitary Gland The pituitary gland, located at the bottom of the hypothalamus, consists of two distinct parts, the anterior and posterior lobes. The anterior pituitary lobe makes TSH, which stimulates the thyroid gland to make thyroid hormones. The amount of TSH produced is controlled not only by TRH from the hypothalamus but also by the amounts of thyroid hormones, T4 and T3, in the bloodstream. If T4 and T3 concentrations are high, the pituitary gland stops making TSH; this is called "negative feedback regulation." The pituitary gland is thus positively stimulated by the hypothalamus via TRH and negatively regulated by the T4 and T3 thyroid hormones. 3 Thyroid Gland The two lobes of the butterfly-shaped thyroid gland sit in front of the windpipe just below the Adam's apple. 
The thyroid gland produces mostly the thyroid hormone thyroxine, also called T4. For full activity, T4 has to be converted to T3; body organs such as the kidneys and liver, and to some extent the thyroid gland itself, convert T4 into active T3. To make thyroid hormone, the thyroid gland needs iodine, which it obtains from the diet. Examples of sources of iodine are seafood and iodized salt, and thyroid disorders can develop when insufficient amounts of iodine are available. 4 Communication Breakdown In healthy people, the hormones that communicate between the thyroid, pituitary and hypothalamus are balanced. This balance can be lost through lack of iodine, damage to the thyroid or pituitary, certain medications, or autoimmune disorders. A breakdown in communication can result in the thyroid gland producing too much or too little thyroid hormone. The condition in which the thyroid pumps out too much hormone is called hyperthyroidism; it causes the heart rate to race and may make you feel overheated and irritable. Hypothyroidism is the opposite condition, in which the thyroid makes too little thyroid hormone even when the pituitary gland produces lots of TSH. Low thyroid hormone levels cause low energy, make you feel cold, and may lead to weight gain even if you eat less.
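The negative-feedback regulation described above, in which pituitary TSH output rises when thyroid hormone is low and falls when it is high, can be illustrated with a toy simulation. Everything in this sketch is an assumption made for illustration: the constants, units, and update rules are arbitrary rather than physiological. The point is only that a loop of this shape settles at its set point.

```python
# Toy simulation of the hypothalamic-pituitary-thyroid feedback loop.
# All constants and units are arbitrary illustrative choices, not
# physiological values: TSH secretion rises when T4 is below a set
# point and falls when it is above, and T4 output tracks TSH.

SET_POINT = 1.0   # target T4 level (arbitrary units)

def step(tsh, t4):
    """Advance the loop by one time step."""
    tsh = max(0.0, tsh + 0.5 * (SET_POINT - t4))  # pituitary responds to T4 error
    t4 = t4 + 0.2 * (tsh - t4)                    # thyroid output follows TSH
    return tsh, t4

tsh, t4 = 2.0, 0.2   # start "hypothyroid": low T4, elevated TSH
for _ in range(200):
    tsh, t4 = step(tsh, t4)

print(round(t4, 2))   # → 1.0 (the loop converges to the set point)
```

The same structure explains the diagnostic logic used later in these articles: when the thyroid cannot respond (remove the T4 update), TSH in this model keeps climbing, which mirrors the elevated TSH seen in primary hypothyroidism.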
https://classroom.synonym.com/thyroid-gland-communicate-brain-39094.html
The thyroid gland is an integral part of the body’s endocrine system. It is found in the front portion of the neck and is primarily responsible for producing thyroxine and triiodothyronine, together known as thyroid hormone. In this article, we give a brief overview of the function of the thyroid gland and of the hormones it produces. Why is the Thyroid Gland So Important? What Does Your Thyroid Do? As part of the endocrine system, the thyroid helps maintain the body's hormone levels. Aside from producing thyroid hormone, it helps regulate several other functions in the body, for example: - Respiration - Heart Rate - Body temperature - Rate of Burning Calories - Digestion Thyroid hormone also plays a significant role in fostering brain development and growth, which is why thyroid function and iodine levels are monitored in children. The Thyroid Gland Needs Iodine The thyroid gland absorbs iodine, an element found in many of the foods we eat. The iodine is combined with tyrosine and converted into the thyroid hormones thyroxine (T4) and triiodothyronine (T3). The thyroid hormones are then released into circulation, where they control the body’s metabolic functions. It is therefore important not to neglect iodine-rich foods such as seafood; medical practitioners encourage this especially in pregnant women, young children, and individuals with existing thyroid problems. What Hormones Does the Thyroid Gland Secrete? As mentioned, there are two primary hormones released by the thyroid gland. T3 and T4 function together to regulate the body’s metabolic functions. 
Examples of metabolic functioning include converting glucose into energy, digestion, and burning calories. However, the production of thyroid hormones does not rely solely on the thyroid gland itself. The pituitary gland secretes thyroid-stimulating hormone (TSH), which dictates how much hormone the thyroid gland ends up secreting, and doctors often request blood tests for those experiencing thyroid-related problems in order to monitor the amounts of TSH and thyroid hormones produced. In turn, the hypothalamus controls pituitary secretion by releasing thyrotropin-releasing hormone (TRH). It is important to emphasize that normal values for pregnant women and infants are significantly higher than for non-pregnant adults, reflecting the need for thyroid hormones in brain development and growth. Triiodothyronine (T3) Abnormal amounts of T3 in the circulation can indicate a dysfunctional thyroid. Approximately 20% of the triiodothyronine in the bloodstream comes from the thyroid gland; the remaining 80% is produced by conversion of thyroxine in internal organs such as the liver and kidneys. T3 is the active hormone; T4 must first be converted into T3 before it can be used. It functions primarily in delivering oxygen and energy to cells. Upon blood examination, the normal ranges for T3 are the following: - Total T3 – 80-200 ng/dL - Free T3 – 2.3-4.2 pg/mL Thyroxine (T4) This is the other key thyroid hormone, and it makes up the majority of what the thyroid gland produces. It is regarded as a “storage hormone”: it must undergo a process called monodeiodination, in which it loses an atom of iodine, to be converted into usable T3. 
The following are the reference ranges for T4: - Total T4 – 4.5-12.5 ug/dL - Free T4 – 0.9-1.8 ng/dL Often, doctors who request blood examinations do not test total T4 or free T4. However, total T4 or free T4 readings that fall below the given reference range can be indicative of hypothyroidism, and another pertinent cause of hypothyroidism is a lack of the production and release of TSH. Conversely, elevated T4 levels together with suppressed TSH can indicate the development of hyperthyroidism. Calcitonin In addition to T3 and T4, the thyroid gland also produces the hormone calcitonin. C cells, also known as parafollicular cells, secrete this particular hormone. These cells sit next to the thyroid follicles and in connective tissue, and they play a minor role in regulating the levels of calcium and phosphate in your blood. The amount of calcitonin secreted depends on the amount of calcium in the blood: should blood calcium levels decrease, so does the production of calcitonin. This process is necessary for the development of our bones. Having an Excess of Thyroid Hormones Thyrotoxicosis refers to an elevated amount of thyroid hormone in the bloodstream. This often results from over-stimulation of the thyroid gland, that is, hyperthyroidism, which in turn often occurs due to conditions such as Graves’ disease, thyroid inflammation, or a thyroid tumor. One of the first signs of thyrotoxicosis can be the formation of a mass on the thyroid gland. 
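As a rough illustration, the T3 and T4 reference intervals quoted above can be encoded in a small screening helper. The numbers are copied from this article; real laboratories publish their own intervals, and the function, its name, and its dictionary keys are made up for this sketch, so treat it as illustrative rather than clinical guidance.

```python
# Flag thyroid-panel results against the reference intervals quoted
# in the article above. Ranges vary between laboratories; these
# numbers and this helper are illustrative only, not clinical advice.

REFERENCE_RANGES = {
    "total_t3": (80.0, 200.0),   # ng/dL
    "free_t3":  (2.3, 4.2),      # pg/mL
    "total_t4": (4.5, 12.5),     # ug/dL
    "free_t4":  (0.9, 1.8),      # ng/dL
}

def flag_results(panel):
    """Return {test: 'low' | 'normal' | 'high'} for each measured value."""
    flags = {}
    for test, value in panel.items():
        low, high = REFERENCE_RANGES[test]
        if value < low:
            flags[test] = "low"
        elif value > high:
            flags[test] = "high"
        else:
            flags[test] = "normal"
    return flags

print(flag_results({"free_t4": 0.6, "free_t3": 3.0}))
# → {'free_t4': 'low', 'free_t3': 'normal'}
```

A low free T4 flag here corresponds to the article's observation that sub-range T4 readings can be indicative of hypothyroidism.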
The following are some other symptoms that often occur along with hyperthyroidism: - Heat intolerance - Weight loss - Increased appetite - Over-stimulation of bowel movement - Irregular menstrual cycle - Palpitations (rapid heartbeat) - Fatigue - Irritability - Tremor - Hair Loss - Retraction of Eyelids (eyes look larger, staring) How Hypothyroidism Affects the Body Hypothyroidism occurs when there is not enough available thyroid hormone. Additionally, hypothyroidism may be linked to a thyroid that has sustained damage. There are several causes for this condition: - Autoimmune Disease - Poor Iodine Intake - Injury to Thyroid Gland - Adverse Drug Reaction Given that the thyroid hormones are an essential part of brain growth and development, this condition can be especially harmful for children. The permanent effects of hypothyroidism can come in the form of learning disabilities or stunted physical growth, among many other possible outcomes. Some of the symptoms associated with hypothyroidism in adults are the following: - Fatigue - Intolerance to Cold Temperatures - Slower Heart Rate - Weight Gain - Reduced Appetite - Forgetfulness - Depression - Stiffness of Muscles (tetany) - Infertility If you want to learn more about the thyroid gland and the hormones that it secretes, watch this video by Armando Hasudungan. The thyroid gland plays a major role in maintaining many of the body’s processes. We must actively include the condition of our thyroid and the hormones that it produces when evaluating our overall health. Should you feel any lumps in your neck or experience any of the symptoms related to abnormalities in thyroid hormone levels, consult with your doctor immediately. Do you have any experience dealing with problems with your thyroid gland? Please feel free to share your thoughts with us in the comments section below!
https://thyroidsymptoms.com/thyroid-gland-functions-hormones/
What is Hashimoto’s disease? Hashimoto’s disease, also called chronic lymphocytic thyroiditis or autoimmune thyroiditis, is a form of chronic inflammation of the thyroid gland. The inflammation results in damage to the thyroid gland and reduced thyroid function or hypothyroidism, meaning the gland doesn’t make enough thyroid hormone for the needs of the body. Hashimoto’s disease is the most common cause of hypothyroidism in the United States. The thyroid is a small, butterfly-shaped gland in the front of the neck below the larynx, or voice box. The thyroid gland makes two thyroid hormones, triiodothyronine (T3) and thyroxine (T4). Thyroid hormones circulate throughout the body in the bloodstream and act on virtually every tissue and cell in the body. These hormones affect metabolism, brain development, breathing, heart rate, nervous system functions, body temperature, muscle strength, skin moisture levels, menstrual cycles, weight, cholesterol levels, and more. Thyroid hormone production is regulated by another hormone called thyroid-stimulating hormone (TSH). Thyroid-stimulating hormone is made by the pituitary gland, a pea-sized gland located in the brain. When thyroid hormone levels in the blood are low, the pituitary releases more TSH. When thyroid hormone levels are high, the pituitary responds by dropping thyroid-stimulating hormone production. Hashimoto’s disease is an autoimmune disorder, meaning the body’s immune system attacks its own healthy cells and tissues. In Hashimoto’s disease, the immune system makes antibodies that attack cells in the thyroid and interfere with their ability to produce thyroid hormone. Large numbers of white blood cells called lymphocytes accumulate in the thyroid. Lymphocytes make the antibodies that drive the autoimmune process. What are the symptoms of Hashimoto’s disease? Many people with Hashimoto’s disease have no symptoms at first. 
As the disease slowly progresses, the thyroid usually enlarges and may cause the front of the neck to look swollen. The enlarged gland, called a goiter, may create a feeling of fullness in the throat but is usually not painful. After years, or even decades, the damage to the thyroid causes it to shrink and the goiter to disappear. Not everyone with Hashimoto’s disease develops hypothyroidism. For those who do, the hypothyroidism may be subclinical—mild and without symptoms. Other people have one or more of these common symptoms of hypothyroidism: - fatigue - weight gain - cold intolerance - joint and muscle pain - constipation - dry, thinning hair - heavy or irregular menstrual periods and impaired fertility - depression - a slowed heart rate Who is likely to develop Hashimoto’s disease? Hashimoto’s disease is about seven times more common in women than men. Although it often occurs in adolescent or young women, the disease more commonly appears between 40 and 60 years of age. Hashimoto’s disease tends to run in families. Scientists are working to identify the gene or genes that cause the disease to be passed from one generation to the next. Possible environmental influences are also being studied. For example, researchers have found that excess iodine consumption may inhibit thyroid hormone production in susceptible individuals. Certain drugs or viral infections may also contribute to autoimmune thyroid diseases. People with other autoimmune disorders are more likely to develop Hashimoto’s disease and vice versa. These disorders include: - vitiligo, a condition in which some areas of the skin lose their natural color - rheumatoid arthritis - Addison’s disease, in which the adrenal glands are damaged and cannot produce enough of certain critical hormones - type 1 diabetes - pernicious anemia, a type of anemia caused by inadequate vitamin B12 in the body How is Hashimoto’s disease diagnosed? Diagnosis begins with a physical examination and medical history. 
An enlarged thyroid gland may be detectable during a physical exam and symptoms may suggest hypothyroidism. Doctors will then do several thyroid function tests to confirm the diagnosis. The ultrasensitive TSH test is usually the first test performed. This blood test is the most accurate measure of thyroid activity available. Generally, a TSH reading above normal means a person has hypothyroidism. In people who produce too little thyroid hormone, the pituitary makes TSH continuously, trying to get the thyroid to produce more thyroid hormone. The T4 test measures the actual amount of circulating thyroid hormone in the blood. In subclinical hypothyroidism, the level of T4 in the blood is normal, but as the disease progresses, T4 levels drop below normal. The antithyroid peroxidase (anti-TPO) antibody test looks for the presence of thyroid autoantibodies. Most people with Hashimoto’s disease have these antibodies, but people whose hypothyroidism is caused by other conditions do not. How is Hashimoto’s disease treated? Treatment generally depends on whether the thyroid is damaged enough to cause hypothyroidism. In the absence of hypothyroidism, some doctors treat Hashimoto’s disease to reduce the size of the goiter. Others choose not to treat the disease and simply monitor their patients for disease progression. Hashimoto’s disease, with or without hypothyroidism, is treated with synthetic thyroid hormone. Doctors prefer to use synthetic T4 such as Synthroid rather than synthetic T3 because T4 stays in the body longer, ensuring a steady supply of thyroid hormone throughout the day. The so-called “natural” thyroid preparations made with desiccated animal thyroid are rarely prescribed today. 
The exact dose of synthetic thyroid hormone depends on a person’s age and weight; the severity of the hypothyroidism, if present; the presence of other health problems; and the use of other medications such as cholesterol-lowering drugs that could interfere with the action of synthetic thyroid hormone. Doctors routinely test the blood of patients taking synthetic thyroid hormone and make dosage adjustments as necessary. A normal, healthy thyroid and metabolic state can be restored with the use of synthetic thyroid hormone.
https://vivacare.com/toolkit/simhaeeobgyn/HealthTopic/hashimotos-thyroiditis-hypothyroidism
How Does the Thyroid Gland Affect Fertility? April 10, 2017, 4:35 pm The thyroid gland is shaped like a butterfly and sits low on the front of your neck. The hormones secreted by the thyroid gland influence factors such as metabolism, growth and body temperature. On occasion, however, the thyroid gland can produce too much or too little thyroid hormone and cause complications for couples hoping to start a family. Hypothyroidism and Fertility If a woman has an under-active thyroid condition (hypothyroidism) that is left untreated, her ability to conceive will be impaired. She will likely experience heavier or prolonged periods, or even no periods at all, which can cause anaemia. Luckily, the thyroid problems that interfere with pregnancy can be treated easily: by taking medication, thyroid hormone levels return to normal and the chances of becoming pregnant improve drastically. During pregnancy, a woman taking medication for hypothyroidism should take advice from her GP, who may increase the dosage. Depending on circumstances, the GP may also recommend regular blood tests throughout the nine months so that the dosage can be altered if required. Hyperthyroidism and Fertility On the other hand, an untreated over-active thyroid (hyperthyroidism) will cause lighter, irregular periods, and once again the woman will struggle to conceive. Hyperthyroidism is typically caused by Graves’ disease and can also result in reduced sperm count in men, another cause of infertility. If hyperthyroidism is not appropriately regulated when a woman is pregnant, there is an increased risk of miscarriage in the early stages. During pregnancy, a woman’s body requires enough thyroid hormone to support not only her own increased metabolic needs but also the growing baby’s brain and nervous system. In fact, during the first trimester, the foetus relies solely on the mother’s supply of thyroid hormone, which it receives through the placenta. 
This is why it is so important for the mother to have a normal level of thyroid hormone, both before and during pregnancy, and to seek advice from her GP.
https://www.conceptfertility.co.uk/2017/04/10/how-does-the-thyroid-affect-pregnancy/
Why does the thyroid gland enlarge both in hypothyroidism and hyperthyroidism? Hypothyroid goitre is due to lack of iodine in the diet, and hyperthyroid goitre (exophthalmic goitre, Graves' disease) is due to oversecretion of thyroxine or thyroid-stimulating hormone. How do these two apparently opposite effects produce a similar physical manifestation of the disorder (symptom), i.e. the enlargement of the thyroid gland? - Comment: I would bet it has to do with retaining water and/or minerals. Jan 14, 2014 at 0:55 - 1 Answer
https://biology.stackexchange.com/questions/14459/enlargement-of-thyroid-gland
8 Interesting Facts About the Thyroid 1. What is the Thyroid? Your thyroid is a butterfly-shaped gland that measures about 2 inches long and sits at the base of your neck, in front of your throat. The thyroid has two symmetrical sides, like butterfly wings, called lobes, which sit on each side of your windpipe; the isthmus connects the two sides. 2. What Does the Thyroid Do? The thyroid gland releases specific hormones that travel through the body and regulate vital functions. Thyroid hormones help regulate breathing, heart rate, metabolism, menstrual cycles, body temperature, blood pressure, and even your mood. For this reason, an imbalance in thyroid hormone levels can negatively affect your bodily functions and your mood. 3. How Does it Work? The thyroid gland, a part of the endocrine system, produces, stores and circulates hormones throughout the bloodstream. Many of these hormones affect cell production. The thyroid produces two main hormones: triiodothyronine (T3) and thyroxine (T4). The production of these two hormones is monitored and controlled by two glands in the brain, the hypothalamus and the pituitary. The pituitary gland releases thyroid-stimulating hormone (TSH) in order to regulate the activity of the thyroid gland. 4. Why Do You Need a Thyroid? Thyroid hormones T3 and T4 interact with almost every cell in the body, regulating the speed of their many processes. For example, high T3 and T4 levels will increase your heart rate and metabolism, whereas low levels will decrease them. 5. What is Hyperthyroidism? Hyperthyroidism is a condition in which a person’s thyroid is overactive, producing too much thyroid hormone. Too much T3 and T4 is released into the bloodstream, accelerating the speed of cellular processes. This can cause symptoms including, but not limited to, unintentional weight loss, rapid heart rate, irritability, anxiety, and increased sensitivity to high temperatures. 6. 
What is Hypothyroidism? Hypothyroidism is a condition in which a person's thyroid is underactive, meaning that it does not produce enough of the thyroid hormones. When too little T3 and T4 is produced, cell processes slow down. This can cause symptoms such as fatigue, diarrhea or constipation, difficulty concentrating, dry skin and hair, and joint or muscle pain.

7. Symptoms of Thyroid Cancer Swelling, lumps, or nodes found in your neck are the most common symptoms of thyroid cancer. Large thyroid tumors may also cause neck or facial pain, difficulty swallowing, hoarseness, coughing, voice changes, and shortness of breath.

8. Possible Causes & Risk Factors
- Age: Thyroid cancer generally occurs in people between the ages of 20 and 55, and is 2 to 3 times more common in females than males.
- Thyroid cancer is often diagnosed after pregnancy or menopause.
- Radiation: Exposure to high levels of radiation can increase your risk for thyroid cancer.
- Heredity: While the direct cause of thyroid cancer is unknown, thyroid disease (even non-cancerous) can be genetic.
https://ksmedcenter.com/8-interesting-facts-about-the-thyroid/
The thyroid gland regulates the body's metabolic functions and body temperature. A growing number of Americans are experiencing low thyroid function, also known as hypothyroidism. The thyroid gland is a butterfly-shaped gland of the endocrine system (hormonal system). It is located in the front of the neck. Its function is to produce thyroid hormones (which include T4 and T3) that are responsible for many functions of the body, including metabolism and growth. The synthesis of the thyroid hormones (T4 and T3) is regulated by TSH secreted by the pituitary gland.

What is hypothyroidism? Hypothyroidism is a dysfunction in which the thyroid gland does not produce enough thyroid hormones, slowing down normal function. As a result, there is a greater secretion of thyroid-stimulating hormone (TSH) from the pituitary gland. The secretion of TSH is the body's way of regulating the production of thyroid hormones: generally, the more TSH is secreted, the more thyroid hormones are produced. This is the body's attempt to get the thyroid gland to step up production of thyroid hormones so as to recover a normal hormone level. In primary hypothyroidism, however, this does not work, because the thyroid gland itself is dysfunctional. In order to diagnose hypothyroidism, determining the levels of TSH, T4 and T3 is important, since together they give a thorough picture of the health of the thyroid gland. High levels of TSH with low levels of T4 and T3 may indicate a damaged thyroid gland (primary hypothyroidism). Low levels of TSH with low levels of T4 indicate trouble in the brain, either with the pituitary gland (secondary hypothyroidism) or with the hypothalamus (tertiary hypothyroidism). Occasionally, there are also cases of subclinical hypothyroidism, a mild failure of the thyroid gland in which TSH is elevated while thyroid hormone levels remain low-normal; symptoms may be present. What are the causes of hypothyroidism in adults?
Autoimmune diseases The most common is a condition called Hashimoto's thyroiditis. It happens when the immune system, which protects the body against infections, mistakes the thyroid cells and their enzymes for invading organisms and attacks them. This can be checked by testing for thyroid antibodies.

Thyroiditis An illness caused by a problem with the immune system or by a viral infection, which inflames the thyroid and, consequently, causes thyroid hormones to be released suddenly. This produces a short period of hyperthyroidism, which then gives way to hypothyroidism.

Environmental toxins The thyroid gland is extremely sensitive to environmental toxins from radiation and endocrine disruptors (chemicals that interrupt the hormones in the body), such as fire retardants and phthalates.

Certain medications Medications such as amiodarone and lithium may trigger hypothyroidism, usually in those with a genetic predisposition. Some antitussive and expectorant syrups, iodinated salts and some antiseptics can also precipitate it.

Other conditions These include pituitary disorders; partial or complete surgical removal of the thyroid gland due to conditions such as thyroid nodules, thyroid cancer, or Graves' disease; and other diseases such as amyloidosis and sarcoidosis.

What are the symptoms?
- Weight gain
- Dry skin and hair
- Tiredness and/or drowsiness
- Reduced ability to concentrate
- Memory failures and forgetfulness
- Greater sensitivity to cold
- Hoarse voice and swollen face
- Constipation
- Pain and/or muscle cramps
- Rigidity or swelling in the joints
- In women, sometimes menstrual disorders

All these symptoms can go unnoticed for a while since they are nonspecific; that is, they can be common to other diseases and are often a reflection of a slowing metabolism. Some complications can occur if hypothyroidism is not treated.
A goiter, an enlargement of the thyroid gland, can develop; if the enlarged gland presses against the windpipe, it may lead to breathing problems. Other possible complications include mental health problems, complications in pregnancy, and myxedema, which can lead to coma and death.

How is it diagnosed? Hypothyroidism does not have any specific symptoms; everything people with hypothyroidism experience can also occur in other diseases. Therefore, the way to know for sure whether a patient has hypothyroidism is to perform blood tests that measure the levels of TSH, T4, and T3. An ultrasound scan may be done in addition to blood tests to detect any physical damage to the thyroid gland.

Treatment of hypothyroidism: The conventional treatment of hypothyroidism is usually synthetic thyroid hormone such as levothyroxine (T4), also known as Synthroid. Many times, conventional doctors only test TSH and T4 to check how the thyroid is working, leaving out important information about the active thyroid hormone, T3. Naturopathic medicine and functional medicine look at the thyroid holistically and naturally. Testing a full thyroid panel, naturopathic doctors look at many different factors that relate to thyroid dysfunction. Diet, lifestyle, nutritional deficiencies and exposure to environmental toxins are taken into account for overall thyroid health. Some patients may do better with synthetic hormones, but there are also natural hormones that may give better results by balancing both T4 and T3. Sometimes hormones are not necessary: correcting nutritional deficiencies and supporting the thyroid with herbs and minerals can correct a low-functioning thyroid. A full naturopathic intake can help uncover the cause of hypothyroidism and treat the underlying dysfunction for optimal health and a well-balanced thyroid.
https://baynaturalmedicine.com/hypothyroidism/
Facts about Thyroid Disorders - Hyperthyroidism & Hypothyroidism

The thyroid gland produces hormones that are responsible for the proper functioning of the body's systems. Thyroid hormones are essential for the proper growth and metabolism of the body, and they directly affect the functioning of most of the organs in the body. Thyroid disorders occur when the thyroid gland does not produce the right amount of hormones needed by the body. To fully understand thyroid disorders, let us have a look at the structure and function of the thyroid gland.

Structure of the Thyroid Gland The thyroid gland is a small bow-shaped gland that sits in front of the windpipe, right below the larynx (also known as the voice box). The thyroid gland has two lobes located on either side of the windpipe, connected by a narrow strip of connective tissue called the "isthmus".

Hormones Produced by the Thyroid Gland The three hormones produced by the thyroid gland are -
- Thyroxine (T4)
- Triiodothyronine (T3)
- Calcitonin

The main hormone produced by the thyroid gland is "Thyroxine", also referred to as T4. Once thyroxine enters the bloodstream, a small amount of the hormone is converted into "Triiodothyronine", also referred to as T3. Calcitonin is another hormone produced by the thyroid gland; it reduces calcium levels in the bloodstream when the concentration of calcium is above the normal level. The brain plays an important role in the proper functioning of the thyroid gland. When thyroid hormone levels are low, the pituitary gland in the brain produces a hormone called "Thyroid Stimulating Hormone", also referred to as TSH. TSH stimulates the thyroid gland to produce more T4 when T4 levels are low.

Importance of Iodine The main component of thyroid hormones is iodine. Iodine is very important for the proper functioning of the thyroid gland. Iodine cannot be produced by the body; it has to be absorbed from the nutrients that enter the bloodstream during digestion.
Functions of the Thyroid Gland The thyroid gland is responsible for the following functions in the body -
- maintaining body temperature
- keeping the heart beating properly and maintaining the strength of the pulse
- proper utilization of food
- proper growth of the brain in children
- overall growth in children
- the quick response of the nervous system

What is Hyperthyroidism - Causes and Symptoms Hyperthyroidism is a condition in which thyroid hormones in the blood are above the normal level. Hyperthyroidism occurs when there is an increase in the production of the thyroid hormone thyroxine. A condition called Graves' Disease is responsible for most cases of hyperthyroidism. Graves' Disease occurs when the body fights against and attacks the thyroid gland. The thyroid gland responds to the attack by producing more thyroxine, and the thyroxine level in the blood shoots up above the normal level. Sometimes hyperthyroidism can be caused by a swollen thyroid gland or by small growths in the thyroid gland called nodules.

Symptoms
- sudden weight loss
- rapid heartbeat
- nervousness, anxiety
- tremors
- excess sweating
- fatigue
- menstrual irregularities
- difficulty sleeping
- thin, brittle hair
- enlarged thyroid gland (goitre)

There are medications to treat hyperthyroidism. Hyperthyroidism can be life-threatening if it is not treated.

What is Hypothyroidism - Causes and Symptoms Hypothyroidism is a condition that occurs when thyroid hormone levels in the blood are low. Hypothyroidism occurs when the thyroid gland does not function properly and produce enough thyroid hormones. One main cause of hypothyroidism is a condition called "Thyroiditis", in which the thyroid gland is swollen and inflamed and does not produce enough thyroid hormones for the body to function properly. Hypothyroidism can also be due to Hashimoto's Disease, a condition in which the body attacks the thyroid gland and destroys it.
Other causes of hypothyroidism are -
- effect of certain medications
- birth defects
- surgical removal of a part or the whole thyroid gland
- pituitary tumor or surgery
- treatment with radioactive iodine

Symptoms Hypothyroidism may include the following symptoms -
- fatigue
- extreme sensitivity to cold temperatures
- dry skin
- puffy face
- weight gain without reason
- muscle weakness
- muscle stiffness and aches
- swelling of joints
- slow heartbeat
- depression
- irregular or heavy menstrual periods

When hypothyroidism is not treated, it may lead to a condition called myxoedema. Myxoedema occurs in advanced stages of hypothyroidism and can be life-threatening; it can lead to decreased blood pressure, decreased breathing and decreased body temperature. There are tests to detect thyroid conditions, and medications are available for the treatment of thyroid disorders. © 2013 Nithya Venkat
https://hubpages.com/education/Thyroid-Disorders-Hypothyroidism-Hyperthyroidism
What is Thyroid? Numerous glands in the human body synthesize and release substances into the bloodstream. The thyroid gland is located on the front side of the neck; it is a small butterfly-shaped organ wrapped around the trachea (windpipe). The substances secreted by various glands help the body's parts perform specific tasks, and the thyroid gland likewise produces substances that help the body perform vital tasks related to metabolic control. There are two distinct conditions based on thyroid hormone levels: if the body produces too much of the thyroid hormone, one can develop hyperthyroidism; if the body produces less than the needed amount, hypothyroidism can develop.

Tasks Performed by the Gland Hormones like T4 (thyroxine, containing four iodide atoms) and T3 (triiodothyronine, containing three iodide atoms) control the body's metabolism. The two hormones guide the body's cells to utilize energy in the right amount. How do we get energy from food? Metabolism is the process through which energy is extracted from food. The body's metabolism works like a generator, and the energy produced performs all of the body's tasks. The pituitary gland, situated in the centre of the skull, oversees the process by adjusting the amount of hormone whenever it is in excess or scarce. This monitoring is done through the hormone called Thyroid Stimulating Hormone (TSH).

Things To Know About Thyroid Disease As mentioned above, the thyroid gland's function is to produce and secrete substances that balance the body's metabolism. When the gland fails to produce the right amount of these substances, the result is thyroid disease. In hyperthyroidism, the body receives too much of the thyroid hormone and burns through energy too quickly, which results in a higher heart rate, weight loss, and nerve issues.
On the contrary, in hypothyroidism, one encounters weight gain, lethargy, and intolerance to low temperatures.

Symptoms:

| Hyperthyroidism | Hypothyroidism |
| --- | --- |
| Weight loss | Weight gain |
| Anxiety, nervousness, and irritability | Fatigue |
| Eye problems (vision and/or irritation) | Minor amnesia |
| Irregular menstrual flow | Higher menstrual flow |
| Enlarged thyroid or goitre | Dry scalp |
| Intolerance to high temperature | Intolerance to low temperature |

Causes:

Hypothyroidism

Thyroiditis: In this case, the thyroid gland swells up, which leads to a lower amount of thyroid hormone production.

Postpartum thyroiditis: A temporary condition that occurs in 5% to 9% of women after pregnancy and childbirth.

Iodine deficiency: Since iodine is used for the production of the thyroid hormones, its deficiency creates problems for millions of people around the world.

Hashimoto's thyroiditis: An inherited, painless, autoimmune condition in which immune cells attack the thyroid.

Non-functioning thyroid gland: In many cases, children are born with a non-functioning thyroid gland, which, if left untreated, leads to psychological and developmental problems later on. For this reason, every newborn goes through a screening process to check that the gland is working.

Hyperthyroidism

Graves' disease: In this condition, the gland becomes overactive and produces more thyroid hormone than needed. The condition is also known as diffuse toxic goitre.

Nodules: In this condition, nodules within the gland become overactive. A gland containing several overactive nodules is known as a toxic multinodular goitre, and a single overactive nodule is known as a toxic autonomously functioning thyroid nodule.

Thyroiditis: A temporary condition, lasting for weeks or months, that can be quite painful or may not be felt at all. In this condition, the thyroid releases hormones that were stored.
Excessive iodine: As mentioned earlier, iodine is used to make thyroid hormones. Thus, when it is present in an excessive amount, the thyroid gland produces more hormones than needed.

How to Prevent Thyroid Problems? Food Items and Habits for Thyroid Care and Thyroid Hormone Treatment

Brazil Nuts: Being a rich source of selenium, fibre, calcium, protein, and magnesium, Brazil nuts help address deficiencies common in thyroid patients. Selenium helps convert thyroxine (T4) to its active hormone form, T3. The recommended daily portion of Brazil nuts is 1-3, and it is important to check how much selenium is present in the Brazil nuts you purchase; the upper limit for selenium is 400 mcg per day.

Iodized Salt: One of the easiest ways to cope with iodine deficiency is to use iodized salt. As mentioned above, balanced thyroid hormone levels are necessary, and iodine is only recommended in the right amount, as an overdose is also hazardous. The National Institutes of Health recommends roughly 150 mcg of iodine per day for adults. Without iodine, the gland lacks the building blocks to produce T3 and T4, and a deficiency can cause hypothyroidism, a condition in which the body stops producing enough thyroid hormone. Nonetheless, it is easy to prevent: add a pinch of iodized salt to your meals and you can meet your daily requirement. Some excellent food sources of iodine are seaweed, yoghurt, and seafood such as tuna and shrimp.

Eggs: Eggs are an eminent source of iodine as well as protein. As per research, one egg can provide around 16% of the daily iodine requirement. Eggs also contain a good amount of selenium, which plays a vital role in thyroid function and helps counter oxidative stress. While the whites are full of protein, the yolk contains healthy cholesterol, thus catering to a healthy diet.
Prefer Decaf Over Caffeine: Those who drink coffee usually add milk and sugar. Milk contains natural sugar, which is broken down into glucose and can spike insulin levels; it is therefore not recommended for people with thyroid issues, as it can disrupt their blood sugar levels. Drinking several cups of coffee also raises the stress hormone cortisol, which can increase anxiety. If you cannot do without your daily cup of coffee, it is worth switching to decaf, though moderate consumption is recommended even then.

Aerobic Exercise: A study found that moderate aerobic exercise can fight hypothyroidism by boosting energy levels and can combat hyperthyroidism by promoting better sleep. Exercise that raises the heart rate has been shown to increase levels of the thyroid hormones thyroxine (T4), free thyroxine (fT4), and thyroid-stimulating hormone (TSH), so it is worth working out regularly to keep the heart pumping well. Exercise boosts metabolism, which helps burn calories faster and promotes muscle growth; the more muscle you have, the faster you lose excess fat. This is beneficial for people with hypothyroidism, while for people with hyperthyroidism, strength training can help increase bone density.

Taking care of the thyroid gland is necessary to maintain the body's metabolic processes: follow a balanced diet, work out, and consult a doctor when facing any symptoms. We hope this article provided all the information you needed. Stay tuned for more health-related blogs!
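To make the aerobic-exercise advice above concrete, here is a small sketch for estimating a moderate-intensity heart-rate zone. It uses the common "220 minus age" rule of thumb for maximum heart rate and a 50-70% band for "moderate" intensity; both numbers are generic fitness rules of thumb assumed for this example, not figures from the article, and individual targets should come from a doctor or trainer.

```python
# Rough moderate-intensity heart-rate zone estimate.
# The "220 - age" max-heart-rate rule and the 50-70% band are common
# rules of thumb assumed for this sketch, not claims from the article.

def moderate_zone(age, low_frac=0.50, high_frac=0.70):
    """Return (low, high) beats-per-minute bounds for moderate aerobic work."""
    max_hr = 220 - age  # crude estimate of maximum heart rate
    return (round(max_hr * low_frac), round(max_hr * high_frac))

print(moderate_zone(40))  # (90, 126)
```

For a hypothetical 40-year-old, the sketch suggests keeping the pulse between roughly 90 and 126 beats per minute during moderate aerobic work.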
https://nhassurance.com/blog/5-ways-to-reset-your-thyroid-hormone-levels
Endocrinology is a branch of medicine dedicated to diagnosing and treating diseases related to the endocrine glands and hormonal system. The human endocrine system comprises different glands (e.g. the thyroid gland, pancreas and adrenal glands) that produce and secrete hormones. Depending on their target, the hormones have several different effects and tasks. It is often thought that hormonal turmoil affects only teenagers, but in fact people of all ages may have hormone-related disorders. Below we describe some diseases related to the endocrine glands and hormones.

It is estimated that in the general population, endocrine disorders are most frequently related to the thyroid gland (manifesting mainly as hyper- and hypothyroidism). Thyroid-related problems may affect up to 10% of people. The second most common is diabetes, which affects 8.5% of the population. Although diabetes is at present less common than thyroid-related diseases, during the last decades there has been a staggering increase in its frequency in the Western world, so that in the worst-case scenario diabetes will become one of the most important causes of death in the 21st century. Less frequent endocrine disorders include Cushing's syndrome, affecting about 2-3% of people, and Addison's disease, which is very rare, affecting less than 0.5% of the population.

Endocrine diseases are usually diagnosed based on blood analysis. In that case, the presence of certain complaints and symptoms has usually already raised the suspicion of a hormonal disorder. Depending on the diagnosis, additional tests are performed, such as ultrasound or stimulation tests (e.g. the glucagon stimulation test and insulin tolerance test to study the secretion of growth hormone). There are also several rare diseases (like the MEN syndrome, caused by a known gene mutation) which can be diagnosed by means of genetic tests.
- Diabetes is a metabolic disorder in which blood sugar stays above normal over a longer period of time. There are two main types of this disease: type 1 is insulin-dependent and type 2 non-insulin-dependent. Type 1 diabetes is caused by the inability of the pancreas to produce enough insulin. Type 2 diabetes is characterized by insulin resistance or, in other words, the inability of the organism to respond to insulin in a normal way; type 2 diabetes may lead to insulin deficiency as well. Another separate type is gestational diabetes, which develops during pregnancy and usually disappears after giving birth. The symptoms of diabetes include frequent urination and increased thirst and appetite. If left untreated, diabetes may lead to severe complications. For treatment, both oral and injectable medications are used. A healthy lifestyle and nutrition are also very important in the prevention and treatment of this disease.
- Hypothyroidism, also called underactive thyroid, is a frequent endocrine disorder. Hypothyroidism means that the thyroid gland does not produce enough thyroid hormone. This leads to symptoms like a constant feeling of cold, fatigue, depression, constipation, slow metabolism, weight gain, and even obesity. Hypothyroidism can be diagnosed based on blood analysis. For treatment, oral thyroid hormone replacement therapy is applied.
- Hyperthyroidism is caused by a thyroid gland that is too active and produces too much thyroid hormone. The symptoms are very individual: some patients have no symptoms at all, while others experience irritability, sleep problems, tachycardia, diarrhoea and weight loss. Hyperthyroidism can also be diagnosed based on a simple blood analysis.
- Disorders related to sex hormones: sex hormones include androgens, estrogens and progestagens. Androgens are considered to be male sex hormones, while estrogens and progestagens are considered to be female sex hormones.
The level of sex hormones in the organism depends on gender, age and individual differences. With aging, the level of sex hormones decreases to a certain extent, but this in itself is not harmful or dangerous. However, the production of sex hormones may also decrease as a result of certain diseases. On the other hand, in some people the level of sex hormones is too high. The best known such condition is hyperandrogenism, also known as androgen excess. In women, the most common hyperandrogenism-related condition is polycystic ovary syndrome. Hair loss, also known as alopecia or baldness, has also been associated with hyperandrogenism; however, according to present knowledge, it is rather caused by the combined effect of genetic background and a certain male sex hormone.
- Growth hormone-related disorders: growth hormone stimulates growth and the multiplication and renewal of cells. In certain diseases and conditions the organism may produce too much or too little growth hormone. Excessive growth hormone may lead to gigantism (if overproduction of this hormone happens in childhood) or acromegaly (if excessive production happens in adulthood).
- Cushing's syndrome is a cluster of symptoms caused by excessive cortisol. The main reasons for developing Cushing's syndrome include prolonged use of certain medications (e.g. prednisolone) and tumors. If this condition is caused by a pituitary adenoma, it is called Cushing's disease. The symptoms of Cushing's syndrome include high blood pressure, fat gain in the trunk and lower belly, stretch marks, "moon face", and acne.
- Addison's disease, also known as primary adrenal insufficiency, is a chronic endocrine disorder whose symptoms generally appear slowly. The reason for Addison's disease is the inability of the adrenal glands to produce enough steroid hormones (cortisol and aldosterone).
The symptoms of Addison's disease include darkening of the skin (which actually looks like a sun tan), stomach-ache, weakness, and weight loss. Without treatment this condition may lead to death. Although this disease cannot be cured, its symptoms can be alleviated or even eliminated with hormone replacement therapy.
- Sleep disorders: melatonin is a so-called sleep hormone, produced by the body itself. Melatonin regulates the sleep/wake cycle and the circadian rhythm. Quite often melatonin is used for treating sleep disorders and correcting the circadian rhythm.

Services: Endocrinologist's consultation An endocrinologist is a specialized physician who treats diseases and abnormalities of the hormonal system, including pathologies of the hypothalamus and the pituitary, adrenal, pancreatic and thyroid glands. The most common hormonal diseases are diabetes, hypothyroidism (underactive thyroid gland) and hyperthyroidism (overactive thyroid gland). During the first appointment, the endocrinologist examines the patient and prescribes the necessary tests and analyses.
https://medihub.org/en/all-services/skin-inflammation-infection/endocrinology
Resistance to thyroid hormones can lead to medication-resistant depression and bipolar disorder. That's not all: the thyroid gland has an impact on almost all of the metabolic processes in the body.

There Are 2 Main Types of Thyroid Disorders:
- Hyperthyroidism
- Hypothyroidism

Here Are the Most Commonly Experienced Hypothyroidism Symptoms: Hypothyroidism is a thyroid disorder characterized by underproduction of thyroid hormone; in other words, it occurs when the thyroid does not produce enough thyroid hormones. Because the thyroid gland is responsible for regulating metabolism, a slow metabolism is one of the warning signs of hypothyroidism. Approximately 20 million Americans experience some kind of thyroid disorder, and a great number of people have hypothyroidism without even being aware of it. Therefore, if you experience any of the symptoms below, consult your physician.
https://100yummy.com/everything-you-need-to-know-about-hypothyroidism-signs-and-symptoms-how-to-understand-your-blood-tests-and-how-to-treat-it/
Hypothyroidism is a condition in which the body lacks sufficient thyroid hormone. The main purpose of the thyroid hormone is to run the body's metabolism, which is why people with this condition have symptoms associated with a slow metabolism. Hypothyroidism is an extremely common condition; over five million Americans have it, and as many as 10% of all women may have some degree of thyroid hormone deficiency. Unfortunately, when we talk about a slow metabolism, we are often talking about a patient's battle with their weight.

Possible causes of hypothyroidism There are two fairly common causes of hypothyroidism.

1. Inflammation of the thyroid gland: This inflammation leaves a large percentage of the thyroid's cells damaged and incapable of producing sufficient hormone. The most common inflammation of the thyroid is a disorder called autoimmune thyroiditis (also called Hashimoto's thyroiditis), a form of thyroid inflammation caused by the body's own immune system.

2. Thyroid gland surgery: The second major cause of hypothyroidism is previous medical treatment of the thyroid gland. The treatment of many thyroid conditions includes surgical removal of a portion of the gland; if the total mass of thyroid-producing cells left after the surgery is not enough to meet the needs of the body, the patient will develop hypothyroidism. Radioactive iodine therapy can have a similar effect: in benign conditions, its purpose is to destroy a portion of the thyroid to prevent goiters from growing larger or producing too much hormone.

Rare causes of hypothyroidism There are several other rare causes of hypothyroidism. The strangest of these causes a completely normal thyroid gland to fail to produce enough hormone because of a problem in the pituitary gland: if the pituitary does not produce enough Thyroid-Stimulating Hormone (TSH), the thyroid simply does not receive the signal to make hormone.
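The TSH/T4 patterns these articles describe (high TSH with low T4 pointing to a primary thyroid problem, low TSH with low T4 pointing to a pituitary or hypothalamic problem, elevated TSH with still-normal hormone levels pointing to subclinical disease) amount to a simple decision rule. The sketch below only illustrates that logic: the function name and the numeric reference ranges are hypothetical placeholders, and nothing here is a diagnostic tool.

```python
# Toy sketch of the thyroid-panel interpretation patterns described above.
# All reference ranges are hypothetical placeholders, NOT clinical values,
# and this illustrates the articles' logic only; it is not medical advice.

NORMAL_TSH = (0.4, 4.0)   # assumed range, mIU/L
NORMAL_T4 = (0.8, 1.8)    # assumed range, free T4 in ng/dL

def classify_panel(tsh, t4):
    """Map a (TSH, T4) pair to the pattern named in the text."""
    tsh_lo, tsh_hi = NORMAL_TSH
    t4_lo, t4_hi = NORMAL_T4
    if tsh > tsh_hi and t4 < t4_lo:
        # Gland fails to respond to a strong TSH signal -> primary problem.
        return "primary hypothyroidism"
    if tsh < tsh_lo and t4 < t4_lo:
        # Low T4 without a TSH push -> pituitary/hypothalamic (central) problem.
        return "central hypothyroidism"
    if tsh > tsh_hi and t4_lo <= t4 <= t4_hi:
        # Elevated TSH but hormone levels still normal -> subclinical.
        return "subclinical hypothyroidism"
    if tsh < tsh_lo and t4 > t4_hi:
        return "hyperthyroidism"
    return "indeterminate"

print(classify_panel(8.2, 0.5))   # primary hypothyroidism
print(classify_panel(0.1, 0.5))   # central hypothyroidism
```

Under these assumed ranges, a panel of TSH 8.2 with free T4 0.5 falls into the primary-hypothyroidism pattern, exactly the "high TSH, low T4" case described in the text.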
Signs and symptoms of hypothyroidism The signs and symptoms of hypothyroidism can vary widely, depending on the severity of the hormone deficiency, and they can develop gradually over the years. At first, patients complain of barely noticeable symptoms such as fatigue and sluggishness, but after some time most patients develop more obvious signs and symptoms, including:
- Unexplained weight gain
- Muscle aches, tenderness and stiffness
- Pain, stiffness or swelling in joints
- Muscle weakness
- Heavier menstrual periods
- Increased sensitivity to cold
- Constipation
- Pale, dry skin
- A puffy face
- Hoarse voice
- Elevated blood cholesterol levels
- Depression

When hypothyroidism isn't treated, signs and symptoms can gradually become more severe. Advanced hypothyroidism is a condition known as myxedema. It is rare, but when it occurs it can be life-threatening. Signs and symptoms include:
- low blood pressure
- decreased breathing
- decreased body temperature
- unresponsiveness

Hypothyroidism in children and teens Although hypothyroidism most often affects middle-aged and older women, almost anyone can develop the condition, including infants and teenagers. Initially, babies born without a thyroid gland or with a gland that doesn't work properly may have only a few symptoms. The most common symptoms of this congenital hypothyroidism include:
- Yellowing of the skin and whites of the eyes (jaundice)
- Frequent choking
- Protruding tongue
- Constipation
- Poor muscle tone
- Excessive sleepiness

Untreated hypothyroidism in infants can lead to severe physical and mental retardation. As adults, they may exhibit several other symptoms such as:
https://www.steadyhealth.com/articles/hypothyroidism-and-the-weight-battle
7 Symptoms Of Thyroid Issues
December 6, 2020
Your thyroid gland produces hormones that aid in cellular and tissue functions throughout your body. A malfunctioning thyroid gland can result in a number of signs of thyroid issues. If your thyroid does not produce an adequate amount of thyroid hormone, you will be diagnosed with hypothyroidism. Conversely, if it produces too much thyroid hormone, you will be diagnosed with hyperthyroidism.
Signs of Thyroid Issues
The American Thyroid Association reports that up to 20 million Americans have a thyroid condition. Unfortunately, an estimated 60 percent of those who have thyroid disease are unaware of their condition. Women are over eight times more likely to have thyroid problems than men. Below are some signs your thyroid may not be functioning properly.
Bowel Changes
The thyroid gland produces hormones that help regulate your digestion. When the thyroid gland is not producing enough thyroid hormones, your digestive system can become sluggish, leading to abdominal cramping, gas, nausea, and constipation. If the thyroid gland is producing too much thyroid hormone, gut motility can increase, resulting in gas, vomiting, diarrhea, and abdominal cramping.
Brain Fog
Are there days when confusion seems to reign supreme? Does it feel like you are walking around in a fog? If so, your thyroid may not be functioning properly. Hyperthyroidism can interfere with your ability to focus. Hypothyroidism, on the other hand, can cause forgetfulness and brain fog.
Goiter
A goiter is a painless, abnormal enlargement of the thyroid gland or the surrounding tissues. This swelling can be caused by hypothyroidism, hyperthyroidism, cancer, or a cause that is unrelated to the thyroid. Typically, a goiter causes no problems; however, if it becomes large, you may develop a dry cough, experience breathing difficulties, or have difficulty swallowing.
Infertility
Hypothyroidism can affect your ovulation cycle, resulting in infertility. When you have low thyroid hormones, your body may not release an egg. Those with hyperthyroidism can have an irregular period that is lighter than normal. Hyperthyroid women may have trouble conceiving, experience miscarriages, or have a baby who experiences health issues in the womb.
Sleep Problems
Thyroid conditions can interfere with your sleep. When you suffer from hyperthyroidism, various body functions speed up, which can cause you to feel jittery or wired. You may experience a racing pulse and anxious thoughts, which can prevent you from getting to sleep. With hypothyroidism, your thyroid gland does not make an ample amount of thyroid hormone, which can cause you to feel exhausted or fatigued, even after getting a full night's sleep.
Temperature Issues
If you have a thyroid issue, your body may have difficulty regulating its core temperature. When your thyroid overproduces thyroid hormones, your core temperature can rise, which can cause you to feel hot all of the time. Conversely, when the thyroid gland underproduces thyroid hormones, your core body temperature decreases, which can make you feel cold most of the time.
Unexplained Weight Gain or Weight Loss
An unexpected change in your weight can be a sign of a problem with your thyroid gland. When your thyroid produces too much thyroid hormone, your metabolism is ramped up so high that it becomes difficult to maintain your body weight; instead, you lose weight unexpectedly. If your thyroid is not producing enough thyroid hormone, your metabolism decreases, your body begins storing fat, and you gain weight. Hypothyroidism can make it extremely difficult to lose weight, even with diet and exercise. Thyroid issues can be treated once a proper diagnosis has been made.
This small gland creates hormones that influence every organ and system within the body. There are two thyroid hormones – T3 and T4. Both of these thyroid hormones travel from the thyroid gland to various parts of the body (the heart, brain, liver, skin, bones, etc.) via the bloodstream. When the thyroid hormones get to the various parts of the body, your genes that regulate specific bodily functions are activated. A malfunctioning thyroid gland can affect numerous systems throughout the body, causing a plethora of symptoms. In the beginning, the symptoms can be so mild that you do not even realize that you have a thyroid problem. Over time, these symptoms can worsen. Unfortunately, the symptoms of a malfunctioning thyroid gland can mimic a number of other health issues. If you are experiencing any of the above symptoms, it may be time to have your thyroid checked by an integrative thyroid specialist. In order to diagnose your thyroid condition, a variety of blood tests, along with a physical exam may be required. Thyroid blood tests measure the amount of thyroid hormone in your bloodstream. Experiencing brain fog, temperature issues or unexplained weight changes? These are signs of thyroid issues.
https://www.rosewellness.com/7-symptoms-of-thyroid-issues/
The thyroid gland is a butterfly-shaped organ composed of two cone-like lobes or wings, a right lobe and a left lobe, connected via the isthmus. The thyroid is one of the larger endocrine glands, weighing 2-3 grams in neonates and 18-60 grams in adults, and it enlarges in pregnancy. The organ is situated on the anterior side of the neck, lying against and around the larynx and trachea and reaching posteriorly to the oesophagus and carotid sheath. It starts cranially at the oblique line on the thyroid cartilage and extends inferiorly to approximately the fifth or sixth tracheal ring. The thyroid gland is covered by a thin fibrous sheath, the capsula glandulae thyroideae, composed of an internal and an external layer. The external layer is anteriorly continuous with the lamina pretrachealis fasciae cervicalis and posterolaterally continuous with the carotid sheath. The gland is covered anteriorly by the infrahyoid muscles and laterally by the sternocleidomastoid muscle. On the posterior side, the gland is fixed to the cricoid and tracheal cartilage and the cricopharyngeus muscle by a thickening of the fascia that forms the posterior suspensory ligament of Berry. In variable extent, Lalouette's pyramid, a pyramidal extension of the thyroid lobe, is present at the most anterior side of the lobe. In this region, the recurrent laryngeal nerve and the inferior thyroid artery pass next to or through the ligament and tubercle. The thyroid is supplied with arterial blood from the superior thyroid artery, a branch of the external carotid artery, and the inferior thyroid artery, a branch of the thyrocervical trunk, and sometimes by the thyroid ima artery, branching directly from the brachiocephalic trunk. The venous blood is drained via the superior thyroid veins, draining into the internal jugular vein, and via the inferior thyroid veins, draining via the plexus thyroideus impar into the left brachiocephalic vein.
Lymphatic drainage frequently passes through the lateral deep cervical lymph nodes and the pre- and paratracheal lymph nodes. The gland receives parasympathetic nerve input from the superior laryngeal nerve and the recurrent laryngeal nerve.
EMBRYOLOGY
In the fetus at 3-4 weeks of gestation, the thyroid gland appears as an epithelial proliferation in the floor of the pharynx at the base of the tongue, between the tuberculum impar and the copula linguae. Over the next few weeks, it migrates to the base of the neck, passing anterior to the hyoid bone. During migration, the thyroid remains connected to the tongue by a narrow canal, the thyroglossal duct. Thyrotropin-releasing hormone (TRH) and thyroid-stimulating hormone (TSH) start being secreted from the fetal hypothalamus and pituitary at 18-20 weeks of gestation, and fetal production of thyroxine (T4) reaches a clinically significant level at 18-20 weeks.
PHYSIOLOGY
The thyroid gland uses iodine (mostly available from the diet in foods such as seafood, bread, and salt) to produce thyroid hormones. The primary function of the thyroid is production of the hormones thyroxine (T4) and triiodothyronine (T3), which account for 99% and 1% of thyroid hormones present in the blood respectively, as well as calcitonin. Up to 80% of the T4 is converted to T3, the active hormone that affects the metabolism of cells, by peripheral organs such as the liver, kidney and spleen.
Role of hormones
The thyroid hormones act on nearly every cell in the body.
* They act to increase the basal metabolic rate.
* They affect protein synthesis.
* They help regulate long bone growth (synergistically with growth hormone).
* They help in neuronal maturation.
* They are essential to the proper development and differentiation of all cells of the human body.
* They regulate protein, fat, and carbohydrate metabolism.
* They also stimulate vitamin metabolism.
* Thyroid hormone leads to heat generation in humans.
TRIIODOTHYRONINE (T3)
Triiodothyronine, also known as T3, is a thyroid hormone. Production of T3 is activated by thyroid-stimulating hormone (TSH), which is released from the pituitary gland. As the active hormone, the effects of T3 on target tissues are roughly four times more potent than those of T4. However, the concentration of T3 in human blood plasma is only about one-fortieth that of T4.
* T3 increases the basal metabolic rate.
* It increases production of the Na+/K+-ATPase.
* T3 stimulates the production of RNA polymerase I and II and therefore increases the rate of protein synthesis, as well as the rate of protein degradation.
* It increases the rate of glycogen breakdown and glucose synthesis in gluconeogenesis.
* T3 increases the heart rate and force of contraction, thus increasing cardiac output.
* It increases the rate of lipolysis.
* It affects the lungs and influences the postnatal growth of the central nervous system.
THYROXINE (T4)
Thyroxine is the main hormone secreted into the bloodstream by the thyroid gland. It is inactive, and most of it is converted to an active form called triiodothyronine by organs such as the liver, spleen and kidneys. Thyroxine is also known as T4, tetraiodothyronine, or thyroxin. Thyroxine is formed by the molecular addition of iodine to the amino acid tyrosine while the latter is bound to the protein thyroglobulin.
* It stimulates the consumption of oxygen.
* It regulates the body's metabolic rate.
* It regulates heart and digestive functions.
* It regulates muscle control.
* It helps in brain development and the maintenance of bones.
THYROID STIMULATING HORMONE (TSH)
Thyroid-stimulating hormone (also known as TSH or thyrotropin) is a hormone that stimulates the thyroid gland to produce thyroxine (T4), and then triiodothyronine (T3), which stimulates the metabolism of almost every tissue in the body.
It is a glycoprotein hormone synthesized and secreted by thyrotrope cells in the anterior pituitary gland, and it regulates the endocrine function of the thyroid gland.
THYROTROPIN RELEASING HORMONE (TRH)
Thyrotropin-releasing hormone (TRH), also called thyrotropin-releasing factor (TRF), thyroliberin or protirelin, is a tropic, tripeptide hormone that stimulates the release of TSH and prolactin from the anterior pituitary.
DISORDERS RELATED TO THE THYROID GLAND
Thyroid disorders include hyperthyroidism (abnormally increased activity), hypothyroidism (abnormally decreased activity) and thyroid nodules, which are generally benign thyroid neoplasms but may be thyroid cancers. All of these disorders may give rise to goiter, that is, an enlarged thyroid. Thyroid problems are more common in women than in men. They are among the most common medical conditions but, because their symptoms often appear gradually, they are commonly misdiagnosed.
HYPOTHYROIDISM
Hypothyroidism is the underproduction of the thyroid hormones T3 and T4. It is estimated that 3% to 5% of the population has some form of hypothyroidism. The condition is more common in women than in men, and its incidence increases with age. Hypothyroid disorders may occur as a result of:
* congenital thyroid abnormalities
* autoimmune disorders such as Hashimoto's thyroiditis
* iodine deficiency
* removal of the thyroid following surgery to treat severe hyperthyroidism or thyroid cancer, or destruction of the gland by radioactive iodine
* lymphocytic thyroiditis (which may occur after hyperthyroidism)
* pituitary or hypothalamic disease
Negative feedback mechanisms result in growth of the thyroid gland when thyroid hormones are being produced in insufficient quantities, as a means of increasing thyroid output.
SYMPTOMS OF HYPOTHYROIDISM
Patients with mild hypothyroidism may have no signs or symptoms.
The symptoms generally become more obvious as the condition worsens. Common symptoms include:
* Fatigue
* Modest weight gain
* Cold intolerance
* Excessive sleepiness
* Dry, coarse hair and dry skin
* Constipation
* Vague aches and pains
* Increased cholesterol levels
* Swelling of the legs
* Menstrual irregularities
As the disease becomes more severe, there may be puffiness around the eyes, a slowing of the heart rate, a drop in body temperature, and heart failure.
TREATMENT OF HYPOTHYROIDISM
Hypothyroidism is treated with hormone replacement therapy, such as levothyroxine, which is typically required for the rest of the patient's life. Thyroid hormone treatment is given under the care of a physician and may take a few weeks to become effective. Treatment of an underactive thyroid is long term.
HOMEOPATHIC TREATMENT
Homeopathic treatment aims at stimulating the thyroid gland to produce its own thyroid hormones. An external supply of the hormone is not a cure but a stopgap arrangement. This is possible in many cases, if not all. If achieved successfully, a lifelong thyroid supplement may not be required.
CASE STUDY OF HYPOTHYROIDISM
A 32-year-old woman with a history of nephrotic syndrome complained of numbness of her right index and ring fingers and had gained about 6 kg over the previous year. She exhibited a tired look with slight periorbital puffiness. For the last six months she had noted dry skin, decreased energy and a change in her voice.
Thyroid function tests:
TEST            OBSERVED VALUE   NORMAL RANGE
Free thyroxine  0.39 ng/dL       0.8-2.0 ng/dL
TSH             68 µIU/mL        0.2-5.5 µIU/mL
What is the diagnosis?
According to the above report, the patient had been suffering from the symptoms of hypothyroidism described above. Her thyroid function tests show that the observed TSH is high and the observed free thyroxine is low compared with the normal ranges.
Therefore the patient is suffering from hypothyroidism and should follow treatment for it under an endocrinologist.
HYPERTHYROIDISM
Hyperthyroidism, or overactive thyroid, is the overproduction of the thyroid hormones T3 and T4. It is most commonly caused by Graves' disease, an autoimmune disease in which antibodies are produced that stimulate the thyroid to secrete excessive quantities of thyroid hormones. The disease can result in the formation of a toxic goiter as the thyroid grows in response to the loss of normal negative feedback.
SYMPTOMS OF HYPERTHYROIDISM
It presents with symptoms such as:
* Thyroid goiter
* Protruding eyes
* Palpitations
* Excess sweating
* Diarrhea
* Weight loss
* Muscle weakness
* Unusual sensitivity to heat
* Increased appetite
TREATMENT OF HYPERTHYROIDISM
Beta blockers are used to decrease symptoms of hyperthyroidism such as increased heart rate, tremors, anxiety and heart palpitations. Anti-thyroid drugs are used to decrease the production of thyroid hormones, particularly in the case of Graves' disease. These medications take several months to reach full effect and have side effects such as skin rash or a drop in white blood cell count. They involve frequent dosing (often one pill every 8 hours) and often require frequent doctor visits and blood tests to monitor the treatment. Due to the side effects and inconvenience of such drug regimens, some patients choose to undergo radioactive iodine-131 treatment. Radioactive iodine is administered in order to destroy a portion of, or the entire, thyroid gland, since radioactive iodine is selectively taken up by the gland and gradually destroys its cells. Alternatively, the gland may be partially or entirely removed surgically.
CONCLUSION
The thyroid is the master gland of metabolism and energy, and problems with the gland affect everything from weight, to mental health, to fertility, heart disease risk, and many other important aspects of our day-to-day health. Thyroid issues are becoming more and more of a problem for people, and are a growing concern in the medical field. But many thyroid conditions can be prevented with a proper diet and lifestyle: a healthy diet, exercise, proper nutrition, and stress reduction can all minimize the chance of developing thyroid disease. Reducing stress using effective mind-body techniques can play a part in preventing thyroid disease. Preventing thyroid problems can help you live a long and happy life, and may even help to prevent other conditions that result from thyroid problems.
CASE STUDY OF HYPERTHYROIDISM
A 55-year-old man complained of nervousness and fatigue which had been apparent for 3 months. He had lost 10 pounds. He is a thin, anxious-appearing man. Pulse is 110 bpm and blood pressure is 140/70 mmHg. He is unable to move his eyes completely into the superior-temporal position. The thyroid gland is moderately firm and symmetrically enlarged to an estimated 50 g (normal, 15-20 g). The skin is warm and smooth. Hair is of fine texture.
THYROID FUNCTION TEST
TEST      OBSERVED VALUE   NORMAL RANGE
TSH       0.1 µIU/mL       0.5-4.6 µIU/mL
Free T4   3.8 ng/dL        0.7-2.0 ng/dL
What is the diagnosis for the above report?
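The two case studies above apply the same basic reading of a thyroid panel: a high TSH with a low free T4 points to primary hypothyroidism (the pituitary is pushing a failing gland), while a low TSH with a high free T4 points to hyperthyroidism (excess hormone suppresses TSH via negative feedback). A minimal sketch of that decision rule in Python follows; the reference ranges are the ones quoted in the case studies, the function and parameter names are illustrative, and this is a teaching aid, not clinical software:

```python
def classify_thyroid(tsh, free_t4,
                     tsh_range=(0.2, 5.5),   # µIU/mL, range quoted in the first case study
                     t4_range=(0.8, 2.0)):   # ng/dL, range quoted in the first case study
    """Classify a thyroid panel by comparing TSH and free T4 to reference ranges.

    High TSH + low free T4 -> primary hypothyroidism;
    low TSH + high free T4 -> hyperthyroidism;
    both in range -> euthyroid; any other pattern -> indeterminate.
    """
    tsh_lo, tsh_hi = tsh_range
    t4_lo, t4_hi = t4_range
    if tsh > tsh_hi and free_t4 < t4_lo:
        return "primary hypothyroidism"
    if tsh < tsh_lo and free_t4 > t4_hi:
        return "hyperthyroidism"
    if tsh_lo <= tsh <= tsh_hi and t4_lo <= free_t4 <= t4_hi:
        return "euthyroid (normal)"
    return "indeterminate - further testing needed"

# First case study: TSH 68 µIU/mL, free T4 0.39 ng/dL
print(classify_thyroid(68, 0.39))    # primary hypothyroidism
# Second case study: TSH 0.1 µIU/mL, free T4 3.8 ng/dL (that lab's ranges)
print(classify_thyroid(0.1, 3.8, tsh_range=(0.5, 4.6), t4_range=(0.7, 2.0)))
```

Note that real panels can show mixed patterns (for example, high TSH with normal free T4), which is why the sketch falls back to "indeterminate" rather than forcing every result into one of the two diagnoses.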
https://blablawriting.net/three-mistakes-of-my-life-essay
The thyroid gland, which is located right below your voice box, is an important part of the endocrine system that produces the hormones thyroxine (also called T4) and triiodothyronine (T3). These hormones regulate body temperature; they're also intricately responsible for how well your cells do their jobs, including their chemical reactions and interactions with other cells. How well the cells and organs in your body perform these functions is referred to as the body's metabolic rate. Regulation of the body's metabolism maintains the integrity of your cells, tissues, and body systems. Graves' disease, which is an autoimmune condition, is the most common cause of hyperthyroidism (an overactive thyroid). Hypothyroidism (an underactive thyroid) is much more prevalent in the United States than hyperthyroidism. Hypothyroidism is very common in women in their 30s and 40s, and the most common cause is an autoimmune condition called Hashimoto's thyroiditis. Being diagnosed with Hashimoto's thyroiditis dramatically increases your risk of developing other autoimmune conditions. In 2010, the American Journal of Medicine reported the results of a questionnaire that looked at several hundred patients diagnosed with either Graves' disease or Hashimoto's thyroiditis. Researchers found that approximately 10 percent of those diagnosed with Graves' disease had another autoimmune condition. Among the patients diagnosed with Hashimoto's thyroiditis, approximately 14 percent had another autoimmune condition. The most frequently reported condition in this study was rheumatoid arthritis. Other conditions seen with increasing frequency included lupus; pernicious anemia, an autoimmune cause of B12 deficiency; vitiligo, an autoimmune condition affecting the skin; celiac disease; and Addison's disease, an autoimmune condition that can cause your adrenal glands to fail abruptly (think of it as adrenal shock requiring life-saving administration of steroids intravenously).
Hyperthyroidism: A thyroid gland that's overactive or hyperfunctioning (producing too much thyroid hormone) can lead to high body temperatures, fast heart rate, diarrhea, intolerance to heat, and heart issues, including cardiac arrhythmias. Hypothyroidism: A thyroid gland that's underactive or producing too little thyroid hormone can produce symptoms of low body temperature, intolerance to cold, weight gain, fatigue, and lethargy. Because the thyroid produces hormones, it's part of the body's endocrine system. A thyroid gland that isn't working well can affect the other organs that it interacts with, namely the hypothalamus, pituitary gland, and the adrenal glands. The traditional treatments for hyperthyroidism include the use of beta blockers to decrease some of the symptoms of hyperthyroidism as well as medications such as propylthiouracil (PTU), which decreases the production of thyroid hormone. Depending on the cause of hyperthyroidism (there are several), an endocrinologist may order radioactive iodine therapy; this form of iodine can destroy thyroid tissue. The traditional treatments for hypothyroidism include the prescription medication levothyroxine (Synthroid). However, you should be aware that there are other treatment options. One example is the use of another form of thyroid hormone called Armour Thyroid, which has a higher amount of T3 relative to T4. The inclusion of iodine and trace minerals, including selenium, is important for supporting thyroid function. Be sure to speak with your healthcare practitioner to find out whether another option would be right for you.
https://www.dummies.com/health/diseases/adrenal-fatigue/thyroid-dysfunction-as-a-cause-of-adrenal-fatigue/
What is Hashimoto Thyroiditis? First, what is thyroiditis? Thyroiditis is inflammation (swelling) of the thyroid gland, associated with either unusually high or low levels of thyroid hormones in the blood. The thyroid gland lies at the front of the throat, below the larynx (Adam's apple). It is made up of two lobes that sit on either side of the trachea (windpipe). The thyroid gland makes chemicals called hormones that regulate many metabolic processes, including growth and the rate at which your body burns up energy. Hypothyroidism means the thyroid gland is sluggish or underactive.
And what exactly is Hashimoto thyroiditis? Hashimoto thyroiditis is caused by the immune system attacking the thyroid gland, making it swell and become damaged. It is an autoimmune disease, a disorder in which the immune system turns against the body's own tissues. As the thyroid is destroyed over time, it becomes unable to produce enough thyroid hormone. This leads to symptoms of an underactive thyroid gland (hypothyroidism) such as tiredness and dry skin. Because it is an autoimmune disease, the immune system attacks the thyroid, which can lead to hypothyroidism, a condition in which the thyroid does not make enough hormones for the body's needs. Located in the front of your neck, the thyroid gland creates hormones that control metabolism. The thyroid is responsible for regulating metabolism, growth, temperature, and energy, so it is incredibly important to keep thyroid hormones in balance. This includes your heart rate and how quickly your body uses calories from the foods you eat.
Symptoms
When you have Hashimoto's thyroiditis, your immune cells mistakenly attack your healthy thyroid tissue. When this occurs, your thyroid can become inflamed and enlarged to the point that you develop a goiter. The most common and easily recognized symptoms of Hashimoto's are goiters, fatigue, weight gain, and constipation. The primary sign of a goiter is visible swelling in the front of your neck.
At first, the bulge may be painless. But if left untreated, it can put pressure on your lower neck. In advanced stages, a goiter can interfere with proper breathing and swallowing.
Diagnosis
If you have symptoms of hypothyroidism, your doctor or nurse will do an exam and order one or more tests. Blood tests are used to find out whether you have hypothyroidism and Hashimoto's disease.
Treatment
Hashimoto's disease is treated with a daily dose of levothyroxine, the same hormone that your thyroid gland makes. A patient with Hashimoto's thyroiditis will probably need to take thyroid hormone pills for the rest of their life. Talk to your doctor about any further concerns. You may have to see your doctor a few times to test the level of thyroid-stimulating hormone (TSH) in your body. Thyroid hormone acts very slowly in the body, so it can take several months after the start of treatment for symptoms to go away. The same treatment dose usually works for many years, but your TSH levels may change sometimes, especially during pregnancy, if you have heart disease or if you take menopausal hormone therapy. Your doctor may need to adjust your dose.
Tips
A diet optimizing key nutrients is vital to an overall recovery plan. The top nutrients are iodine, selenium and zinc; these are nutrients you should take in regularly to maintain a healthy, functional thyroid. Try to avoid eating any food within 1 or 2 hours of taking your thyroid medication, since food affects how the medication is absorbed in your body. Always discuss the best diet strategy and medication with your doctor.
https://www.internationalglobalhealth.com/blog/Autoimmune-disease-Hashimoto-is-Thyroiditis-Disease/
A patient with hypothyroidism should avoid all foods that interfere with thyroid function, including vegetables from the cabbage family, soybeans, peanuts, kale and spinach. In hypothyroidism, the thyroid gland does not produce enough thyroid hormone, causing the body's metabolism to slow down, according to the University of Maryland Medical Center. Hypothyroidism may be due to Hashimoto's thyroiditis, post-therapeutic hypothyroidism or goiter, according to UMMC. Hashimoto's thyroiditis is a condition in which antibodies in the blood mistake the thyroid gland for a threat and begin to destroy it. Treatment for hyperthyroidism often involves removal or destruction of part or all of the thyroid gland, which can result in hypothyroidism. Goiter develops due to a lack of iodine in the diet; it is uncommon in developed countries due to the addition of iodine to salt.
https://www.reference.com/health/foods-should-someone-hypothyroidism-avoid-4dc894246a522675
Hypothyroidism is a very common condition. Approximately 3% to 4% of the U.S. population has some form of hypothyroidism. This type of thyroid disorder is more common in women than in men, and its incidence increases with age. Common causes of hypothyroidism in adults include Hashimoto's thyroiditis, an autoimmune form of thyroid inflammation; lymphocytic thyroiditis, which may occur after an episode of hyperthyroidism; thyroid destruction from radioactive iodine or surgery; pituitary or hypothalamic disease; medications; and severe iodine deficiency.
Despite these successes, authors have questioned the efficacy of l-thyroxine monotherapy because about 10% to 15% of patients are dissatisfied as a result of residual symptoms of hypothyroidism (1, 2), including neurocognitive impairment (3), and about 15% of patients do not achieve normal serum triiodothyronine (T3) levels (4). Studies of several animal models indicate that maintaining normal serum T3 levels is a biological priority (5). Although the clinical significance of relatively low serum T3 in humans is not well-defined (1), evidence shows that elevating serum T3 through the administration of both l-thyroxine and l-triiodothyronine has benefited some patients (6, 7). However, this has not been consistently demonstrated across trials (1). Novel findings highlight the molecular mechanisms underlying the inability of l-thyroxine monotherapy to universally normalize measures of thyroid hormone signaling (8, 9), and new evidence may lay the foundation for a role for personalized medicine (10). Understanding the historical rationale for the trend toward l-thyroxine monotherapy allows us to identify scientific and clinical targets for future trials.
When it comes to thyroid medications, it's important for RDs to know that the medications can interact with common nutritional supplements.
Calcium supplements have the potential to interfere with proper absorption of thyroid medications, so patients must consider the timing when taking both. Studies recommend spacing calcium supplements and thyroid medications by at least four hours.[21] Coffee and fiber supplements lower the absorption of thyroid medication, so patients should take them one hour apart.[22] Dietitians should confirm whether clients have received and are adhering to these guidelines for optimal health.
Symptoms and associated factors of hypothyroidism include: a family history of thyroid disorders; hormonal imbalances; irregular periods; infertility; constipation and other digestion issues; weight gain; bloating; a puffy face; irregular hair loss and/or hair that has become thin, coarse, dry, breaking or brittle; acne and/or dry or thinning skin; mood disorders, like anxiety or depression; fatigue, low energy and/or low libido; increased sensitivity to cold, low body temperature (usually below 98.6 degrees) and/or cold hands and feet; muscle weakness, aches, tenderness and stiffness and/or pain, stiffness or swelling in the joints; trouble falling asleep or staying asleep; numbness or tingling in the hands and fingers; and difficulty concentrating, focusing or remembering things, and brain fog.
The thyroid gland uses iodine (mostly from foods in the diet like seafood, bread, and salt) to produce thyroid hormones. The two most important thyroid hormones are thyroxine (tetraiodothyronine or T4) and triiodothyronine (T3), which account for 99% and 1% of thyroid hormones present in the blood respectively. However, the hormone with the most biological activity is T3. Once released from the thyroid gland into the blood, a large amount of T4 is converted as needed into T3, the active hormone that affects the metabolism of cells. The symptoms of hypothyroidism are often subtle.
They are not specific (which means they can mimic the symptoms of many other conditions) and are often attributed to aging. People with mild hypothyroidism may have no signs or symptoms. The symptoms generally become more obvious as the condition worsens, and the majority of these complaints are related to a metabolic slowing of the body.
The thyroid gland is a 2-inch butterfly-shaped organ located at the front of the neck. Though the thyroid is small, it's a major gland in the endocrine system and affects nearly every organ in the body. It regulates fat and carbohydrate metabolism, respiration, body temperature, brain development, cholesterol levels, the heart and nervous system, blood calcium levels, menstrual cycles, skin integrity, and more.[1]
You want to detox your liver and your gut, as this is where the T4 hormone (the inactive hormone) gets converted to T3, the active hormone that actually powers us up. Most of our body cells need T3, not just T4. If you are taking Synthroid, you are taking a synthetic version of T4 that still needs to be converted to T3. If you have a sluggish liver and gut, you won't convert properly.
Hyperthyroidism, or overactive thyroid gland, is another common thyroid condition. The most prevalent form is Graves' disease, in which the body's autoimmune response causes the thyroid gland to produce too much T3 and T4. Symptoms of hyperthyroidism can include weight loss, high blood pressure, diarrhea, and a rapid heartbeat. Graves' disease also disproportionately affects women and typically presents before the age of 40.[4]
Giving appropriate doses of T3 is trickier than appropriately dosing T4. T4 is inactive, so if you give too much there is no immediate, direct tissue effect. T3 is a different story, though, as it is the active thyroid hormone. So if you give too much T3, you can produce hyperthyroid effects directly: a risk, for instance, to people with cardiac disease.
Another great source of selenium, nuts make a handy snack that you can take anywhere. They also go well in salads or stir-fries. Brazil nuts, macadamia nuts, and hazelnuts are all particularly high in selenium, which helps the thyroid function properly. With Brazil nuts, you only need to eat one or two; with other nuts, a small handful is enough to get your daily nutrients. Be sure to keep an eye on portion size, though, as nuts are also very high in fat.

For starters, consider the effect that hypothyroidism can have on weight. Hypothyroidism (also called low thyroid or underactive thyroid) is marked by insufficient hormone production in the thyroid, the butterfly-shaped gland located at the bottom-front of your neck. This gland affects the body's metabolic processes, and often, sudden weight gain is an early sign of low thyroid.
https://www.thembcookbook.com/healthy-thyroid-diet-recipes-thyroid-diet-revolution-by-mary-shomon.html
#S16 Guard Standouts

Last week's Super 16 took place at Conn College, with the 5th, 6th, 7th, and 8th grade divisions all under one roof. Today we will recap the guards, while tomorrow we will look back at the wing and forward standouts.

5th

Expression - Elijah McNair: McNair is a pure scorer who, despite being on the smaller side, knows how to use his body to his advantage. He can score out to midrange and works hard on the defensive end to create turnovers and pressure opposing guards.

Storm - Ethan Njenga: A slashing guard, Njenga was unstoppable, scoring 20+ points multiple times over the weekend. While his athleticism was eye-catching, what also stood out was his knowledge of when to attack the hoop and when to pass the ball. He has all the tools to perhaps be a special player in the future.

CT Elite - G Lucas Pivovar: Pivovar is an athletic guard who knows how to put his speed to use. He is an absolute workhorse with a variety of moves to get to the basket and score, even against bigger opponents.

AKO - Kehari Walker: A point guard with the ability to take over games, Walker showed excellent playmaking ability with the ball in his hands. He is also a high-level defender and knows where he needs to be to pick the ball off.

Mount Vernon Elite - William Robinson: An excellent scorer, Robinson has a game IQ and handling ability far beyond his years, with a little euro step to the basket as well as the ability to dribble through a sea of defenders. Robinson could definitely be one to watch, coming out of a town where basketball is a way of life.

6th

NE Extreme - Jack Koutrobis: A lights-out shooter, Koutrobis was a standout on a highly talented NE Extreme team. His ability to make it rain from beyond the arc was complemented by a strong work ethic on the other end.

Cap City - Niko Badoian: An influential piece on the Cap City squad that won a title, Badoian does it all. Offensively he can make pinpoint passes or step out to hit the long-range jumper.
More impressive, though, was his ability to disrupt passing lanes defensively and make life tough for opposing guards.

Nightmare - G Da'Shaun Phillips: Phillips is an explosive guard who uses his athleticism to affect the game both inside and out. He can attack the rim and hustle on the boards, but also showed impressive off-ball movement and knowledge of the game.

7th

CT Roughriders - Shayvon Hutchinson: A shifty primary ball handler who can finish at the rim through contact, Hutchinson displays good off-ball movement and game control with his ability to change speeds.

MV Rivals - Taurn Sreekanth: While Sreekanth was one of the smaller players on the court, he embodied "heart over height." A speedy point guard with handles to match, Sreekanth ran the show for MV Rivals on offense and showed an ability to pick pockets on the other end.

Berkshire Bulldogs - Isaiah Keefner: Keefner was an essential part of a strong Bulldogs squad that made their division final. He knows how to create space for his shot and can knock down the three from almost anywhere on the court. As he is a little undersized, it will be interesting to see how he develops.

8th

Vale - Nasire McDaniels and Tyrese Allick: The guard duo were a defensive force for Vale on the weekend. While both are quick and cerebral offensive players, their work rate on the defensive end shone through and got results in the form of steals and ten-second violations.

WrightWay Skills - Hakeem Daphins: A well-built guard, Daphins can shoot the three, attack the rim, or move the ball around the perimeter. He showed good lateral quickness defensively, which can be developed in the future, making him an intriguing prospect.

Team Sims - Daejon Gibson: Able to score in bunches, Gibson scored in the paint even against bigger defenders and hit the three ball as well. Most impressive was his court vision, which helped his team dominate in transition.
HHBC - Jake Paluzzi: A lights-out knockdown shooter, Paluzzi has a shot form that is near perfect. His release is quick, and he can get it off even with defenders closing out. It will be interesting to watch how he expands his game at the high school level.

Hoop Haven - G Thomas Berkhart: A smaller guard with a sweet stroke, Berkhart showed good range from around the arc. Along with his offensive prowess, Berkhart is a deceptively tight on-ball defender who can make life difficult for opposing guards.

CT Elite - G Ben Peatrie: An absolute workhorse, Peatrie knows exactly where to be on the floor to help his team. He is always in position to clean up broken plays and turn mistakes into transition baskets and steals.

Rose City Intensity - G Lebron Maurice: A standout on a loaded RCI team, Maurice was able to affect the game every time he touched the ball. He has innate court vision and IQ, paired with an ability to score at all three levels.
https://hooprootz.tv/news/s16-guard-standouts
WATERLOO, Ia. — Throughout his basketball career, Maishe Dailey has been told he’s too unselfish. You won’t get recruited if you don’t score more, friends back home in Ohio would say. When Dailey sprouted from 6-foot-1 to 6-6 in just two years, the Division I offers started pouring in — more than 20 of them last summer. He refused to let that change his style of play, averaging a mere 14 points per game as a senior at Beachwood High School in suburban Cleveland, along with 8.5 rebounds and 5 assists. Dailey verbally committed to Rutgers in January, then backed away from those plans when coach Eddie Jordan was fired in March. He wanted to be sure the new coaching staff still wanted him. In the meantime, Florida, Connecticut and Providence College entered the picture. So did Iowa. Dailey visited, and he was hooked. Dailey is a difficult player to get a read on because of his insistence on playing team basketball in a summer-league setting that tends to reward showy players. In his first PTL game, he was noticeable for passing up open shots to deliver the basketball to teammates closer to the basket. Dailey is a spritely 185 pounds and says he’s not done growing. He thinks he will end up at 6-8 and hopes to add another 10 pounds of muscle before his rookie college season begins in November. A diet heavy on protein and a strict weightlifting regimen should get him there. Still, he may end up redshirting on an Iowa team that has five incoming freshmen. He said that has not been discussed yet, and he’s preparing as if he will play this winter. “They’ve been talking about me just being ready for the season — just keep progressing and I’m on the right path to contribute this year,” Dailey said of the feedback he’s getting from coach Fran McCaffery and the Iowa staff. But what position he will play remains an open question. In high school, he played both guard spots and both forward positions. At Iowa, he’s been spending time at point guard and at each wing. 
His strength, Dailey believes, is that versatility. "My ability to guard the smallest player on the court and the biggest player on the court" is what he brings to the team, says Dailey, whose wingspan approaches 6-10. "My energy. I think I can really ramp it up and do whatever the coach asks me to do." A young Hawkeyes team can certainly use all of that. So perhaps Dailey will get the chance to showcase his talents and strong basketball IQ beyond just the PTL this year. Regardless, he feels vindicated for not listening to the chatter in Ohio and trying to change who he is as a player.
https://www.hawkcentral.com/story/sports/2016/07/17/maishe-dailey-brings-unselfish-style-hawkeyes/87228986/
BY JOE LAURICH, STAFF WRITER

Following a fantastic 2019 recruiting class, Travis Steele and company struck gold again in 2020, adding three of the top 150 players in the class. The 2020 Xavier recruiting class was ranked 24th in the country by 247 Sports and third in the Big East. One of the big themes of the class was players with winning pedigrees who could help bring a winning culture to Xavier.

The first player to commit to Xavier was Dwon Odom out of Saint Francis School in Alpharetta, Ga. He was ranked the 48th best player overall and the ninth best point guard. After losing point guard Quentin Goodin last season, Xavier was looking to add a playmaking point guard who could create space and run a controlled, fast-tempo offense. Odom brings exactly that: fantastic court vision, unmatched explosiveness and a great basketball IQ to run a fast offense while maintaining control.

The next player to commit to Xavier was CJ Wilcher out of Plainfield, N.J., where he attended Roselle Catholic. Wilcher is 6-foot-5, 195 lbs and was ranked the 107th best player overall and the 15th best shooting guard in the class. One of the big problems that Coach Steele's teams faced in his first two years was a complete lack of perimeter shooting. Last year's team shot only 31.2% from behind the arc, and something needed to be done, so Xavier brought in one of the best three-point shooters in the class in Wilcher. He has the ability to take over games and catch fire very quickly. One example of this was the New Jersey North Non-Public title game: Roselle Catholic was down 10 midway through the fourth quarter, and Wilcher proceeded to drop three straight three-pointers to get them right back into a game that Roselle Catholic ended up winning by one point.

The final player added to the class was Colby Jones from Alabama, where he attended Mountain Brook High School.
Jones is 6-foot-5, 195 lbs and was ranked the 119th best overall player and the 17th best shooting guard in the class. Jones can do a little bit of everything. He will likely play the small forward position for Xavier, but he has the ability to play point guard and shooting guard as well. Jones is a capable shooter, and he also brings fantastic basketball IQ and the ability to find the open man. Expect Jones to be in contention to start at small forward for the Musketeers.
https://xaviernewswire.com/2020/11/23/xavier-brings-in-three-new-freshmen/
He won a championship and an NBA Finals Most Valuable Player Award in his rookie season, and won four more championships with the Lakers during the 1980s. Johnson retired abruptly in 1991 after announcing that he had contracted HIV, but returned to play in the 1992 All-Star Game. His career résumé includes five NBA titles, three MVP awards, an Olympic gold medal, an NCAA title, and a high school title. Magic Johnson revolutionized the point guard position at 6'9": he could play all five positions and dominated you with a cold-blooded smile. Amazingly, he is probably a better businessman than he was a basketball player. He won five titles in nine Finals appearances. All he did was win. He made the art of passing fashionable in the NBA.
https://mobsea.com/Greatest-NBA-Players-of-All-Time
Evaluation: The lefty PG is truly the definition of dominant at his age group. A pure mid-range jumper that is beginning to extend out to the three-point line, elite passing instincts, a creative and tight handle with which he can cross over any defender and create space, and the ability to score around the rim all make him Ohio's top PG in 2015. A lightning-quick first step and good open-court speed make him a threat on either end. He is a flashy guard who, despite his speed, is always under control and ahead of everyone else in his understanding of what is going on, with great basketball IQ and natural instincts. A tough competitor who thrives on the biggest challenges, he is considered one of the nation's best in his class. Bottom Line: AJ is one of Ohio's most elite prospects in his class. With tremendous potential as a dual-threat PG who flat out wins, he looks like a high-major lock if he can just get a little bigger. Notes: Has already received an offer from Dayton, before even playing a HS game, and has received interest from Xavier, Cincinnati, Ohio State, and many more.
https://tripledoubleprospects.com/2011/09/02/aj-harris/
Rising Senior Spotlight - Jake Daly

Jake Daly was a basketball star in his hometown of Lagrangeville, New York. He attended Millbrook High School, where he was a three-year starter who had earned essentially every accolade there was to earn: he was the top scorer in the county, sectional MVP, All-County, and All-State. He needed to measure himself against higher levels of competition, though, and so when Paul Lee left his coaching position at nearby Marist College to take over at the Kent School, Daly followed suit. After a strong first year in the NEPSAC, Daly already has wide-ranging recruitment from top academic Division III schools, but he's also pinging the scholarship radar, and rightfully so, as schools like Adelphi, VMI, and Air Force have recognized his unique potential as a true six-foot-five point guard with considerable upside.

Prospect Profile
Height: 6'5"
Position: Point Guard
School: Kent School

By the Numbers
2019-20 Stats: 12 pts, 8 ast, 7 reb, 2 blk, & 2 stl
Academics: 3.3 GPA

Personal Statement
I am an unselfish, high-IQ point guard who is a three-level scorer and has extremely good court vision.

Recommendation
"Unique is a great word to describe Jake. As a 6-foot-5 point guard, he has the size and length that pose match-up problems for opposing teams. He also possesses tremendous vision and the ability to see over the defense and deliver the ball on time and on target. He can score at all three levels, but his greatest attribute is his unselfishness. He comes from a terrific, supportive family, with parents who are coaches themselves. He is a gym rat who is only going to get better as he continues to physically mature." - Paul Lee, Kent School Head Coach & longtime college coach

Scouting Report
"Whenever I see a prospect claim to be a six-foot-five point guard, I'm conditioned to be immediately skeptical. History has taught me that there's a good chance they're either exaggerating their height, far from a true point guard, or, most likely, both.
Well, Jake Daly is the exception to that rule. Not only is he a true six-foot-five point guard, he's a throwback-style, pass-first point guard at that. He's a distributor in that he passes not just to accumulate assists but for the value of moving the ball itself. His ability to see and pass over the top of the defense is a huge asset. Physically, he's a long way from being a finished product, which, in the long run, is a good sign in terms of his upside, since his game is only going to keep evolving alongside his frame."
https://newenglandrecruitingreport.com/in-the-news/rising-senior-spotlight-jake-daley
2003 Generation Ranking Update

MrCrossover's #1 prospect of this generation is Nikola Radovanović. Nikola is a very talented 6'7 lefty forward from KK Budućnost Bijeljina. His body is still developing and he has a thin frame, but his wide shoulders suggest that he can add considerable weight with no problem in the future. He has already played some senior minutes for his club.

Tristan Vukčević is the #2 prospect on the list. At 6'10 he has tremendous size, length, and a long wingspan. He is super agile for his size, with long strides in transition. He needs to improve his footwork and understanding of the game.

Dejan Pavlović is a 6'3 combo guard who can play both ways equally well. An explosive and quick player with good body strength, he plays hard on the defensive end, is aggressive on the ball, and gets many steals. He doesn't really have PG organizing skills and drives to the basket only in straight lines.

Matija Belić is a 6'6 forward who mostly relies on his strength but also has good basketball IQ. He loves to play through contact and has the frame to absorb it when going to the rim. He can knock down shots from mid-range and from the 3PT line too. This season he played against older competition and was dominant, and he also got the chance to play at the senior level and gain priceless experience for his age.

Srđan Popović is probably the most complete player in this generation. His basketball skills are at a high level: he is a great ball handler, passer, and shooter. He is also a big competitor with great leadership qualities. His body is pretty much developed at this point, and that could be a problem in the future because he is undersized in height at 6'0, but he compensates for it with great physicality, athleticism, and aggression.

Marko Marković and Stefan Dabović are the towers of this generation, both standing at 6'10 at the moment, and it looks like they are still growing, especially Marković.

Ognjen Razić is a 6'5 SG/SF who plays for KK Polet. He is a good athlete with great run-and-jump ability.
He has good slashing ability, excels when driving to the basket, and is promising as a shooter. This season he played in the U17 Triglav KLS, two years younger than the competition, and had a solid role.

Igor Vidačić is one of the most talented players in this generation. He has a laid-back personality and sometimes looks uninterested on the court, but it's incredible how easily he plays. Around 6'3 at the moment, he has a thin body, but he is still in an early stage of development and the weight will come over the years. His position is still undefined: he has solid court vision and good IQ for a PG, but he can also be effective shooting coming off screens as a SG.

Marko Mihailović is a 6'5 SG from KK Loznica. He is a very talented and skilled player with a nice feel for the game, but he still has a lot of room for improvement. He didn't have proper competition this year, playing in a lower league in his region.

New players on the list:

Dino Bocevski is the son of former basketball player Dusan Bocevski. From a very young age Dino left home and joined the Stella Azzura basketball academy. He is a late bloomer physically; his body is still in a development phase, which is why he can't yet have a bigger impact on the game against quicker and stronger opponents. He has a nice feel for the game and good hands and touch around the basket. He needs to keep working on his lateral movement.

Danilo Radulović is one of the most athletic players in this generation. He plays for Igokea, having come this season from KK Sloboda Novi Grad. Standing around 6'6, he can play both forward positions. He runs the floor well with great agility and is a quick leaper, light on his feet. He has the ability to defend multiple positions and is a good rebounder. He needs to improve his offensive skill set.

Đorđe Miličić is a 6'5 guard from KK Sloga Elite. He already has great size for a guard and is still growing. He can play both guard positions equally well. His biggest strength is his great court vision, and he is highly unselfish.
He sees the floor well and is a very willing passer, always playing with his head up and finding his open teammates. He is a great rebounder on both ends of the floor. He needs to work on his shot, as he is not a threat from outside, and he also has problems finishing through contact and in traffic.

Danilo Labović is a 6'2 PG from Foka. He is a pass-first PG with very good court vision, passing ability, and understanding of the game, and a high IQ. He plays well within the offense and doesn't force his scoring or try to do too much; he keeps it simple. Athleticism isn't his strong side, as he is not the quickest or fastest guy on the floor, but he uses his body well while driving and going to the rim. He needs to work on shot consistency, especially his mechanics and foot positioning.

Vuk Bošković is the brother of Serbian national team volleyball player Tijana Bošković. A versatile player standing around 6'5, he can play anywhere from point guard to small forward. He has a chubby body, but he already lost a lot of pounds this season when he came to Igokea from KK Hea Bileca, and since he is a late bloomer, as he grows a couple more inches it shouldn't be a problem to get his body in shape. He has a great offensive skill set, great awareness, and reads the game well, with good ball handling and passing ability. He can attack from the perimeter and has the ability to finish low-post plays. Defensively, he lacks speed and lateral quickness.

Marko Vasiljević is a 6'2 SG from KK Sumadija. His main weapon is his 1-on-1 game. He is a good ball handler with a great change-of-pace dribble and does a great job of changing directions while driving to the basket. When he goes to the basket, he is adept at using his body to shield himself from bigger defenders and finish through contact. He can finish in multiple ways, using floaters and lay-ups, and has a nice touch around the basket. He is a very good shooter who can score off the dribble with ease. He does not have ideal size at 6'2 or length for the SG position, and he is just an average athlete.
https://www.serbiahoop.com/2003-generation-ranking-update/
Lonzo Ball is one of the most unique point guards in the NBA. Not a traditional point guard who slows the game down and dictates the offense, Ball uses his high basketball IQ to push the pace and run a high-tempo offense. Part of what makes him unique is also what holds him back in other aspects. Ball has never been a particularly effective guard in pick-and-roll situations, and only last season did he become a respectable shooter from range. In speaking with the media on Monday, Stan Van Gundy acknowledged the uniqueness of Ball while discussing many of the areas he needs to improve upon. "He's a guy that I want to try to put in some different situations but I think the thing that we really want to focus on improvement with him in terms of playing the game is being a little bit more aggressive on pick-and-rolls in particular to turn the corner and go to the basket to score. I think he can be a better finisher turning the corner on his drives and I think it's a mindset. He's so unselfish and such a great passer that I think there's times he's coming off looking to pass and then what opens up is the basket and, my experience is, it's hard to adjust from the pass to the shot whereas if you're thinking score and the help comes, a guy with Lonzo's instincts will see the play. So, we just sort of need to turn that around with him but his passing ability, his ability in transition and his 3-point shooting alone make him a very, very good player. And I think, like, we've got a lot of guys who are playmakers now, and I think playing multiple playmakers together is a good thing in today's game particularly in the amount of switching you see and everything else.
Your ability to break people down off the dribble and make plays is absolutely essential and I think having more of those guys gives you an advantage." Van Gundy gave a more detailed answer as to how he'd use Ball when discussing the Pelicans on Zach Lowe's podcast "The Lowe Post" prior to the NBA bubble and his becoming New Orleans' head coach. "…Like, we've been reading about…with Philly Ben Simmons playing more as a traditional four. I sort of think of Ben Simmons as the point guard in transition, let's go ahead and outlet to him or he's such a great rebounder like Lonzo Ball is. Take it off the glass and go ahead and be our point guard in transition. And then in the halfcourt, fine, we'll use Ben Simmons as the screener in the pick and roll. I think there's going to be more and more guys in the league like that where, in transition they're going to play one position and in the half-court, they're going to have a whole other role and I think that's where Lonzo Ball fits for these guys." Last season, Ball ranked in the 11th percentile as a pick-and-roll ballhandler, per Synergy, despite it being his second-most common play type. As Van Gundy noted, he was poor as a finisher in those situations, ranking in the 14th percentile on drives to the basket out of ball-screen situations. As Van Gundy also noted, though, Ball was a willing and able passer: on offense derived from his pick-and-rolls, he ranked in the 66th percentile, and he was particularly adept on passes to the roll man, ranking in the 74th percentile. If Ball can add an element of scoring to his pick-and-roll repertoire, he could become a significantly more effective offensive player. Unfortunately, the shortened offseason may hinder how much Ball or any other player can improve his game. It'll certainly be a storyline worth watching as the season nears.
https://lonzowire.usatoday.com/2020/12/01/lonzo-ball-news-stan-van-gundy-new-orleans-pelicans-different-situations/
Nets convinced D’Angelo Russell wants to be great The Brooklyn Nets traded for point guard D’Angelo Russell for a reason, and despite his indifferent time at the franchise so far, head coach Kenny Atkinson and general manager Sean Marks still believe in his ability to lead the franchise. Talking to ESPN Radio, Atkinson sang Russell’s praises (via the New York Post). “He’s a talented guy. He’s got great court vision, incredible hand-eye coordination, really understands the game,” Nets coach Kenny Atkinson said. “He wants to be great. He’s been in our gym all summer. In the NBA, it’s not an obligation. You’ve got to want to be there. So he’s been there. He’s been really working on his body. He’s got to make strides there. He’s got to get stronger. He worked on his explosiveness. But he’s proven it to me by being there every day the offseason.” It’s obvious both men wholeheartedly believe Russell can be a truly great player in the NBA. To be fair to them, Russell has shown glimpses of genius with his passing and court vision, and displayed he can definitely excel in this era of NBA basketball with his outside shooting. Russell has struggled to stay healthy since his move last summer, as he featured in only 48 games last campaign due to a left knee injury. The Nets sorely missed his production, as they stumbled to yet another losing season. For his career, Russell is good for 14.6 points, 3.6 rebounds and 4.3 assists per game. His numbers aren’t jaw-dropping just yet, but with his high basketball IQ and cerebral playing style, his numbers sometimes don’t tell the full story when looking at his promise and upside. Russell definitely has the tools to be a special guard in the NBA, but it’s up to him to work hard and maximize his potential. With Russell back, as well as the additions of Kenneth Faried and Ed Davis, the Nets are looking to make a playoff push and reach the postseason for the first time since 2014.
https://clutchpoints.com/nets-news-brooklyn-convinced-dangelo-russell-wants-to-be-great/
During my Southern California swing, one of my goals was to evaluate 6-foot-5 guard Justin Simon, whom we are thinking about moving up in our next 2015 rankings. After a strong summer on the circuit with Gamepoint, Simon moved his way up to No. 55 nationally and No. 10 at the point guard position. Early returns this season indicate that ranking is too low. Simon's performance on Thursday confirmed those sentiments. Simon led his team to a double-digit victory and filled the box score, finishing with 20 points, 10 rebounds, six assists, five steals, three blocks and just two turnovers. Initially what sticks out about Simon is his terrific frame. Simon stands 6-foot-5, has a wide set of shoulders and arms that stretch nearly to his knees. He's been measured with a 6-foot-11 wingspan. Once you get past his physical features, it's easy to see Simon's versatility. He impacts the game in a variety of ways and on both ends of the floor. Offensively he plays with the ball in his hands for Temecula Valley. For a guy his size, he handles the ball well, has an impressive basketball IQ and sees the floor quite well. Throughout the contest on Thursday, Simon made good decisions and looked to get his teammates involved. Perhaps the biggest critique of Simon's game has centered on his shooting ability. But Simon opened up the game with a pair of mid-range jumpers, one from 10 feet and then another from the elbow, about 15 feet from the rim. Shortly after hitting the pair of jumpers, Simon began to attack the rim and make plays in transition. He caught a lob from a teammate, showing off his tremendous athleticism, and began to make his way to the free throw line. Simon appears to be at his best when he's in attack mode. He's good at creating shot opportunities for his teammates, and because of his physical tools, he's a strong finisher in the lane and is able to slice his way to the rim.
Simon did miss both of his three-point attempts, and his long-range shot is an area of his game that he'll need to spruce up, but his form is fine and he finishes with a high release. In all, Simon scored off three mid-range pull-ups, two floaters in traffic and three finishes at the rim. He also made his way to the free throw line nine times, finishing 8-for-14 from the field and 5-for-9 from the line. Defense is an area where Simon excels. He has a terrific combination of size, length, athleticism, lateral quickness and strength, all of which make him a terrific defender. Simon is capable of defending multiple positions, and on Thursday he guarded every position on the floor. At the next level, Simon will be capable of defending the one, two or three, and likely at a high level. One question surrounding Simon will be what position to project him at. Personally I like him on the ball because of his skill, passing, IQ and ability to attack off the bounce. With that said, I could see why some would prefer him off of it, and in reality, if we had the combo guard classification at Scout.com, that's what we would go with. "Really I'm just a guard that can come in and guard the one, two or three," he told me after the game. "I'd really like to develop myself as a point guard at whichever University I attend. I don't mind playing guard or either position. I just want to be on the floor. That's what it comes down to. If you can defend you can stay on the floor." Regardless of position, Simon has a lot of value, and the attention he's beginning to get on the recruiting trail is warranted. This time last year, Simon's only scholarship offer was from UC Irvine; now he's a surefire high-major prospect capable of playing for any school in the country.
http://www.scout.com/college/ucla/story/1371403-simon-shows-impressive-skillset
Ashton Hagans has been one of the best defensive guards in college basketball during his two years at the University of Kentucky. He's a willing passer who looks to create shots for others within the offense before searching for his own. Hagans' unselfishness hurts him at times, and he can be too passive instead of forcing the defense to treat him as a scoring threat. At 6'3″ and 198 lbs, with a 6'6″ wingspan, Hagans has an excellent physical profile to make an impact at the NBA level. He is one of the more athletic point guards in the class, but to maximize his effectiveness at the next level, he will need to improve his decision-making and jump shooting. If he becomes a more confident jump shooter and looks to score more, a team could find itself a steal in the second round of the draft.

Strengths

Defense: Lightning-quick hands lead to steals; plays the passing lanes well
Transition Scoring: Feeds off of defensive plays in the open floor
Playmaking: Excellent vision in transition and on drop-off opportunities
Intangibles: Team-oriented player, mature beyond his years

Weaknesses

Jump Shooting: Lacks confidence in his jump shot; poor mechanics
Offensive Assertiveness: Plays too passively at times, so defenses play him as a passer
Turnovers: Unwillingness to shoot can lead to forced passes
Shot Selection: Can force the drive on occasion; unwilling to take open threes

Best Landing Spot

Portland Trail Blazers. Outside their star backcourt of Damian Lillard and CJ McCollum, the Blazers lack playmaking and defense on the perimeter. Hagans would provide the team with just that. He's one of the better defenders at his position in the draft and would give Portland a nice boost of playmaking off the bench. I don't see him playing a ton of minutes in Terry Stotts' offense-oriented system, but if he can improve his jump shot and offensive arsenal, he could find himself in Portland's rotation.
He's more of a defensive stopper and a project on offense, but he has a ton of upside if he improves his consistency as a jump shooter.

Worst Landing Spot

Boston Celtics. I don't like the fit between Hagans and the Celtics, given that the roster has a ton of players who are primary ball handlers. That would force Hagans into a role as a spot-up shooter, and he is nowhere near ready to shoot the NBA three at high volume. I'm not sure his game is suited for Brad Stevens' system either, and the Celtics already have a player similar to Hagans in Marcus Smart. He lacks the shooting potential even to be considered as an option for Boston.

Ratings Breakdown

Basketball IQ: 7. Overall IQ on both ends is solid. Hagans knows how to space the floor and move without the ball on offense, but he must improve his shot IQ; at times he is far too passive and turns down open jump shots for contested shots in the paint.

Shooting: 6. His mechanics aren't terrible and his jump shot isn't broken, but he is very hesitant and has little confidence in his jumper. Most of his makes outside the paint come off the dribble, and he isn't a reliable catch-and-shoot option. He hits the occasional mid-range shot but mainly focuses on attacking the basket. That said, Hagans is an excellent finisher going to his right and shows the ability to get to the free-throw line and finish through contact.

Passing: 7. Above average at creating looks for his teammates; a pass-first point guard who excels in kick-out and drop-off situations. He must work on making the right reads in the halfcourt and cut down on turnovers. He excels at making reads in transition and is always a threat to look down the floor for open teammates on the break.

Dribbling: 7. Decent ball security and handle, but he lacks the creativity and speed with his moves to blow by defenders.

Hustle: 7.
Good energy and effort on both ends; battles for the occasional loose ball and will fight for rebounds.

Rebounding: 4. Most of his rebounds come off long jump shots. He tends to hover around the perimeter after shots but will battle for boards occasionally.

Defense: 8. Probably his best attribute. He is a pest in the passing lanes and off the ball, but he must work on staying in front of ball handlers; he gambles far too much, which leads to weak team defensive possessions.

Leadership: 8. Values team success and feeds off of his teammates' success.

Athleticism: 8. A very fluid athlete; not overly explosive, but he has solid athletic ability and excellent speed.

Upside: 8. Has a ton of upside if he can improve his shot mechanics and offensive IQ. He will contribute right away as a defender and as a player who can initiate offensive sets for the second unit.

Total Rating: 70/100

Ashton Hagans NBA Comparison

Patrick Beverley. If everything works out, Hagans has much more potential as an offensive player than Beverley; he's more athletic and has a more versatile offensive game. Like the Clippers' point guard, Hagans will begin his career as a defensive player. I'm not sure Hagans has the same edge to his game as Beverley, but the two are similar in stature and playstyle. Hagans must improve his outside shooting, much as Beverley has done to earn a more substantial role on an NBA team.
https://www.nbamockdraft.com/ashton-hagans/